Mirror of https://github.com/catlog22/Claude-Code-Workflow.git, synced 2026-03-01 15:03:57 +08:00.
Compare view: 19 commits.
---
name: team-planex
description: Unified team skill for plan-and-execute pipeline. Uses team-worker agent architecture with role-spec files for domain logic. Coordinator orchestrates pipeline, workers are team-worker agents. Triggers on "team planex".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---

# Team PlanEx

Unified team skill: plan-and-execute pipeline for issue-based development. Built on the **team-worker agent architecture** — all worker roles share a single agent definition, with role-specific Phases 2-4 loaded from markdown specs.
## Architecture Overview

> **Note**: This skill has its own coordinator implementation (`roles/coordinator/role.md`), independent of `team-lifecycle-v5`. It follows the same v5 architectural patterns (team-worker agents, role-specs, Spawn-and-Stop) but with a simplified 2-role pipeline (planner + executor) tailored for plan-and-execute workflows.

```
┌──────────────────────────────────────────────────┐
│ Skill(skill="team-planex", args="<requirement>") │
└──────────────────┬───────────────────────────────┘
                   │ Always → coordinator
                   ↓
           ┌──────────────┐
           │ coordinator  │  Phase 1-5 + dispatch/monitor commands
           └───┬──────┬───┘
               │      │
               ↓      ↓
       ┌──────────┐ ┌──────────┐
       │ planner  │ │ executor │  team-worker agents
       │ PLAN-*   │ │ EXEC-*   │  with role-spec injection
       └──────────┘ └──────────┘
```

## Role Router

This skill is **coordinator-only**. Workers do NOT invoke this skill — they are spawned as `team-worker` agents directly.

### Input Parsing

Parse `$ARGUMENTS`. No `--role` is needed — input always routes to the coordinator.

Optional flags: `--exec` (execution method), `-y`/`--yes` (auto mode).
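The flag handling described above can be sketched as follows. This is an illustrative helper, not the skill's actual implementation; the function and type names are invented for the example.

```typescript
// Parse team-planex arguments: --exec=<backend>, -y/--yes, and pass-through input.
interface PlanexArgs {
  exec?: "agent" | "codex" | "gemini"; // explicit execution backend, if any
  yes: boolean;                        // auto mode
  rest: string[];                      // issue IDs, --text, --plan, etc.
}

function parsePlanexArgs(argv: string[]): PlanexArgs {
  const out: PlanexArgs = { yes: false, rest: [] };
  for (const a of argv) {
    if (a.startsWith("--exec=")) {
      const v = a.slice("--exec=".length);
      if (v === "agent" || v === "codex" || v === "gemini") out.exec = v;
    } else if (a === "-y" || a === "--yes") {
      out.yes = true;
    } else {
      out.rest.push(a); // everything else passes through to the coordinator
    }
  }
  return out;
}
```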
### Role Registry

| Role | Spec | Task Prefix | Type | Inner Loop |
|------|------|-------------|------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | orchestrator | - |
| planner | [role-specs/planner.md](role-specs/planner.md) | PLAN-* | pipeline | true |
| executor | [role-specs/executor.md](role-specs/executor.md) | EXEC-* | pipeline | true |

### Dispatch

Always route to the coordinator. The coordinator reads `roles/coordinator/role.md` and executes its phases.
### Orchestration Mode

The user provides a task description.

**Invocation**: `Skill(skill="team-planex", args="<task-description>")`

**Lifecycle**:

```
User provides task description
-> coordinator Phase 1-3: Parse input -> TeamCreate -> Create task chain (dispatch)
-> coordinator Phase 4: spawn planner worker (background) -> STOP
-> Worker (team-worker agent) executes -> SendMessage callback -> coordinator advances
-> Loop until pipeline complete -> Phase 5 report + completion action
```
**User Commands** (wake a paused coordinator):

| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph, no advancement |
| `resume` / `continue` | Check worker states, advance next step |
| `add <issue-ids or --text '...' or --plan path>` | Append new tasks to the planner queue |

### Team Configuration

| Key | Value |
|-----|-------|
| name | planex |
| sessionDir | `.workflow/.team/PEX-{slug}-{date}/` |
| artifactsDir | `.workflow/.team/PEX-{slug}-{date}/artifacts/` |
| issueDataDir | `.workflow/issues/` |
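A minimal sketch of how the `sessionDir` pattern `.workflow/.team/PEX-{slug}-{date}/` might be expanded. The slugify rules are an assumption for illustration; the skill does not specify them.

```typescript
// Lowercase, collapse non-alphanumeric runs to "-", trim edge dashes (assumed rules).
function slugify(text: string): string {
  return text.toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-|-$/g, "");
}

// Expand the sessionDir template for a requirement and an ISO date.
function sessionDir(requirement: string, date: string): string {
  return `.workflow/.team/PEX-${slugify(requirement)}-${date}/`;
}
```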
---

## Command Execution Protocol

When the coordinator needs to execute a command (dispatch, monitor):

1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** - NOT separate agents or subprocesses
4. **Execute synchronously** - complete the command workflow before proceeding
---

## Input Types

Three input forms are supported:

| Input Type | Format | Example |
|------------|--------|---------|
| Issue IDs | Pass IDs directly | `ISS-20260215-001 ISS-20260215-002` |
| Requirement text | `--text '...'` | `--text 'Implement the user authentication module'` |
| Plan file | `--plan path` | `--plan plan/2026-02-15-auth.md` |

## Execution Method Selection

Three execution backends are supported:

| Executor | Backend | Use Case |
|----------|---------|----------|
### Selection Decision Table

| Condition | Execution Method |
|-----------|-----------------|
| `--exec=agent` specified | Agent |
| `--exec=codex` specified | Codex |
| `--exec=gemini` specified | Gemini |
| `-y` or `--yes` flag present | Auto (default Agent) |
| No flags (interactive) | AskUserQuestion -> user choice |
| Auto + task_count <= 3 | Agent |
| Auto + task_count > 3 | Codex |

### Interactive Prompt (no flags)

When neither `-y`/`--yes` nor `--exec` is given, select interactively via AskUserQuestion:

- **Execution method options**: Agent / Codex / Gemini / Auto
- **Code review options**: Skip / Gemini Review / Codex Review / Agent Review

### Specifying via args

```bash
# Explicit selection
Skill(skill="team-planex", args="--exec=codex ISS-xxx")
Skill(skill="team-planex", args="--exec=agent --text 'Simple feature'")

# Auto mode (skip interaction, -y or --yes)
Skill(skill="team-planex", args="-y --text 'Add logging'")
```
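The selection decision table can be sketched as a function. This is an illustrative reading of the table, not the skill's code; in particular it interprets "Auto (default Agent)" as the default within auto mode, refined by the task-count rows.

```typescript
type Exec = "agent" | "codex" | "gemini";

// Resolve the execution method: explicit flag wins, otherwise interactive
// unless auto mode, where task count picks the backend.
function selectExecutionMethod(opts: {
  exec?: Exec;       // explicit --exec flag
  yes: boolean;      // -y / --yes auto mode
  taskCount: number; // number of tasks, used in auto mode
}): Exec | "ask-user" {
  if (opts.exec) return opts.exec;                 // --exec=... specified
  if (!opts.yes) return "ask-user";                // interactive: AskUserQuestion
  return opts.taskCount <= 3 ? "agent" : "codex";  // auto mode thresholds
}
```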
---

## Coordinator Spawn Template

### v5 Worker Spawn (all roles)

When the coordinator spawns workers, use the `team-worker` agent with a role-spec path:

```
Task({
  subagent_type: "team-worker",
  description: "Spawn <role> worker",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-planex/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: <true|false>
execution_method: <agent|codex|gemini>

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery) -> role-spec Phase 2-4 -> built-in Phase 5 (report).`
})
```

**Inner Loop roles** (planner, executor): Set `inner_loop: true`. The team-worker agent handles the loop internally.
---

## Pipeline Definitions

### Pipeline Diagram

```
Issue-based beat pipeline (one beat per issue)
═══════════════════════════════════════════════════
PLAN-001 ──> [planner] issue-1 solution → EXEC-001
                       issue-2 solution → EXEC-002
                       ...
                       issue-N solution → EXEC-00N
                       all_planned signal

EXEC-001 ──> [executor] implement issue-1
EXEC-002 ──> [executor] implement issue-2
...
EXEC-00N ──> [executor] implement issue-N
═══════════════════════════════════════════════════
```

### Cadence Control

**Beat model**: Event-driven Spawn-and-Stop. Each beat = coordinator wake -> process callback -> spawn next -> STOP.

```
Beat Cycle (Coordinator Spawn-and-Stop)
======================================================================
Event                  Coordinator                    Workers
----------------------------------------------------------------------
User invokes --------> ┌─ Phase 1-3 ───────────┐
                       │  parse input          │
                       │  TeamCreate           │
                       │  create PLAN-001      │
                       ├─ Phase 4 ─────────────┤
                       │  spawn planner ───────┼──> [planner] Phase 1-5
                       └─ STOP (idle) ─────────┘         │
                                                         │
callback <─ planner issue_ready ─────────────────────────┘
                       ┌─ monitor.handleCallback ─┐
                       │  check new EXEC-* tasks  │
                       │  spawn executor ─────────┼──> [executor] Phase 1-5
                       └─ STOP (idle) ────────────┘          │
                                                             │
callback <─ executor impl_complete ──────────────────────────┘
                       ┌─ monitor.handleCallback ─┐
                       │  mark complete           │
                       │  check next ready task   │
                       └─ spawn/STOP ─────────────┘
======================================================================
```

**Checkpoints**:

| Trigger | Location | Behavior |
|---------|----------|----------|
| Planner finished all issues | `all_planned` signal | Executor completes remaining EXEC-*, then the pipeline ends |
| Pipeline stalled | No ready + no running | Coordinator escalates to user |
| Executor blocked | Blocked > 2 tasks | Coordinator escalates to user |

**Stall detection**:

| Check | Condition | Handling |
|-------|-----------|----------|
| Executor unresponsive | in_progress EXEC-* with no callback | Report the list of waiting tasks |
| Pipeline deadlock | No ready + no running + pending tasks exist | Inspect blockedBy dependency chains |
| Planner planning failure | Issue planning error | Retry once, then skip to next issue |
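The deadlock condition in the stall-detection table can be sketched as a predicate. The `Task` shape below is an assumption for illustration, modeled on the task fields mentioned in this document (status, blockedBy).

```typescript
interface PipelineTask {
  status: "pending" | "in_progress" | "completed";
  blockedBy: string[];
}

// Deadlock: pending work exists, but nothing is ready and nothing is running.
function isDeadlocked(tasks: PipelineTask[]): boolean {
  const ready = tasks.some(t => t.status === "pending" && t.blockedBy.length === 0);
  const running = tasks.some(t => t.status === "in_progress");
  const pending = tasks.some(t => t.status === "pending");
  return !ready && !running && pending;
}
```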
---

### Task Metadata Registry

| Task ID | Role | Phase | Dependencies | Description |
|---------|------|-------|-------------|-------------|
| PLAN-001 | planner | planning | (none) | Initial planning: requirement decomposition, issue creation, solution design |
| EXEC-001 | executor | execution | (created by planner at runtime) | Implementation of the first issue |
| EXEC-N | executor | execution | (created by planner at runtime) | Implementation of the N-th issue |

> Note: EXEC-* tasks are created one at a time by the planner at runtime (one beat per issue); the full task chain is not pre-defined.
---
## Completion Action

When the pipeline completes (all tasks done, coordinator Phase 5):

```
AskUserQuestion({
  questions: [{
    question: "Team pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> TeamDelete -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions |
| Export Results | AskUserQuestion for target path -> copy deliverables -> Archive & Clean |
---
## Message Bus

Before each SendMessage, first log the message via `mcp__ccw-tools__team_msg`:

- Parameters: operation="log", team=`<session-id>`, from=`<role>`, to=`<target-role>`, type=`<type>`, summary="[`<role>`] `<summary>`"
- **Note**: `team` must be the **session ID** (e.g. `PEX-project-2026-02-27`), not the team name. Extract it from the `Session:` field of the task description.
- **CLI fallback**: when MCP is unavailable -> `ccw team log --team <session-id> --from <role> --to <target> --type <type> --summary "[<role>] ..." --json`

**Message types by role**:

| Role | Types |
|------|-------|
| planner | `issue_ready`, `all_planned`, `error` |
| executor | `impl_complete`, `impl_failed`, `error` |
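A small guard for the note above: `team` must look like a session ID (e.g. `PEX-project-2026-02-27`), not a bare team name. The regex is an assumption inferred from the `PEX-{slug}-{date}` pattern, not a documented ccw rule.

```typescript
// True when the value matches PEX-<slug>-<YYYY-MM-DD> (assumed session ID shape).
function looksLikeSessionId(team: string): boolean {
  return /^PEX-.+-\d{4}-\d{2}-\d{2}$/.test(team);
}
```

A caller could check this before logging and fail fast instead of writing messages under the wrong key.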
---
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Role spec file not found | Error with expected path (role-specs/<name>.md) |
| Command file not found | Fall back to inline execution in coordinator role.md |
| team-worker agent unavailable | Error: requires .claude/agents/team-worker.md |
| Planner issue planning failure | Retry once, then skip to next issue |
| Executor impl failure | Report to coordinator, continue with next EXEC-* task |
| Pipeline stall | Coordinator monitors, escalates to user |
| Completion action timeout | Default to Keep Active |
---

**New file**: `.claude/skills/team-planex/role-specs/executor.md` (102 lines)
---
prefix: EXEC
inner_loop: true
message_types:
  success: impl_complete
  error: impl_failed
---

# Executor

Single-issue implementation agent. Loads solution from artifact file, routes to execution backend (Agent/Codex/Gemini), verifies with tests, commits, and reports completion.

## Phase 2: Task & Solution Loading

| Input | Source | Required |
|-------|--------|----------|
| Issue ID | Task description `Issue ID:` field | Yes |
| Solution file | Task description `Solution file:` field | Yes |
| Session folder | Task description `Session:` field | Yes |
| Execution method | Task description `Execution method:` field | Yes |
| Wisdom | `<session>/wisdom/` | No |

1. Extract issue ID, solution file path, session folder, execution method
2. Load solution JSON from file (file-first)
3. If file not found -> fallback: `ccw issue solution <issueId> --json`
4. Load wisdom files for conventions and patterns
5. Verify solution has required fields: title, tasks
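Steps 2-5 above (file-first loading with a fallback and field validation) can be sketched as follows. The reader and fallback are injected so the logic stays testable; this is an illustration, not the agent's real implementation.

```typescript
interface Solution {
  title?: string;
  tasks?: unknown[];
}

// File-first: parse the artifact if present, otherwise use the fallback
// (e.g. `ccw issue solution <issueId> --json`), then validate required fields.
function loadSolution(
  readFile: (path: string) => string | null, // returns null when the file is missing
  fallback: () => Solution,
  path: string
): Solution {
  const raw = readFile(path);
  const sol: Solution = raw !== null ? JSON.parse(raw) : fallback();
  if (!sol.title || !Array.isArray(sol.tasks)) {
    throw new Error("solution missing required fields: title, tasks");
  }
  return sol;
}
```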
## Phase 3: Implementation

### Backend Selection

| Method | Backend | Agent Type |
|--------|---------|------------|
| `agent` | code-developer subagent | Inline delegation |
| `codex` | `ccw cli --tool codex --mode write` | Background CLI |
| `gemini` | `ccw cli --tool gemini --mode write` | Background CLI |

### Agent Backend

```
Task({
  subagent_type: "code-developer",
  description: "Implement <issue-title>",
  prompt: `Issue: <issueId>
Title: <solution.title>
Solution: <solution JSON>
Implement all tasks from the solution plan.`,
  run_in_background: false
})
```

### CLI Backend (Codex/Gemini)

```bash
ccw cli -p "Issue: <issueId>
Title: <solution.title>
Solution Plan: <solution JSON>
Implement all tasks. Follow existing patterns. Run tests." \
  --tool <codex|gemini> --mode write
```

Wait for CLI completion before proceeding.
## Phase 4: Verification + Commit

### Test Verification

| Check | Method | Pass Criteria |
|-------|--------|---------------|
| Tests | Detect and run project test command | All pass |
| Syntax | IDE diagnostics or `tsc --noEmit` | No errors |

If tests fail: retry implementation once, then report `impl_failed`.

### Commit

```bash
git add -A
git commit -m "feat(<issueId>): <solution.title>"
```

### Update Issue Status

```bash
ccw issue update <issueId> --status completed
```

### Report

Send `impl_complete` message to coordinator via team_msg + SendMessage:
- summary: `[executor] Implemented <issueId>: <title>`

## Boundaries

| Allowed | Prohibited |
|---------|-----------|
| Load solution from file | Create or modify issues |
| Implement via Agent/Codex/Gemini | Modify solution artifacts |
| Run tests | Spawn additional agents |
| git commit | Direct user interaction |
| Update issue status | Create tasks for other roles |
---

**New file**: `.claude/skills/team-planex/role-specs/planner.md` (111 lines)
---
prefix: PLAN
inner_loop: true
subagents: [issue-plan-agent]
message_types:
  success: issue_ready
  error: error
---

# Planner

Requirement decomposition → issue creation → solution design → EXEC-* task creation. Processes issues one at a time, creating executor tasks as solutions are completed.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Input type + raw input | Task description | Yes |
| Session folder | Task description `Session:` field | Yes |
| Execution method | Task description `Execution method:` field | Yes |
| Wisdom | `<session>/wisdom/` | No |

1. Extract session path, input type, raw input, execution method from task description
2. Load wisdom files if available
3. Parse input to determine issue list:

| Detection | Condition | Action |
|-----------|-----------|--------|
| Issue IDs | `ISS-\d{8}-\d{6}` pattern | Use directly |
| `--text '...'` | Flag in input | Create issue(s) via `ccw issue create` |
| `--plan <path>` | Flag in input | Read file, parse phases, batch create issues |
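The detection table above can be sketched as a classifier. Note one assumption: the spec's pattern is `ISS-\d{8}-\d{6}`, but the example IDs elsewhere in this document (e.g. `ISS-20260215-001`) have a 3-digit suffix, so the sketch accepts 3-6 digits; the fallthrough to free-form text is also an assumption.

```typescript
// Classify planner input as issue IDs, requirement text, or a plan file.
function detectInputType(input: string): "issues" | "text" | "plan" {
  if (input.includes("--text")) return "text";
  if (input.includes("--plan")) return "plan";
  // Spec says \d{6}; examples show \d{3}, so accept both widths here.
  if (/ISS-\d{8}-\d{3,6}/.test(input)) return "issues";
  return "text"; // assumption: treat anything else as free-form requirement text
}
```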
## Phase 3: Issue Processing Loop

For each issue, execute in sequence:

### 3a. Generate Solution

Delegate to `issue-plan-agent` subagent:

```
Task({
  subagent_type: "issue-plan-agent",
  description: "Plan issue <issueId>",
  prompt: `issue_ids: ["<issueId>"]
project_root: "<project-root>"
Generate solution for this issue. Auto-bind single solution.`,
  run_in_background: false
})
```

### 3b. Write Solution Artifact

Write solution JSON to: `<session>/artifacts/solutions/<issueId>.json`

```json
{
  "session_id": "<session-id>",
  "issue_id": "<issueId>",
  "solution": <solution-from-agent>,
  "planned_at": "<ISO timestamp>"
}
```

### 3c. Check Conflicts

Extract `files_touched` from solution. Compare against prior solutions in session.
Overlapping files -> log warning to `wisdom/issues.md`, continue.

### 3d. Create EXEC-* Task

```
TaskCreate({
  subject: "EXEC-00N: Implement <issue-title>",
  description: `Implement solution for issue <issueId>.

Issue ID: <issueId>
Solution file: <session>/artifacts/solutions/<issueId>.json
Session: <session>
Execution method: <method>

InnerLoop: true`,
  activeForm: "Implementing <issue-title>"
})
```

### 3e. Signal issue_ready

Send message via team_msg + SendMessage to coordinator:
- type: `issue_ready`
- summary: `[planner] Solution ready for <issueId>`

### 3f. Continue Loop

Process next issue. Do NOT wait for executor.
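The conflict check in step 3c can be sketched as a pure function over `files_touched` sets. The record shapes are assumptions for illustration; the real check runs against prior solution artifacts in the session.

```typescript
interface PlannedSolution {
  issueId: string;
  filesTouched: string[];
}

// Return a warning line for each file the current solution shares with a prior one.
function findConflicts(
  current: PlannedSolution,
  prior: PlannedSolution[]
): string[] {
  const mine = new Set(current.filesTouched);
  const overlaps: string[] = [];
  for (const p of prior) {
    for (const f of p.filesTouched) {
      if (mine.has(f)) overlaps.push(`${f} (also touched by ${p.issueId})`);
    }
  }
  return overlaps;
}
```

Each returned line could be appended to `wisdom/issues.md` before the loop continues.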
## Phase 4: Completion Signal

After all issues processed:
1. Send `all_planned` message to coordinator via team_msg + SendMessage
2. Summary: total issues planned, EXEC-* tasks created

## Boundaries

| Allowed | Prohibited |
|---------|-----------|
| Parse input, create issues | Write/modify business code |
| Generate solutions (issue-plan-agent) | Run tests |
| Write solution artifacts | git commit |
| Create EXEC-* tasks | Call code-developer |
| Conflict checking | Direct user interaction |
@@ -0,0 +1,87 @@
# Command: dispatch

## Purpose

Create the initial task chain for team-planex pipeline. Creates PLAN-001 for planner. EXEC-* tasks are NOT pre-created — planner creates them at runtime per issue.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Input type | Phase 1 requirements | Yes |
| Raw input | Phase 1 requirements | Yes |
| Session folder | Phase 2 session init | Yes |
| Execution method | Phase 1 requirements | Yes |

## Phase 3: Task Chain Creation

### Task Creation

Create a single PLAN-001 task for the planner:

```
TaskCreate({
  subject: "PLAN-001: Requirement decomposition and solution design",
  description: `Decompose requirements into issues and generate solutions.

Input type: <issues|text|plan>
Input: <raw-input>
Session: <session-folder>
Execution method: <agent|codex|gemini>

## Instructions
1. Parse input to get issue list
2. For each issue: call issue-plan-agent → write solution artifact
3. After each solution: create EXEC-* task (owner: executor) with solution_file path
4. After all issues: send all_planned signal

InnerLoop: true`,
  activeForm: "Planning requirements"
})
```
### EXEC-* Task Template (for planner reference)

Planner creates EXEC-* tasks at runtime using this template:

```
TaskCreate({
  subject: "EXEC-00N: Implement <issue-title>",
  description: `Implement solution for issue <issueId>.

Issue ID: <issueId>
Solution file: <session-folder>/artifacts/solutions/<issueId>.json
Session: <session-folder>
Execution method: <agent|codex|gemini>

InnerLoop: true`,
  activeForm: "Implementing <issue-title>"
})
```

### Add Command Task Template

When coordinator handles `add` command, create additional PLAN tasks:

```
TaskCreate({
  subject: "PLAN-00N: Additional requirement decomposition",
  description: `Additional requirements to decompose.

Input type: <issues|text|plan>
Input: <new-input>
Session: <session-folder>
Execution method: <execution-method>

InnerLoop: true`,
  activeForm: "Planning additional requirements"
})
```

## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| PLAN-001 created | TaskList shows PLAN-001 |
| Description complete | Contains Input, Session, Execution method |
| No orphans | All tasks have valid owner |
163
.claude/skills/team-planex/roles/coordinator/commands/monitor.md
Normal file
@@ -0,0 +1,163 @@
# Command: monitor

## Purpose

Event-driven pipeline coordination with Spawn-and-Stop pattern. Three wake-up sources: worker callbacks, user `check`, user `resume`.

## Constants

| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |
| WORKER_AGENT | team-worker | All workers are team-worker agents |

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |

## Phase 3: Handler Routing

### Wake-up Source Detection

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[planner]` or `[executor]` tag | handleCallback |
| 2 | Contains "check" or "status" | handleCheck |
| 3 | Contains "resume", "continue", or "next" | handleResume |
| 4 | None of the above (initial spawn) | handleSpawnNext |
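The priority-ordered routing table can be sketched as a small function. This is an illustrative sketch only: the function name `route` and the exact string matching are assumptions, not part of the skill's tool surface.

```python
import re

def route(message: str) -> str:
    """Pick a handler for a coordinator wake-up, in table priority order."""
    text = message.lower()
    if re.search(r"\[(planner|executor)\]", message):
        return "handleCallback"          # priority 1: worker callback tag
    if "check" in text or "status" in text:
        return "handleCheck"             # priority 2: read-only status
    if any(word in text for word in ("resume", "continue", "next")):
        return "handleResume"            # priority 3: manual advance
    return "handleSpawnNext"             # priority 4: initial spawn
```

Because the checks run top-down, a message like `[planner] check done` still routes to handleCallback, matching the table's priority semantics.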
---

### Handler: handleCallback

```
Receive callback from [<role>]
+- Match role: planner or executor
+- Progress update (not final)?
|  +- YES -> Update session -> STOP
+- Task status = completed?
|  +- YES -> remove from active_workers -> update session
|  |  +- role = planner?
|  |  |  +- Check for new EXEC-* tasks (planner creates them)
|  |  |  +- -> handleSpawnNext (spawn executor for new EXEC-* tasks)
|  |  +- role = executor?
|  |     +- Mark issue done
|  |     +- -> handleSpawnNext (check for more EXEC-* tasks)
|  +- NO -> progress message -> STOP
+- No matching worker found
   +- Scan all active workers for completed tasks
   +- Found completed -> process -> handleSpawnNext
   +- None completed -> STOP
```

---

### Handler: handleCheck

Read-only status report. No advancement.

```
[coordinator] PlanEx Pipeline Status
[coordinator] Progress: <completed>/<total> (<percent>%)

[coordinator] Task Graph:
  PLAN-001: <status-icon> <summary>
  EXEC-001: <status-icon> <issue-title>
  EXEC-002: <status-icon> <issue-title>
  ...

  done=completed  >>>=running  o=pending

[coordinator] Active Workers:
  > <subject> (<role>) - running <elapsed>

[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```

Then STOP.

---

### Handler: handleResume

```
Load active_workers
+- No active workers -> handleSpawnNext
+- Has active workers -> check each:
   +- completed -> mark done, log
   +- in_progress -> still running
   +- other -> worker failure -> reset to pending
After:
+- Some completed -> handleSpawnNext
+- All running -> report status -> STOP
+- All failed -> handleSpawnNext (retry)
```

---

### Handler: handleSpawnNext

```
Collect task states from TaskList()
+- Filter tasks: PLAN-* and EXEC-* prefixes
+- readySubjects: pending + not blocked (no blockedBy or all blockedBy completed)
+- NONE ready + work in progress -> report waiting -> STOP
+- NONE ready + nothing running -> PIPELINE_COMPLETE -> Phase 5
+- HAS ready tasks -> for each:
   +- Inner Loop role AND already has active_worker for that role?
   |  +- YES -> SKIP spawn (existing worker picks up via inner loop)
   |  +- NO -> spawn below
   +- Determine role from task prefix:
   |  +- PLAN-* -> planner
   |  +- EXEC-* -> executor
   +- Spawn team-worker:
      Task({
        subagent_type: "team-worker",
        description: "Spawn <role> worker for <subject>",
        team_name: <team-name>,
        name: "<role>",
        run_in_background: true,
        prompt: `## Role Assignment
role: <role>
role_spec: .claude/skills/team-planex/role-specs/<role>.md
session: <session-folder>
session_id: <session-id>
team_name: <team-name>
requirement: <task-description>
inner_loop: true
execution_method: <method>`
      })
   +- Add to session.active_workers
Update session -> output summary -> STOP
```
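The `readySubjects` filter in handleSpawnNext can be sketched as follows. A minimal sketch, assuming tasks are dicts with `subject`, `status`, and an optional `blockedBy` list of subjects; the helper name is hypothetical.

```python
def ready_subjects(tasks: list[dict]) -> list[str]:
    """Pending PLAN-*/EXEC-* tasks whose blockedBy list is empty or fully completed."""
    status_by_subject = {t["subject"]: t["status"] for t in tasks}
    ready = []
    for task in tasks:
        if not task["subject"].startswith(("PLAN-", "EXEC-")):
            continue  # only pipeline prefixes are considered
        if task["status"] != "pending":
            continue
        blockers = task.get("blockedBy", [])
        if all(status_by_subject.get(b) == "completed" for b in blockers):
            ready.append(task["subject"])
    return ready
```

An empty result with nothing in progress is the PIPELINE_COMPLETE condition; an empty result with workers still running means "report waiting, then STOP".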
---

## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches in_progress tasks |
| No orphaned tasks | Every in_progress has active_worker |
| Pipeline completeness | All expected EXEC-* tasks accounted for |

## Worker Failure Handling

1. Reset task -> pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Unknown role callback | Log, scan for other completions |
| All workers running on resume | Report status, suggest check later |
| Pipeline stall (no ready + no running + has pending) | Check blockedBy chains, report |
154
.claude/skills/team-planex/roles/coordinator/role.md
Normal file
@@ -0,0 +1,154 @@
# Coordinator Role

Orchestrate the team-planex pipeline: parse input, create team, dispatch tasks, monitor progress via Spawn-and-Stop beats. Uses **team-worker agent** for all worker spawns.

## Identity

- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **Responsibility**: Parse input -> Create team -> Dispatch PLAN-001 -> Spawn planner -> Monitor callbacks -> Spawn executors -> Report

## Boundaries

### MUST
- Parse user input (Issue IDs / --text / --plan) and determine execution method
- Create team and initialize session directory
- Dispatch tasks via `commands/dispatch.md`
- Monitor progress via `commands/monitor.md` with Spawn-and-Stop pattern
- Maintain session state (team-session.json)

### MUST NOT
- Execute planning or implementation work directly (delegate to workers)
- Modify solution artifacts or code (workers own their deliverables)
- Call implementation subagents (code-developer, etc.) directly
- Skip dependency validation when creating task chains

---

## Command Execution Protocol

When coordinator needs to execute a command (dispatch, monitor):

1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file
3. **Commands are inline execution guides** - NOT separate agents
4. **Execute synchronously** - complete the command workflow before proceeding

---

## Entry Router

When coordinator is invoked, detect invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Worker callback | Message contains `[planner]` or `[executor]` tag | -> handleCallback (monitor.md) |
| Status check | Arguments contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume (monitor.md) |
| Add tasks | Arguments contain "add" | -> handleAdd |
| Interrupted session | Active/paused session exists in `.workflow/.team/PEX-*` | -> Phase 0 |
| New session | None of above | -> Phase 1 |

For callback/check/resume: load `commands/monitor.md` and execute the appropriate handler, then STOP.

### handleAdd

1. Parse new input (Issue IDs / `--text` / `--plan`)
2. Get current max PLAN-* sequence from `TaskList`
3. `TaskCreate` new PLAN-00N task (owner: planner)
4. If planner already sent `all_planned` (check team_msg) -> `SendMessage` to planner to re-enter loop
5. STOP

---

## Phase 0: Session Resume Check

1. Scan `.workflow/.team/PEX-*/team-session.json` for sessions with status "active" or "paused"
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume (Session Reconciliation)
4. Multiple sessions -> AskUserQuestion for selection

**Session Reconciliation**:
1. Audit TaskList -> reconcile session state vs task status
2. Reset in_progress tasks -> pending (they were interrupted)
3. Rebuild team if needed (TeamCreate + spawn needed workers)
4. Kick first executable task -> Phase 4
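The session scan in step 1 above can be sketched as a glob over the team directory. A minimal sketch, assuming the `team-session.json` layout described in Phase 2; the helper name `resumable_sessions` is hypothetical.

```python
import json
from pathlib import Path

def resumable_sessions(team_root: str) -> list[str]:
    """Session IDs under <team_root>/PEX-* whose status is active or paused."""
    found = []
    for state_file in sorted(Path(team_root).glob("PEX-*/team-session.json")):
        state = json.loads(state_file.read_text())
        if state.get("status") in ("active", "paused"):
            found.append(state["session_id"])
    return found
```

Zero results falls through to Phase 1, exactly one triggers reconciliation, and more than one would prompt the user via AskUserQuestion.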
---

## Phase 1: Input Parsing + Execution Method

1. **Parse arguments**: Extract input type (Issue IDs / --text / --plan) and optional flags (--exec, -y)

2. **Determine execution method** (see SKILL.md Selection Decision Table):
   - Explicit `--exec` flag -> use specified method
   - `-y` / `--yes` flag -> Auto mode
   - No flags -> AskUserQuestion for method choice

3. **Store requirements**: input_type, raw_input, execution_method

---

## Phase 2: Create Team + Initialize Session

1. Generate session ID: `PEX-<slug>-<date>`
2. Create session folder: `.workflow/.team/<session-id>/`
3. Create subdirectories: `artifacts/solutions/`, `wisdom/`
4. Call `TeamCreate` with team name (default: "planex")
5. Initialize wisdom files (learnings.md, decisions.md, conventions.md, issues.md)
6. Write team-session.json:

```
{
  session_id: "<session-id>",
  input_type: "<issues|text|plan>",
  input: "<raw-input>",
  execution_method: "<agent|codex|gemini>",
  status: "active",
  active_workers: [],
  started_at: "<ISO timestamp>"
}
```
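Steps 1-6 of the session setup can be sketched end-to-end. This is an illustrative sketch under the layout described above; the function name `init_session` and its parameters are assumptions.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def init_session(root: str, slug: str, input_type: str,
                 raw_input: str, method: str) -> Path:
    """Create <root>/PEX-<slug>-<date>/ with subdirs and team-session.json."""
    session_id = f"PEX-{slug}-{datetime.now(timezone.utc):%Y-%m-%d}"
    session = Path(root) / session_id
    (session / "artifacts" / "solutions").mkdir(parents=True, exist_ok=True)
    (session / "wisdom").mkdir(exist_ok=True)
    state = {
        "session_id": session_id,
        "input_type": input_type,
        "input": raw_input,
        "execution_method": method,
        "status": "active",
        "active_workers": [],
        "started_at": datetime.now(timezone.utc).isoformat(),
    }
    (session / "team-session.json").write_text(json.dumps(state, indent=2))
    return session
```

Wisdom file creation (learnings.md, decisions.md, conventions.md, issues.md) would follow the same pattern of writes under `wisdom/`.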
---

## Phase 3: Create Task Chain

Delegate to `commands/dispatch.md`:

1. Read `roles/coordinator/commands/dispatch.md`
2. Execute its workflow to create PLAN-001 task
3. PLAN-001 contains input info + execution method in description

---

## Phase 4: Spawn-and-Stop

1. Load `commands/monitor.md`
2. Execute `handleSpawnNext` to find ready tasks and spawn planner worker
3. Output status summary
4. **STOP** (idle, wait for worker callback)

**ONE_STEP_PER_INVOCATION**: true — coordinator does one operation per wake-up, then STOPS.

---

## Phase 5: Report + Completion Action

When all tasks are complete (monitor.md detects PIPELINE_COMPLETE):

1. Load session state -> count completed tasks, duration
2. List deliverables with output paths
3. Update session status -> "completed"
4. Execute Completion Action (see SKILL.md)

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Session file not found | Error, suggest re-initialization |
| Unknown worker callback | Log, scan for other completions |
| Pipeline stall | Check missing tasks, report to user |
| Worker crash | Reset task to pending, re-spawn on next beat |
| All workers running on resume | Report status, suggest check later |
@@ -1,356 +0,0 @@
# Executor Role

Load solution -> Route to backend (Agent/Codex/Gemini) based on execution_method -> Test verification -> Commit. Supports multiple CLI execution backends. Execution method is determined before skill invocation (see SKILL.md Execution Method Selection).

## Identity

- **Name**: `executor` | **Tag**: `[executor]`
- **Task Prefix**: `EXEC-*`
- **Responsibility**: Code implementation (solution -> route to backend -> test -> commit)

## Boundaries

### MUST

- Only process `EXEC-*` prefixed tasks
- All output (SendMessage, team_msg, logs) must carry `[executor]` identifier
- Select execution backend based on `execution_method` field in EXEC-* task
- Notify planner after each issue completes
- Continuously poll for new EXEC-* tasks (planner may create new waves anytime)

### MUST NOT

- Create issues (planner responsibility)
- Modify solution or queue (planner responsibility)
- Call issue-plan-agent or issue-queue-agent
- Interact directly with user (AskUserQuestion)
- Create PLAN-* tasks for planner

---

## Toolbox

### Execution Backends

| Backend | Tool | Invocation | Mode |
|---------|------|------------|------|
| `agent` | code-developer subagent | `Task({ subagent_type: "code-developer" })` | Synchronous |
| `codex` | Codex CLI | `ccw cli --tool codex --mode write` | Background |
| `gemini` | Gemini CLI | `ccw cli --tool gemini --mode write` | Background |

### Direct Capabilities

| Tool | Purpose |
|------|---------|
| `Read` | Read solution plan and queue files |
| `Write` | Write implementation artifacts |
| `Edit` | Edit source code |
| `Bash` | Run tests, git operations, CLI calls |

### CLI Capabilities

| CLI Command | Purpose |
|-------------|---------|
| `ccw issue status <id> --json` | Check issue status |
| `ccw issue solution <id> --json` | Load single issue's bound solution (requires issue ID) |
| `ccw issue update <id> --status executing` | Update issue status to executing |
| `ccw issue update <id> --status completed` | Mark issue as completed |

---

## Message Types

| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `impl_complete` | executor -> planner | Implementation and tests pass | Single issue implementation complete |
| `impl_failed` | executor -> planner | Implementation failed after retries | Implementation failure |
| `wave_done` | executor -> planner | All EXEC tasks in a wave completed | Entire wave complete |
| `error` | executor -> planner | Blocking error | Execution error |

## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

**NOTE**: `team` must be **session ID** (e.g., `PEX-project-2026-02-27`), NOT team name. Extract from `Session:` field in task description.

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>,  // e.g., "PEX-project-2026-02-27", NOT "planex"
  from: "executor",
  to: "planner",
  type: <message-type>,
  summary: "[executor] <task-prefix> complete: <task-subject>",
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from executor --to planner --type <message-type> --summary \"[executor] <task-prefix> complete\" --ref <artifact-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `EXEC-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Load Solution & Resolve Executor

**Issue ID Extraction**:

Extract issue ID from task description using pattern `ISS-\d{8}-\d{6}`.

If no issue ID found:
1. Log error via team_msg
2. SendMessage error to planner
3. TaskUpdate completed
4. Return to idle
**Solution Loading (Dual Mode)**:

| Mode | Condition | Action |
|------|-----------|--------|
| File-first | Task description contains `solution_file: <path>` | Read JSON file, extract solution.bound |
| CLI fallback | No solution_file field | Call `ccw issue solution <issueId> --json` |

If no bound solution found:
1. Log error via team_msg
2. SendMessage error to planner
3. TaskUpdate completed
4. Return to idle

**Execution Method Resolution**:

| Condition | Executor |
|-----------|----------|
| `execution_method: Agent` in task description | agent |
| `execution_method: Codex` in task description | codex |
| `execution_method: Gemini` in task description | gemini |
| `execution_method: Auto` + task_count <= 3 | agent |
| `execution_method: Auto` + task_count > 3 | codex |
| Unknown or missing | agent (with warning) |
**Code Review Resolution**:

Extract `code_review` from task description. Values: Skip | Gemini Review | Codex Review | Agent Review. Default: Skip.

**Issue Status Update**:

```
Bash("ccw issue update <issueId> --status executing")
```

### Phase 3: Implementation (Multi-Backend Routing)

Route to execution backend based on resolved executor.

#### Option A: Agent Execution

**When**: executor === 'agent' (simple tasks, task_count <= 3)

**Tool call**:
```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement solution for <issueId>",
  prompt: <execution-prompt>
})
```

Synchronous execution - wait for completion before Phase 4.

#### Option B: Codex CLI Execution

**When**: executor === 'codex' (complex tasks, background execution)

**Tool call**:
```
Bash("ccw cli -p \"<execution-prompt>\" --tool codex --mode write --id planex-<issueId>", { run_in_background: true })
```

**Resume on failure**:
```
ccw cli -p "Continue implementation" --resume planex-<issueId> --tool codex --mode write --id planex-<issueId>-retry
```

STOP after spawn - CLI executes in background, wait for task hook callback.

#### Option C: Gemini CLI Execution

**When**: executor === 'gemini' (analysis-heavy tasks, background execution)

**Tool call**:
```
Bash("ccw cli -p \"<execution-prompt>\" --tool gemini --mode write --id planex-<issueId>", { run_in_background: true })
```

STOP after spawn - CLI executes in background, wait for task hook callback.

### Execution Prompt Template

All backends use unified prompt structure:

```
## Issue
ID: <issueId>
Title: <solution-title>

## Solution Plan
<solution-bound-json>

## Implementation Requirements

1. Follow the solution plan tasks in order
2. Write clean, minimal code following existing patterns
3. Run tests after each significant change
4. Ensure all existing tests still pass
5. Do NOT over-engineer - implement exactly what the solution specifies

## Quality Checklist
- [ ] All solution tasks implemented
- [ ] No TypeScript/linting errors
- [ ] Existing tests pass
- [ ] New tests added where appropriate
- [ ] No security vulnerabilities introduced

## Project Guidelines
@.workflow/specs/*.md
```

### Phase 4: Verify & Commit

**Test Detection**:

| Detection | Method |
|-----------|--------|
| package.json scripts.test | Use `npm test` |
| package.json scripts.test:unit | Use `npm run test:unit` |
| No test script found | Skip verification, proceed to commit |
**Test Verification**:

```
Bash("<testCmd> 2>&1 || echo TEST_FAILED")
```

Check output for `TEST_FAILED` or `FAIL` strings.

**Test Failure Handling**:

| Condition | Action |
|-----------|--------|
| Tests failing | Report impl_failed to planner with test output + resume command |
| Tests passing | Proceed to code review (if configured) |

**Code Review (Optional)**:

| Review Tool | Execution |
|-------------|-----------|
| Gemini Review | `ccw cli -p "<review-prompt>" --tool gemini --mode analysis --id planex-review-<issueId>` (background) |
| Codex Review | `ccw cli --tool codex --mode review --uncommitted` (background, no prompt with target flags) |
| Agent Review | Current agent performs inline review against solution convergence criteria |

**Code Review Prompt**:
```
PURPOSE: Code review for <issueId> implementation against solution plan
TASK: Verify solution convergence criteria | Check test coverage | Analyze code quality | Identify issues
MODE: analysis
CONTEXT: @**/* | Memory: Review planex execution for <issueId>
EXPECTED: Quality assessment with issue identification and recommendations
CONSTRAINTS: Focus on solution adherence and code quality | analysis=READ-ONLY
```

**Issue Completion**:

```
Bash("ccw issue update <issueId> --status completed")
```

### Phase 5: Report + Loop

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

**Success Report**:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>,  // e.g., "PEX-project-2026-02-27", NOT "planex"
  from: "executor",
  to: "planner",
  type: "impl_complete",
  summary: "[executor] Implementation complete for <issueId> via <executor>, tests passing"
})

SendMessage({
  type: "message",
  recipient: "planner",
  content: `## [executor] Implementation Complete

**Issue**: <issueId>
**Executor**: <executor>
**Solution**: <solution-id>
**Code Review**: <codeReview>
**Status**: All tests passing
**Issue Status**: Updated to completed`,
  summary: "[executor] EXEC complete: <issueId> (<executor>)"
})

TaskUpdate({ taskId: <task-id>, status: "completed" })
```

**Loop Check**:

Query for next `EXEC-*` task with owner=executor, status=pending, blockedBy empty.

| Condition | Action |
|-----------|--------|
| Tasks available | Return to Phase 1 for next task |
| No tasks + planner sent all_planned | Send wave_done and idle |
| No tasks + planner still planning | Idle for more tasks |

**Wave Done Signal**:

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>,  // e.g., "PEX-project-2026-02-27", NOT "planex"
  from: "executor",
  to: "planner",
  type: "wave_done",
  summary: "[executor] All EXEC tasks completed"
})

SendMessage({
  type: "message",
  recipient: "planner",
  content: "## [executor] All Tasks Done\n\nAll EXEC-* tasks have been completed. Pipeline finished.",
  summary: "[executor] wave_done: all complete"
})
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No EXEC-* tasks available | Idle, wait for planner to create tasks |
| Solution plan not found | Report error to planner |
| Unknown execution_method | Fallback to `agent` with warning |
| Agent (code-developer) failure | Retry once, then report impl_failed |
| CLI (Codex/Gemini) failure | Provide resume command with fixed ID, report impl_failed |
| CLI timeout | Use fixed ID `planex-{issueId}` for resume |
| Tests failing after implementation | Report impl_failed with test output + resume info |
| Issue status update failure | Log warning, continue with report |
| Dependency not yet complete | Wait - task is blocked by blockedBy |
| All tasks done but planner still planning | Send wave_done, then idle for more |
| Critical issue beyond scope | SendMessage error to planner |
@@ -1,315 +0,0 @@
|
||||
# Planner Role
|
||||
|
||||
Demand decomposition -> Issue creation -> Solution design -> Conflict check -> EXEC task dispatch. Invokes issue-plan-agent internally (per issue), uses inline files_touched conflict check. Dispatches EXEC-* task immediately after each issue's solution is ready. Planner also serves as lead role (no separate coordinator).
|
||||
|
||||
## Identity
|
||||
|
||||
- **Name**: `planner` | **Tag**: `[planner]`
|
||||
- **Task Prefix**: `PLAN-*`
|
||||
- **Responsibility**: Planning lead (requirement -> issues -> solutions -> queue -> dispatch)
|
||||
|
||||
## Boundaries
|
||||
|
||||
### MUST
|
||||
|
||||
- Only process `PLAN-*` prefixed tasks
|
||||
- All output (SendMessage, team_msg, logs) must carry `[planner]` identifier
|
||||
- Immediately create `EXEC-*` task after completing each issue's solution and send `issue_ready` signal
|
||||
- Continue pushing forward without waiting for executor
|
||||
- Write solution artifacts to `<sessionDir>/artifacts/solutions/{issueId}.json`
|
||||
|
||||
### MUST NOT
|
||||
|
||||
- Directly write/modify business code (executor responsibility)
|
||||
- Call code-developer agent
|
||||
- Run project tests
|
||||
- git commit code changes
|
||||
|
||||
---
|
||||
|
||||
## Toolbox
|
||||
|
||||
### Subagent Capabilities
|
||||
|
||||
| Agent Type | Purpose |
|
||||
|------------|---------|
|
||||
| `issue-plan-agent` | Closed-loop planning: ACE exploration + solution generation + binding (single issue granularity) |
|
||||
|
||||
### CLI Capabilities
|
||||
|
||||
| CLI Command | Purpose |
|
||||
|-------------|---------|
|
||||
| `ccw issue create --data '{"title":"..."}' --json` | Create issue from text |
|
||||
| `ccw issue status <id> --json` | Check issue status |
|
||||
| `ccw issue solution <id> --json` | Get single issue's solutions (requires issue ID) |
|
||||
| `ccw issue solutions --status planned --brief` | Batch list all bound solutions (cross-issue) |
|
||||
| `ccw issue bind <id> <sol-id>` | Bind solution to issue |
|
||||
|
||||
### Skill Capabilities
|
||||
|
||||
| Skill | Purpose |
|
||||
|-------|---------|
|
||||
| `Skill(skill="issue:new", args="--text '...'")` | Create issue from text |
|
||||
|
||||
---
|
||||
|
||||
## Message Types
|
||||
|
||||
| Type | Direction | Trigger | Description |
|
||||
|------|-----------|---------|-------------|
|
||||
| `issue_ready` | planner -> executor | Single issue solution + EXEC task created | Per-issue beat signal |
|
||||
| `wave_ready` | planner -> executor | All issues in a wave dispatched | Wave summary signal |
|
||||
| `all_planned` | planner -> executor | All waves planning complete | Final signal |
|
||||
| `error` | planner -> executor | Blocking error | Planning failure |
|
||||
|
||||
## Message Bus

Before every SendMessage, log via `mcp__ccw-tools__team_msg`:

**NOTE**: `team` must be the **session ID** (e.g., `PEX-project-2026-02-27`), NOT the team name. Extract it from the `Session:` field in the task description.

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>,   // e.g., "PEX-project-2026-02-27", NOT "planex"
  from: "planner",
  to: "executor",
  type: <message-type>,
  summary: "[planner] <task-prefix> complete: <task-subject>",
  ref: <artifact-path>
})
```

**CLI fallback** (when MCP unavailable):

```
Bash("ccw team log --team <session-id> --from planner --to executor --type <message-type> --summary \"[planner] <task-prefix> complete\" --ref <artifact-path> --json")
```

---

## Execution (5-Phase)

### Phase 1: Task Discovery

> See SKILL.md Shared Infrastructure -> Worker Phase 1: Task Discovery

Standard task discovery flow: TaskList -> filter by prefix `PLAN-*` + owner match + pending + unblocked -> TaskGet -> TaskUpdate in_progress.

### Phase 2: Input Parsing

Parse the task description and arguments to determine the input type.

**Input Type Detection**:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Issue IDs | Task description contains `ISS-\d{8}-\d{6}` pattern | Path C: Direct to planning |
| Text input | Arguments contain `--text '...'` | Path A: Create issue first |
| Plan file | Arguments contain `--plan <path>` | Path B: Parse and batch create |
| Execution plan JSON | Plan file is `execution-plan.json` from req-plan | Path D: Wave-aware processing |
| Description text | None of the above | Treat task description as requirement text |

**Execution Config Extraction**:

From arguments, extract:
- `execution_method`: Agent | Codex | Gemini | Auto (default: Auto)
- `code_review`: Skip | Gemini Review | Codex Review | Agent Review (default: Skip)

### Phase 3: Issue Processing Pipeline

Execute the appropriate processing path based on input type.

#### Path A: Text Input -> Create Issue

**Workflow**:
1. Use the `issue:new` skill to create an issue from the text
2. Capture the created issue ID
3. Add it to the issue list for planning

**Tool calls**:
```
Skill(skill="issue:new", args="--text '<requirement-text>'")
```

#### Path B: Plan File -> Batch Create Issues

**Workflow**:
1. Read the plan file content
2. Parse phases/steps from the markdown structure
3. Create one issue per phase/step
4. Add all issue IDs to the list for planning

**Plan Parsing Rules**:
- Match `## Phase N: Title`, `## Step N: Title`, or `### N. Title`
- Each match creates one issue with title and description
- Fallback: if no phase structure is found, the entire content becomes a single issue

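The parsing rules above can be sketched as follows (a minimal illustration; `parsePlanPhases` and its return shape are assumptions for this sketch, not part of the skill's API):

```javascript
// Split a plan file into (title, description) pairs, one per matched
// phase/step heading; fall back to a single issue when nothing matches.
function parsePlanPhases(planText) {
  const headingRe = /^(?:##\s+(?:Phase|Step)\s+\d+:\s*(.+)|###\s+\d+\.\s*(.+))$/gm
  const matches = [...planText.matchAll(headingRe)]
  if (matches.length === 0) {
    // No phase structure: whole content becomes a single issue
    return [{ title: planText.split('\n')[0].slice(0, 80), description: planText }]
  }
  return matches.map((m, i) => {
    const start = m.index + m[0].length
    const end = i + 1 < matches.length ? matches[i + 1].index : planText.length
    return { title: (m[1] || m[2]).trim(), description: planText.slice(start, end).trim() }
  })
}
```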
#### Path C: Issue IDs -> Direct Planning

Issue IDs are already available; proceed directly to solution planning.

#### Path D: execution-plan.json -> Wave-Aware Processing

**Workflow**:
1. Parse execution-plan.json with its waves array
2. For each wave, process issues sequentially
3. For each issue in the wave:
   - Call issue-plan-agent to generate a solution
   - Write the solution artifact to `<sessionDir>/artifacts/solutions/{issueId}.json`
   - Perform the inline conflict check
   - Create an EXEC-* task with the solution_file path
   - Send the issue_ready signal
4. After each wave completes, send the wave_ready signal
5. After all waves, send the all_planned signal

**Issue Planning (per issue)**:

```
Task({
  subagent_type: "issue-plan-agent",
  run_in_background: false,
  description: "Plan solution for <issueId>",
  prompt: `issue_ids: ["<issueId>"]
project_root: "<projectRoot>"

## Requirements
- Generate solution for this issue
- Auto-bind single solution
- Issues come from req-plan decomposition (tags: req-plan)
- Respect dependencies: <issue_dependencies>`
})
```

**Solution Artifact**:

```
Write({
  file_path: "<sessionDir>/artifacts/solutions/<issueId>.json",
  content: JSON.stringify({
    session_id: <sessionId>,
    issue_id: <issueId>,
    ...solution,
    execution_config: { execution_method, code_review },
    timestamp: <ISO-timestamp>
  }, null, 2)
})
```

**EXEC Task Creation**:

```
TaskCreate({
  subject: "EXEC-W<waveNum>-<issueId>: Implement <solution-title>",
  description: `## Execution Task
**Wave**: <waveNum>
**Issue**: <issueId>
**solution_file**: <solutionFile>
**execution_method**: <method>
**code_review**: <review>`,
  activeForm: "Implementing <issueId>",
  owner: "executor"
})
```

#### Wave Processing (Path A/B/C Convergence)

For non-execution-plan inputs, process all issues as a single logical wave:

**Workflow**:
1. For each issue in the list:
   - Call issue-plan-agent
   - Write the solution artifact
   - Perform the inline conflict check
   - Create an EXEC-* task
   - Send the issue_ready signal
2. After all issues complete, send the wave_ready signal

### Phase 4: Inline Conflict Check + Dispatch

Perform conflict detection using files_touched overlap analysis.

**Conflict Detection Rules**:

| Condition | Action |
|-----------|--------|
| File overlap detected | Add blockedBy dependency to the previous task |
| Explicit dependency in solution.bound.dependencies.on_issues | Add blockedBy to the referenced task |
| No conflict | No blockedBy; task is immediately executable |

**Inline Conflict Check Algorithm**:

1. Get the current solution's files_touched (or affected_files)
2. For each previously dispatched solution:
   - Check whether any files overlap
   - If they overlap, add the previous execTaskId to blockedBy
3. Check explicit dependencies from solution.bound.dependencies.on_issues
4. Return the blockedBy array for TaskUpdate

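The algorithm above can be sketched as a small function (names such as `buildBlockedBy` and the shape of the `dispatched` entries are illustrative assumptions, not the skill's actual API):

```javascript
// Inline conflict check: returns the blockedBy list for a new EXEC task.
// dispatched: [{ execTaskId, files }] for previously dispatched solutions.
// issueToTask: maps issue IDs to their EXEC task IDs.
function buildBlockedBy(solution, dispatched, issueToTask) {
  const files = new Set(solution.files_touched || solution.affected_files || [])
  const blockedBy = []
  // Rule 1: file overlap with a previously dispatched solution
  for (const prev of dispatched) {
    if (prev.files.some(f => files.has(f))) blockedBy.push(prev.execTaskId)
  }
  // Rule 2: explicit dependencies declared on the bound solution
  const explicit = (solution.bound && solution.bound.dependencies &&
                    solution.bound.dependencies.on_issues) || []
  for (const issueId of explicit) {
    const taskId = issueToTask[issueId]
    if (taskId && !blockedBy.includes(taskId)) blockedBy.push(taskId)
  }
  return blockedBy
}
```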
**Wave Summary Signal** (after all issues in a wave):

```
mcp__ccw-tools__team_msg({
  operation: "log", team: <session-id>, from: "planner", to: "executor",  // team = session ID
  type: "wave_ready",
  summary: "[planner] Wave <waveNum> fully dispatched: <issueCount> issues"
})

SendMessage({
  type: "message", recipient: "executor",
  content: "## [planner] Wave <waveNum> Complete\nAll issues dispatched, <count> EXEC tasks created.",
  summary: "[planner] wave_ready: wave <waveNum>"
})
```

### Phase 5: Report + Finalize

> See SKILL.md Shared Infrastructure -> Worker Phase 5: Report

**Final Signal** (all waves complete):

```
mcp__ccw-tools__team_msg({
  operation: "log",
  team: <session-id>,   // e.g., "PEX-project-2026-02-27", NOT "planex"
  from: "planner",
  to: "executor",
  type: "all_planned",
  summary: "[planner] All <waveCount> waves planned, <issueCount> issues total"
})

SendMessage({
  type: "message",
  recipient: "executor",
  content: `## [planner] All Waves Planned

**Total Waves**: <waveCount>
**Total Issues**: <issueCount>
**Status**: All planning complete, waiting for executor to finish remaining EXEC-* tasks

Pipeline complete when executor sends wave_done confirmation.`,
  summary: "[planner] all_planned: <waveCount> waves, <issueCount> issues"
})

TaskUpdate({ taskId: <task-id>, status: "completed" })
```

**Loop Check**: Query for the next `PLAN-*` task with owner=planner, status=pending, blockedBy empty. If found, return to Phase 1.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No PLAN-* tasks available | Idle, wait for orchestrator |
| Issue creation failure | Retry once with simplified text, then report error |
| issue-plan-agent failure | Retry once, then report error and skip to next issue |
| Inline conflict check failure | Skip conflict detection, create EXEC task without blockedBy |
| Plan file not found | Report error with expected path |
| execution-plan.json parse failure | Fall back to plan_file parsing (Path B) |
| execution-plan.json missing waves | Report error, suggest re-running req-plan |
| Empty input (no issues, no text, no plan) | AskUserQuestion for clarification |
| Solution artifact write failure | Log warning, create EXEC task without solution_file (executor fallback) |
| Wave partially failed | Report partial success, continue with successful issues |
| Critical issue beyond scope | SendMessage error to executor |

896 .claude/skills/workflow-wave-plan/SKILL.md (Normal file)
@@ -0,0 +1,896 @@
---
name: workflow-wave-plan
description: CSV Wave planning and execution - explore via wave, resolve conflicts, execute from CSV with linked exploration context. Triggers on "workflow:wave-plan".
argument-hint: "<task description> [--yes|-y] [--concurrency|-c N]"
allowed-tools: Task, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---

# Workflow Wave Plan

CSV Wave-based planning and execution. Uses structured CSV state for both exploration and execution, with cross-phase context propagation via `context_from` linking.

## Architecture

```
Requirement
     ↓
┌─ Phase 1: Decompose ─────────────────────┐
│  Analyze requirement → explore.csv       │
│  (1 row per exploration angle)           │
└────────────────────┬──────────────────────┘
                     ↓
┌─ Phase 2: Wave Explore ──────────────────┐
│  Wave loop: spawn Explore agents         │
│  → findings/key_files → explore.csv      │
└────────────────────┬──────────────────────┘
                     ↓
┌─ Phase 3: Synthesize & Plan ─────────────┐
│  Read explore findings → cross-reference │
│  → resolve conflicts → tasks.csv         │
│  (context_from links to E* explore rows) │
└────────────────────┬──────────────────────┘
                     ↓
┌─ Phase 4: Wave Execute ──────────────────┐
│  Wave loop: build prev_context from CSV  │
│  → spawn code-developer agents per wave  │
│  → results → tasks.csv                   │
└────────────────────┬──────────────────────┘
                     ↓
┌─ Phase 5: Aggregate ─────────────────────┐
│  results.csv + context.md + summary      │
└───────────────────────────────────────────┘
```

## Context Flow

```
explore.csv                 tasks.csv
┌──────────┐               ┌──────────┐
│ E1: arch │──────────────→│ T1: setup│  context_from: E1;E2
│ findings │               │ prev_ctx │← E1+E2 findings
├──────────┤               ├──────────┤
│ E2: deps │──────────────→│ T2: impl │  context_from: E1;T1
│ findings │               │ prev_ctx │← E1+T1 findings
├──────────┤               ├──────────┤
│ E3: test │──┐       ┌───→│ T3: test │  context_from: E3;T2
│ findings │  └───────┘    │ prev_ctx │← E3+T2 findings
└──────────┘               └──────────┘

Two context channels:
1. Directed: context_from → prev_context (from CSV findings)
2. Broadcast: discoveries.ndjson (append-only shared board)
```

---

## CSV Schemas

### explore.csv

| Column | Type | Set By | Description |
|--------|------|--------|-------------|
| `id` | string | Decomposer | E1, E2, ... |
| `angle` | string | Decomposer | Exploration angle name |
| `description` | string | Decomposer | What to explore from this angle |
| `focus` | string | Decomposer | Keywords and focus areas |
| `deps` | string | Decomposer | Semicolon-separated dep IDs (usually empty) |
| `wave` | integer | Wave Engine | Wave number (usually 1) |
| `status` | enum | Agent | pending / completed / failed |
| `findings` | string | Agent | Discoveries (max 800 chars) |
| `key_files` | string | Agent | Relevant files (semicolon-separated) |
| `error` | string | Agent | Error message if failed |

### tasks.csv

| Column | Type | Set By | Description |
|--------|------|--------|-------------|
| `id` | string | Planner | T1, T2, ... |
| `title` | string | Planner | Task title |
| `description` | string | Planner | Self-contained task description — what to implement |
| `test` | string | Planner | Test cases: what tests to write and how to verify (unit/integration/edge) |
| `acceptance_criteria` | string | Planner | Measurable conditions that define "done" |
| `scope` | string | Planner | Target file/directory glob — constrains agent write area, prevents cross-task file conflicts |
| `hints` | string | Planner | Implementation tips + reference files. Format: `tips text \|\| file1;file2`. Either part is optional |
| `execution_directives` | string | Planner | Execution constraints: commands to run for verification, tool restrictions |
| `deps` | string | Planner | Dependency task IDs: T1;T2 |
| `context_from` | string | Planner | Context source IDs: **E1;E2;T1** |
| `wave` | integer | Wave Engine | Wave number (computed from deps) |
| `status` | enum | Agent | pending / completed / failed / skipped |
| `findings` | string | Agent | Execution findings (max 500 chars) |
| `files_modified` | string | Agent | Files modified (semicolon-separated) |
| `tests_passed` | boolean | Agent | Whether all defined test cases passed (true/false) |
| `acceptance_met` | string | Agent | Summary of which acceptance criteria were met/unmet |
| `error` | string | Agent | Error if failed |

**context_from prefix convention**: `E*` → explore.csv lookup, `T*` → tasks.csv lookup.

---

## Session Structure

```
.workflow/.wave-plan/{session-id}/
├── explore.csv           # Exploration state
├── tasks.csv             # Execution state
├── discoveries.ndjson    # Shared discovery board
├── explore-results/      # Detailed per-angle results
│   ├── E1.json
│   └── E2.json
├── task-results/         # Detailed per-task results
│   ├── T1.json
│   └── T2.json
├── results.csv           # Final results export
└── context.md            # Full context summary
```

---

## Session Initialization

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 4

const requirement = $ARGUMENTS
  .replace(/--yes|-y|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `wp-${slug}-${dateStr}`
const sessionFolder = `.workflow/.wave-plan/${sessionId}`

Bash(`mkdir -p ${sessionFolder}/explore-results ${sessionFolder}/task-results`)
```

---

## Phase 1: Decompose → explore.csv

### 1.1 Analyze Requirement

```javascript
const complexity = analyzeComplexity(requirement)
// Low: 1 angle | Medium: 2-3 angles | High: 3-4 angles

const ANGLE_PRESETS = {
  architecture: ['architecture', 'dependencies', 'integration-points', 'modularity'],
  security: ['security', 'auth-patterns', 'dataflow', 'validation'],
  performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
  bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
  feature: ['patterns', 'integration-points', 'testing', 'dependencies']
}

function selectAngles(text, count) {
  let preset = 'feature'
  if (/refactor|architect|restructure|modular/.test(text)) preset = 'architecture'
  else if (/security|auth|permission|access/.test(text)) preset = 'security'
  else if (/performance|slow|optimi|cache/.test(text)) preset = 'performance'
  else if (/fix|bug|error|broken/.test(text)) preset = 'bugfix'
  return ANGLE_PRESETS[preset].slice(0, count)
}

const angleCount = complexity === 'High' ? 4 : complexity === 'Medium' ? 3 : 1
const angles = selectAngles(requirement, angleCount)
```

### 1.2 Generate explore.csv

```javascript
const header = 'id,angle,description,focus,deps,wave,status,findings,key_files,error'
const rows = angles.map((angle, i) => {
  const id = `E${i + 1}`
  const desc = `Explore codebase from ${angle} perspective for: ${requirement}`
  return `"${id}","${angle}","${escCSV(desc)}","${angle}","",1,"pending","","",""`
})

Write(`${sessionFolder}/explore.csv`, [header, ...rows].join('\n'))
```

All exploration rows default to wave 1 (independent, parallel). If angle dependencies exist, compute waves.

---

## Phase 2: Wave Explore

Execute exploration waves using `Task(Explore)` agents.

### 2.1 Wave Loop

```javascript
const exploreCSV = parseCSV(Read(`${sessionFolder}/explore.csv`))
const maxExploreWave = Math.max(...exploreCSV.map(r => parseInt(r.wave)))

for (let wave = 1; wave <= maxExploreWave; wave++) {
  const waveRows = exploreCSV.filter(r =>
    parseInt(r.wave) === wave && r.status === 'pending'
  )
  if (waveRows.length === 0) continue

  // Skip rows with failed dependencies
  const validRows = waveRows.filter(r => {
    if (!r.deps) return true
    return r.deps.split(';').filter(Boolean).every(depId => {
      const dep = exploreCSV.find(d => d.id === depId)
      return dep && dep.status === 'completed'
    })
  })

  waveRows.filter(r => !validRows.includes(r)).forEach(r => {
    r.status = 'skipped'
    r.error = 'Dependency failed/skipped'
  })

  // ★ Spawn ALL explore agents in a SINGLE message → parallel execution
  const results = validRows.map(row =>
    Task({
      subagent_type: "Explore",
      run_in_background: false,
      description: `Explore: ${row.angle}`,
      prompt: buildExplorePrompt(row, requirement, sessionFolder)
    })
  )

  // Collect results from JSON files → update explore.csv
  validRows.forEach((row, i) => {
    const resultPath = `${sessionFolder}/explore-results/${row.id}.json`
    if (fileExists(resultPath)) {
      const result = JSON.parse(Read(resultPath))
      row.status = result.status || 'completed'
      row.findings = truncate(result.findings, 800)
      row.key_files = Array.isArray(result.key_files)
        ? result.key_files.join(';')
        : (result.key_files || '')
      row.error = result.error || ''
    } else {
      // Fallback: parse from agent output text
      row.status = 'completed'
      row.findings = truncate(results[i], 800)
    }
  })

  writeCSV(`${sessionFolder}/explore.csv`, exploreCSV)
}
```

### 2.2 Explore Agent Prompt

```javascript
function buildExplorePrompt(row, requirement, sessionFolder) {
  return `## Exploration: ${row.angle}

**Requirement**: ${requirement}
**Focus**: ${row.focus}

### MANDATORY FIRST STEPS
1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)

---

## Instructions
Explore the codebase from the **${row.angle}** perspective:
1. Discover relevant files, modules, and patterns
2. Identify integration points and dependencies
3. Note constraints, risks, and conventions
4. Find existing patterns to follow
5. Share discoveries: append findings to ${sessionFolder}/discoveries.ndjson

## Output
Write findings to: ${sessionFolder}/explore-results/${row.id}.json

JSON format:
{
  "status": "completed",
  "findings": "Concise summary of ${row.angle} discoveries (max 800 chars)",
  "key_files": ["relevant/file1.ts", "relevant/file2.ts"],
  "details": {
    "patterns": ["pattern descriptions"],
    "integration_points": [{"file": "path", "description": "..."}],
    "constraints": ["constraint descriptions"],
    "recommendations": ["recommendation descriptions"]
  }
}

Also provide a 2-3 sentence summary.`
}
```

---

## Phase 3: Synthesize & Plan → tasks.csv

Read exploration findings, cross-reference, resolve conflicts, and generate execution tasks.

### 3.1 Load Explore Results

```javascript
const exploreCSV = parseCSV(Read(`${sessionFolder}/explore.csv`))
const completed = exploreCSV.filter(r => r.status === 'completed')

// Load detailed result JSONs where available
const detailedResults = {}
completed.forEach(r => {
  const path = `${sessionFolder}/explore-results/${r.id}.json`
  if (fileExists(path)) detailedResults[r.id] = JSON.parse(Read(path))
})
```

### 3.2 Conflict Resolution Protocol

Cross-reference findings across all exploration angles:

```javascript
// 1. Identify common files referenced by multiple angles
const fileRefs = {}
completed.forEach(r => {
  r.key_files.split(';').filter(Boolean).forEach(f => {
    if (!fileRefs[f]) fileRefs[f] = []
    fileRefs[f].push({ angle: r.angle, id: r.id })
  })
})
const sharedFiles = Object.entries(fileRefs).filter(([_, refs]) => refs.length > 1)

// 2. Detect conflicting recommendations
//    Compare recommendations from different angles for the same file/module
//    Flag contradictions (angle A says "refactor X" vs angle B says "extend X")

// 3. Resolution rules:
//    a. Safety first — when approaches conflict, choose the safer option
//    b. Consistency — prefer approaches aligned with existing patterns
//    c. Scope — prefer minimal-change approaches
//    d. Document — note all resolved conflicts for transparency

const synthesis = {
  sharedFiles,
  conflicts: detectConflicts(completed, detailedResults),
  resolutions: [],
  allKeyFiles: [...new Set(completed.flatMap(r => r.key_files.split(';').filter(Boolean)))]
}
```

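`detectConflicts` is called above but never defined. One plausible shape (an assumption for illustration: conflicts are flagged when multiple angles report integration points on the same file, pairing the angles so a resolution can be recorded):

```javascript
// Hypothetical sketch of detectConflicts: group each angle's reported
// integration points by file, and flag any file touched by more than one angle.
function detectConflicts(completedRows, detailedResults) {
  const recsByFile = {}
  completedRows.forEach(r => {
    const details = detailedResults[r.id]
    const points = (details && details.details && details.details.integration_points) || []
    points.forEach(p => {
      if (!recsByFile[p.file]) recsByFile[p.file] = []
      recsByFile[p.file].push({ angle: r.angle, description: p.description })
    })
  })
  return Object.entries(recsByFile)
    .filter(([, recs]) => recs.length > 1)
    .map(([file, recs]) => ({ file, recs }))
}
```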
### 3.3 Generate tasks.csv

Decompose into execution tasks based on the synthesized exploration:

```javascript
// Task decomposition rules:
// 1. Group by feature/module (not per-file)
// 2. Each description is self-contained (agent sees only its row + prev_context)
// 3. deps only when task B requires task A's output
// 4. context_from links relevant explore rows (E*) and predecessor tasks (T*)
// 5. Prefer parallel (minimize deps)
// 6. Use exploration findings: key_files → target files, patterns → references,
//    integration_points → dependency relationships, constraints → included in description
// 7. Each task MUST include: test (how to verify), acceptance_criteria (what defines done)
// 8. scope must not overlap between tasks in the same wave
// 9. hints = implementation tips + reference files (format: tips || file1;file2)
// 10. execution_directives = commands to run for verification, tool restrictions

const tasks = []
// Claude decomposes the requirement using the exploration synthesis
// Example:
// tasks.push({ id: 'T1', title: 'Setup types', description: '...', test: 'Verify types compile', acceptance_criteria: 'All interfaces exported', scope: 'src/types/**', hints: 'Follow existing type patterns || src/types/index.ts', execution_directives: 'tsc --noEmit', deps: '', context_from: 'E1;E2' })
// tasks.push({ id: 'T2', title: 'Implement core', description: '...', test: 'Unit test: core logic', acceptance_criteria: 'All functions pass tests', scope: 'src/core/**', hints: 'Reuse BaseService || src/services/Base.ts', execution_directives: 'npm test -- --grep core', deps: 'T1', context_from: 'E1;E2;T1' })
// tasks.push({ id: 'T3', title: 'Add tests', description: '...', test: 'Integration test suite', acceptance_criteria: '>80% coverage', scope: 'tests/**', hints: 'Follow existing test patterns || tests/auth.test.ts', execution_directives: 'npm test', deps: 'T2', context_from: 'E3;T2' })

// Compute waves
const waves = computeWaves(tasks)
tasks.forEach(t => { t.wave = waves[t.id] })

// Write tasks.csv
const header = 'id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,status,findings,files_modified,tests_passed,acceptance_met,error'
const rows = tasks.map(t =>
  [t.id, escCSV(t.title), escCSV(t.description), escCSV(t.test), escCSV(t.acceptance_criteria), escCSV(t.scope), escCSV(t.hints), escCSV(t.execution_directives), t.deps, t.context_from, t.wave, 'pending', '', '', '', '', '']
    .map(v => `"${v}"`).join(',')
)

Write(`${sessionFolder}/tasks.csv`, [header, ...rows].join('\n'))
```

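`computeWaves` is referenced above but not defined. A minimal sketch, assuming acyclic deps: a task's wave is one more than the deepest wave among its dependencies, so independent tasks land in wave 1 and dependency chains stack into later waves:

```javascript
// Hypothetical sketch of computeWaves: wave = 1 + max(wave of deps).
// Assumes the deps graph is acyclic (no cycle detection here).
function computeWaves(tasks) {
  const byId = Object.fromEntries(tasks.map(t => [t.id, t]))
  const waves = {}
  function waveOf(id) {
    if (waves[id]) return waves[id]
    const deps = (byId[id].deps || '').split(';').filter(Boolean)
    waves[id] = deps.length === 0 ? 1 : 1 + Math.max(...deps.map(waveOf))
    return waves[id]
  }
  tasks.forEach(t => waveOf(t.id))
  return waves
}
```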
### 3.4 User Confirmation

```javascript
if (!AUTO_YES) {
  const maxWave = Math.max(...tasks.map(t => t.wave))

  console.log(`
## Execution Plan

Explore: ${completed.length} angles completed
Conflicts resolved: ${synthesis.conflicts.length}
Tasks: ${tasks.length} across ${maxWave} waves

${Array.from({length: maxWave}, (_, i) => i + 1).map(w => {
  const wt = tasks.filter(t => t.wave === w)
  return `### Wave ${w} (${wt.length} tasks, concurrent)
${wt.map(t => `  - [${t.id}] ${t.title} (from: ${t.context_from})`).join('\n')}`
}).join('\n')}
`)

  AskUserQuestion({
    questions: [{
      question: `Proceed with ${tasks.length} tasks across ${maxWave} waves?`,
      header: "Confirm",
      multiSelect: false,
      options: [
        { label: "Execute", description: "Proceed with wave execution" },
        { label: "Modify", description: `Edit ${sessionFolder}/tasks.csv then re-run` },
        { label: "Cancel", description: "Abort" }
      ]
    }]
  })
}
```

---

## Phase 4: Wave Execute

Execute tasks from tasks.csv in wave order, with prev_context built from both explore.csv and tasks.csv.

### 4.1 Wave Loop

```javascript
const exploreCSV = parseCSV(Read(`${sessionFolder}/explore.csv`))
const failedIds = new Set()
const skippedIds = new Set()

let tasksCSV = parseCSV(Read(`${sessionFolder}/tasks.csv`))
const maxWave = Math.max(...tasksCSV.map(r => parseInt(r.wave)))

for (let wave = 1; wave <= maxWave; wave++) {
  // Re-read master CSV (updated by previous wave)
  tasksCSV = parseCSV(Read(`${sessionFolder}/tasks.csv`))

  const waveRows = tasksCSV.filter(r =>
    parseInt(r.wave) === wave && r.status === 'pending'
  )
  if (waveRows.length === 0) continue

  // Skip on failed dependencies (cascade)
  const validRows = []
  for (const row of waveRows) {
    const deps = (row.deps || '').split(';').filter(Boolean)
    if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
      skippedIds.add(row.id)
      row.status = 'skipped'
      row.error = 'Dependency failed/skipped'
      continue
    }
    validRows.push(row)
  }

  if (validRows.length === 0) {
    writeCSV(`${sessionFolder}/tasks.csv`, tasksCSV)
    continue
  }

  // Build prev_context for each row from explore.csv + tasks.csv
  validRows.forEach(row => {
    row._prev_context = buildPrevContext(row.context_from, exploreCSV, tasksCSV)
  })

  // ★ Spawn ALL task agents in a SINGLE message → parallel execution
  const results = validRows.map(row =>
    Task({
      subagent_type: "code-developer",
      run_in_background: false,
      description: row.title,
      prompt: buildExecutePrompt(row, requirement, sessionFolder)
    })
  )

  // Collect results → update tasks.csv
  validRows.forEach((row, i) => {
    const resultPath = `${sessionFolder}/task-results/${row.id}.json`
    if (fileExists(resultPath)) {
      const result = JSON.parse(Read(resultPath))
      row.status = result.status || 'completed'
      row.findings = truncate(result.findings, 500)
      row.files_modified = Array.isArray(result.files_modified)
        ? result.files_modified.join(';')
        : (result.files_modified || '')
      row.tests_passed = String(result.tests_passed ?? '')
      row.acceptance_met = result.acceptance_met || ''
      row.error = result.error || ''
    } else {
      row.status = 'completed'
      row.findings = truncate(results[i], 500)
    }

    if (row.status === 'failed') failedIds.add(row.id)
    delete row._prev_context  // runtime-only, don't persist
  })

  writeCSV(`${sessionFolder}/tasks.csv`, tasksCSV)
}
```

### 4.2 prev_context Builder

The key function linking exploration context to execution:

```javascript
function buildPrevContext(contextFrom, exploreCSV, tasksCSV) {
  if (!contextFrom) return 'No previous context available'

  const ids = contextFrom.split(';').filter(Boolean)
  const entries = []

  ids.forEach(id => {
    if (id.startsWith('E')) {
      // ← Look up in explore.csv (cross-phase link)
      const row = exploreCSV.find(r => r.id === id)
      if (row && row.status === 'completed' && row.findings) {
        entries.push(`[Explore ${row.angle}] ${row.findings}`)
        if (row.key_files) entries.push(`  Key files: ${row.key_files}`)
      }
    } else if (id.startsWith('T')) {
      // ← Look up in tasks.csv (same-phase link)
      const row = tasksCSV.find(r => r.id === id)
      if (row && row.status === 'completed' && row.findings) {
        entries.push(`[Task ${row.id}: ${row.title}] ${row.findings}`)
        if (row.files_modified) entries.push(`  Modified: ${row.files_modified}`)
      }
    }
  })

  return entries.length > 0 ? entries.join('\n') : 'No previous context available'
}
```

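A runnable illustration of the builder with toy rows (a condensed copy that omits the key_files/files_modified lines; the sample rows are illustrative, not real session data):

```javascript
// Condensed buildPrevContext plus toy rows so the demo runs on its own.
function buildPrevContext(contextFrom, exploreCSV, tasksCSV) {
  if (!contextFrom) return 'No previous context available'
  const entries = []
  contextFrom.split(';').filter(Boolean).forEach(id => {
    // Prefix convention: E* → explore.csv, T* → tasks.csv
    const src = id.startsWith('E') ? exploreCSV : tasksCSV
    const row = src.find(r => r.id === id)
    if (row && row.status === 'completed' && row.findings) {
      entries.push(id.startsWith('E')
        ? `[Explore ${row.angle}] ${row.findings}`
        : `[Task ${row.id}: ${row.title}] ${row.findings}`)
    }
  })
  return entries.length > 0 ? entries.join('\n') : 'No previous context available'
}

const explore = [{ id: 'E1', angle: 'arch', status: 'completed', findings: 'Uses layered services' }]
const tasks = [{ id: 'T1', title: 'Setup types', status: 'completed', findings: 'Added core interfaces' }]
const ctx = buildPrevContext('E1;T1', explore, tasks)
// ctx → "[Explore arch] Uses layered services\n[Task T1: Setup types] Added core interfaces"
```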
### 4.3 Execute Agent Prompt

```javascript
function buildExecutePrompt(row, requirement, sessionFolder) {
  return `## Task: ${row.title}

**ID**: ${row.id}
**Goal**: ${requirement}
**Scope**: ${row.scope || 'Not specified'}

## Description
${row.description}

### Implementation Hints & Reference Files
${row.hints || 'None'}

> Format: \`tips text || file1;file2\`. Read ALL reference files (after ||) before starting. Apply tips (before ||) as guidance.

### Execution Directives
${row.execution_directives || 'None'}

> Commands to run for verification, tool restrictions, or environment requirements.

### Test Cases
${row.test || 'None specified'}

### Acceptance Criteria
${row.acceptance_criteria || 'None specified'}

## Previous Context (from exploration and predecessor tasks)
${row._prev_context}

### MANDATORY FIRST STEPS
1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)

---

## Execution Protocol

1. **Read references**: Parse hints — read all files listed after \`||\` to understand existing patterns
2. **Read discoveries**: Load ${sessionFolder}/discoveries.ndjson for shared exploration findings
3. **Use context**: Apply previous tasks' findings from prev_context above
4. **Stay in scope**: ONLY create/modify files within ${row.scope || 'project'} — do NOT touch files outside this boundary
5. **Apply hints**: Follow implementation tips from hints (before \`||\`)
6. **Implement**: Execute changes described in the task description
7. **Write tests**: Implement the test cases defined above
8. **Run directives**: Execute commands from execution_directives to verify your work
9. **Verify acceptance**: Ensure all acceptance criteria are met before reporting completion
10. **Share discoveries**: Append exploration findings to shared board:
\`\`\`bash
echo '{"ts":"<ISO>","worker":"${row.id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson
\`\`\`
11. **Report result**: Write JSON to output file

## Output
Write results to: ${sessionFolder}/task-results/${row.id}.json

{
  "status": "completed" | "failed",
  "findings": "What was done (max 500 chars)",
  "files_modified": ["file1.ts", "file2.ts"],
  "tests_passed": true | false,
  "acceptance_met": "Summary of which acceptance criteria were met/unmet",
  "error": ""
}

**IMPORTANT**: Set status to "completed" ONLY if:
- All test cases pass
- All acceptance criteria are met
Otherwise set status to "failed" with details in error field.`
}
```
---

## Phase 5: Aggregate

### 5.1 Generate Results

```javascript
const finalTasks = parseCSV(Read(`${sessionFolder}/tasks.csv`))
const exploreCSV = parseCSV(Read(`${sessionFolder}/explore.csv`))

Bash(`cp "${sessionFolder}/tasks.csv" "${sessionFolder}/results.csv"`)

const completed = finalTasks.filter(r => r.status === 'completed')
const failed = finalTasks.filter(r => r.status === 'failed')
const skipped = finalTasks.filter(r => r.status === 'skipped')
const maxWave = Math.max(...finalTasks.map(r => parseInt(r.wave)))
```
### 5.2 Generate context.md

```javascript
const contextMd = `# Wave Plan Results

**Requirement**: ${requirement}
**Session**: ${sessionId}
**Timestamp**: ${getUtc8ISOString()}

## Summary

| Metric | Count |
|--------|-------|
| Explore Angles | ${exploreCSV.length} |
| Total Tasks | ${finalTasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| Waves | ${maxWave} |

## Exploration Results

${exploreCSV.map(e => `### ${e.id}: ${e.angle} (${e.status})
${e.findings || 'N/A'}
Key files: ${e.key_files || 'none'}`).join('\n\n')}

## Task Results

${finalTasks.map(t => `### ${t.id}: ${t.title} (${t.status})

| Field | Value |
|-------|-------|
| Wave | ${t.wave} |
| Scope | ${t.scope || 'none'} |
| Dependencies | ${t.deps || 'none'} |
| Context From | ${t.context_from || 'none'} |
| Tests Passed | ${t.tests_passed || 'N/A'} |
| Acceptance Met | ${t.acceptance_met || 'N/A'} |
| Error | ${t.error || 'none'} |

**Description**: ${t.description}

**Test Cases**: ${t.test || 'N/A'}

**Acceptance Criteria**: ${t.acceptance_criteria || 'N/A'}

**Hints**: ${t.hints || 'N/A'}

**Execution Directives**: ${t.execution_directives || 'N/A'}

**Findings**: ${t.findings || 'N/A'}

**Files Modified**: ${t.files_modified || 'none'}`).join('\n\n---\n\n')}

## All Modified Files

${[...new Set(finalTasks.flatMap(t =>
  (t.files_modified || '').split(';')).filter(Boolean)
)].map(f => '- ' + f).join('\n') || 'None'}
`

Write(`${sessionFolder}/context.md`, contextMd)
```
### 5.3 Summary & Next Steps

```javascript
console.log(`
## Wave Plan Complete

Session: ${sessionFolder}
Explore: ${exploreCSV.filter(r => r.status === 'completed').length}/${exploreCSV.length} angles
Tasks: ${completed.length}/${finalTasks.length} completed, ${failed.length} failed, ${skipped.length} skipped
Waves: ${maxWave}

Files:
- explore.csv — exploration state
- tasks.csv — execution state
- results.csv — final results
- context.md — full report
- discoveries.ndjson — shared discoveries
`)

if (!AUTO_YES && failed.length > 0) {
  AskUserQuestion({
    questions: [{
      question: `${failed.length} tasks failed. Next action?`,
      header: "Next Step",
      multiSelect: false,
      options: [
        { label: "Retry Failed", description: "Reset failed + skipped, re-execute Phase 4" },
        { label: "View Report", description: "Display context.md" },
        { label: "Done", description: "Complete session" }
      ]
    }]
  })
  // If Retry: reset failed/skipped status to pending, re-run Phase 4
}
```
---

## Utilities

### Wave Computation (Kahn's BFS)

```javascript
function computeWaves(tasks) {
  const inDegree = {}, adj = {}, depth = {}
  tasks.forEach(t => { inDegree[t.id] = 0; adj[t.id] = []; depth[t.id] = 1 })

  tasks.forEach(t => {
    const deps = (t.deps || '').split(';').filter(Boolean)
    deps.forEach(dep => {
      if (adj[dep]) { adj[dep].push(t.id); inDegree[t.id]++ }
    })
  })

  const queue = Object.keys(inDegree).filter(id => inDegree[id] === 0)
  queue.forEach(id => { depth[id] = 1 })

  while (queue.length > 0) {
    const current = queue.shift()
    adj[current].forEach(next => {
      depth[next] = Math.max(depth[next], depth[current] + 1)
      inDegree[next]--
      if (inDegree[next] === 0) queue.push(next)
    })
  }

  if (Object.values(inDegree).some(d => d > 0)) {
    throw new Error('Circular dependency detected')
  }

  return depth // { taskId: waveNumber }
}
```
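The depth-grouping logic above can be checked standalone on the diamond dependency shape (1 → 2,3 → 4); the task ids here are illustrative:

```javascript
// Standalone copy of the Kahn's-BFS wave computation, checked on a diamond graph
function computeWaves(tasks) {
  const inDegree = {}, adj = {}, depth = {}
  tasks.forEach(t => { inDegree[t.id] = 0; adj[t.id] = []; depth[t.id] = 1 })
  tasks.forEach(t => {
    (t.deps || '').split(';').filter(Boolean).forEach(dep => {
      if (adj[dep]) { adj[dep].push(t.id); inDegree[t.id]++ }
    })
  })
  const queue = Object.keys(inDegree).filter(id => inDegree[id] === 0)
  while (queue.length > 0) {
    const current = queue.shift()
    adj[current].forEach(next => {
      depth[next] = Math.max(depth[next], depth[current] + 1)
      if (--inDegree[next] === 0) queue.push(next)
    })
  }
  if (Object.values(inDegree).some(d => d > 0)) throw new Error('Circular dependency detected')
  return depth
}

// Diamond: 1 → (2, 3) → 4, so tasks 2 and 3 share a wave
const waves = computeWaves([
  { id: '1', deps: '' },
  { id: '2', deps: '1' },
  { id: '3', deps: '1' },
  { id: '4', deps: '2;3' }
])
console.log(waves) // { '1': 1, '2': 2, '3': 2, '4': 3 }
```

A node's wave is the length of the longest dependency chain ending at it, which is why task 4 lands in wave 3 even though one of its parents finishes in wave 2.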
### CSV Helpers

```javascript
function escCSV(s) { return String(s || '').replace(/"/g, '""') }

// Quote-aware field split: keeps surrounding quotes (stripped by the caller), un-doubles ""
function parseCSVLine(line) {
  const values = []
  let cur = '', inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (ch === '"') {
      if (inQuotes && line[i + 1] === '"') { cur += '"'; i++ }
      else { inQuotes = !inQuotes; cur += '"' }
    } else if (ch === ',' && !inQuotes) { values.push(cur); cur = '' }
    else cur += ch
  }
  values.push(cur)
  return values
}

function parseCSV(content) {
  const lines = content.trim().split('\n')
  const header = lines[0].split(',').map(h => h.replace(/"/g, '').trim())
  return lines.slice(1).filter(l => l.trim()).map(line => {
    const values = parseCSVLine(line)
    const row = {}
    header.forEach((col, i) => { row[col] = (values[i] || '').replace(/^"|"$/g, '') })
    return row
  })
}

function writeCSV(path, rows) {
  if (rows.length === 0) return
  // Exclude runtime-only columns (prefixed with _)
  const cols = Object.keys(rows[0]).filter(k => !k.startsWith('_'))
  const header = cols.join(',')
  const lines = rows.map(r =>
    cols.map(c => `"${escCSV(r[c])}"`).join(',')
  )
  Write(path, [header, ...lines].join('\n'))
}

function truncate(s, max) {
  s = String(s || '')
  return s.length > max ? s.substring(0, max - 3) + '...' : s
}
```
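A round-trip sanity check of the quoting rules above: serialize a row the way `writeCSV` does, then split it back with a quote-aware parser (the job `parseCSVLine` has to do). The sample values are illustrative:

```javascript
function escCSV(s) { return String(s || '').replace(/"/g, '""') }

// Serialize one row the way writeCSV does: quote every field, double embedded quotes
function toLine(row, cols) { return cols.map(c => `"${escCSV(row[c])}"`).join(',') }

// Quote-aware split that also strips the surrounding quotes
function splitLine(line) {
  const out = []; let cur = '', inQ = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (ch === '"') {
      if (inQ && line[i + 1] === '"') { cur += '"'; i++ } else inQ = !inQ
    } else if (ch === ',' && !inQ) { out.push(cur); cur = '' } else cur += ch
  }
  out.push(cur)
  return out
}

// Findings with an embedded quote and comma survive the round trip intact
const row = { id: '2', findings: 'Uses "passport", see src/auth' }
const line = toLine(row, ['id', 'findings'])
const back = splitLine(line)
console.log(back)
```

This is the property that keeps the findings column safe: commas and quotes inside findings never shift later columns.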
---

## Discovery Board Protocol

Shared `discoveries.ndjson` — append-only NDJSON accessible to all agents across all phases.

**Lifecycle**:
- Created by the first agent to write a discovery
- Carries over across all phases and waves — never cleared
- Agents append via `echo '...' >> discoveries.ndjson`

**Format**: NDJSON — each line is a self-contained JSON object:

```jsonl
{"ts":"...","worker":"E1","type":"code_pattern","data":{"name":"repo-pattern","file":"src/repos/Base.ts"}}
{"ts":"...","worker":"T2","type":"integration_point","data":{"file":"src/auth/index.ts","exports":["auth"]}}
```

**Discovery Types**:

| type | Dedup Key | Description |
|------|-----------|-------------|
| `code_pattern` | `data.name` | Reusable code pattern found |
| `integration_point` | `data.file` | Module connection point |
| `convention` | singleton | Code style conventions |
| `blocker` | `data.issue` | Blocking issue encountered |
| `tech_stack` | singleton | Project technology stack |
| `test_command` | singleton | Test commands discovered |

**Protocol Rules**:
1. Read the board before your own exploration → skip areas already covered
2. Write discoveries immediately via `echo >>` — don't batch
3. Deduplicate — check existing entries; skip if an entry with the same type + dedup key already exists
4. Append-only — never modify or delete existing lines
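The dedup rule (3) amounts to a seen-set keyed by type plus dedup key, with malformed lines skipped per the error-handling table. `DEDUP_KEY` and `newDiscoveries` are illustrative names, not part of the skill:

```javascript
// Dedup keys per discovery type ('*' marks a singleton: at most one entry of that type)
const DEDUP_KEY = {
  code_pattern: 'name', integration_point: 'file', blocker: 'issue',
  convention: '*', tech_stack: '*', test_command: '*'
}

function keyOf(d) {
  const k = DEDUP_KEY[d.type]
  return k === '*' ? d.type : `${d.type}:${d.data[k]}`
}

// Returns only the candidate entries not already present on the board
function newDiscoveries(boardLines, candidates) {
  const seen = new Set()
  for (const line of boardLines) {
    try {
      seen.add(keyOf(JSON.parse(line)))
    } catch (e) { /* ignore malformed lines, keep valid entries */ }
  }
  return candidates.filter(d => !seen.has(keyOf(d)))
}

const board = [
  '{"ts":"t0","worker":"E1","type":"code_pattern","data":{"name":"repo-pattern"}}',
  'not-json-garbage'
]
const fresh = newDiscoveries(board, [
  { type: 'code_pattern', data: { name: 'repo-pattern' } }, // duplicate: dropped
  { type: 'code_pattern', data: { name: 'dto-mapper' } }    // new: kept
])
console.log(fresh.map(d => d.data.name))
```

An agent would run this check against the current board contents just before its `echo >>` append.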
---

## Error Handling

| Error | Resolution |
|-------|------------|
| Explore agent failure | Mark as failed in explore.csv, exclude from planning |
| All explores failed | Fallback: plan directly from requirement without exploration |
| Execute agent failure | Mark as failed, skip dependents (cascade) |
| Agent timeout | Mark as failed in results, continue with wave |
| Circular dependency | Abort wave computation, report cycle |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
---

## Core Rules

1. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes
2. **CSV is Source of Truth**: Read the master CSV before each wave, write it back after
3. **Context via CSV**: prev_context is built from CSV findings, not from memory
4. **E* ↔ T* Linking**: tasks.csv `context_from` references explore.csv rows for cross-phase context
5. **Skip on Failure**: Failed dep → skip dependent (cascade)
6. **Discovery Board Append-Only**: Never clear or modify discoveries.ndjson
7. **Explore Before Execute**: Phase 2 completes before Phase 4 starts
8. **DO NOT STOP**: Execute continuously until all waves complete or all remaining tasks are skipped
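Rule 5's cascade amounts to a transitive closure over `deps`; a minimal sketch with illustrative task shapes:

```javascript
// Given directly failed task ids, mark every transitive dependent as skipped
function cascadeSkips(tasks, failedIds) {
  const skipped = new Set()
  let changed = true
  while (changed) {
    changed = false
    for (const t of tasks) {
      if (failedIds.has(t.id) || skipped.has(t.id)) continue
      const deps = (t.deps || '').split(';').filter(Boolean)
      if (deps.some(d => failedIds.has(d) || skipped.has(d))) {
        skipped.add(t.id) // a dependent of a failure is itself unrunnable
        changed = true
      }
    }
  }
  return skipped
}

// Chain 1 → 2 → 3 plus independent 4; failing 1 skips 2 and 3 but not 4
const skipped = cascadeSkips([
  { id: '1', deps: '' },
  { id: '2', deps: '1' },
  { id: '3', deps: '2' },
  { id: '4', deps: '' }
], new Set(['1']))
console.log([...skipped])
```

The fixed-point loop makes the cascade order-independent: it converges regardless of how tasks are sorted in the CSV.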
---

## Best Practices

1. **Exploration Angles**: 1 for simple, 3-4 for complex; avoid redundant angles
2. **Context Linking**: Link every task to at least one explore row (E*) — exploration was done for a reason
3. **Task Granularity**: 3-10 tasks optimal; too many = overhead, too few = no parallelism
4. **Minimize Cross-Wave Deps**: More tasks in wave 1 = more parallelism
5. **Specific Descriptions**: An agent sees only its CSV row + prev_context — make the description self-contained
6. **Non-Overlapping Scopes**: Same-wave tasks must not write to the same files
7. **Context From ≠ Deps**: `deps` = execution order constraint; `context_from` = information flow
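Practice 6 can be screened mechanically with a prefix check on each glob's static part. This is a heuristic sketch, not a full glob-intersection test:

```javascript
// Static prefix of a glob: everything before the first wildcard character
function staticPrefix(glob) { return glob.split(/[*?[]/)[0] }

// Heuristic: two scopes may collide when one static prefix contains the other
function scopesMayOverlap(a, b) {
  const pa = staticPrefix(a), pb = staticPrefix(b)
  return pa.startsWith(pb) || pb.startsWith(pa)
}

console.log(scopesMayOverlap('src/auth/oauth/**', 'src/auth/jwt/**')) // distinct subtrees
console.log(scopesMayOverlap('src/auth/**', 'src/auth/jwt/**'))       // nested, may collide
```

Running this over every same-wave pair during planning catches the common failure mode (one task's scope nested inside another's) before agents race on the same files.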
---

## Usage Recommendations

| Scenario | Recommended Approach |
|----------|---------------------|
| Complex feature (unclear architecture) | `workflow:wave-plan` — explore first, then plan |
| Simple known-pattern task | `$csv-wave-pipeline` — skip exploration, direct execution |
| Independent parallel tasks | `$csv-wave-pipeline -c 8` — single wave, max parallelism |
| Diamond dependency (A→B,C→D) | `workflow:wave-plan` — 3 waves with context propagation |
| Unknown codebase | `workflow:wave-plan` — exploration phase is essential |

.codex/skills/csv-wave-pipeline/SKILL.md (new file, 906 lines)

---
name: csv-wave-pipeline
description: Requirement planning to wave-based CSV execution pipeline. Decomposes a requirement into dependency-sorted CSV tasks, computes execution waves, runs wave-by-wave via spawn_agents_on_csv with cross-wave context propagation.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"requirement description\""
allowed-tools: spawn_agents_on_csv, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.

# CSV Wave Pipeline

## Usage

```bash
$csv-wave-pipeline "Implement user authentication with OAuth, JWT, and 2FA"
$csv-wave-pipeline -c 4 "Refactor payment module with Stripe and PayPal"
$csv-wave-pipeline -y "Build notification system with email and SMS"
$csv-wave-pipeline --continue "auth-20260228"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 4)
- `--continue`: Resume an existing session

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
---

## Overview

Wave-based batch execution using `spawn_agents_on_csv` with **cross-wave context propagation**. Tasks are grouped into dependency waves; each wave executes concurrently, and its results feed into the next wave.

**Core workflow**: Decompose → Compute Waves → Execute Wave-by-Wave → Aggregate

```
CSV BATCH EXECUTION WORKFLOW

Phase 1: Requirement → CSV
  ├─ Parse requirement into subtasks (3-10 tasks)
  ├─ Identify dependencies (deps column)
  ├─ Compute dependency waves (topological sort → depth grouping)
  ├─ Generate tasks.csv with wave column
  └─ User validates task breakdown (skip if -y)

Phase 2: Wave Execution Engine
  ├─ For each wave (1..N):
  │   ├─ Build wave CSV (filter rows for this wave)
  │   ├─ Inject previous wave findings into prev_context column
  │   ├─ spawn_agents_on_csv(wave CSV)
  │   ├─ Collect results, merge into master tasks.csv
  │   └─ Check: any failed? → skip dependents or retry
  └─ discoveries.ndjson shared across all waves (append-only)

Phase 3: Results Aggregation
  ├─ Export final results.csv
  ├─ Generate context.md with all findings
  ├─ Display summary: completed/failed/skipped per wave
  └─ Offer: view results | retry failed | done
```
---

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,status,findings,files_modified,tests_passed,acceptance_met,error
"1","Setup auth module","Create auth directory structure and base files","Verify directory exists and base files export expected interfaces","auth/ dir created; index.ts and types.ts export AuthProvider interface","src/auth/**","Follow monorepo module pattern || package.json;src/shared/types.ts","","","","1","","","","","",""
"2","Implement OAuth","Add OAuth provider integration with Google and GitHub","Unit test: mock OAuth callback returns valid token; Integration test: verify redirect URL generation","OAuth login redirects to provider; callback returns JWT; supports Google and GitHub","src/auth/oauth/**","Use passport.js strategy pattern || src/auth/index.ts;docs/oauth-flow.md","Run npm test -- --grep oauth before completion","1","1","2","","","","","",""
"3","Add JWT tokens","Implement JWT generation and validation","Unit test: sign/verify round-trip; Edge test: expired token returns 401","generateToken() returns valid JWT; verifyToken() rejects expired/tampered tokens","src/auth/jwt/**","Use jsonwebtoken library; Set default expiry 1h || src/config/auth.ts","Ensure tsc --noEmit passes","1","1","2","","","","","",""
"4","Setup 2FA","Add TOTP-based 2FA with QR code generation","Unit test: TOTP verify with correct code; Test: QR data URL is valid","QR code generates scannable image; TOTP verification succeeds within time window","src/auth/2fa/**","Use speakeasy + qrcode libraries || src/auth/oauth/strategy.ts;src/auth/jwt/token.ts","Run full test suite: npm test","2;3","1;2;3","3","","","","","",""
```
**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description — what to implement |
| `test` | Input | Test cases: what tests to write and how to verify (unit/integration/edge) |
| `acceptance_criteria` | Input | Acceptance criteria: measurable conditions that define "done" |
| `scope` | Input | Target file/directory glob — constrains agent work area, prevents cross-task file conflicts |
| `hints` | Input | Implementation tips + reference files. Format: `tips text \|\| file1;file2`. Before `\|\|` = how to implement; after `\|\|` = existing files to read before starting. Either part is optional |
| `execution_directives` | Input | Execution constraints: commands to run for verification, tool restrictions, environment requirements |
| `deps` | Input | Semicolon-separated dependency task IDs (empty = no deps) |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` → `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `files_modified` | Output | Semicolon-separated file paths |
| `tests_passed` | Output | Whether all defined test cases passed (true/false) |
| `acceptance_met` | Output | Summary of which acceptance criteria were met/unmet |
| `error` | Output | Error message if failed (empty if success) |
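The `tips || files` convention in the hints column parses with a single split, and both halves are optional. A minimal sketch:

```javascript
// Parse a hints cell: 'tips text || file1;file2' → { tips, refs }
function parseHints(hints) {
  const [tips = '', files = ''] = String(hints || '').split('||').map(s => s.trim())
  return { tips, refs: files.split(';').map(f => f.trim()).filter(Boolean) }
}

console.log(parseHints('Use passport.js strategy pattern || src/auth/index.ts;docs/oauth-flow.md'))
console.log(parseHints('Only tips, no reference files'))
```

An executing agent reads every path in `refs` before starting and treats `tips` as guidance, matching the prompt template's instructions.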
### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column:

```csv
id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,prev_context
"2","Implement OAuth","Add OAuth integration","Unit test: mock OAuth callback returns valid token","OAuth login redirects to provider; callback returns JWT","src/auth/oauth/**","Use passport.js strategy pattern || src/auth/index.ts;docs/oauth-flow.md","Run npm test -- --grep oauth","1","1","2","[Task 1] Created auth/ with index.ts and types.ts"
"3","Add JWT tokens","Implement JWT","Unit test: sign/verify round-trip; Edge test: expired token returns 401","generateToken() returns valid JWT; verifyToken() rejects expired/tampered tokens","src/auth/jwt/**","Use jsonwebtoken library; Set default expiry 1h || src/config/auth.ts","Ensure tsc --noEmit passes","1","1","2","[Task 1] Created auth/ with index.ts and types.ts"
```

The `prev_context` column is built from `context_from` by looking up completed tasks' `findings` in the master CSV.
---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 3 |
| `discoveries.ndjson` | Shared exploration board across all agents | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 3 |
---

## Session Structure

```
.workflow/.csv-wave/{session-id}/
├── tasks.csv            # Master state (updated per wave)
├── results.csv          # Final results export
├── discoveries.ndjson   # Shared discovery board (all agents)
├── context.md           # Human-readable report
└── wave-{N}.csv         # Temporary per-wave input (cleaned up)
```
---

## Implementation

### Session Initialization

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 4

// Clean requirement text (remove flags)
const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
// let (not const): reassigned below when resuming an existing session
let sessionId = `cwp-${slug}-${dateStr}`
let sessionFolder = `.workflow/.csv-wave/${sessionId}`

// Continue mode: find existing session
if (continueMode) {
  const existing = Bash(`ls -t .workflow/.csv-wave/ 2>/dev/null | head -1`).trim()
  if (existing) {
    sessionId = existing
    sessionFolder = `.workflow/.csv-wave/${sessionId}`
    // Read existing tasks.csv, find incomplete waves, resume from there
    const existingCsv = Read(`${sessionFolder}/tasks.csv`)
    // → jump to Phase 2 with remaining waves
  }
}

Bash(`mkdir -p ${sessionFolder}`)
```
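The slug and session-id derivation above can be run in isolation; the timestamp is fixed here so the result is deterministic:

```javascript
// Derive the session id the same way as above, with an injected ISO timestamp
function makeSessionId(requirement, isoDate) {
  const slug = requirement.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-') // keep ASCII alphanumerics and CJK
    .substring(0, 40)                          // cap slug length
  const dateStr = isoDate.substring(0, 10).replace(/-/g, '')
  return `cwp-${slug}-${dateStr}`
}

console.log(makeSessionId('Implement user authentication with OAuth!', '2026-02-28T12:00:00.000Z'))
```

The CJK range in the character class means Chinese requirement text survives into the slug instead of collapsing to dashes.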
---

### Phase 1: Requirement → CSV

**Objective**: Decompose requirement into tasks, compute dependency waves, generate tasks.csv.

**Steps**:

1. **Decompose Requirement**

```javascript
// Use ccw cli to decompose requirement into subtasks
Bash({
  command: `ccw cli -p "PURPOSE: Decompose requirement into 3-10 atomic tasks for batch agent execution. Each task must include implementation description, test cases, and acceptance criteria.
TASK:
• Parse requirement into independent subtasks
• Identify dependencies between tasks (which must complete before others)
• Identify context flow (which tasks need previous tasks' findings)
• For each task, define concrete test cases (unit/integration/edge)
• For each task, define measurable acceptance criteria (what defines 'done')
• Each task must be executable by a single agent with file read/write access
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON object with tasks array. Each task: {id: string, title: string, description: string, test: string, acceptance_criteria: string, scope: string, hints: string, execution_directives: string, deps: string[], context_from: string[]}.
- description: what to implement (specific enough for an agent to execute independently)
- test: what tests to write and how to verify (e.g. 'Unit test: X returns Y; Edge test: handles Z')
- acceptance_criteria: measurable conditions that define done (e.g. 'API returns 200; token expires after 1h')
- scope: target file/directory glob (e.g. 'src/auth/**') — tasks in same wave MUST have non-overlapping scopes
- hints: implementation tips + reference files, format '<tips> || <ref_file1>;<ref_file2>' (e.g. 'Use strategy pattern || src/base/Strategy.ts;docs/design.md')
- execution_directives: commands to run for verification or tool constraints (e.g. 'Run npm test --bail; Ensure tsc passes')
- deps: task IDs that must complete first
- context_from: task IDs whose findings are needed
CONSTRAINTS: 3-10 tasks | Each task is atomic | No circular deps | test and acceptance_criteria must be concrete and verifiable | Same-wave tasks must have non-overlapping scopes

REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`,
  run_in_background: true
})
// Wait for CLI completion via hook callback
// Parse JSON from CLI output → decomposedTasks[]
```
2. **Compute Waves** (Topological Sort → Depth Grouping)

```javascript
function computeWaves(tasks) {
  // Build adjacency from deps: edge dep → dependent
  const taskMap = new Map(tasks.map(t => [t.id, t]))
  const inDegree = new Map(tasks.map(t => [t.id, 0]))
  const adjList = new Map(tasks.map(t => [t.id, []]))

  for (const task of tasks) {
    for (const dep of task.deps) {
      if (taskMap.has(dep)) {
        adjList.get(dep).push(task.id)
        inDegree.set(task.id, inDegree.get(task.id) + 1)
      }
    }
  }

  // BFS-based topological sort with depth tracking
  const queue = [] // [taskId, depth]
  const waveAssignment = new Map()

  for (const [id, deg] of inDegree) {
    if (deg === 0) {
      queue.push([id, 1])
      waveAssignment.set(id, 1)
    }
  }

  let maxWave = 1
  let idx = 0
  while (idx < queue.length) {
    const [current, depth] = queue[idx++]
    for (const next of adjList.get(current)) {
      const newDeg = inDegree.get(next) - 1
      inDegree.set(next, newDeg)
      const nextDepth = Math.max(waveAssignment.get(next) || 0, depth + 1)
      waveAssignment.set(next, nextDepth)
      if (newDeg === 0) {
        queue.push([next, nextDepth])
        maxWave = Math.max(maxWave, nextDepth)
      }
    }
  }

  // Detect cycles: any task without wave assignment
  for (const task of tasks) {
    if (!waveAssignment.has(task.id)) {
      throw new Error(`Circular dependency detected involving task ${task.id}`)
    }
  }

  return { waveAssignment, maxWave }
}

const { waveAssignment, maxWave } = computeWaves(decomposedTasks)
```
3. **Generate tasks.csv**

```javascript
const header = 'id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,status,findings,files_modified,tests_passed,acceptance_met,error'
const rows = decomposedTasks.map(task => {
  const wave = waveAssignment.get(task.id)
  return [
    task.id,
    task.title,
    task.description,
    task.test,
    task.acceptance_criteria,
    task.scope,
    task.hints,
    task.execution_directives,
    task.deps.join(';'),
    task.context_from.join(';'),
    wave,
    'pending', // status
    '',        // findings
    '',        // files_modified
    '',        // tests_passed
    '',        // acceptance_met
    ''         // error
    // Quoting + escaping happens exactly once in the map below;
    // escaping fields beforehand as well would double-escape embedded quotes
  ].map(cell => `"${String(cell).replace(/"/g, '""')}"`).join(',')
})

Write(`${sessionFolder}/tasks.csv`, [header, ...rows].join('\n'))
```
4. **User Validation** (skip if AUTO_YES)

```javascript
if (!AUTO_YES) {
  // Display task breakdown with wave assignment
  console.log(`\n## Task Breakdown (${decomposedTasks.length} tasks, ${maxWave} waves)\n`)
  for (let w = 1; w <= maxWave; w++) {
    const waveTasks = decomposedTasks.filter(t => waveAssignment.get(t.id) === w)
    console.log(`### Wave ${w} (${waveTasks.length} tasks, concurrent)`)
    waveTasks.forEach(t => console.log(`  - [${t.id}] ${t.title}`))
  }

  const answer = AskUserQuestion({
    questions: [{
      question: "Approve task breakdown?",
      header: "Validation",
      multiSelect: false,
      options: [
        { label: "Approve", description: "Proceed with wave execution" },
        { label: "Modify", description: `Edit ${sessionFolder}/tasks.csv manually, then --continue` },
        { label: "Cancel", description: "Abort" }
      ]
    }]
  }) // BLOCKS

  if (answer.Validation === "Modify") {
    console.log(`Edit: ${sessionFolder}/tasks.csv\nResume: $csv-wave-pipeline --continue`)
    return
  } else if (answer.Validation === "Cancel") {
    return
  }
}
```

**Success Criteria**:
- tasks.csv created with valid schema and wave assignments
- No circular dependencies
- User approved (or AUTO_YES)
---

### Phase 2: Wave Execution Engine

**Objective**: Execute tasks wave-by-wave via `spawn_agents_on_csv`. Each wave sees previous waves' results.

**Steps**:

1. **Wave Loop**

```javascript
const failedIds = new Set()
const skippedIds = new Set()

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\n## Wave ${wave}/${maxWave}\n`)

  // 1. Read current master CSV
  const masterCsv = parseCsv(Read(`${sessionFolder}/tasks.csv`))

  // 2. Filter tasks for this wave
  const waveTasks = masterCsv.filter(row => parseInt(row.wave) === wave)

  // 3. Skip tasks whose deps failed
  const executableTasks = []
  for (const task of waveTasks) {
    const deps = task.deps.split(';').filter(Boolean)
    if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
      skippedIds.add(task.id)
      // Update master CSV: mark as skipped
      updateMasterCsvRow(sessionFolder, task.id, {
        status: 'skipped',
        error: 'Dependency failed or skipped'
      })
      console.log(`  [${task.id}] ${task.title} → SKIPPED (dependency failed)`)
      continue
    }
    executableTasks.push(task)
  }

  if (executableTasks.length === 0) {
    console.log(`  No executable tasks in wave ${wave}`)
    continue
  }

  // 4. Build prev_context for each task
  for (const task of executableTasks) {
    const contextIds = task.context_from.split(';').filter(Boolean)
    const prevFindings = contextIds
      .map(id => {
        const prevRow = masterCsv.find(r => r.id === id)
        if (prevRow && prevRow.status === 'completed' && prevRow.findings) {
          return `[Task ${id}: ${prevRow.title}] ${prevRow.findings}`
        }
        return null
      })
      .filter(Boolean)
      .join('\n')
    task.prev_context = prevFindings || 'No previous context available'
  }

  // 5. Write wave CSV
  const waveHeader = 'id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,prev_context'
  const waveRows = executableTasks.map(t =>
    [t.id, t.title, t.description, t.test, t.acceptance_criteria, t.scope, t.hints, t.execution_directives, t.deps, t.context_from, t.wave, t.prev_context]
      .map(cell => `"${String(cell).replace(/"/g, '""')}"`)
      .join(',')
  )
  Write(`${sessionFolder}/wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))

  // 6. Execute wave
  console.log(`  Executing ${executableTasks.length} tasks (concurrency: ${maxConcurrency})...`)

  const waveResult = spawn_agents_on_csv({
    csv_path: `${sessionFolder}/wave-${wave}.csv`,
    id_column: "id",
    instruction: buildInstructionTemplate(sessionFolder, wave),
    max_concurrency: maxConcurrency,
    max_runtime_seconds: 600,
    output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
    output_schema: {
      type: "object",
      properties: {
        id: { type: "string" },
        status: { type: "string", enum: ["completed", "failed"] },
        findings: { type: "string" },
        files_modified: { type: "array", items: { type: "string" } },
        tests_passed: { type: "boolean" },
        acceptance_met: { type: "string" },
|
||||
error: { type: "string" }
|
||||
},
|
||||
required: ["id", "status", "findings", "tests_passed"]
|
||||
}
|
||||
})
|
||||
// ↑ Blocks until all agents in this wave complete
|
||||
|
||||
// 7. Merge results into master CSV
|
||||
const waveResults = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
|
||||
for (const result of waveResults) {
|
||||
updateMasterCsvRow(sessionFolder, result.id, {
|
||||
status: result.status,
|
||||
findings: result.findings || '',
|
||||
files_modified: (result.files_modified || []).join(';'),
|
||||
tests_passed: String(result.tests_passed ?? ''),
|
||||
acceptance_met: result.acceptance_met || '',
|
||||
error: result.error || ''
|
||||
})
|
||||
|
||||
if (result.status === 'failed') {
|
||||
failedIds.add(result.id)
|
||||
console.log(` [${result.id}] ${result.title} → FAILED: ${result.error}`)
|
||||
} else {
|
||||
console.log(` [${result.id}] ${result.title} → COMPLETED`)
|
||||
}
|
||||
}
|
||||
|
||||
// 8. Cleanup temporary wave CSV
|
||||
Bash(`rm -f "${sessionFolder}/wave-${wave}.csv"`)
|
||||
|
||||
console.log(` Wave ${wave} done: ${waveResults.filter(r => r.status === 'completed').length} completed, ${waveResults.filter(r => r.status === 'failed').length} failed`)
|
||||
}
|
||||
```
|
||||
|
||||
2. **Instruction Template Builder**
|
||||
|
||||
```javascript
|
||||
function buildInstructionTemplate(sessionFolder, wave) {
|
||||
return `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS
|
||||
1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not)
|
||||
2. Read project context: .workflow/project-tech.json (if exists)
|
||||
|
||||
---
|
||||
|
||||
## Your Task
|
||||
|
||||
**Task ID**: {id}
|
||||
**Title**: {title}
|
||||
**Description**: {description}
|
||||
**Scope**: {scope}
|
||||
|
||||
### Implementation Hints & Reference Files
|
||||
{hints}
|
||||
|
||||
> Format: \`<tips> || <ref_file1>;<ref_file2>\`. Read ALL reference files (after ||) before starting implementation. Apply tips (before ||) as implementation guidance.
|
||||
|
||||
### Execution Directives
|
||||
{execution_directives}
|
||||
|
||||
> Commands to run for verification, tool restrictions, or environment requirements. Follow these constraints during and after implementation.
|
||||
|
||||
### Test Cases
|
||||
{test}
|
||||
|
||||
### Acceptance Criteria
|
||||
{acceptance_criteria}
|
||||
|
||||
### Previous Tasks' Findings (Context)
|
||||
{prev_context}
|
||||
|
||||
---
|
||||
|
||||
## Execution Protocol
|
||||
|
||||
1. **Read references**: Parse {hints} — read all files listed after \`||\` to understand existing patterns
|
||||
2. **Read discoveries**: Load ${sessionFolder}/discoveries.ndjson for shared exploration findings
|
||||
3. **Use context**: Apply previous tasks' findings from prev_context above
|
||||
4. **Stay in scope**: ONLY create/modify files within {scope} — do NOT touch files outside this boundary
|
||||
5. **Apply hints**: Follow implementation tips from {hints} (before \`||\`)
|
||||
6. **Execute**: Implement the task as described
|
||||
7. **Write tests**: Implement the test cases defined above
|
||||
8. **Run directives**: Execute commands from {execution_directives} to verify your work
|
||||
9. **Verify acceptance**: Ensure all acceptance criteria are met before reporting completion
|
||||
10. **Share discoveries**: Append exploration findings to shared board:
|
||||
\`\`\`bash
|
||||
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson
|
||||
\`\`\`
|
||||
11. **Report result**: Return JSON via report_agent_job_result
|
||||
|
||||
### Discovery Types to Share
|
||||
- \`code_pattern\`: {name, file, description} — reusable patterns found
|
||||
- \`integration_point\`: {file, description, exports[]} — module connection points
|
||||
- \`convention\`: {naming, imports, formatting} — code style conventions
|
||||
- \`blocker\`: {issue, severity, impact} — blocking issues encountered
|
||||
|
||||
---
|
||||
|
||||
## Output (report_agent_job_result)
|
||||
|
||||
Return JSON:
|
||||
{
|
||||
"id": "{id}",
|
||||
"status": "completed" | "failed",
|
||||
"findings": "Key discoveries and implementation notes (max 500 chars)",
|
||||
"files_modified": ["path1", "path2"],
|
||||
"tests_passed": true | false,
|
||||
"acceptance_met": "Summary of which acceptance criteria were met/unmet",
|
||||
"error": ""
|
||||
}
|
||||
|
||||
**IMPORTANT**: Set status to "completed" ONLY if:
|
||||
- All test cases pass
|
||||
- All acceptance criteria are met
|
||||
Otherwise set status to "failed" with details in error field.
|
||||
`
|
||||
}
|
||||
```
|
||||
|
||||
3. **Master CSV Update Helper**
|
||||
|
||||
```javascript
|
||||
function updateMasterCsvRow(sessionFolder, taskId, updates) {
|
||||
const csvPath = `${sessionFolder}/tasks.csv`
|
||||
const content = Read(csvPath)
|
||||
const lines = content.split('\n')
|
||||
const header = lines[0].split(',')
|
||||
|
||||
for (let i = 1; i < lines.length; i++) {
|
||||
const cells = parseCsvLine(lines[i])
|
||||
if (cells[0] === taskId || cells[0] === `"${taskId}"`) {
|
||||
// Update specified columns
|
||||
for (const [col, val] of Object.entries(updates)) {
|
||||
const colIdx = header.indexOf(col)
|
||||
if (colIdx >= 0) {
|
||||
cells[colIdx] = `"${String(val).replace(/"/g, '""')}"`
|
||||
}
|
||||
}
|
||||
lines[i] = cells.join(',')
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
Write(csvPath, lines.join('\n'))
|
||||
}
|
||||
```
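The helper above relies on a `parseCsvLine` function that is never defined in this skill. A minimal sketch of what such a parser could look like, assuming RFC 4180-style quoting with `""` escapes and no embedded newlines:

```javascript
// Hypothetical parseCsvLine: splits one CSV line into cells,
// honoring double-quoted fields and "" escapes. Not in the original skill.
function parseCsvLine(line) {
  const cells = []
  let cur = ''
  let inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"') {
        if (line[i + 1] === '"') { cur += '"'; i++ }  // escaped quote
        else inQuotes = false                          // closing quote
      } else {
        cur += ch
      }
    } else if (ch === '"') {
      inQuotes = true
    } else if (ch === ',') {
      cells.push(cur); cur = ''
    } else {
      cur += ch
    }
  }
  cells.push(cur)
  return cells
}
```

Note this sketch strips the surrounding quotes, which is why the lookup above also compares against `` `"${taskId}"` `` as a fallback for parsers that keep them.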

**Success Criteria**:
- All waves executed in order
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves

---

### Phase 3: Results Aggregation

**Objective**: Generate final results and human-readable report.

**Steps**:

1. **Export results.csv**

```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
// results.csv = master CSV (already has all results populated)
Write(`${sessionFolder}/results.csv`, masterCsv)
```

2. **Generate context.md**

```javascript
const tasks = parseCsv(masterCsv)
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
const skipped = tasks.filter(t => t.status === 'skipped')

const contextContent = `# CSV Batch Execution Report

**Session**: ${sessionId}
**Requirement**: ${requirement}
**Completed**: ${getUtc8ISOString()}
**Waves**: ${maxWave} | **Concurrency**: ${maxConcurrency}

---

## Summary

| Metric | Count |
|--------|-------|
| Total Tasks | ${tasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| Waves | ${maxWave} |

---

## Wave Execution

${Array.from({ length: maxWave }, (_, i) => i + 1).map(w => {
  const waveTasks = tasks.filter(t => parseInt(t.wave) === w)
  return `### Wave ${w}
${waveTasks.map(t => `- **[${t.id}] ${t.title}**: ${t.status}${t.tests_passed ? ' ✓tests' : ''}${t.error ? ' — ' + t.error : ''}
${t.findings ? 'Findings: ' + t.findings : ''}`).join('\n')}`
}).join('\n\n')}

---

## Task Details

${tasks.map(t => `### ${t.id}: ${t.title}

| Field | Value |
|-------|-------|
| Status | ${t.status} |
| Wave | ${t.wave} |
| Scope | ${t.scope || 'none'} |
| Dependencies | ${t.deps || 'none'} |
| Context From | ${t.context_from || 'none'} |
| Tests Passed | ${t.tests_passed || 'N/A'} |
| Acceptance Met | ${t.acceptance_met || 'N/A'} |
| Error | ${t.error || 'none'} |

**Description**: ${t.description}

**Test Cases**: ${t.test || 'N/A'}

**Acceptance Criteria**: ${t.acceptance_criteria || 'N/A'}

**Hints**: ${t.hints || 'N/A'}

**Execution Directives**: ${t.execution_directives || 'N/A'}

**Findings**: ${t.findings || 'N/A'}

**Files Modified**: ${t.files_modified || 'none'}
`).join('\n---\n')}

---

## All Modified Files

${[...new Set(tasks.flatMap(t => (t.files_modified || '').split(';')).filter(Boolean))].map(f => '- ' + f).join('\n') || 'None'}
`

Write(`${sessionFolder}/context.md`, contextContent)
```

3. **Display Summary**

```javascript
console.log(`
## Execution Complete

- **Session**: ${sessionId}
- **Waves**: ${maxWave}
- **Completed**: ${completed.length}/${tasks.length}
- **Failed**: ${failed.length}
- **Skipped**: ${skipped.length}

**Results**: ${sessionFolder}/results.csv
**Report**: ${sessionFolder}/context.md
**Discoveries**: ${sessionFolder}/discoveries.ndjson
`)
```

4. **Offer Next Steps** (skip if AUTO_YES)

```javascript
if (!AUTO_YES && failed.length > 0) {
  const answer = AskUserQuestion({
    questions: [{
      question: `${failed.length} tasks failed. Next action?`,
      header: "Next Step",
      multiSelect: false,
      options: [
        { label: "Retry Failed", description: `Re-execute ${failed.length} failed tasks with updated context` },
        { label: "View Report", description: "Display context.md" },
        { label: "Done", description: "Complete session" }
      ]
    }]
  }) // BLOCKS

  if (answer['Next Step'] === "Retry Failed") {
    // Reset failed tasks to pending, re-run Phase 2 for their waves
    for (const task of failed) {
      updateMasterCsvRow(sessionFolder, task.id, { status: 'pending', error: '' })
    }
    // Also reset skipped tasks whose deps are now retrying
    for (const task of skipped) {
      updateMasterCsvRow(sessionFolder, task.id, { status: 'pending', error: '' })
    }
    // Re-execute Phase 2 (loop will skip already-completed tasks)
    // → goto Phase 2
  } else if (answer['Next Step'] === "View Report") {
    console.log(Read(`${sessionFolder}/context.md`))
  }
}
```

**Success Criteria**:
- results.csv exported
- context.md generated
- Summary displayed to user

---

## Shared Discovery Board Protocol

All agents across all waves share `discoveries.ndjson`. This eliminates redundant codebase exploration.

**Lifecycle**:
- Created by the first agent to write a discovery
- Carries over across waves — never cleared
- Agents append via `echo '...' >> discoveries.ndjson`

**Format**: NDJSON, each line is a self-contained JSON object:

```jsonl
{"ts":"2026-02-28T10:00:00+08:00","worker":"1","type":"code_pattern","data":{"name":"repository-pattern","file":"src/repos/Base.ts","description":"Abstract CRUD repository"}}
{"ts":"2026-02-28T10:01:00+08:00","worker":"2","type":"integration_point","data":{"file":"src/auth/index.ts","description":"Auth module entry","exports":["authenticate","authorize"]}}
```

**Discovery Types**:

| type | Dedup Key | Description |
|------|-----------|-------------|
| `code_pattern` | `data.name` | Reusable code pattern found |
| `integration_point` | `data.file` | Module connection point |
| `convention` | singleton | Code style conventions |
| `blocker` | `data.issue` | Blocking issue encountered |
| `tech_stack` | singleton | Project technology stack |
| `test_command` | singleton | Test commands discovered |

**Protocol Rules**:
1. Read board before own exploration → skip covered areas
2. Write discoveries immediately via `echo >>` — don't batch
3. Deduplicate — check existing entries; skip if same type + dedup key exists
4. Append-only — never modify or delete existing lines

---

## Wave Computation Details

### Algorithm

Kahn's BFS topological sort with depth tracking:

```
Input: tasks[] with deps[]
Output: waveAssignment (taskId → wave number)

1. Build in-degree map and adjacency list from deps
2. Enqueue all tasks with in-degree 0 at wave 1
3. BFS: for each dequeued task at wave W:
   - For each dependent task D:
     - Decrement D's in-degree
     - D.wave = max(D.wave, W + 1)
     - If D's in-degree reaches 0, enqueue D
4. Any task without wave assignment → circular dependency error
```
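The pseudocode above translates directly into a small function. A sketch (function name and task shape are illustrative, not part of the skill):

```javascript
// Kahn's BFS with depth tracking: assigns each task the earliest wave
// in which all of its dependencies are already complete.
// tasks: [{ id, deps: [id, ...] }] → Map of taskId → wave number.
function computeWaves(tasks) {
  const inDegree = new Map(tasks.map(t => [t.id, t.deps.length]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const d of t.deps) dependents.get(d).push(t.id)
  }

  const wave = new Map()
  const queue = tasks.filter(t => t.deps.length === 0).map(t => t.id)
  queue.forEach(id => wave.set(id, 1))  // roots start at wave 1

  while (queue.length > 0) {
    const id = queue.shift()
    for (const dep of dependents.get(id)) {
      // A task's wave is one past its latest-finishing dependency
      wave.set(dep, Math.max(wave.get(dep) || 1, wave.get(id) + 1))
      inDegree.set(dep, inDegree.get(dep) - 1)
      if (inDegree.get(dep) === 0) queue.push(dep)
    }
  }

  // Any unassigned task was never dequeued → it sits on a cycle
  if (wave.size !== tasks.length) throw new Error('Circular dependency detected')
  return wave
}
```

Running this on the diamond example below (A, B independent; C depends on A; D on A and B; E on C and D) yields waves 1, 1, 2, 2, 3.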

### Wave Properties

- **Wave 1**: No dependencies — all tasks in wave 1 are fully independent
- **Wave N**: All dependencies are in waves 1..(N-1) — guaranteed completed before wave N starts
- **Within a wave**: Tasks are independent of each other → safe for concurrent execution

### Example

```
Task A (no deps)    → Wave 1
Task B (no deps)    → Wave 1
Task C (deps: A)    → Wave 2
Task D (deps: A, B) → Wave 2
Task E (deps: C, D) → Wave 3

Execution:
  Wave 1: [A, B]  ← concurrent
  Wave 2: [C, D]  ← concurrent, sees A+B findings
  Wave 3: [E]     ← sees A+B+C+D findings
```

---

## Context Propagation Flow

```
Wave 1 agents:
├─ Execute tasks (no prev_context)
├─ Write findings to report_agent_job_result
└─ Append discoveries to discoveries.ndjson

    ↓ merge results into master CSV

Wave 2 agents:
├─ Read discoveries.ndjson (exploration sharing)
├─ Read prev_context column (wave 1 findings from context_from)
├─ Execute tasks with full upstream context
├─ Write findings to report_agent_job_result
└─ Append new discoveries to discoveries.ndjson

    ↓ merge results into master CSV

Wave 3 agents:
├─ Read discoveries.ndjson (accumulated from waves 1+2)
├─ Read prev_context column (wave 1+2 findings from context_from)
├─ Execute tasks
└─ ...
```

**Two context channels**:
1. **CSV findings** (structured): `context_from` column → `prev_context` injection — task-specific directed context
2. **NDJSON discoveries** (broadcast): `discoveries.ndjson` — general exploration findings available to all

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| Agent timeout | Mark as failed in results, continue with wave |
| Agent failed | Mark as failed, skip dependent tasks in later waves |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Continue mode: no session found | List available sessions, prompt user to select |

---

## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state — always read before a wave, always write after
4. **Context Propagation**: prev_context is built from the master CSV, not from memory
5. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson
6. **Skip on Failure**: If a dependency failed, skip the dependent task (don't attempt it)
7. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
8. **DO NOT STOP**: Continue execution until all waves complete or all remaining tasks are skipped

---

## Best Practices

1. **Task Granularity**: 3-10 tasks is optimal; too many = overhead, too few = no parallelism benefit
2. **Minimize Cross-Wave Deps**: More tasks in wave 1 = more parallelism
3. **Specific Descriptions**: Each agent sees only its CSV row + prev_context — make the description self-contained
4. **Context From ≠ Deps**: `deps` is an execution-order constraint; `context_from` is information flow. A task can have `context_from` without `deps` (it reads previous findings but doesn't require them to finish first)
5. **Concurrency Tuning**: `-c 1` for serial execution (maximum context sharing); `-c 8` for I/O-bound tasks

---

## Usage Recommendations

| Scenario | Recommended Approach |
|----------|---------------------|
| Independent parallel tasks (no deps) | `$csv-wave-pipeline -c 8` — single wave, max parallelism |
| Linear pipeline (A→B→C) | `$csv-wave-pipeline -c 1` — 3 waves, serial, full context |
| Diamond dependency (A→B,C→D) | `$csv-wave-pipeline` — 3 waves, B+C concurrent in wave 2 |
| Complex requirement, unclear tasks | Use `$roadmap-with-file` first for planning, then feed the issues here |
| Single complex task | Use `$lite-execute` instead |

@@ -1,7 +1,6 @@
---
name: review-cycle
description: Unified multi-dimensional code review with automated fix orchestration. Supports session-based (git changes) and module-based (path patterns) review modes with 7-dimension parallel analysis, iterative deep-dive, and automated fix pipeline. Triggers on "workflow:review-cycle", "workflow:review-session-cycle", "workflow:review-module-cycle", "workflow:review-cycle-fix".
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---

# Review Cycle

@@ -2,7 +2,6 @@
name: roadmap-with-file
description: Strategic requirement roadmap with iterative decomposition and issue creation. Outputs roadmap.md (human-readable, single source) + issues.jsonl (machine-executable). Handoff to team-planex.
argument-hint: "[-y|--yes] [-c|--continue] [-m progressive|direct|auto] \"requirement description\""
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---

## Auto Mode

@@ -1,8 +1,6 @@
---
name: team-lifecycle
description: Full lifecycle orchestrator - spec/impl/test. Spawn-wait-close pipeline with inline discuss subagent, shared explore cache, fast-advance, and consensus severity routing.
agents: analyst, writer, planner, executor, tester, reviewer, architect, fe-developer, fe-qa
phases: 5
---

# Team Lifecycle Orchestrator

261 .codex/skills/team-planex/SKILL.md — Normal file
@@ -0,0 +1,261 @@
---
name: team-planex
description: |
  Inline planning + delegated execution pipeline. Main flow does planning directly,
  spawns Codex executor per issue immediately. All execution via Codex CLI only.
---

# Team PlanEx (Codex)

Inline planning in the main flow + delegated execution. SKILL.md performs planning itself (no planner agent is spawned); as soon as each issue's solution is complete, it spawns an executor agent to implement it in parallel, without waiting for all planning to finish.

## Architecture

```
┌─────────────────────────────────────────────┐
│ SKILL.md (main flow = planning + cadence)   │
│                                             │
│ Phase 1: parse input + initialize session   │
│ Phase 2: per-issue planning loop (inline)   │
│   ├── issue-plan → write solution artifact  │
│   ├── spawn executor agent ─────────────────┼──> [executor] implements
│   └── continue (do not wait for executor)   │
│ Phase 3: wait for all executors             │
│ Phase 4: aggregate report                   │
└─────────────────────────────────────────────┘
```

## Agent Registry

| Agent | Role File | Responsibility |
|-------|-----------|----------------|
| `executor` | `~/.codex/agents/planex-executor.md` | Codex CLI implementation per issue |

> The executor agent must be deployed to `~/.codex/agents/` before use.
> Source: `.codex/skills/team-planex/agents/`

---

## Input Parsing

Supported input types (parsed from `$ARGUMENTS`):

| Type | Detection | Handler |
|------|-----------|---------|
| Issue IDs | `ISS-\d{8}-\d{6}` regex | Use directly for planning |
| Text | `--text '...'` flag | Create issue(s) first via CLI |
| Plan file | `--plan <path>` flag | Read file, parse phases, batch create issues |

### Issue Creation (when needed)

For `--text` input:
```bash
ccw issue create --data '{"title":"<title>","description":"<description>"}' --json
```

For `--plan` input:
- Match `## Phase N: Title`, `## Step N: Title`, or `### N. Title`
- Each match → one issue (title + description from section content)
- Fallback: no structure found → entire file as single issue
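A minimal sketch of this plan-file parser, assuming the three heading patterns listed above (the function name and return shape are illustrative, not defined by the skill):

```javascript
// Hypothetical --plan parser: splits a markdown plan into one issue per
// matched section; falls back to a single issue for unstructured files.
function parsePlanSections(content, fallbackTitle) {
  // Capture group 1: "## Phase N:" / "## Step N:"; group 2: "### N."
  const re = /^(?:## (?:Phase|Step) \d+: (.+)|### \d+\. (.+))$/gm
  const matches = [...content.matchAll(re)]

  if (matches.length === 0) {
    // No recognizable structure → entire file as single issue
    return [{ title: fallbackTitle, description: content.trim() }]
  }

  return matches.map((m, i) => {
    const start = m.index + m[0].length
    const end = i + 1 < matches.length ? matches[i + 1].index : content.length
    return {
      title: (m[1] || m[2]).trim(),
      description: content.slice(start, end).trim()  // body up to next heading
    }
  })
}
```

Each returned `{ title, description }` pair would then feed the `ccw issue create --data` call shown above.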

---

## Session Setup

Before processing issues, initialize the session directory:

```javascript
const slug = toSlug(inputDescription).slice(0, 20)
const date = new Date().toISOString().slice(0, 10).replace(/-/g, '')
const sessionDir = `.workflow/.team/PEX-${slug}-${date}`
const artifactsDir = `${sessionDir}/artifacts/solutions`

Bash(`mkdir -p "${artifactsDir}"`)

Write({
  file_path: `${sessionDir}/team-session.json`,
  content: JSON.stringify({
    session_id: `PEX-${slug}-${date}`,
    input_type: inputType,
    input: rawInput,
    status: "running",
    started_at: new Date().toISOString(),
    executors: []
  }, null, 2)
})
```
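The setup above assumes a `toSlug` helper that is not defined anywhere in the skill. One plausible sketch, producing lowercase hyphen-separated slugs safe for directory names:

```javascript
// Hypothetical toSlug helper assumed by the session setup above:
// lowercases, collapses runs of non-alphanumerics to single hyphens,
// and trims leading/trailing hyphens.
function toSlug(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '')
}
```

Combined with `.slice(0, 20)` above, this keeps session IDs like `PEX-add-user-auth-20260228` short and filesystem-safe.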
|
||||
|
||||
---
|
||||
|
||||
## Phase 1: Parse Input + Initialize
|
||||
|
||||
1. Parse `$ARGUMENTS` to determine input type
|
||||
2. Create issues if needed (--text / --plan)
|
||||
3. Collect all issue IDs
|
||||
4. Initialize session directory
|
||||
|
||||
---
|
||||
|
||||
## Phase 2: Inline Planning Loop
|
||||
|
||||
For each issue, execute planning inline (no planner agent):
|
||||
|
||||
### 2a. Generate Solution via issue-plan-agent
|
||||
|
||||
```javascript
|
||||
const planAgent = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
|
||||
|
||||
---
|
||||
|
||||
issue_ids: ["${issueId}"]
|
||||
project_root: "${projectRoot}"
|
||||
|
||||
## Requirements
|
||||
- Generate solution for this issue
|
||||
- Auto-bind single solution
|
||||
- Output solution JSON when complete
|
||||
`
|
||||
})
|
||||
|
||||
const result = wait({ ids: [planAgent], timeout_ms: 600000 })
|
||||
close_agent({ id: planAgent })
|
||||
```
|
||||
|
||||
### 2b. Write Solution Artifact
|
||||
|
||||
```javascript
|
||||
const solution = parseSolution(result)
|
||||
|
||||
Write({
|
||||
file_path: `${artifactsDir}/${issueId}.json`,
|
||||
content: JSON.stringify({
|
||||
session_id: sessionId,
|
||||
issue_id: issueId,
|
||||
solution: solution,
|
||||
planned_at: new Date().toISOString()
|
||||
}, null, 2)
|
||||
})
|
||||
```
|
||||
|
||||
### 2c. Spawn Executor Immediately
|
||||
|
||||
```javascript
|
||||
const executorId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/planex-executor.md (MUST read first)
|
||||
|
||||
---
|
||||
|
||||
## Issue
|
||||
Issue ID: ${issueId}
|
||||
Solution file: ${artifactsDir}/${issueId}.json
|
||||
Session: ${sessionDir}
|
||||
|
||||
## Execution
|
||||
Load solution from file → implement via Codex CLI → verify tests → commit → report.
|
||||
`
|
||||
})
|
||||
|
||||
executorIds.push(executorId)
|
||||
executorIssueMap[executorId] = issueId
|
||||
```
|
||||
|
||||
### 2d. Continue to Next Issue
|
||||
|
||||
Do NOT wait for executor. Proceed to next issue immediately.
|
||||
|
||||
---
|
||||
|
||||
## Phase 3: Wait All Executors
|
||||
|
||||
```javascript
|
||||
if (executorIds.length > 0) {
|
||||
const execResults = wait({ ids: executorIds, timeout_ms: 1800000 })
|
||||
|
||||
if (execResults.timed_out) {
|
||||
const pending = executorIds.filter(id => !execResults.status[id]?.completed)
|
||||
if (pending.length > 0) {
|
||||
const pendingIssues = pending.map(id => executorIssueMap[id])
|
||||
Write({
|
||||
file_path: `${sessionDir}/pending-executors.json`,
|
||||
content: JSON.stringify({ pending_issues: pendingIssues, executor_ids: pending }, null, 2)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// Collect summaries
|
||||
const summaries = executorIds.map(id => ({
|
||||
issue_id: executorIssueMap[id],
|
||||
status: execResults.status[id]?.completed ? 'completed' : 'timeout',
|
||||
output: execResults.status[id]?.completed ?? null
|
||||
}))
|
||||
|
||||
// Cleanup
|
||||
executorIds.forEach(id => {
|
||||
try { close_agent({ id }) } catch { /* already closed */ }
|
||||
})
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Phase 4: Report
|
||||
|
||||
```javascript
|
||||
const completed = summaries.filter(s => s.status === 'completed').length
|
||||
const failed = summaries.filter(s => s.status === 'timeout').length
|
||||
|
||||
// Update session
|
||||
Write({
|
||||
file_path: `${sessionDir}/team-session.json`,
|
||||
content: JSON.stringify({
|
||||
...session,
|
||||
status: "completed",
|
||||
completed_at: new Date().toISOString(),
|
||||
results: { total: executorIds.length, completed, failed }
|
||||
}, null, 2)
|
||||
})
|
||||
|
||||
return `
|
||||
## Pipeline Complete
|
||||
|
||||
**Total issues**: ${executorIds.length}
|
||||
**Completed**: ${completed}
|
||||
**Timed out**: ${failed}
|
||||
|
||||
${summaries.map(s => `- ${s.issue_id}: ${s.status}`).join('\n')}
|
||||
|
||||
Session: ${sessionDir}
|
||||
`
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## User Commands
|
||||
|
||||
During execution, the user may issue:
|
||||
|
||||
| Command | Action |
|
||||
|---------|--------|
|
||||
| `check` / `status` | Show executor progress summary |
|
||||
| `resume` / `continue` | Urge stalled executor |
|
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Scenario | Resolution |
|
||||
|----------|------------|
|
||||
| issue-plan-agent timeout (>10 min) | Skip issue, log error, continue to next |
|
||||
| issue-plan-agent failure | Retry once, then skip with error log |
|
||||
| Solution file not written | Executor reports error, logs to `${sessionDir}/errors.json` |
|
||||
| Executor (Codex CLI) failure | Executor handles resume; logs CLI resume command |
|
||||
| No issues to process | Report error: no issues found |
|
||||
@@ -23,13 +23,13 @@ completion report.
|
||||
|
||||
| Action | Allowed |
|
||||
|--------|---------|
|
||||
| Read solution artifact from disk | ✅ |
|
||||
| Implement via Codex CLI | ✅ |
|
||||
| Run tests for verification | ✅ |
|
||||
| git commit completed work | ✅ |
|
||||
| Create or modify issues | ❌ |
|
||||
| Spawn subagents | ❌ |
|
||||
| Interact with user (AskUserQuestion) | ❌ |
|
||||
| Read solution artifact from disk | Yes |
|
||||
| Implement via Codex CLI | Yes |
|
||||
| Run tests for verification | Yes |
|
||||
| git commit completed work | Yes |
|
||||
| Create or modify issues | No |
|
||||
| Spawn subagents | No |
|
||||
| Interact with user (AskUserQuestion) | No |
|
||||
|
||||
---
|
||||
|
||||
@@ -38,7 +38,6 @@ completion report.

### Step 1: Load Context

After reading role definition:
- Run: `ccw spec load --category execution`
- Extract issue ID, solution file path, session dir from task message

### Step 2: Load Solution

@@ -52,7 +51,7 @@ const solution = solutionData.solution

If file not found or invalid:
- Log error: `[executor] ERROR: Solution file not found: ${solutionFile}`
- Output: `EXEC_FAILED:{issueId}:solution_file_missing`
- Output: `EXEC_FAILED:${issueId}:solution_file_missing`
- Stop execution

Verify solution has required fields:

@@ -81,10 +80,9 @@ ${JSON.stringify(solution.bound, null, 2)}

## Implementation Requirements
1. Follow the solution plan tasks in order
2. Write clean, minimal code following existing patterns
3. Read .workflow/specs/*.md for project conventions
4. Run tests after each significant change
5. Ensure all existing tests still pass
6. Do NOT over-engineer - implement exactly what the solution specifies
3. Run tests after each significant change
4. Ensure all existing tests still pass
5. Do NOT over-engineer - implement exactly what the solution specifies

## Quality Checklist
- [ ] All solution tasks implemented

@@ -92,9 +90,6 @@ ${JSON.stringify(solution.bound, null, 2)}

- [ ] Existing tests pass
- [ ] New tests added where specified in solution
- [ ] No security vulnerabilities introduced

## Project Guidelines
@.workflow/specs/*.md
PROMPT_EOF
)" --tool codex --mode write --id planex-${issueId}
```

@@ -120,7 +115,6 @@ if (testCmd) {

const testResult = Bash(`${testCmd} 2>&1 || echo TEST_FAILED`)

if (testResult.includes('TEST_FAILED') || testResult.includes('FAIL')) {
  // Report failure with resume command
  const resumeCmd = `ccw cli -p "Fix failing tests" --resume planex-${issueId} --tool codex --mode write`

  Write({

@@ -179,7 +173,6 @@ EXEC_DONE:${issueId}

If Codex CLI execution fails or times out:

```bash
# Resume with same session ID
ccw cli -p "Continue implementation from where stopped" \
  --resume planex-${issueId} \
  --tool codex --mode write \

@@ -194,19 +187,19 @@ Resume command is always logged to `${sessionDir}/errors.json` on any failure.

| Scenario | Resolution |
|----------|------------|
| Solution file missing | Output `EXEC_FAILED:{id}:solution_file_missing`, stop |
| Solution JSON malformed | Output `EXEC_FAILED:{id}:solution_invalid`, stop |
| Solution file missing | Output `EXEC_FAILED:<id>:solution_file_missing`, stop |
| Solution JSON malformed | Output `EXEC_FAILED:<id>:solution_invalid`, stop |
| Issue status update fails | Log warning, continue |
| Codex CLI failure | Log resume command to errors.json, output `EXEC_FAILED:{id}:codex_failed` |
| Tests failing | Log test output + resume command, output `EXEC_FAILED:{id}:tests_failing` |
| Commit fails | Log warning, still output `EXEC_DONE:{id}` (implementation complete) |
| Codex CLI failure | Log resume command to errors.json, output `EXEC_FAILED:<id>:codex_failed` |
| Tests failing | Log test output + resume command, output `EXEC_FAILED:<id>:tests_failing` |
| Commit fails | Log warning, still output `EXEC_DONE:<id>` (implementation complete) |
| No test command found | Skip test step, proceed to commit |

## Key Reminders

**ALWAYS**:
- Output `EXEC_DONE:{issueId}` on its own line when implementation succeeds
- Output `EXEC_FAILED:{issueId}:{reason}` on its own line when implementation fails
- Output `EXEC_DONE:<issueId>` on its own line when implementation succeeds
- Output `EXEC_FAILED:<issueId>:<reason>` on its own line when implementation fails
- Log resume command to errors.json on any failure
- Use `[executor]` prefix in all status messages
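On the orchestrator side, these terminal signals can be picked out of executor output with a small parser. This is a sketch of the signal grammar above, not code from the skill itself:

```javascript
// Sketch: parse the executor completion signals specified above.
// EXEC_DONE:<issueId> on success, EXEC_FAILED:<issueId>:<reason> on failure.
function parseExecutorSignal(output) {
  for (const line of output.split('\n')) {
    let m = line.match(/^EXEC_DONE:(\S+)$/);
    if (m) return { ok: true, issueId: m[1] };
    m = line.match(/^EXEC_FAILED:([^:]+):(\S+)$/);
    if (m) return { ok: false, issueId: m[1], reason: m[2] };
  }
  return null; // no terminal signal found: executor still running or stalled
}
```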
@@ -1,183 +0,0 @@
---
name: planex-planner
description: |
  PlanEx planner agent. Issue decomposition + solution design with beat protocol.
  Outputs ISSUE_READY:{id} after each solution, waits for "Continue" signal.
  Deploy to: ~/.codex/agents/planex-planner.md
color: blue
---

# PlanEx Planner

Requirement decomposition → issue creation → solution design, one issue at a time.
Outputs `ISSUE_READY:{issueId}` after each solution and waits for orchestrator to signal
"Continue". Only outputs `ALL_PLANNED:{count}` when all issues are processed.

## Identity

- **Tag**: `[planner]`
- **Beat Protocol**: ISSUE_READY per issue → wait → ALL_PLANNED when done
- **Boundary**: Planning only — no code writing, no test running, no git commits

## Core Responsibilities

| Action | Allowed |
|--------|---------|
| Parse input (Issue IDs / text / plan file) | ✅ |
| Create issues via CLI | ✅ |
| Generate solution via issue-plan-agent | ✅ |
| Write solution artifacts to disk | ✅ |
| Output ISSUE_READY / ALL_PLANNED signals | ✅ |
| Write or modify business code | ❌ |
| Run tests or git commit | ❌ |

---

## CLI Toolbox

| Command | Purpose |
|---------|---------|
| `ccw issue create --data '{"title":"...","description":"..."}' --json` | Create issue |
| `ccw issue status <id> --json` | Check issue status |
| `ccw issue plan <id>` | Plan single issue (generates solution) |

---

## Execution Flow

### Step 1: Load Context

After reading role definition, load project context:
- Run: `ccw spec load --category planning`
- Extract session directory and artifacts directory from task message

### Step 2: Parse Input

Determine input type from task message:

| Detection | Condition | Action |
|-----------|-----------|--------|
| Issue IDs | `ISS-\d{8}-\d{6}` pattern | Use directly for planning |
| `--text '...'` | Flag in message | Create issue(s) first via CLI |
| `--plan <path>` | Flag in message | Read file, parse phases, batch create issues |

**Plan file parsing rules** (when `--plan` is used):
- Match `## Phase N: Title`, `## Step N: Title`, or `### N. Title`
- Each match → one issue (title + description from section content)
- Fallback: no structure found → entire file as single issue
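The parsing rules above can be sketched as a small helper. This `parsePlanPhases` function is illustrative, not the planner's actual implementation:

```javascript
// Sketch of the plan-file parsing rules above (hypothetical helper).
// Matches "## Phase N: Title", "## Step N: Title", or "### N. Title" headings;
// each heading starts a new issue, with the section body as its description.
function parsePlanPhases(planText) {
  const heading = /^(?:##\s+(?:Phase|Step)\s+\d+:\s*(.+)|###\s+\d+\.\s*(.+))$/;
  const issues = [];
  let current = null;
  for (const line of planText.split('\n')) {
    const m = line.match(heading);
    if (m) {
      current = { title: (m[1] || m[2]).trim(), description: '' };
      issues.push(current);
    } else if (current) {
      current.description += line + '\n';
    }
  }
  // Fallback: no structure found -> entire file becomes a single issue
  if (issues.length === 0) {
    issues.push({ title: planText.split('\n')[0].trim(), description: planText });
  }
  return issues;
}
```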
### Step 3: Issue Processing Loop (Beat Protocol)

For each issue, execute in sequence:

#### 3a. Generate Solution

Use `issue-plan-agent` subagent to generate and bind solution:

```
spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
2. Run: `ccw spec load --category planning`

---

issue_ids: ["${issueId}"]
project_root: "${projectRoot}"

## Requirements
- Generate solution for this issue
- Auto-bind single solution
- Output solution JSON when complete
`
})

const result = wait({ ids: [agent], timeout_ms: 600000 })
close_agent({ id: agent })
```

#### 3b. Write Solution Artifact

```javascript
// Extract solution from issue-plan-agent result
const solution = parseSolution(result)

Write({
  file_path: `${artifactsDir}/${issueId}.json`,
  content: JSON.stringify({
    session_id: sessionId,
    issue_id: issueId,
    solution: solution,
    planned_at: new Date().toISOString()
  }, null, 2)
})
```

#### 3c. Output Beat Signal

Output EXACTLY (no surrounding text on this line):
```
ISSUE_READY:{issueId}
```

Then STOP. Do not process next issue. Wait for "Continue" message from orchestrator.

### Step 4: After All Issues

When every issue has been processed and confirmed with "Continue":

Output EXACTLY:
```
ALL_PLANNED:{totalCount}
```

Where `{totalCount}` is the integer count of issues planned.

---

## Issue Creation (when needed)

For `--text` input:

```bash
ccw issue create --data '{"title":"<title>","description":"<description>"}' --json
```

Parse returned JSON for `id` field → use as issue ID.

For `--plan` input, create issues one at a time:
```bash
# For each parsed phase/step:
ccw issue create --data '{"title":"<phase-title>","description":"<phase-content>"}' --json
```

Collect all created issue IDs before proceeding to Step 3.
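Extracting the ID from the CLI output could look like the following sketch; the exact JSON shape returned by `ccw issue create` is assumed here to be a single object with an `id` field, as described above:

```javascript
// Sketch: extract the issue ID from `ccw issue create ... --json` stdout.
// Assumes the CLI prints one JSON object carrying an `id` field; returning
// null signals the caller to retry with simplified text per Error Handling.
function extractIssueId(cliStdout) {
  let parsed;
  try {
    parsed = JSON.parse(cliStdout);
  } catch {
    return null; // malformed output
  }
  return typeof parsed.id === 'string' ? parsed.id : null;
}
```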
---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Issue creation failure | Retry once with simplified text, then report error |
| `issue-plan-agent` failure | Retry once, then skip issue with `ISSUE_SKIP:{issueId}:reason` signal |
| Plan file not found | Output error immediately, do not proceed |
| Artifact write failure | Log warning inline, still output ISSUE_READY (executor will handle missing file) |
| "Continue" not received after 5 min | Re-output `ISSUE_READY:{issueId}` once as reminder |

## Key Reminders

**ALWAYS**:
- Output `ISSUE_READY:{issueId}` on its own line with no surrounding text
- Wait after each ISSUE_READY — do NOT auto-continue
- Write solution file before outputting ISSUE_READY
- Use `[planner]` prefix in all status messages

**NEVER**:
- Output multiple ISSUE_READY signals before waiting for "Continue"
- Proceed to next issue without receiving "Continue"
- Write or modify any business logic files
- Run tests or execute git commands

@@ -1,284 +0,0 @@
---
name: team-planex
description: |
  Beat pipeline: planner decomposes requirements issue-by-issue, orchestrator spawns
  Codex executor per issue immediately. All execution via Codex CLI only.
agents: 2
phases: 3
---

# Team PlanEx (Codex)

Per-issue beat pipeline. As soon as the planner finishes the solution for one issue it outputs an `ISSUE_READY` signal, and the orchestrator immediately spawns an independent Codex executor to implement it in parallel, without waiting for the planner to finish planning everything.

## Architecture

```
Input (Issue IDs / --text / --plan)
  → Orchestrator: parse input → init session → spawn planner
  → Beat loop:
      wait(planner) → ISSUE_READY:{issueId} → spawn_agent(executor)
      → send_input(planner, "Continue")
  → ALL_PLANNED:{count} → close_agent(planner)
  → wait(all executors) → report
```

## Agent Registry

| Agent | Role File | Responsibility |
|-------|-----------|----------------|
| `planner` | `~/.codex/agents/planex-planner.md` | Issue decomp → solution design → ISSUE_READY signals |
| `executor` | `~/.codex/agents/planex-executor.md` | Codex CLI implementation per issue |

> Both agents must be deployed to `~/.codex/agents/` before use.
> Source: `.codex/skills/team-planex/agents/`

---

## Input Parsing

Supported input types (parse from `$ARGUMENTS`):

| Type | Detection | Handler |
|------|-----------|---------|
| Issue IDs | `ISS-\d{8}-\d{6}` regex | Pass directly to planner |
| Text | `--text '...'` flag | Planner creates issue(s) first |
| Plan file | `--plan <path>` flag | Planner reads file, batch creates issues |

---

## Session Setup

Before spawning agents, initialize session directory:

```javascript
// Generate session slug from input description (max 20 chars, kebab-case)
const slug = toSlug(inputDescription).slice(0, 20)
const date = new Date().toISOString().slice(0, 10).replace(/-/g, '')
const sessionDir = `.workflow/.team/PEX-${slug}-${date}`
const artifactsDir = `${sessionDir}/artifacts/solutions`

Bash(`mkdir -p "${artifactsDir}"`)

// Write initial session state
Write({
  file_path: `${sessionDir}/team-session.json`,
  content: JSON.stringify({
    session_id: `PEX-${slug}-${date}`,
    input_type: inputType,
    input: rawInput,
    status: "running",
    started_at: new Date().toISOString(),
    executors: []
  }, null, 2)
})
```
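`toSlug` is used above without a definition. A minimal sketch consistent with the `PEX-<slug>-<YYYYMMDD>` session-ID format might be:

```javascript
// Hypothetical toSlug helper assumed by the session setup above:
// lowercase kebab-case, alphanumerics only, trimmed of stray dashes.
function toSlug(text) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse non-alphanumeric runs into dashes
    .replace(/^-+|-+$/g, '');    // strip leading/trailing dashes
}

// Session ID as composed in Session Setup: PEX-<slug(<=20)>-<YYYYMMDD>
function sessionId(inputDescription, now = new Date()) {
  const slug = toSlug(inputDescription).slice(0, 20);
  const date = now.toISOString().slice(0, 10).replace(/-/g, '');
  return `PEX-${slug}-${date}`;
}
```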
---

## Phase 1: Spawn Planner

```javascript
const plannerAgent = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/planex-planner.md (MUST read first)
2. Run: `ccw spec load --category "planning execution"`

---

## Session
Session directory: ${sessionDir}
Artifacts directory: ${artifactsDir}

## Input
${inputType === 'issues' ? `Issue IDs: ${issueIds.join(' ')}` : ''}
${inputType === 'text' ? `Requirement: ${requirementText}` : ''}
${inputType === 'plan' ? `Plan file: ${planPath}` : ''}

## Beat Protocol (CRITICAL)
Process issues one at a time. After completing each issue's solution:
1. Write solution JSON to: ${artifactsDir}/{issueId}.json
2. Output EXACTLY this line: ISSUE_READY:{issueId}
3. STOP and wait — do NOT continue until you receive "Continue"

When ALL issues are processed:
1. Output EXACTLY: ALL_PLANNED:{totalCount}
`
})
```

---

## Phase 2: Beat Loop

Orchestrator coordinates the planner-executor pipeline:

```javascript
const executorIds = []
const executorIssueMap = {}

while (true) {
  // Wait for planner beat signal (up to 10 min per issue)
  const plannerOut = wait({ ids: [plannerAgent], timeout_ms: 600000 })

  // Handle timeout: urge convergence and retry
  if (plannerOut.timed_out) {
    send_input({
      id: plannerAgent,
      message: "Please output ISSUE_READY:{issueId} for current issue or ALL_PLANNED if done."
    })
    continue
  }

  const output = plannerOut.status[plannerAgent].completed

  // Detect ALL_PLANNED — pipeline complete
  if (output.includes('ALL_PLANNED')) {
    const match = output.match(/ALL_PLANNED:(\d+)/)
    const total = match ? parseInt(match[1]) : executorIds.length
    close_agent({ id: plannerAgent })
    break
  }

  // Detect ISSUE_READY — spawn executor immediately
  const issueMatch = output.match(/ISSUE_READY:(ISS-\d{8}-\d{6}|[A-Z0-9-]+)/)
  if (issueMatch) {
    const issueId = issueMatch[1]
    const solutionFile = `${artifactsDir}/${issueId}.json`

    const executorId = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/planex-executor.md (MUST read first)
2. Run: `ccw spec load --category "planning execution"`

---

## Issue
Issue ID: ${issueId}
Solution file: ${solutionFile}
Session: ${sessionDir}

## Execution
Load solution from file → implement via Codex CLI → verify tests → commit → report.
`
    })

    executorIds.push(executorId)
    executorIssueMap[executorId] = issueId

    // Signal planner to continue to next issue
    send_input({ id: plannerAgent, message: "Continue with next issue." })
    continue
  }

  // Unexpected output: urge convergence
  send_input({
    id: plannerAgent,
    message: "Output ISSUE_READY:{issueId} when solution is ready, or ALL_PLANNED when all done."
  })
}
```
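The two planner signals matched in the loop above can be factored into a standalone parser. This sketch reuses the same regexes; the real loop's `send_input` side effects are omitted:

```javascript
// Sketch: classify planner output per the beat protocol above.
// ALL_PLANNED is checked first, mirroring the loop's order of detection.
function parsePlannerSignal(output) {
  const all = output.match(/ALL_PLANNED:(\d+)/);
  if (all) return { type: 'all_planned', total: parseInt(all[1], 10) };
  const ready = output.match(/ISSUE_READY:(ISS-\d{8}-\d{6}|[A-Z0-9-]+)/);
  if (ready) return { type: 'issue_ready', issueId: ready[1] };
  return { type: 'unknown' }; // caller urges convergence
}
```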
---

## Phase 3: Wait All Executors

```javascript
if (executorIds.length > 0) {
  // Extended timeout: Codex CLI execution per issue (~10-20 min each)
  const execResults = wait({ ids: executorIds, timeout_ms: 1800000 })

  if (execResults.timed_out) {
    const completed = executorIds.filter(id => execResults.status[id]?.completed)
    const pending = executorIds.filter(id => !execResults.status[id]?.completed)
    // Log pending issues for manual follow-up
    if (pending.length > 0) {
      const pendingIssues = pending.map(id => executorIssueMap[id])
      Write({
        file_path: `${sessionDir}/pending-executors.json`,
        content: JSON.stringify({ pending_issues: pendingIssues, executor_ids: pending }, null, 2)
      })
    }
  }

  // Collect summaries
  const summaries = executorIds.map(id => ({
    issue_id: executorIssueMap[id],
    status: execResults.status[id]?.completed ? 'completed' : 'timeout',
    output: execResults.status[id]?.completed ?? null
  }))

  // Cleanup
  executorIds.forEach(id => {
    try { close_agent({ id }) } catch { /* already closed */ }
  })

  // Final report
  const completed = summaries.filter(s => s.status === 'completed').length
  const failed = summaries.filter(s => s.status === 'timeout').length

  return `
## Pipeline Complete

**Total issues**: ${executorIds.length}
**Completed**: ${completed}
**Timed out**: ${failed}

${summaries.map(s => `- ${s.issue_id}: ${s.status}`).join('\n')}

Session: ${sessionDir}
`
}
```

---

## User Commands

During execution, the user may issue:

| Command | Action |
|---------|--------|
| `check` / `status` | Show executor progress summary |
| `resume` / `continue` | Urge stalled planner or executor |
| `add <issue-ids>` | `send_input` to planner with new issue IDs |
| `add --text '...'` | `send_input` to planner to create and plan new issue |
| `add --plan <path>` | `send_input` to planner to parse and batch create from plan file |

**`add` handler** (inject mid-execution):

```javascript
// Get current planner agent ID from session state
const session = JSON.parse(Read(`${sessionDir}/team-session.json`))
const plannerAgentId = session.planner_agent_id // saved during Phase 1

send_input({
  id: plannerAgentId,
  message: `
## NEW ISSUES INJECTED
${newInput}

Process these after current issue (or immediately if idle).
Follow beat protocol: ISSUE_READY → wait for Continue → next issue.
`
})
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Planner timeout (>10 min per issue) | `send_input` urge convergence, re-enter loop |
| Planner never outputs ISSUE_READY | After 3 retries, `close_agent` + report stall |
| Solution file not written | Executor reports error, logs to `${sessionDir}/errors.json` |
| Executor (Codex CLI) failure | Executor handles resume; logs CLI resume command |
| ALL_PLANNED never received | After 60 min total, close planner, wait remaining executors |
| No issues to process | AskUserQuestion for clarification |
.codex/skills/wave-plan-pipeline/SKILL.md (new file, 1141 lines)
File diff suppressed because it is too large

.github/workflows/docs.yml (vendored, new file, 105 lines)
@@ -0,0 +1,105 @@
name: Docs Build & Deploy

on:
  push:
    branches:
      - main
      - develop
    paths:
      - 'docs/**'
  pull_request:
    branches:
      - main
      - develop
    paths:
      - 'docs/**'
  workflow_dispatch:

permissions:
  contents: read
  pages: write
  id-token: write

concurrency:
  group: "pages"
  cancel-in-progress: false

jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: docs
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
          cache-dependency-path: docs/package-lock.json

      - name: Install dependencies
        run: npm ci

      - name: Build
        run: npm run docs:build

      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: docs/.vitepress/dist

  deploy:
    if: (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && github.ref == 'refs/heads/main'
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4

  # Lighthouse CI for PRs
  lighthouse:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    needs: build
    defaults:
      run:
        working-directory: docs
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
          cache-dependency-path: docs/package-lock.json

      - name: Install dependencies
        run: npm ci

      - name: Build
        run: npm run docs:build

      - name: Start preview server
        run: |
          npm run docs:preview -- --host 127.0.0.1 --port 4173 &
          npx --yes wait-on http://127.0.0.1:4173/

      - name: Run Lighthouse CI
        uses: treosh/lighthouse-ci-action@v10
        with:
          urls: |
            http://127.0.0.1:4173/
          uploadArtifacts: true
          temporaryPublicStorage: true
          commentPR: true
          budgetPath: docs/lighthouse-budget.json
.gitignore (vendored, 5 lines changed)
@@ -138,3 +138,8 @@ ccw/.tmp-ccw-auth-home/

# Skills library (local only)
.claude/skills_lib/

# Docs site
docs/node_modules/
docs/.vitepress/dist/
docs/.vitepress/cache/
@@ -1,18 +1,21 @@
# Codex MCP Feature Implementation Summary

> **Note**: This document describes the old vanilla JS frontend architecture. The current version (v7.0+) uses a React SPA frontend.
> See the React components under `ccw/frontend/src/`.

## 📝 Completed Fixes

### 1. CCW Tools MCP Card Style Fix

**File**: `ccw/src/templates/dashboard-js/views/mcp-manager.js`
**File**: `ccw/frontend/src/components/McpManager.tsx` (React)

**Changes**:
- ✅ Card border: `border-primary` → `border-orange-500` (line 345)
- ✅ Icon background: `bg-primary` → `bg-orange-500` (line 348)
- ✅ Icon color: `text-primary-foreground` → `text-white` (line 349)
- ✅ "Available" badge: `bg-primary/20 text-primary` → `bg-orange-500/20 text-orange-600` (line 360)
- ✅ Select button color: `text-primary` → `text-orange-500` (lines 378-379)
- ✅ Install button: `bg-primary` → `bg-orange-500` (lines 386, 399)
- ✅ Card border: `border-primary` → `border-orange-500`
- ✅ Icon background: `bg-primary` → `bg-orange-500`
- ✅ Icon color: `text-primary-foreground` → `text-white`
- ✅ "Available" badge: `bg-primary/20 text-primary` → `bg-orange-500/20 text-orange-600`
- ✅ Select button color: `text-primary` → `text-orange-500`
- ✅ Install button: `bg-primary` → `bg-orange-500`

**Scope**: the CCW Tools MCP card in Claude mode

@@ -20,10 +23,10 @@

### 2. Toast Message Display Duration Increase

**File**: `ccw/src/templates/dashboard-js/components/navigation.js`
**File**: `ccw/frontend/src/hooks/useToast.ts` (React)

**Changes**:
- ✅ Display duration: 2000ms → 3500ms (line 300)
- ✅ Display duration: 2000ms → 3500ms

**Scope**: all toast messages (feedback for MCP install, delete, switch, etc.)

@@ -55,38 +58,33 @@ API request: POST /api/codex-mcp-add
↓
Frontend update:
1. loadMcpConfig() - reload configuration
2. renderMcpManager() - re-render the UI
3. showRefreshToast(...) - show success/failure message (3.5s)
2. State update triggers UI re-render
3. Toast shows success/failure message (3.5s)
```

---

## 📍 Key Code Locations

### Frontend
### Frontend (React SPA)

| Feature | File | Lines | Notes |
|------|------|------|------|
| Copy to Codex | `components/mcp-manager.js` | 175-177 | `copyClaudeServerToCodex()` function |
| Add to Codex | `components/mcp-manager.js` | 87-114 | `addCodexMcpServer()` function |
| Toast messages | `components/navigation.js` | 286-301 | `showRefreshToast()` function |
| CCW Tools styles | `views/mcp-manager.js` | 342-415 | Claude-mode card rendering |
| Other-project button | `views/mcp-manager.js` | 1015-1020 | "Install to Codex" button |
| Feature | File | Notes |
|------|------|------|
| MCP management | `ccw/frontend/src/components/McpManager.tsx` | MCP management component |
| Toast messages | `ccw/frontend/src/hooks/useToast.ts` | Toast hook |
| Copy to Codex | `ccw/frontend/src/api/mcp.ts` | MCP API calls |

### Backend

| Feature | File | Lines | Notes |
|------|------|------|------|
| API endpoint | `core/routes/mcp-routes.ts` | 1001-1010 | `/api/codex-mcp-add` route |
| Add server | `core/routes/mcp-routes.ts` | 251-330 | `addCodexMcpServer()` function |
| TOML serialization | `core/routes/mcp-routes.ts` | 166-188 | `serializeToml()` function |
| Feature | File | Notes |
|------|------|------|
| API endpoint | `ccw/src/core/routes/mcp-routes.ts` | `/api/codex-mcp-add` route |
| Add server | `ccw/src/core/routes/mcp-routes.ts` | `addCodexMcpServer()` function |
| TOML serialization | `ccw/src/core/routes/mcp-routes.ts` | `serializeToml()` function |

### CSS
### CSS (Tailwind)

| Feature | File | Lines | Notes |
|------|------|------|------|
| Toast styles | `dashboard-css/06-cards.css` | 1501-1538 | Toast container and type styles |
| Toast animations | `dashboard-css/06-cards.css` | 1540-1551 | Slide-in/fade-out animations |
Toast styles use inline Tailwind CSS classes defined in the React components.

---

@@ -220,33 +218,31 @@ API request: POST /api/codex-mcp-add

## 📦 Related File List

### Modified Files
### Frontend Files (React SPA)

1. `ccw/src/templates/dashboard-js/views/mcp-manager.js`
   - Changed: CCW Tools card styles (lines 342-415)

2. `ccw/src/templates/dashboard-js/components/navigation.js`
   - Changed: Toast display duration (line 300)
1. `ccw/frontend/src/components/McpManager.tsx`
   - MCP management component (includes the CCW Tools card styles)

### Core Feature Files (related, not modified)
2. `ccw/frontend/src/hooks/useToast.ts`
   - Toast message hook (display duration 3.5s)

3. `ccw/src/templates/dashboard-js/components/mcp-manager.js`
   - Contains: `addCodexMcpServer()`, `copyClaudeServerToCodex()` functions
3. `ccw/frontend/src/api/mcp.ts`
   - MCP API call functions

### Backend Files

4. `ccw/src/core/routes/mcp-routes.ts`
   - Contains: Codex MCP API endpoints and backend logic
   - Codex MCP API endpoints and backend logic

5. `ccw/src/templates/dashboard-css/06-cards.css`
   - Contains: Toast style definitions
### Documentation

### New Documentation

6. `ccw/docs/CODEX_MCP_TESTING_GUIDE.md`
5. `ccw/docs/CODEX_MCP_TESTING_GUIDE.md`
   - Detailed testing guide

7. `ccw/docs/QUICK_TEST_CODEX_MCP.md`
6. `ccw/docs/QUICK_TEST_CODEX_MCP.md`
   - Quick test steps

8. `ccw/docs/CODEX_MCP_IMPLEMENTATION_SUMMARY.md`
   - This document

@@ -277,7 +277,7 @@ _____

### Toast Message Mechanism

**Implementation location**:
- `ccw/src/templates/dashboard-js/components/navigation.js:286-301`
- `ccw/frontend/src/hooks/useToast.ts` (React)
- Display duration: 3500ms (3.5s)
- Fade-out animation: 300ms
@@ -218,8 +218,8 @@ To verify the fix works:

## Related Files

- **Implementation**: `ccw/src/core/routes/graph-routes.ts`
- **Frontend**: `ccw/src/templates/dashboard-js/views/graph-explorer.js`
- **Styles**: `ccw/src/templates/dashboard-css/14-graph-explorer.css`
- **Frontend**: `ccw/frontend/src/components/GraphExplorer.tsx` (React SPA)
- **Styles**: Embedded in React components
- **API Docs**: `ccw/src/core/routes/graph-routes.md`
- **Migration**: `codex-lens/src/codexlens/storage/migrations/migration_005_cleanup_unused_fields.py`

@@ -238,23 +238,22 @@ ccw view

   rg "handleGraphRoutes" src/
   ```

2. **Check that the frontend includes the graph-explorer view**:
2. **Check that the frontend includes the Graph Explorer component**:
   ```bash
   ls src/templates/dashboard-js/views/graph-explorer.js
   ls ccw/frontend/src/components/GraphExplorer.tsx
   ```

3. **Check that dashboard-generator.ts includes the graph explorer**:
3. **Check that the React frontend is built correctly**:
   ```bash
   rg "graph-explorer" src/core/dashboard-generator.ts
   ls ccw/frontend/dist/index.html
   ```

### Resolution

Ensure the following files exist and are correct:
- `src/core/routes/graph-routes.ts` - API route handling
- `src/templates/dashboard-js/views/graph-explorer.js` - frontend view
- `src/templates/dashboard-css/14-graph-explorer.css` - styles
- `src/templates/dashboard.html` - includes the Graph nav item (line 334)
- `ccw/src/core/routes/graph-routes.ts` - API route handling
- `ccw/frontend/src/components/GraphExplorer.tsx` - React frontend component
- `ccw/frontend/dist/index.html` - built frontend entry

---
@@ -53,13 +53,18 @@ function QueryInvalidator() {
  const registerQueryInvalidator = useWorkflowStore((state) => state.registerQueryInvalidator);

  useEffect(() => {
    // Register callback to invalidate all 'workspace' prefixed queries
    // Register callback to invalidate all workspace-related queries on workspace switch
    const callback = () => {
      queryClient.invalidateQueries({
        predicate: (query) => {
          const queryKey = query.queryKey;
          // Check if the first element of the query key is 'workspace'
          return Array.isArray(queryKey) && queryKey[0] === 'workspace';
          if (!Array.isArray(queryKey)) return false;
          const prefix = queryKey[0];
          // Invalidate all query families that depend on workspace data
          return prefix === 'workspace'
            || prefix === 'projectOverview'
            || prefix === 'workflowStatusCounts'
            || prefix === 'dashboardStats';
        },
      });
    };
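The new predicate in the diff above can be factored out and exercised against plain query keys. This standalone sketch mirrors its logic (the React Query `query` wrapper object is dropped for clarity):

```javascript
// Sketch of the invalidation predicate from the diff above, factored out
// so it can be tested against bare query keys.
const WORKSPACE_PREFIXES = new Set([
  'workspace',
  'projectOverview',
  'workflowStatusCounts',
  'dashboardStats',
]);

// Returns true when the query belongs to a family that depends on
// workspace data and should be refetched after a workspace switch.
function shouldInvalidateOnWorkspaceSwitch(queryKey) {
  if (!Array.isArray(queryKey)) return false;
  return WORKSPACE_PREFIXES.has(queryKey[0]);
}
```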
@@ -69,6 +69,7 @@ const statusIcons: Record<string, React.ElementType> = {
|
||||
cancelled: XCircle,
|
||||
idle: Clock,
|
||||
initializing: Loader2,
|
||||
ready: CheckCircle2,
|
||||
};
|
||||
|
||||
// Status color mapping
|
||||
@@ -83,6 +84,7 @@ const statusColors: Record<string, string> = {
|
||||
cancelled: 'bg-destructive/20 text-destructive border-destructive/30',
|
||||
idle: 'bg-muted text-muted-foreground border-border',
|
||||
initializing: 'bg-info/20 text-info border-info/30',
|
||||
ready: 'bg-success/20 text-success border-success/30',
|
||||
};
|
||||
|
||||
// Status to i18n key mapping
|
||||
@@ -97,6 +99,7 @@ const statusI18nKeys: Record<string, string> = {
|
||||
cancelled: 'cancelled',
|
||||
idle: 'idle',
|
||||
initializing: 'initializing',
|
||||
ready: 'ready',
|
||||
};
|
||||
|
||||
// Lite task sub-type icons
|
||||
|
||||
@@ -119,6 +119,10 @@ const sessionStatusColors: Record<string, { bg: string; text: string }> = {
|
||||
in_progress: { bg: 'bg-warning/20', text: 'text-warning' },
|
||||
completed: { bg: 'bg-success/20', text: 'text-success' },
|
||||
paused: { bg: 'bg-slate-400/20', text: 'text-slate-500' },
|
||||
ready: { bg: 'bg-success/20', text: 'text-success' },
|
||||
initialized: { bg: 'bg-info/20', text: 'text-info' },
|
||||
archived: { bg: 'bg-slate-300/20', text: 'text-slate-400' },
|
||||
failed: { bg: 'bg-destructive/20', text: 'text-destructive' },
|
||||
};
|
||||
|
||||
// ---- Mini Stat Card with Sparkline ----
|
||||
|
||||
@@ -21,17 +21,17 @@ const statusConfig = {
running: {
icon: Clock,
variant: 'warning' as const,
label: 'issues.discovery.status.running',
label: 'issues.discovery.session.status.running',
},
completed: {
icon: CheckCircle,
variant: 'success' as const,
label: 'issues.discovery.status.completed',
label: 'issues.discovery.session.status.completed',
},
failed: {
icon: XCircle,
variant: 'destructive' as const,
label: 'issues.discovery.status.failed',
label: 'issues.discovery.session.status.failed',
},
};

@@ -79,7 +79,7 @@ export function DiscoveryCard({ session, isActive, onClick }: DiscoveryCardProps
<div className="flex items-center justify-between text-sm">
<div className="flex items-center gap-4">
<div className="flex items-center gap-1">
<span className="text-muted-foreground">{formatMessage({ id: 'issues.discovery.findings' })}:</span>
<span className="text-muted-foreground">{formatMessage({ id: 'issues.discovery.session.findings' }, { count: session.findings_count })}:</span>
<span className="font-medium text-foreground">{session.findings_count}</span>
</div>
</div>

@@ -12,6 +12,7 @@ import { Tabs, TabsList, TabsTrigger, TabsContent } from '@/components/ui/Tabs';
import { Badge } from '@/components/ui/Badge';
import { Progress } from '@/components/ui/Progress';
import { IssueDrawer } from '@/components/issue/hub/IssueDrawer';
import { FindingDrawer } from './FindingDrawer';
import type { DiscoverySession, Finding } from '@/lib/api';
import type { Issue } from '@/lib/api';
import type { FindingFilters } from '@/hooks/useIssues';
@@ -43,6 +44,7 @@ export function DiscoveryDetail({
const { formatMessage } = useIntl();
const [activeTab, setActiveTab] = useState('findings');
const [selectedIssue, setSelectedIssue] = useState<Issue | null>(null);
const [selectedFinding, setSelectedFinding] = useState<Finding | null>(null);
const [selectedIds, setSelectedIds] = useState<string[]>([]);

const handleFindingClick = (finding: Finding) => {
@@ -51,14 +53,21 @@ export function DiscoveryDetail({
const relatedIssue = issues.find(i => i.id === finding.issue_id);
if (relatedIssue) {
setSelectedIssue(relatedIssue);
return;
}
}
// Otherwise, show the finding details in FindingDrawer
setSelectedFinding(finding);
};

const handleCloseDrawer = () => {
const handleCloseIssueDrawer = () => {
setSelectedIssue(null);
};

const handleCloseFindingDrawer = () => {
setSelectedFinding(null);
};

const handleExportSelected = async () => {
if (onExportSelected && selectedIds.length > 0) {
await onExportSelected(selectedIds);
@@ -130,7 +139,7 @@ export function DiscoveryDetail({
<Badge
variant={session.status === 'completed' ? 'success' : session.status === 'failed' ? 'destructive' : 'warning'}
>
{formatMessage({ id: `issues.discovery.status.${session.status}` })}
{formatMessage({ id: `issues.discovery.session.status.${session.status}` })}
</Badge>
<span className="text-sm text-muted-foreground">
{formatMessage({ id: 'issues.discovery.createdAt' })}: {formatDate(session.created_at)}
@@ -192,7 +201,7 @@ export function DiscoveryDetail({
<Badge
variant={severity === 'critical' || severity === 'high' ? 'destructive' : severity === 'medium' ? 'warning' : 'secondary'}
>
{formatMessage({ id: `issues.discovery.severity.${severity}` })}
{formatMessage({ id: `issues.discovery.findings.severity.${severity}` })}
</Badge>
<span className="font-medium">{count}</span>
</div>
@@ -240,7 +249,7 @@ export function DiscoveryDetail({
<Badge
variant={session.status === 'completed' ? 'success' : session.status === 'failed' ? 'destructive' : 'warning'}
>
{formatMessage({ id: `issues.discovery.status.${session.status}` })}
{formatMessage({ id: `issues.discovery.session.status.${session.status}` })}
</Badge>
</div>
<div>
@@ -277,7 +286,14 @@ export function DiscoveryDetail({
<IssueDrawer
issue={selectedIssue}
isOpen={selectedIssue !== null}
onClose={handleCloseDrawer}
onClose={handleCloseIssueDrawer}
/>

{/* Finding Detail Drawer */}
<FindingDrawer
finding={selectedFinding}
isOpen={selectedFinding !== null}
onClose={handleCloseFindingDrawer}
/>
</div>
);

213
ccw/frontend/src/components/issue/discovery/FindingDrawer.tsx
Normal file
@@ -0,0 +1,213 @@
// ========================================
// FindingDrawer Component
// ========================================
// Right-side finding detail drawer for displaying discovery finding details

import { useEffect } from 'react';
import { useIntl } from 'react-intl';
import { X, FileText, AlertTriangle, ExternalLink, MapPin, Code, Lightbulb, Target } from 'lucide-react';
import { Badge } from '@/components/ui/Badge';
import { Button } from '@/components/ui/Button';
import { cn } from '@/lib/utils';
import type { Finding } from '@/lib/api';

// ========== Types ==========
export interface FindingDrawerProps {
finding: Finding | null;
isOpen: boolean;
onClose: () => void;
}

// ========== Severity Configuration ==========
const severityConfig: Record<string, { label: string; variant: 'default' | 'secondary' | 'destructive' | 'outline' | 'success' | 'warning' | 'info' }> = {
critical: { label: 'issues.discovery.findings.severity.critical', variant: 'destructive' },
high: { label: 'issues.discovery.findings.severity.high', variant: 'destructive' },
medium: { label: 'issues.discovery.findings.severity.medium', variant: 'warning' },
low: { label: 'issues.discovery.findings.severity.low', variant: 'secondary' },
};

function getSeverityConfig(severity: string) {
return severityConfig[severity] || { label: 'issues.discovery.findings.severity.unknown', variant: 'outline' };
}

// ========== Component ==========

export function FindingDrawer({ finding, isOpen, onClose }: FindingDrawerProps) {
const { formatMessage } = useIntl();

// ESC key to close
useEffect(() => {
if (!isOpen) return;
const handleEsc = (e: KeyboardEvent) => {
if (e.key === 'Escape') onClose();
};
window.addEventListener('keydown', handleEsc);
return () => window.removeEventListener('keydown', handleEsc);
}, [isOpen, onClose]);

if (!finding || !isOpen) {
return null;
}

const severity = getSeverityConfig(finding.severity);

return (
<>
{/* Overlay */}
<div
className={cn(
'fixed inset-0 bg-black/40 transition-opacity z-40',
isOpen ? 'opacity-100' : 'opacity-0 pointer-events-none'
)}
onClick={onClose}
aria-hidden="true"
/>

{/* Drawer */}
<div
className={cn(
'fixed top-0 right-0 h-full w-1/2 bg-background border-l border-border shadow-2xl z-50 flex flex-col transition-transform duration-300 ease-in-out',
isOpen ? 'translate-x-0' : 'translate-x-full'
)}
role="dialog"
aria-modal="true"
style={{ minWidth: '400px', maxWidth: '800px' }}
>
{/* Header */}
<div className="flex items-start justify-between p-6 border-b border-border bg-card">
<div className="flex-1 min-w-0 mr-4">
<div className="flex items-center gap-2 mb-2 flex-wrap">
<span className="text-xs font-mono text-muted-foreground">{finding.id}</span>
<Badge variant={severity.variant}>
{formatMessage({ id: severity.label })}
</Badge>
{finding.type && (
<Badge variant="outline">{finding.type}</Badge>
)}
{finding.category && (
<Badge variant="info">{finding.category}</Badge>
)}
</div>
<h2 className="text-lg font-semibold text-foreground">
{finding.title}
</h2>
</div>
<Button variant="ghost" size="icon" onClick={onClose} className="flex-shrink-0 hover:bg-secondary">
<X className="h-5 w-5" />
</Button>
</div>

{/* Content */}
<div className="flex-1 overflow-y-auto p-6 space-y-6">
{/* Description */}
<div>
<h3 className="text-sm font-semibold text-foreground mb-2 flex items-center gap-2">
<AlertTriangle className="h-4 w-4" />
{formatMessage({ id: 'issues.discovery.findings.description' })}
</h3>
<p className="text-sm text-muted-foreground whitespace-pre-wrap">
{finding.description}
</p>
</div>

{/* File Location */}
{finding.file && (
<div>
<h3 className="text-sm font-semibold text-foreground mb-2 flex items-center gap-2">
<MapPin className="h-4 w-4" />
{formatMessage({ id: 'issues.discovery.findings.location' })}
</h3>
<div className="flex items-center gap-2 text-sm">
<FileText className="h-4 w-4 text-muted-foreground" />
<code className="px-2 py-1 bg-muted rounded text-xs">
{finding.file}
{finding.line && `:${finding.line}`}
</code>
</div>
</div>
)}

{/* Code Snippet */}
{finding.code_snippet && (
<div>
<h3 className="text-sm font-semibold text-foreground mb-2 flex items-center gap-2">
<Code className="h-4 w-4" />
{formatMessage({ id: 'issues.discovery.findings.codeSnippet' })}
</h3>
<pre className="p-3 bg-muted rounded-md overflow-x-auto text-xs border border-border">
<code>{finding.code_snippet}</code>
</pre>
</div>
)}

{/* Suggested Fix */}
{finding.suggested_issue && (
<div>
<h3 className="text-sm font-semibold text-foreground mb-2 flex items-center gap-2">
<Lightbulb className="h-4 w-4" />
{formatMessage({ id: 'issues.discovery.findings.suggestedFix' })}
</h3>
<p className="text-sm text-muted-foreground whitespace-pre-wrap">
{finding.suggested_issue}
</p>
</div>
)}

{/* Confidence */}
{finding.confidence !== undefined && (
<div>
<h3 className="text-sm font-semibold text-foreground mb-2 flex items-center gap-2">
<Target className="h-4 w-4" />
{formatMessage({ id: 'issues.discovery.findings.confidence' })}
</h3>
<div className="flex items-center gap-2">
<div className="flex-1 h-2 bg-muted rounded-full overflow-hidden">
<div
className={cn(
"h-full transition-all",
finding.confidence >= 0.9 ? "bg-green-500" :
finding.confidence >= 0.7 ? "bg-yellow-500" : "bg-red-500"
)}
style={{ width: `${finding.confidence * 100}%` }}
/>
</div>
<span className="text-sm font-medium">
{Math.round(finding.confidence * 100)}%
</span>
</div>
</div>
)}

{/* Reference */}
{finding.reference && (
<div>
<h3 className="text-sm font-semibold text-foreground mb-2 flex items-center gap-2">
<ExternalLink className="h-4 w-4" />
{formatMessage({ id: 'issues.discovery.findings.reference' })}
</h3>
<a
href={finding.reference}
target="_blank"
rel="noopener noreferrer"
className="text-sm text-primary hover:underline break-all"
>
{finding.reference}
</a>
</div>
)}

{/* Perspective */}
{finding.perspective && (
<div className="pt-4 border-t border-border">
<Badge variant="secondary" className="text-xs">
{formatMessage({ id: 'issues.discovery.findings.perspective' })}: {finding.perspective}
</Badge>
</div>
)}
</div>
</div>
</>
);
}

export default FindingDrawer;
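The confidence bar in the drawer above picks its color by thresholding the confidence value. As a standalone sketch (class names mirror the Tailwind classes used in the component):

```typescript
// Maps a 0..1 confidence score to the Tailwind color class used by
// the drawer's confidence bar: >= 0.9 green, >= 0.7 yellow, else red.
function confidenceColor(confidence: number): string {
  if (confidence >= 0.9) return 'bg-green-500';
  if (confidence >= 0.7) return 'bg-yellow-500';
  return 'bg-red-500';
}
```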
@@ -24,14 +24,14 @@ interface FindingListProps {
}

const severityConfig: Record<string, { variant: 'destructive' | 'warning' | 'secondary' | 'outline' | 'success' | 'info' | 'default'; label: string }> = {
critical: { variant: 'destructive', label: 'issues.discovery.severity.critical' },
high: { variant: 'destructive', label: 'issues.discovery.severity.high' },
medium: { variant: 'warning', label: 'issues.discovery.severity.medium' },
low: { variant: 'secondary', label: 'issues.discovery.severity.low' },
critical: { variant: 'destructive', label: 'issues.discovery.findings.severity.critical' },
high: { variant: 'destructive', label: 'issues.discovery.findings.severity.high' },
medium: { variant: 'warning', label: 'issues.discovery.findings.severity.medium' },
low: { variant: 'secondary', label: 'issues.discovery.findings.severity.low' },
};

function getSeverityConfig(severity: string) {
return severityConfig[severity] || { variant: 'outline', label: 'issues.discovery.severity.unknown' };
return severityConfig[severity] || { variant: 'outline', label: 'issues.discovery.findings.severity.unknown' };
}

export function FindingList({
@@ -116,10 +116,10 @@ export function FindingList({
</SelectTrigger>
<SelectContent>
<SelectItem value="all">{formatMessage({ id: 'issues.discovery.findings.severity.all' })}</SelectItem>
<SelectItem value="critical">{formatMessage({ id: 'issues.discovery.severity.critical' })}</SelectItem>
<SelectItem value="high">{formatMessage({ id: 'issues.discovery.severity.high' })}</SelectItem>
<SelectItem value="medium">{formatMessage({ id: 'issues.discovery.severity.medium' })}</SelectItem>
<SelectItem value="low">{formatMessage({ id: 'issues.discovery.severity.low' })}</SelectItem>
<SelectItem value="critical">{formatMessage({ id: 'issues.discovery.findings.severity.critical' })}</SelectItem>
<SelectItem value="high">{formatMessage({ id: 'issues.discovery.findings.severity.high' })}</SelectItem>
<SelectItem value="medium">{formatMessage({ id: 'issues.discovery.findings.severity.medium' })}</SelectItem>
<SelectItem value="low">{formatMessage({ id: 'issues.discovery.findings.severity.low' })}</SelectItem>
</SelectContent>
</Select>
{uniqueTypes.length > 0 && (

@@ -38,7 +38,7 @@ export function useProjectOverview(options: UseProjectOverviewOptions = {}) {
const queryEnabled = enabled && !!projectPath;

const query = useQuery({
queryKey: projectOverviewKeys.detail(),
queryKey: projectOverviewKeys.detail(projectPath),
queryFn: () => fetchProjectOverview(projectPath),
staleTime,
enabled: queryEnabled,

@@ -1077,6 +1077,12 @@ export interface Finding {
created_at: string;
issue_id?: string; // Associated issue ID if exported
exported?: boolean; // Whether this finding has been exported as an issue
// Additional fields from discovery backend
category?: string;
suggested_issue?: string;
confidence?: number;
reference?: string;
perspective?: string;
}

export async function fetchDiscoveries(projectPath?: string): Promise<DiscoverySession[]> {
@@ -1131,7 +1137,11 @@ export async function fetchDiscoveryFindings(
? `/api/discoveries/${encodeURIComponent(sessionId)}/findings?path=${encodeURIComponent(projectPath)}`
: `/api/discoveries/${encodeURIComponent(sessionId)}/findings`;
const data = await fetchApi<{ findings?: Finding[] }>(url);
return data.findings ?? [];
// Map backend 'priority' to frontend 'severity' for compatibility
return (data.findings ?? []).map(f => ({
...f,
severity: f.severity || (f as any).priority || 'medium'
}));
}

/**

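The hunk above normalizes findings whose backend payload carries `priority` instead of the frontend's `severity`. The mapping can be sketched in isolation (the `RawFinding` shape here is a minimal stand-in, not the real `Finding` interface):

```typescript
// Minimal stand-in for a finding that may carry either the frontend
// field (`severity`) or the backend field (`priority`).
interface RawFinding {
  id: string;
  severity?: string;
  priority?: string;
}

// Mirrors the compatibility mapping in fetchDiscoveryFindings:
// prefer an explicit severity, fall back to priority, default to 'medium'.
function normalizeSeverity(f: RawFinding): RawFinding & { severity: string } {
  return { ...f, severity: f.severity || f.priority || 'medium' };
}
```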
@@ -82,7 +82,8 @@
"label": "Status",
"openIssues": "Open Issues",
"enabled": "Enabled",
"disabled": "Disabled"
"disabled": "Disabled",
"ready": "Ready"
},
"priority": {
"low": "Low",

@@ -353,7 +353,14 @@
"hasIssue": "Linked",
"export": "Export",
"selectAll": "Select All",
"deselectAll": "Deselect All"
"deselectAll": "Deselect All",
"description": "Description",
"location": "Location",
"codeSnippet": "Code Snippet",
"suggestedFix": "Suggested Fix",
"confidence": "Confidence",
"reference": "Reference",
"perspective": "Perspective"
},
"tabs": {
"findings": "Findings",

@@ -82,7 +82,8 @@
"label": "状态",
"openIssues": "开放问题",
"enabled": "已启用",
"disabled": "已禁用"
"disabled": "已禁用",
"ready": "就绪"
},
"priority": {
"low": "低",

@@ -353,7 +353,14 @@
"hasIssue": "已关联",
"export": "导出",
"selectAll": "全选",
"deselectAll": "取消全选"
"deselectAll": "取消全选",
"description": "问题描述",
"location": "文件位置",
"codeSnippet": "代码片段",
"suggestedFix": "建议修复",
"confidence": "置信度",
"reference": "参考链接",
"perspective": "视角"
},
"tabs": {
"findings": "发现",

@@ -7,7 +7,7 @@ import { useIntl } from 'react-intl';
import { Radar, AlertCircle, Loader2 } from 'lucide-react';
import { Card } from '@/components/ui/Card';
import { Badge } from '@/components/ui/Badge';
import { useIssueDiscovery } from '@/hooks/useIssues';
import { useIssueDiscovery, useIssues } from '@/hooks/useIssues';
import { DiscoveryCard } from '@/components/issue/discovery/DiscoveryCard';
import { DiscoveryDetail } from '@/components/issue/discovery/DiscoveryDetail';

@@ -29,6 +29,12 @@ export function DiscoveryPage() {
isExporting,
} = useIssueDiscovery({ refetchInterval: 3000 });

// Fetch issues to find related ones when clicking findings
const { issues } = useIssues({
// Don't apply filters to get all issues for matching
filter: undefined
});

if (error) {
return (
<div className="space-y-6">
@@ -167,6 +173,7 @@ export function DiscoveryPage() {
onExport={exportFindings}
onExportSelected={exportSelectedFindings}
isExporting={isExporting}
issues={issues}
/>
)}
</div>

@@ -309,7 +309,7 @@ export const selectQueueSchedulerStatus = (state: QueueSchedulerStore): QueueSch

/** Select all queue items */
export const selectQueueItems = (state: QueueSchedulerStore): QueueItem[] =>
state?.items ?? [];
state?.items ?? EMPTY_ITEMS;

/**
* Select items that are ready to execute (status 'queued' or 'pending').
@@ -347,7 +347,7 @@ export const selectExecutingItems = (state: QueueSchedulerStore): QueueItem[] =>
*/
export const selectSchedulerProgress = (state: QueueSchedulerStore): number => {
if (!state) return 0;
const items = state.items ?? [];
const items = state.items ?? EMPTY_ITEMS;
const total = items.length;
if (total === 0) return 0;
const terminal = items.filter(

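The `EMPTY_ITEMS` substitution above is the stable-empty-reference pattern: returning the same shared array for the empty case keeps selector outputs referentially equal across calls, so memoized consumers don't re-render. A minimal sketch of the idea (types simplified from the store's):

```typescript
// A single frozen array shared by all "no items" results, so two calls
// with empty state return the *same* reference instead of fresh [].
const EMPTY_ITEMS: readonly string[] = Object.freeze([]);

function selectItems(state: { items?: string[] } | null): readonly string[] {
  return state?.items ?? EMPTY_ITEMS;
}
```

Returning a fresh `[]` on each call would make every empty result a new object identity, defeating shallow-equality checks in React selector hooks.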
@@ -341,6 +341,77 @@ export class A2UIWebSocketHandler {
): boolean {
const params = action.parameters ?? {};
const questionId = typeof params.questionId === 'string' ? params.questionId : undefined;

// Handle submit-all first - it uses compositeId instead of questionId
if (action.actionId === 'submit-all') {
const compositeId = typeof params.compositeId === 'string' ? params.compositeId : undefined;
const questionIds = Array.isArray(params.questionIds) ? params.questionIds as string[] : undefined;
if (!compositeId || !questionIds) {
return false;
}

// DEBUG: NDJSON log for submit-all received
console.log(JSON.stringify({
timestamp: new Date().toISOString(),
level: 'DEBUG',
hid: 'H2',
event: 'submit_all_received_in_handleQuestionAction',
compositeId,
questionCount: questionIds.length
}));

// Collect answers for all sub-questions
const answers: QuestionAnswer[] = [];
for (const qId of questionIds) {
const singleSel = this.singleSelectSelections.get(qId);
const multiSel = this.multiSelectSelections.get(qId);
const inputVal = this.inputValues.get(qId);
const otherText = this.inputValues.get(`__other__:${qId}`);

if (singleSel !== undefined) {
const value = singleSel === '__other__' && otherText ? otherText : singleSel;
answers.push({ questionId: qId, value, cancelled: false });
} else if (multiSel !== undefined) {
const values = Array.from(multiSel).map(v =>
v === '__other__' && otherText ? otherText : v
);
answers.push({ questionId: qId, value: values, cancelled: false });
} else if (inputVal !== undefined) {
answers.push({ questionId: qId, value: inputVal, cancelled: false });
} else {
answers.push({ questionId: qId, value: '', cancelled: false });
}

// Cleanup per-question tracking
this.singleSelectSelections.delete(qId);
this.multiSelectSelections.delete(qId);
this.inputValues.delete(qId);
this.inputValues.delete(`__other__:${qId}`);
}

// Call multi-answer callback
let handled = false;
if (this.multiAnswerCallback) {
handled = this.multiAnswerCallback(compositeId, answers);
}
if (!handled) {
// Store for HTTP polling retrieval
this.resolvedMultiAnswers.set(compositeId, { compositeId, answers, timestamp: Date.now() });

console.log(JSON.stringify({
timestamp: new Date().toISOString(),
level: 'DEBUG',
hid: 'H2',
event: 'answer_stored_for_polling',
compositeId,
answerCount: answers.length
}));
}
this.activeSurfaces.delete(compositeId);
return true;
}

// For other actions, questionId is required
if (!questionId) {
return false;
}
@@ -446,60 +517,6 @@ export class A2UIWebSocketHandler {
return true;
}

case 'submit-all': {
// Multi-question composite submit
const compositeId = typeof params.compositeId === 'string' ? params.compositeId : undefined;
const questionIds = Array.isArray(params.questionIds) ? params.questionIds as string[] : undefined;
if (!compositeId || !questionIds) {
return false;
}

// Collect answers for all sub-questions
const answers: QuestionAnswer[] = [];
for (const qId of questionIds) {
const singleSel = this.singleSelectSelections.get(qId);
const multiSel = this.multiSelectSelections.get(qId);
const inputVal = this.inputValues.get(qId);
const otherText = this.inputValues.get(`__other__:${qId}`);

if (singleSel !== undefined) {
// Resolve __other__ to actual text input
const value = singleSel === '__other__' && otherText ? otherText : singleSel;
answers.push({ questionId: qId, value, cancelled: false });
} else if (multiSel !== undefined) {
// Resolve __other__ in multi-select: replace with actual text
const values = Array.from(multiSel).map(v =>
v === '__other__' && otherText ? otherText : v
);
answers.push({ questionId: qId, value: values, cancelled: false });
} else if (inputVal !== undefined) {
answers.push({ questionId: qId, value: inputVal, cancelled: false });
} else {
// No value recorded — include empty
answers.push({ questionId: qId, value: '', cancelled: false });
}

// Cleanup per-question tracking
this.singleSelectSelections.delete(qId);
this.multiSelectSelections.delete(qId);
this.inputValues.delete(qId);
this.inputValues.delete(`__other__:${qId}`);
}

// Call multi-answer callback
let handled = false;
if (this.multiAnswerCallback) {
handled = this.multiAnswerCallback(compositeId, answers);
}
if (!handled) {
// Store for HTTP polling retrieval
this.resolvedMultiAnswers.set(compositeId, { compositeId, answers, timestamp: Date.now() });
}
// Always clean up UI state
this.activeSurfaces.delete(compositeId);
return true;
}

default:
return false;
}

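The answer-collection loop moved in the hunk above resolves `__other__` placeholders to free-text input and falls back through single-select, multi-select, and plain-input state. Stripped of the handler's instance state, the core logic can be sketched as a pure function (the map parameters are stand-ins for the handler's `singleSelectSelections`, `multiSelectSelections`, and `inputValues` fields):

```typescript
// Answer shape mirroring the QuestionAnswer entries built in the handler.
type Answer = { questionId: string; value: string | string[]; cancelled: boolean };

// Collects one answer per question id, resolving the '__other__' sentinel
// to the free-text value stored under the `__other__:<id>` key.
function collectAnswers(
  questionIds: string[],
  single: Map<string, string>,
  multi: Map<string, Set<string>>,
  inputs: Map<string, string>,
): Answer[] {
  return questionIds.map((qId) => {
    const other = inputs.get(`__other__:${qId}`);
    const s = single.get(qId);
    if (s !== undefined) {
      return { questionId: qId, value: s === '__other__' && other ? other : s, cancelled: false };
    }
    const m = multi.get(qId);
    if (m !== undefined) {
      const values = Array.from(m).map((v) => (v === '__other__' && other ? other : v));
      return { questionId: qId, value: values, cancelled: false };
    }
    // Plain text input, or empty when nothing was recorded
    return { questionId: qId, value: inputs.get(qId) ?? '', cancelled: false };
  });
}
```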
@@ -262,7 +262,13 @@ function flattenFindings(perspectiveResults: any[]): any[] {
const allFindings: any[] = [];
for (const result of perspectiveResults) {
if (result.findings) {
allFindings.push(...result.findings);
// Map backend 'priority' to frontend 'severity' for compatibility
const mappedFindings = result.findings.map((f: any) => ({
...f,
severity: f.severity || f.priority || 'medium',
sessionId: f.discovery_id || result.discovery_id
}));
allFindings.push(...mappedFindings);
}
}
return allFindings;

@@ -31,6 +31,7 @@
*/

import { join, dirname } from 'path';
import { existsSync } from 'fs';
import { randomBytes } from 'crypto';
import { fileURLToPath } from 'url';
import type { RouteContext } from './types.js';
@@ -635,10 +636,24 @@ function getTemplatesDir(workflowDir: string): string {

/**
* Get the builtin templates directory path (shipped with CCW)
* Returns null if the directory doesn't exist (e.g., in npm package without src/)
*/
function getBuiltinTemplatesDir(): string {
// Resolve relative to this module's location
return join(__dirname, '..', '..', '..', 'templates', 'orchestrator');
function getBuiltinTemplatesDir(): string | null {
// Try multiple possible locations for builtin templates
const possiblePaths = [
// From dist/core/routes/ -> ccw/templates/orchestrator/
join(__dirname, '..', '..', '..', 'templates', 'orchestrator'),
// From dist/core/routes/ -> ccw/src/templates/orchestrator/ (dev mode)
join(__dirname, '..', '..', '..', 'src', 'templates', 'orchestrator'),
];

for (const path of possiblePaths) {
if (existsSync(path)) {
return path;
}
}

return null;
}

/**
@@ -745,10 +760,10 @@ async function listLocalTemplates(workflowDir: string): Promise<Template[]> {
*/
async function listBuiltinTemplates(): Promise<Template[]> {
const { readdir, readFile } = await import('fs/promises');
const { existsSync } = await import('fs');
const builtinDir = getBuiltinTemplatesDir();

if (!existsSync(builtinDir)) {
// getBuiltinTemplatesDir() returns null if no builtin templates directory exists
if (!builtinDir) {
return [];
}

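The new `getBuiltinTemplatesDir` above implements a first-existing-path fallback: probe an ordered list of candidate directories and return the first that exists, or `null`. The pattern generalizes to this sketch (function name and candidate layout are illustrative; `existsSync` and `join` are Node's real `fs`/`path` APIs):

```typescript
import { existsSync } from 'fs';
import { join } from 'path';

// Returns the first candidate directory (resolved against baseDir) that
// exists on disk, or null when none do — the same shape of fallback used
// for locating builtin templates in dist vs. dev-mode src layouts.
function firstExistingDir(baseDir: string, candidates: string[][]): string | null {
  for (const segments of candidates) {
    const path = join(baseDir, ...segments);
    if (existsSync(path)) {
      return path;
    }
  }
  return null;
}
```

Callers then branch on `null` instead of calling `existsSync` again, which is why `listBuiltinTemplates` in the hunk drops its own existence check.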
@@ -611,8 +611,29 @@ export function handleAnswer(answer: QuestionAnswer): boolean {
* @returns True if answer was processed
*/
export function handleMultiAnswer(compositeId: string, answers: QuestionAnswer[]): boolean {
const handleStartTime = Date.now();

// DEBUG: NDJSON log for handleMultiAnswer start
console.log(JSON.stringify({
timestamp: new Date().toISOString(),
level: 'DEBUG',
hid: 'H3',
event: 'handle_multi_answer_start',
compositeId,
answerCount: answers.length
}));

const pending = getPendingQuestion(compositeId);
if (!pending) {
// DEBUG: NDJSON log for missing pending question
console.log(JSON.stringify({
timestamp: new Date().toISOString(),
level: 'DEBUG',
hid: 'H3',
event: 'handle_multi_answer_no_pending',
compositeId,
elapsedMs: Date.now() - handleStartTime
}));
return false;
}

@@ -625,6 +646,17 @@ export function handleMultiAnswer(compositeId: string, answers: QuestionAnswer[]
});

removePendingQuestion(compositeId);

// DEBUG: NDJSON log for handleMultiAnswer complete
console.log(JSON.stringify({
timestamp: new Date().toISOString(),
level: 'DEBUG',
hid: 'H3',
event: 'handle_multi_answer_complete',
compositeId,
elapsedMs: Date.now() - handleStartTime
}));

return true;
}

@@ -637,9 +669,24 @@ export function handleMultiAnswer(compositeId: string, answers: QuestionAnswer[]
*/
function startAnswerPolling(questionId: string, isComposite: boolean = false): void {
const pollPath = `/api/a2ui/answer?questionId=${encodeURIComponent(questionId)}&composite=${isComposite}`;
const startTime = Date.now();

// DEBUG: NDJSON log for polling start
console.log(JSON.stringify({
timestamp: new Date().toISOString(),
level: 'DEBUG',
hid: 'H1',
event: 'polling_start',
questionId,
isComposite,
port: DASHBOARD_PORT,
firstPollDelayMs: POLL_INTERVAL_MS
}));

console.error(`[A2UI-Poll] Starting polling for questionId=${questionId}, composite=${isComposite}, port=${DASHBOARD_PORT}`);

let pollCount = 0;

const poll = () => {
// Stop if the question was already resolved or timed out
if (!hasPendingQuestion(questionId)) {
@@ -647,6 +694,20 @@ function startAnswerPolling(questionId: string, isComposite: boolean = false): v
return;
}

pollCount++;
const pollStartTime = Date.now();

// DEBUG: NDJSON log for each poll attempt
console.log(JSON.stringify({
timestamp: new Date().toISOString(),
level: 'DEBUG',
hid: 'H1',
event: 'poll_attempt',
questionId,
pollCount,
elapsedMs: pollStartTime - startTime
}));

const req = http.get({ hostname: '127.0.0.1', port: DASHBOARD_PORT, path: pollPath, timeout: 2000 }, (res) => {
let data = '';
res.on('data', (chunk: Buffer) => { data += chunk.toString(); });
@@ -667,6 +728,20 @@ function startAnswerPolling(questionId: string, isComposite: boolean = false): v

console.error(`[A2UI-Poll] Answer received for questionId=${questionId}:`, JSON.stringify(parsed).slice(0, 200));

// DEBUG: NDJSON log for answer received
console.log(JSON.stringify({
timestamp: new Date().toISOString(),
level: 'DEBUG',
hid: 'H1',
event: 'answer_received',
questionId,
pollCount,
elapsedMs: Date.now() - startTime,
pollLatencyMs: Date.now() - pollStartTime,
isComposite,
answerPreview: JSON.stringify(parsed).slice(0, 100)
}));

if (isComposite && Array.isArray(parsed.answers)) {
const ok = handleMultiAnswer(questionId, parsed.answers as QuestionAnswer[]);
console.error(`[A2UI-Poll] handleMultiAnswer result: ${ok}`);

@@ -105,8 +105,8 @@ export const schema: ToolSchema = {
name: 'write_file',
description: `Write content to file. Auto-creates parent directories.

Usage: write_file(path="file.js", content="code here")
Options: backup=true (backup before overwrite), createDirectories=false (disable auto-creation), encoding="utf8"`,
Required: path (string), content (string)
Options: backup=true, createDirectories=false, encoding="utf8"`,
inputSchema: {
type: 'object',
properties: {

43
docs/.gitignore
vendored
Normal file
@@ -0,0 +1,43 @@
|
||||
# Dependencies
|
||||
node_modules/
|
||||
npm-debug.log*
|
||||
yarn-debug.log*
|
||||
yarn-error.log*
|
||||
pnpm-debug.log*
|
||||
|
||||
# Build outputs
|
||||
.vitepress/dist/
|
||||
.vitepress/cache/
|
||||
docs/.vitepress/dist/
|
||||
docs/.vitepress/cache/
|
||||
dist/
|
||||
public/search-index.*.json
|
||||
|
||||
# Editor directories and files
|
||||
.vscode/
|
||||
.idea/
|
||||
*.suo
|
||||
*.ntvs*
|
||||
*.njsproj
|
||||
*.sln
|
||||
*.sw?
|
||||
.DS_Store
|
||||
|
||||
# Temporary files
|
||||
*.tmp
|
||||
*.temp
|
||||
.cache/
|
||||
|
||||
# Logs
|
||||
logs/
|
||||
*.log
|
||||
|
||||
# Environment
|
||||
.env
|
||||
.env.local
|
||||
.env.*.local
|
||||
|
||||
# OS
|
||||
.DS_Store
|
||||
Thumbs.db
|
||||
.ace-tool/
|
||||
**docs/.vitepress/config.ts** · new file (+428)

```ts
import { defineConfig } from 'vitepress'

const repoName = process.env.GITHUB_REPOSITORY?.split('/')[1]
const isUserOrOrgSite = Boolean(repoName && repoName.endsWith('.github.io'))

const base =
  process.env.CCW_DOCS_BASE ||
  (process.env.GITHUB_ACTIONS && repoName && !isUserOrOrgSite ? `/${repoName}/` : '/')

export default defineConfig({
  title: 'CCW Documentation',
  description: 'Claude Code Workspace - Advanced AI-Powered Development Environment',
  lang: 'zh-CN',
  base,

  // Ignore dead links for incomplete docs
  ignoreDeadLinks: true,
  head: [
    ['link', { rel: 'icon', href: '/favicon.svg', type: 'image/svg+xml' }],
    [
      'script',
      {},
      `(() => {
        try {
          const theme = localStorage.getItem('ccw-theme') || 'blue'
          document.documentElement.setAttribute('data-theme', theme)

          const mode = localStorage.getItem('ccw-color-mode') || 'auto'
          const prefersDark = window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches
          const isDark = mode === 'dark' || (mode === 'auto' && prefersDark)
          document.documentElement.classList.toggle('dark', isDark)
        } catch {}
      })()`
    ],
    ['meta', { name: 'theme-color', content: '#3b82f6' }],
    ['meta', { name: 'og:type', content: 'website' }],
    ['meta', { name: 'og:locale', content: 'en_US' }],
    ['meta', { name: 'og:locale:alternate', content: 'zh_CN' }]
  ],

  // Appearance
  appearance: false,

  // Vite build/dev optimizations
  vite: {
    optimizeDeps: {
      include: ['flexsearch']
    },
    build: {
      target: 'es2019',
      cssCodeSplit: true
    }
  },

  // Theme configuration
  themeConfig: {
    logo: '/logo.svg',

    // Right-side table of contents (outline)
    outline: {
      level: [2, 3],
      label: 'On this page'
    },

    // Navigation - organized in the Trellis style
    nav: [
      { text: 'Guide', link: '/guide/ch01-what-is-claude-dms3' },
      { text: 'Commands', link: '/commands/claude/' },
      { text: 'Skills', link: '/skills/' },
      { text: 'Features', link: '/features/spec' },
      {
        text: 'Languages',
        items: [
          { text: '简体中文', link: '/zh/guide/ch01-what-is-claude-dms3' }
        ]
      }
    ],

    // Sidebar - refined navigation structure with second-level headings and grouping
    sidebar: {
      '/guide/': [
        {
          text: '📖 指南',
          collapsible: false,
          items: [
            { text: 'What is Claude_dms3', link: '/guide/ch01-what-is-claude-dms3' },
            { text: 'Getting Started', link: '/guide/ch02-getting-started' },
            { text: 'Core Concepts', link: '/guide/ch03-core-concepts' },
            { text: 'Workflow Basics', link: '/guide/ch04-workflow-basics' },
            { text: 'Advanced Tips', link: '/guide/ch05-advanced-tips' },
            { text: 'Best Practices', link: '/guide/ch06-best-practices' }
          ]
        },
        {
          text: '🚀 快速入口',
          collapsible: true,
          items: [
            { text: 'Installation', link: '/guide/installation' },
            { text: 'First Workflow', link: '/guide/first-workflow' },
            { text: 'CLI Tools', link: '/guide/cli-tools' }
          ]
        }
      ],
      '/commands/': [
        {
          text: '🤖 Claude Commands',
          collapsible: true,
          items: [
            { text: 'Overview', link: '/commands/claude/' },
            { text: 'Core Orchestration', link: '/commands/claude/core-orchestration' },
            { text: 'Workflow', link: '/commands/claude/workflow' },
            { text: 'Session', link: '/commands/claude/session' },
            { text: 'Issue', link: '/commands/claude/issue' },
            { text: 'Memory', link: '/commands/claude/memory' },
            { text: 'CLI', link: '/commands/claude/cli' },
            { text: 'UI Design', link: '/commands/claude/ui-design' }
          ]
        },
        {
          text: '📝 Codex Prompts',
          collapsible: true,
          items: [
            { text: 'Overview', link: '/commands/codex/' },
            { text: 'Prep', link: '/commands/codex/prep' },
            { text: 'Review', link: '/commands/codex/review' }
          ]
        }
      ],
      '/skills/': [
        {
          text: '⚡ Claude Skills',
          collapsible: true,
          items: [
            { text: 'Overview', link: '/skills/claude-index' },
            { text: 'Collaboration', link: '/skills/claude-collaboration' },
            { text: 'Workflow', link: '/skills/claude-workflow' },
            { text: 'Memory', link: '/skills/claude-memory' },
            { text: 'Review', link: '/skills/claude-review' },
            { text: 'Meta', link: '/skills/claude-meta' }
          ]
        },
        {
          text: '🔧 Codex Skills',
          collapsible: true,
          items: [
            { text: 'Overview', link: '/skills/codex-index' },
            { text: 'Lifecycle', link: '/skills/codex-lifecycle' },
            { text: 'Workflow', link: '/skills/codex-workflow' },
            { text: 'Specialized', link: '/skills/codex-specialized' }
          ]
        },
        {
          text: '🎨 Custom Skills',
          collapsible: true,
          items: [
            { text: 'Overview', link: '/skills/custom' },
            { text: 'Core Skills', link: '/skills/core-skills' },
            { text: 'Reference', link: '/skills/reference' }
          ]
        }
      ],
      '/features/': [
        {
          text: '⚙️ Core Features',
          collapsible: false,
          items: [
            { text: 'Spec System', link: '/features/spec' },
            { text: 'Memory System', link: '/features/memory' },
            { text: 'CLI Call', link: '/features/cli' },
            { text: 'Dashboard', link: '/features/dashboard' },
            { text: 'CodexLens', link: '/features/codexlens' }
          ]
        },
        {
          text: '🔌 Settings',
          collapsible: true,
          items: [
            { text: 'API Settings', link: '/features/api-settings' },
            { text: 'System Settings', link: '/features/system-settings' }
          ]
        }
      ],
      '/mcp/': [
        {
          text: '🔗 MCP Tools',
          collapsible: true,
          items: [
            { text: 'Overview', link: '/mcp/tools' }
          ]
        }
      ],
      '/agents/': [
        {
          text: '🤖 Agents',
          collapsible: true,
          items: [
            { text: 'Overview', link: '/agents/' },
            { text: 'Built-in Agents', link: '/agents/builtin' },
            { text: 'Custom Agents', link: '/agents/custom' }
          ]
        }
      ],
      '/workflows/': [
        {
          text: '🔄 Workflow System',
          collapsible: true,
          items: [
            { text: 'Overview', link: '/workflows/' },
            { text: '4-Level System', link: '/workflows/4-level' },
            { text: 'Examples', link: '/workflows/examples' },
            { text: 'Best Practices', link: '/workflows/best-practices' },
            { text: 'Teams', link: '/workflows/teams' }
          ]
        }
      ]
    },

    // Social links
    socialLinks: [
      { icon: 'github', link: 'https://github.com/catlog22/Claude-Code-Workflow' }
    ],

    // Footer
    footer: {
      message: 'Released under the MIT License.',
      copyright: 'Copyright © 2025-present CCW Contributors'
    },

    // Edit link
    editLink: {
      pattern: 'https://github.com/catlog22/Claude-Code-Workflow/edit/main/docs/:path',
      text: 'Edit this page on GitHub'
    },

    // Last updated
    lastUpdated: {
      text: 'Last updated',
      formatOptions: {
        dateStyle: 'full',
        timeStyle: 'short'
      }
    },

    // Search (handled by custom FlexSearch DocSearch component)
    search: false
  },

  // Markdown configuration
  markdown: {
    lineNumbers: true,
    theme: {
      light: 'github-light',
      dark: 'github-dark'
    },
    languages: [
      'bash',
      'shell',
      'powershell',
      'json',
      'yaml',
      'toml',
      'javascript',
      'typescript',
      'jsx',
      'tsx',
      'vue',
      'html',
      'css',
      'markdown',
      'python',
      'ruby',
      'diff',
      'xml',
      'mermaid'
    ],
    config: (md) => {
      // Add markdown-it plugins if needed
    }
  },

  // locales
  locales: {
    root: {
      label: 'English',
      lang: 'en-US'
    },
    zh: {
      label: '简体中文',
      lang: 'zh-CN',
      title: 'CCW 文档',
      description: 'Claude Code Workspace - 高级 AI 驱动开发环境',
      themeConfig: {
        outline: {
          level: [2, 3],
          label: '本页目录'
        },
        nav: [
          { text: '指南', link: '/zh/guide/ch01-what-is-claude-dms3' },
          { text: '命令', link: '/zh/commands/claude/' },
          { text: '技能', link: '/zh/skills/claude-index' },
          { text: '功能', link: '/zh/features/spec' },
          {
            text: '语言',
            items: [
              { text: 'English', link: '/guide/ch01-what-is-claude-dms3' }
            ]
          }
        ],
        sidebar: {
          '/zh/guide/': [
            {
              text: '📖 指南',
              collapsible: false,
              items: [
                { text: 'Claude_dms3 是什么', link: '/zh/guide/ch01-what-is-claude-dms3' },
                { text: '快速开始', link: '/zh/guide/ch02-getting-started' },
                { text: '核心概念', link: '/zh/guide/ch03-core-concepts' },
                { text: '工作流基础', link: '/zh/guide/ch04-workflow-basics' },
                { text: '高级技巧', link: '/zh/guide/ch05-advanced-tips' },
                { text: '最佳实践', link: '/zh/guide/ch06-best-practices' }
              ]
            },
            {
              text: '🚀 快速入口',
              collapsible: true,
              items: [
                { text: '安装', link: '/zh/guide/installation' },
                { text: '第一个工作流', link: '/zh/guide/first-workflow' },
                { text: 'CLI 工具', link: '/zh/guide/cli-tools' }
              ]
            }
          ],
          '/zh/commands/': [
            {
              text: '🤖 Claude 命令',
              collapsible: true,
              items: [
                { text: '概述', link: '/zh/commands/claude/' },
                { text: '核心编排', link: '/zh/commands/claude/core-orchestration' },
                { text: '工作流', link: '/zh/commands/claude/workflow' },
                { text: '会话管理', link: '/zh/commands/claude/session' },
                { text: 'Issue', link: '/zh/commands/claude/issue' },
                { text: 'Memory', link: '/zh/commands/claude/memory' },
                { text: 'CLI', link: '/zh/commands/claude/cli' },
                { text: 'UI 设计', link: '/zh/commands/claude/ui-design' }
              ]
            },
            {
              text: '📝 Codex Prompts',
              collapsible: true,
              items: [
                { text: '概述', link: '/zh/commands/codex/' },
                { text: 'Prep', link: '/zh/commands/codex/prep' },
                { text: 'Review', link: '/zh/commands/codex/review' }
              ]
            }
          ],
          '/zh/skills/': [
            {
              text: '⚡ Claude Skills',
              collapsible: true,
              items: [
                { text: '概述', link: '/zh/skills/claude-index' },
                { text: '协作', link: '/zh/skills/claude-collaboration' },
                { text: '工作流', link: '/zh/skills/claude-workflow' },
                { text: '记忆', link: '/zh/skills/claude-memory' },
                { text: '审查', link: '/zh/skills/claude-review' },
                { text: '元技能', link: '/zh/skills/claude-meta' }
              ]
            },
            {
              text: '🔧 Codex Skills',
              collapsible: true,
              items: [
                { text: '概述', link: '/zh/skills/codex-index' },
                { text: '生命周期', link: '/zh/skills/codex-lifecycle' },
                { text: '工作流', link: '/zh/skills/codex-workflow' },
                { text: '专项', link: '/zh/skills/codex-specialized' }
              ]
            },
            {
              text: '🎨 自定义技能',
              collapsible: true,
              items: [
                { text: '概述', link: '/zh/skills/custom' },
                { text: '核心技能', link: '/zh/skills/core-skills' },
                { text: '参考', link: '/zh/skills/reference' }
              ]
            }
          ],
          '/zh/features/': [
            {
              text: '⚙️ 核心功能',
              collapsible: false,
              items: [
                { text: 'Spec 规范系统', link: '/zh/features/spec' },
                { text: 'Memory 记忆系统', link: '/zh/features/memory' },
                { text: 'CLI 调用', link: '/zh/features/cli' },
                { text: 'Dashboard 面板', link: '/zh/features/dashboard' },
                { text: 'CodexLens', link: '/zh/features/codexlens' }
              ]
            },
            {
              text: '🔌 设置',
              collapsible: true,
              items: [
                { text: 'API 设置', link: '/zh/features/api-settings' },
                { text: '系统设置', link: '/zh/features/system-settings' }
              ]
            }
          ],
          '/zh/workflows/': [
            {
              text: '🔄 工作流系统',
              collapsible: true,
              items: [
                { text: '概述', link: '/zh/workflows/' },
                { text: '四级体系', link: '/zh/workflows/4-level' },
                { text: '最佳实践', link: '/zh/workflows/best-practices' },
                { text: '团队协作', link: '/zh/workflows/teams' }
              ]
            }
          ]
        }
      }
    }
  }
})
```
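The `base` expression in the config above picks the deploy path from the environment: an explicit `CCW_DOCS_BASE` wins, a GitHub Actions build of a project repo gets `/<repo>/`, and everything else (local dev, user/org `*.github.io` sites) gets `/`. A sketch of that logic as a standalone function (the name `resolveDocsBase` and the explicit `env` parameter are mine, for illustration):

```javascript
// Mirror of the base-path logic in docs/.vitepress/config.ts,
// with the environment passed in explicitly so it is testable.
function resolveDocsBase(env) {
  const repoName = env.GITHUB_REPOSITORY?.split('/')[1]
  const isUserOrOrgSite = Boolean(repoName && repoName.endsWith('.github.io'))
  return (
    env.CCW_DOCS_BASE ||
    (env.GITHUB_ACTIONS && repoName && !isUserOrOrgSite ? `/${repoName}/` : '/')
  )
}

console.log(resolveDocsBase({ GITHUB_ACTIONS: '1', GITHUB_REPOSITORY: 'catlog22/Claude-Code-Workflow' }))
// → '/Claude-Code-Workflow/'
console.log(resolveDocsBase({ GITHUB_REPOSITORY: 'user/user.github.io' }))
// → '/'
```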
**docs/.vitepress/search/flexsearch.mjs** · new file (+25)

```js
export const FLEXSEARCH_INDEX_VERSION = 1

export function flexsearchEncode(text) {
  const normalized = String(text ?? '')
    .toLowerCase()
    .normalize('NFKC')

  const tokens = normalized.match(
    /[a-z0-9]+|[\u3040-\u30ff\u3400-\u4dbf\u4e00-\u9fff\uac00-\ud7af]/g
  )

  return tokens ?? []
}

export const FLEXSEARCH_OPTIONS = {
  tokenize: 'forward',
  resolution: 9,
  cache: 100,
  encode: flexsearchEncode
}

export function createFlexSearchIndex(FlexSearch) {
  return new FlexSearch.Index(FLEXSEARCH_OPTIONS)
}
```
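The encoder above makes mixed Latin/CJK search work: Latin/digit runs become lowercase word tokens, while each Japanese, Chinese, or Korean character in the regex's Unicode ranges becomes its own single-character token. A quick illustration (function body reproduced from `flexsearch.mjs` above):

```javascript
// How flexsearchEncode tokenizes mixed Latin/CJK input.
function flexsearchEncode(text) {
  const normalized = String(text ?? '')
    .toLowerCase()
    .normalize('NFKC')
  const tokens = normalized.match(
    /[a-z0-9]+|[\u3040-\u30ff\u3400-\u4dbf\u4e00-\u9fff\uac00-\ud7af]/g
  )
  return tokens ?? []
}

console.log(flexsearchEncode('Hello CCW-Docs 工作流'))
// → [ 'hello', 'ccw', 'docs', '工', '作', '流' ]
```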
**docs/.vitepress/theme/components/AgentOrchestration.vue** · new file (+216)

```vue
<template>
  <div class="agent-orchestration">
    <div class="orchestration-title">🤖 Agent Orchestration</div>

    <div class="agent-flow">
      <!-- CLI Layer -->
      <div class="flow-layer cli-layer">
        <div class="layer-label">CLI Tools</div>
        <div class="agents-row">
          <div class="agent-card cli" @mouseenter="showTooltip('cli-explore')" @mouseleave="hideTooltip">🔍 Explore</div>
          <div class="agent-card cli" @mouseenter="showTooltip('cli-plan')" @mouseleave="hideTooltip">📋 Plan</div>
          <div class="agent-card cli" @mouseenter="showTooltip('cli-exec')" @mouseleave="hideTooltip">⚡ Execute</div>
          <div class="agent-card cli" @mouseenter="showTooltip('cli-discuss')" @mouseleave="hideTooltip">💬 Discuss</div>
        </div>
      </div>

      <!-- Flow Arrow -->
      <div class="flow-arrow">▼</div>

      <!-- Development Layer -->
      <div class="flow-layer dev-layer">
        <div class="layer-label">Development</div>
        <div class="agents-row">
          <div class="agent-card dev" @mouseenter="showTooltip('code-dev')" @mouseleave="hideTooltip">👨‍💻 Code</div>
          <div class="agent-card dev" @mouseenter="showTooltip('tdd')" @mouseleave="hideTooltip">🧪 TDD</div>
          <div class="agent-card dev" @mouseenter="showTooltip('test-fix')" @mouseleave="hideTooltip">🔧 Fix</div>
        </div>
      </div>

      <!-- Flow Arrow -->
      <div class="flow-arrow">▼</div>

      <!-- Output Layer -->
      <div class="flow-layer output-layer">
        <div class="layer-label">Output</div>
        <div class="agents-row">
          <div class="agent-card doc" @mouseenter="showTooltip('doc-gen')" @mouseleave="hideTooltip">📄 Docs</div>
          <div class="agent-card ui" @mouseenter="showTooltip('ui-design')" @mouseleave="hideTooltip">🎨 UI</div>
          <div class="agent-card universal" @mouseenter="showTooltip('universal')" @mouseleave="hideTooltip">🌐 Universal</div>
        </div>
      </div>
    </div>

    <div class="tooltip" v-if="tooltip" :class="{ visible: tooltip }">
      {{ tooltipText }}
    </div>
  </div>
</template>

<script setup>
import { ref } from 'vue'

const tooltip = ref(false)
const tooltipText = ref('')

const tooltips = {
  'cli-explore': 'cli-explore-agent: 代码库探索和语义搜索',
  'cli-plan': 'cli-planning-agent: 任务规划和分解',
  'cli-exec': 'cli-execution-agent: 命令执行和结果处理',
  'cli-discuss': 'cli-discuss-agent: 多视角讨论和共识达成',
  'code-dev': 'code-developer: 代码实现和开发',
  'tdd': 'tdd-developer: 测试驱动开发',
  'test-fix': 'test-fix-agent: 测试修复循环',
  'doc-gen': 'doc-generator: 文档自动生成',
  'ui-design': 'ui-design-agent: UI设计和设计令牌',
  'universal': 'universal-executor: 通用任务执行器'
}

function showTooltip(key) {
  tooltipText.value = tooltips[key] || ''
  tooltip.value = true
}

function hideTooltip() {
  tooltip.value = false
}
</script>

<style scoped>
.agent-orchestration {
  padding: 3rem 2rem;
  background: var(--vp-c-bg-soft);
  border: 1px solid var(--vp-c-divider);
  border-radius: 24px;
  margin: 2rem 0;
  position: relative;
  overflow: hidden;
}

.orchestration-title {
  text-align: center;
  font-size: 1.75rem;
  font-weight: 700;
  color: var(--vp-c-text-1);
  margin-bottom: 2.5rem;
}

.agent-flow {
  display: flex;
  flex-direction: column;
  align-items: center;
  gap: 1rem;
}

.flow-layer {
  width: 100%;
  max-width: 600px;
}

.layer-label {
  font-size: 0.7rem;
  font-weight: 800;
  text-transform: uppercase;
  letter-spacing: 0.15em;
  color: var(--vp-c-text-3);
  margin-bottom: 0.75rem;
  text-align: center;
}

.agents-row {
  display: flex;
  justify-content: center;
  flex-wrap: wrap;
  gap: 1rem;
}

.agent-card {
  padding: 0.8rem 1.5rem;
  border-radius: 12px;
  font-size: 0.9rem;
  font-weight: 600;
  cursor: pointer;
  transition: all 0.3s cubic-bezier(0.4, 0, 0.2, 1);
  position: relative;
  border: 1px solid transparent;
}

.agent-card.cli {
  background: var(--vp-c-brand-soft);
  color: var(--vp-c-brand-1);
  border-color: rgba(59, 130, 246, 0.2);
}

.agent-card.dev {
  background: rgba(16, 185, 129, 0.1);
  color: #10B981;
  border-color: rgba(16, 185, 129, 0.2);
}

.agent-card.doc {
  background: rgba(139, 92, 246, 0.1);
  color: #8B5CF6;
  border-color: rgba(139, 92, 246, 0.2);
}

.agent-card.ui {
  background: rgba(245, 158, 11, 0.1);
  color: #F59E0B;
  border-color: rgba(245, 158, 11, 0.2);
}

.agent-card.universal {
  background: rgba(239, 68, 68, 0.1);
  color: #EF4444;
  border-color: rgba(239, 68, 68, 0.2);
}

.agent-card:hover {
  transform: translateY(-3px);
  box-shadow: 0 8px 24px rgba(0, 0, 0, 0.12);
}

.flow-arrow {
  color: var(--vp-c-divider);
  font-size: 1.25rem;
  animation: bounce 2s infinite;
}

@keyframes bounce {
  0%, 100% { transform: translateY(0); opacity: 0.5; }
  50% { transform: translateY(8px); opacity: 1; }
}

.tooltip {
  position: absolute;
  top: 1rem;
  right: 1rem;
  background: var(--vp-c-bg);
  border: 1px solid var(--vp-c-brand-1);
  color: var(--vp-c-text-1);
  padding: 0.75rem 1.25rem;
  border-radius: 10px;
  font-size: 0.85rem;
  font-weight: 500;
  opacity: 0;
  transition: all 0.3s;
  pointer-events: none;
  box-shadow: var(--vp-shadow-md);
  z-index: 10;
}

.tooltip.visible {
  opacity: 1;
}

@media (max-width: 640px) {
  .agent-orchestration {
    padding: 2rem 1rem;
  }

  .agent-card {
    padding: 0.6rem 1rem;
    font-size: 0.8rem;
  }
}
</style>
```
**docs/.vitepress/theme/components/Breadcrumb.vue** · new file (+123)

```vue
<script setup lang="ts">
import { computed } from 'vue'
import { useData } from 'vitepress'

const { page } = useData()

interface BreadcrumbItem {
  text: string
  link?: string
}

const breadcrumbs = computed<BreadcrumbItem[]>(() => {
  const items: BreadcrumbItem[] = [
    { text: 'Home', link: '/' }
  ]

  const pathSegments = page.value.relativePath.split('/')
  const fileName = pathSegments.pop()?.replace(/\.md$/, '')

  // Build breadcrumb from path
  let currentPath = ''
  for (const segment of pathSegments) {
    currentPath += `${segment}/`
    items.push({
      text: formatTitle(segment),
      link: `/${currentPath}`
    })
  }

  // Add current page
  if (fileName && fileName !== 'index') {
    items.push({
      text: formatTitle(fileName)
    })
  }

  return items
})

const formatTitle = (str: string): string => {
  return str
    .split(/[-_]/)
    .map(word => word.charAt(0).toUpperCase() + word.slice(1))
    .join(' ')
}
</script>

<template>
  <nav v-if="breadcrumbs.length > 1" class="breadcrumb" aria-label="Breadcrumb">
    <ol class="breadcrumb-list">
      <li v-for="(item, index) in breadcrumbs" :key="index" class="breadcrumb-item">
        <router-link v-if="item.link && index < breadcrumbs.length - 1" :to="item.link" class="breadcrumb-link">
          {{ item.text }}
        </router-link>
        <span v-else class="breadcrumb-current">{{ item.text }}</span>
        <span v-if="index < breadcrumbs.length - 1" class="breadcrumb-separator">/</span>
      </li>
    </ol>
  </nav>
</template>

<style scoped>
.breadcrumb {
  padding: 12px 0;
  margin-bottom: 16px;
}

.breadcrumb-list {
  display: flex;
  flex-wrap: wrap;
  align-items: center;
  gap: 4px;
  list-style: none;
  margin: 0;
  padding: 0;
}

.breadcrumb-item {
  display: flex;
  align-items: center;
  gap: 4px;
}

.breadcrumb-link {
  color: var(--vp-c-text-2);
  font-size: var(--vp-font-size-sm);
  text-decoration: none;
  transition: color var(--vp-transition-color);
}

.breadcrumb-link:hover {
  color: var(--vp-c-primary);
  text-decoration: underline;
}

.breadcrumb-current {
  color: var(--vp-c-text-1);
  font-size: var(--vp-font-size-sm);
  font-weight: 500;
}

.breadcrumb-separator {
  color: var(--vp-c-text-3);
  font-size: var(--vp-font-size-sm);
}

@media (max-width: 768px) {
  .breadcrumb {
    padding: 8px 0;
    margin-bottom: 12px;
  }

  .breadcrumb-link,
  .breadcrumb-current,
  .breadcrumb-separator {
    font-size: 12px;
  }

  .breadcrumb-list {
    gap: 2px;
  }
}
</style>
```
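The breadcrumb derivation in `Breadcrumb.vue` above is a pure transformation of the page's `relativePath`: split on `/`, title-case each directory segment, and append the file name unless it is `index`. A sketch of that logic outside Vue's reactivity (the name `buildBreadcrumbs` is mine; `formatTitle` matches the component):

```javascript
// Title-case a path segment: 'first-workflow' → 'First Workflow'.
function formatTitle(str) {
  return str
    .split(/[-_]/)
    .map((word) => word.charAt(0).toUpperCase() + word.slice(1))
    .join(' ')
}

// Derive breadcrumb items from a VitePress relativePath.
function buildBreadcrumbs(relativePath) {
  const items = [{ text: 'Home', link: '/' }]
  const segments = relativePath.split('/')
  const fileName = segments.pop()?.replace(/\.md$/, '')

  let currentPath = ''
  for (const segment of segments) {
    currentPath += `${segment}/`
    items.push({ text: formatTitle(segment), link: `/${currentPath}` })
  }

  if (fileName && fileName !== 'index') {
    items.push({ text: formatTitle(fileName) })
  }
  return items
}

console.log(buildBreadcrumbs('guide/first-workflow.md').map((i) => i.text))
// → [ 'Home', 'Guide', 'First Workflow' ]
```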
**docs/.vitepress/theme/components/ColorSchemeSelector.vue** · new file (+72)

```vue
<script setup lang="ts">
// This component is integrated into ThemeSwitcher
// Kept as separate component for modularity
const emit = defineEmits<{
  (e: 'select', scheme: string): void
}>()

const schemes = [
  { id: 'blue', name: 'Blue', color: '#3b82f6' },
  { id: 'green', name: 'Green', color: '#10b981' },
  { id: 'orange', name: 'Orange', color: '#f59e0b' },
  { id: 'purple', name: 'Purple', color: '#8b5cf6' }
]

const selectScheme = (schemeId: string) => {
  emit('select', schemeId)
}
</script>

<template>
  <div class="color-scheme-selector">
    <button
      v-for="scheme in schemes"
      :key="scheme.id"
      :class="['scheme-option']"
      :style="{ '--scheme-color': scheme.color }"
      :aria-label="scheme.name"
      @click="selectScheme(scheme.id)"
    >
      <span class="scheme-indicator"></span>
      <span class="scheme-name">{{ scheme.name }}</span>
    </button>
  </div>
</template>

<style scoped>
.color-scheme-selector {
  display: flex;
  flex-direction: column;
  gap: 4px;
}

.scheme-option {
  display: flex;
  align-items: center;
  gap: 12px;
  padding: 8px 12px;
  border: 1px solid var(--vp-c-border);
  border-radius: var(--vp-radius-md);
  background: var(--vp-c-bg);
  cursor: pointer;
  transition: all var(--vp-transition-color);
}

.scheme-option:hover {
  border-color: var(--vp-c-primary);
  background: var(--vp-c-bg-soft);
}

.scheme-indicator {
  width: 16px;
  height: 16px;
  border-radius: var(--vp-radius-full);
  background: var(--scheme-color);
  border: 2px solid var(--vp-c-border);
}

.scheme-name {
  font-size: var(--vp-font-size-sm);
  color: var(--vp-c-text-1);
}
</style>
```
**docs/.vitepress/theme/components/CopyCodeButton.vue** · new file (+135)

```vue
<script setup lang="ts">
import { ref } from 'vue'

const props = defineProps<{
  code: string
}>()

const emit = defineEmits<{
  (e: 'copy'): void
}>()

const copied = ref(false)

const copyToClipboard = async () => {
  try {
    await navigator.clipboard.writeText(props.code)
    copied.value = true
    emit('copy')
    setTimeout(() => {
      copied.value = false
    }, 2000)
  } catch (err) {
    console.error('Failed to copy:', err)
  }
}

// Also handle Ctrl+C
const handleKeydown = (e: KeyboardEvent) => {
  if (e.ctrlKey && e.key === 'c') {
    copyToClipboard()
  }
}
</script>

<template>
  <button
    class="copy-code-button"
    :class="{ copied }"
    :aria-label="copied ? 'Copied!' : 'Copy code'"
    :title="copied ? 'Copied!' : 'Copy code (Ctrl+C)'"
    @click="copyToClipboard"
    @keydown="handleKeydown"
  >
    <svg v-if="!copied" class="icon copy-icon" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
      <rect x="9" y="9" width="13" height="13" rx="2" ry="2"/>
      <path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"/>
    </svg>
    <svg v-else class="icon check-icon" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
      <polyline points="20 6 9 17 4 12"/>
    </svg>
    <span v-if="copied" class="copy-feedback">Copied!</span>
  </button>
</template>

<style scoped>
.copy-code-button {
  position: absolute;
  top: 12px;
  right: 12px;
  display: flex;
  align-items: center;
  gap: 6px;
  padding: 6px 12px;
  border: 1px solid var(--vp-c-border);
  border-radius: var(--vp-radius-md);
  background: var(--vp-c-bg);
  color: var(--vp-c-text-2);
  font-size: var(--vp-font-size-sm);
  cursor: pointer;
  opacity: 0;
  transition: all var(--vp-transition-color);
  z-index: 10;
}

.copy-code-button:hover {
  background: var(--vp-c-bg-soft);
  color: var(--vp-c-text-1);
  border-color: var(--vp-c-primary);
}

.copy-code-button.copied {
  background: var(--vp-c-secondary-500);
  color: white;
  border-color: var(--vp-c-secondary-500);
}

.copy-code-button .icon {
  width: 16px;
  height: 16px;
}

.copy-feedback {
  position: absolute;
  top: 100%;
  right: 0;
  margin-top: 4px;
  padding: 4px 8px;
  background: var(--vp-c-bg);
  border: 1px solid var(--vp-c-border);
  border-radius: var(--vp-radius-md);
  font-size: 12px;
  white-space: nowrap;
  animation: fadeIn 0.2s ease;
}

@keyframes fadeIn {
  from {
    opacity: 0;
    transform: translateY(-4px);
  }
  to {
    opacity: 1;
    transform: translateY(0);
  }
}

/* Show button on code block hover */
div[class*='language-']:hover .copy-code-button,
.copy-code-button:focus {
  opacity: 1;
}

@media (max-width: 768px) {
  .copy-code-button {
    opacity: 1;
    top: 8px;
    right: 8px;
    padding: 8px;
  }

  .copy-feedback {
    display: none;
  }
}
</style>
```
133
docs/.vitepress/theme/components/DarkModeToggle.vue
Normal file
133
docs/.vitepress/theme/components/DarkModeToggle.vue
Normal file
@@ -0,0 +1,133 @@
|
||||
<script setup lang="ts">
import { ref, onMounted } from 'vue'

type ColorMode = 'light' | 'dark' | 'auto'

const colorMode = ref<ColorMode>('auto')

const modes: { id: ColorMode; name: string; icon: string }[] = [
  { id: 'light', name: 'Light', icon: 'sun' },
  { id: 'dark', name: 'Dark', icon: 'moon' },
  { id: 'auto', name: 'Auto', icon: 'computer' }
]

const setMode = (mode: ColorMode) => {
  colorMode.value = mode
  localStorage.setItem('ccw-color-mode', mode)
  applyMode(mode)
}

const applyMode = (mode: ColorMode) => {
  const html = document.documentElement

  if (mode === 'auto') {
    const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches
    html.classList.toggle('dark', prefersDark)
  } else {
    html.classList.toggle('dark', mode === 'dark')
  }
}

onMounted(() => {
  const savedMode = localStorage.getItem('ccw-color-mode') as ColorMode
  if (savedMode && modes.find(m => m.id === savedMode)) {
    setMode(savedMode)
  } else {
    setMode('auto')
  }

  // Listen for system theme changes
  window.matchMedia('(prefers-color-scheme: dark)').addEventListener('change', () => {
    if (colorMode.value === 'auto') {
      applyMode('auto')
    }
  })
})
</script>

<template>
  <div class="dark-mode-toggle">
    <button
      v-for="mode in modes"
      :key="mode.id"
      :class="['mode-button', { active: colorMode === mode.id }]"
      :aria-label="`Switch to ${mode.name} mode`"
      :title="mode.name"
      @click="setMode(mode.id)"
    >
      <svg v-if="mode.icon === 'sun'" class="icon" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
        <circle cx="12" cy="12" r="5"/>
        <line x1="12" y1="1" x2="12" y2="3"/>
        <line x1="12" y1="21" x2="12" y2="23"/>
        <line x1="4.22" y1="4.22" x2="5.64" y2="5.64"/>
        <line x1="18.36" y1="18.36" x2="19.78" y2="19.78"/>
        <line x1="1" y1="12" x2="3" y2="12"/>
        <line x1="21" y1="12" x2="23" y2="12"/>
        <line x1="4.22" y1="19.78" x2="5.64" y2="18.36"/>
        <line x1="18.36" y1="5.64" x2="19.78" y2="4.22"/>
      </svg>
      <svg v-else-if="mode.icon === 'moon'" class="icon" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
        <path d="M21 12.79A9 9 0 1 1 11.21 3 7 7 0 0 0 21 12.79z"/>
      </svg>
      <svg v-else class="icon" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
        <rect x="2" y="3" width="20" height="14" rx="2" ry="2"/>
        <line x1="8" y1="21" x2="16" y2="21"/>
        <line x1="12" y1="17" x2="12" y2="21"/>
      </svg>
    </button>
  </div>
</template>

<style scoped>
.dark-mode-toggle {
  display: flex;
  align-items: center;
  gap: 2px;
  padding: 4px;
  background: var(--vp-c-bg-soft);
  border-radius: var(--vp-radius-full);
}

.mode-button {
  position: relative;
  width: 36px;
  height: 32px;
  border: none;
  background: transparent;
  border-radius: var(--vp-radius-md);
  cursor: pointer;
  display: flex;
  align-items: center;
  justify-content: center;
  color: var(--vp-c-text-2);
  transition: all var(--vp-transition-color);
}

.mode-button:hover {
  color: var(--vp-c-text-1);
  background: var(--vp-c-bg-mute);
}

.mode-button.active {
  background: var(--vp-c-bg);
  color: var(--vp-c-primary);
  box-shadow: var(--vp-shadow-sm);
}

.mode-button .icon {
  width: 18px;
  height: 18px;
}

@media (max-width: 768px) {
  .mode-button {
    width: 40px;
    height: 36px;
  }

  .mode-button .icon {
    width: 20px;
    height: 20px;
  }
}
</style>
492
docs/.vitepress/theme/components/DocSearch.vue
Normal file
@@ -0,0 +1,492 @@
<script setup lang="ts">
import { computed, nextTick, onBeforeUnmount, onMounted, ref, shallowRef, watch } from 'vue'
import { useData, useRouter, withBase } from 'vitepress'

type LocaleKey = 'root' | 'zh'

interface SearchDoc {
  id: number
  title: string
  url: string
  excerpt?: string
}

interface SearchIndexPayload {
  version: number
  locale: LocaleKey
  index: Record<string, string>
  docs: SearchDoc[]
}

const { page } = useData()
const router = useRouter()

const localeKey = computed<LocaleKey>(() =>
  page.value.relativePath.startsWith('zh/') ? 'zh' : 'root'
)

const isOpen = ref(false)
const isLoading = ref(false)
const error = ref<string | null>(null)

const query = ref('')
const results = ref<SearchDoc[]>([])
const activeIndex = ref(0)

const inputRef = ref<HTMLInputElement | null>(null)
const buttonRef = ref<HTMLButtonElement | null>(null)

const modifierKey = ref('Ctrl')

const loadedIndex = shallowRef<any | null>(null)
const loadedDocsById = shallowRef<Map<number, SearchDoc> | null>(null)
const cache = new Map<LocaleKey, { index: any; docsById: Map<number, SearchDoc> }>()

const placeholder = computed(() => (localeKey.value === 'zh' ? '搜索文档' : 'Search docs'))
const cancelText = computed(() => (localeKey.value === 'zh' ? '关闭' : 'Close'))
const loadingText = computed(() => (localeKey.value === 'zh' ? '正在加载索引…' : 'Loading index…'))
const hintText = computed(() => (localeKey.value === 'zh' ? '输入关键词开始搜索' : 'Type to start searching'))
const noResultsText = computed(() => (localeKey.value === 'zh' ? '未找到结果' : 'No results'))

function isEditableTarget(target: EventTarget | null) {
  const el = target as HTMLElement | null
  if (!el) return false
  const tag = el.tagName?.toLowerCase()
  if (!tag) return false
  return tag === 'input' || tag === 'textarea' || tag === 'select' || el.isContentEditable
}

async function loadLocaleIndex(key: LocaleKey) {
  const cached = cache.get(key)
  if (cached) {
    loadedIndex.value = cached.index
    loadedDocsById.value = cached.docsById
    return
  }

  isLoading.value = true
  error.value = null

  try {
    const res = await fetch(withBase(`/search-index.${key}.json`))
    if (!res.ok) throw new Error(`HTTP ${res.status}`)
    const payload = (await res.json()) as SearchIndexPayload

    if (!payload || payload.locale !== key) throw new Error('Invalid index payload')

    const [{ default: FlexSearch }, { createFlexSearchIndex }] = await Promise.all([
      import('flexsearch'),
      import('../../search/flexsearch.mjs')
    ])

    const index = createFlexSearchIndex(FlexSearch)
    await Promise.all(Object.entries(payload.index).map(([k, v]) => index.import(k, v)))

    const docsById = new Map<number, SearchDoc>()
    for (const doc of payload.docs) docsById.set(doc.id, doc)

    cache.set(key, { index, docsById })
    loadedIndex.value = index
    loadedDocsById.value = docsById
  } catch (e) {
    error.value = e instanceof Error ? e.message : String(e)
    loadedIndex.value = null
    loadedDocsById.value = null
  } finally {
    isLoading.value = false
  }
}

async function ensureReady() {
  await loadLocaleIndex(localeKey.value)
}

let searchTimer: number | undefined
async function runSearch() {
  const q = query.value.trim()
  if (!q) {
    results.value = []
    activeIndex.value = 0
    return
  }

  await ensureReady()
  if (!loadedIndex.value || !loadedDocsById.value) {
    results.value = []
    activeIndex.value = 0
    return
  }

  const ids = loadedIndex.value.search(q, 12) as number[]
  const docsById = loadedDocsById.value
  results.value = ids
    .map((id) => docsById.get(id))
    .filter((d): d is SearchDoc => Boolean(d))
  activeIndex.value = 0
}

function navigate(url: string) {
  router.go(withBase(url))
  close()
}

async function open() {
  isOpen.value = true
  document.body.style.overflow = 'hidden'
  await ensureReady()
  await nextTick()
  inputRef.value?.focus()
}

function close() {
  isOpen.value = false
  query.value = ''
  results.value = []
  activeIndex.value = 0
  error.value = null
  document.body.style.overflow = ''
  buttonRef.value?.focus()
}

function onInputKeydown(e: KeyboardEvent) {
  if (e.key === 'Escape') {
    e.preventDefault()
    close()
    return
  }

  if (e.key === 'ArrowDown') {
    e.preventDefault()
    if (results.value.length > 0) {
      activeIndex.value = (activeIndex.value + 1) % results.value.length
    }
    return
  }

  if (e.key === 'ArrowUp') {
    e.preventDefault()
    if (results.value.length > 0) {
      activeIndex.value =
        (activeIndex.value - 1 + results.value.length) % results.value.length
    }
    return
  }

  if (e.key === 'Enter') {
    const hit = results.value[activeIndex.value]
    if (hit) {
      e.preventDefault()
      navigate(hit.url)
    }
  }
}

let onGlobalKeydown: ((e: KeyboardEvent) => void) | null = null

onMounted(() => {
  modifierKey.value = /mac/i.test(navigator.platform) ? '⌘' : 'Ctrl'

  onGlobalKeydown = (e: KeyboardEvent) => {
    if ((e.ctrlKey || e.metaKey) && e.key.toLowerCase() === 'k') {
      e.preventDefault()
      if (!isOpen.value) open()
      return
    }

    if (e.key === '/' && !e.ctrlKey && !e.metaKey && !e.altKey) {
      if (isEditableTarget(e.target)) return
      e.preventDefault()
      if (!isOpen.value) open()
    }
  }

  window.addEventListener('keydown', onGlobalKeydown)
})

onBeforeUnmount(() => {
  if (onGlobalKeydown) window.removeEventListener('keydown', onGlobalKeydown)
})

watch(query, () => {
  if (searchTimer) window.clearTimeout(searchTimer)
  searchTimer = window.setTimeout(() => {
    runSearch()
  }, 60)
})

watch(
  () => page.value.relativePath,
  () => {
    if (isOpen.value) close()
  }
)
</script>

<template>
  <div class="DocSearch">
    <button
      ref="buttonRef"
      type="button"
      class="DocSearch-Button"
      :aria-label="placeholder"
      @click="open"
    >
      <svg class="DocSearch-Button-Icon" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
        <circle cx="11" cy="11" r="7" />
        <line x1="21" y1="21" x2="16.65" y2="16.65" />
      </svg>
      <span class="DocSearch-Button-Placeholder">{{ placeholder }}</span>
      <span class="DocSearch-Button-Keys" aria-hidden="true">
        <kbd class="DocSearch-Button-Key">{{ modifierKey }}</kbd>
        <kbd class="DocSearch-Button-Key">K</kbd>
      </span>
    </button>

    <Teleport to="body">
      <div v-if="isOpen" class="DocSearch-Modal">
        <div class="DocSearch-Overlay" @click="close" />

        <div class="DocSearch-Container" role="dialog" aria-modal="true">
          <div class="DocSearch-SearchBar">
            <svg class="DocSearch-SearchIcon" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
              <circle cx="11" cy="11" r="7" />
              <line x1="21" y1="21" x2="16.65" y2="16.65" />
            </svg>
            <input
              ref="inputRef"
              v-model="query"
              class="DocSearch-Input"
              type="search"
              :placeholder="placeholder"
              :aria-label="placeholder"
              @keydown="onInputKeydown"
            />
            <button type="button" class="DocSearch-Cancel" @click="close">{{ cancelText }}</button>
          </div>

          <div class="DocSearch-Body">
            <div v-if="isLoading" class="DocSearch-Status">{{ loadingText }}</div>
            <div v-else-if="error" class="DocSearch-Status DocSearch-Status--error">{{ error }}</div>
            <div v-else-if="query.trim().length === 0" class="DocSearch-Status">{{ hintText }}</div>
            <div v-else>
              <ul v-if="results.length > 0" class="DocSearch-Results">
                <li
                  v-for="(item, i) in results"
                  :key="item.id"
                  :class="['DocSearch-Result', { active: i === activeIndex }]"
                  @mousemove="activeIndex = i"
                >
                  <a
                    class="DocSearch-Result-Link"
                    :href="withBase(item.url)"
                    @click.prevent="navigate(item.url)"
                  >
                    <div class="DocSearch-Result-Title">{{ item.title }}</div>
                    <div v-if="item.excerpt" class="DocSearch-Result-Excerpt">
                      {{ item.excerpt }}
                    </div>
                  </a>
                </li>
              </ul>
              <div v-else class="DocSearch-Status">{{ noResultsText }}</div>
            </div>
          </div>
        </div>
      </div>
    </Teleport>
  </div>
</template>

<style scoped>
.DocSearch {
  display: flex;
  align-items: center;
}

.DocSearch-Button {
  display: inline-flex;
  align-items: center;
  gap: 10px;
  height: 36px;
  padding: 0 12px;
  border: 1px solid var(--vp-c-border);
  border-radius: var(--vp-radius-full);
  background: var(--vp-c-bg-soft);
  color: var(--vp-c-text-2);
  cursor: pointer;
  transition: all var(--vp-transition-color);
}

.DocSearch-Button:hover {
  border-color: var(--vp-c-primary);
  color: var(--vp-c-text-1);
  background: var(--vp-c-bg);
}

.DocSearch-Button-Icon {
  width: 16px;
  height: 16px;
  flex: 0 0 auto;
}

.DocSearch-Button-Placeholder {
  font-size: var(--vp-font-size-sm);
  white-space: nowrap;
}

.DocSearch-Button-Keys {
  display: inline-flex;
  gap: 4px;
  margin-left: 8px;
}

.DocSearch-Button-Key {
  font-family: var(--vp-font-family-mono);
  font-size: 12px;
  line-height: 1;
  padding: 4px 6px;
  border-radius: var(--vp-radius-sm);
  border: 1px solid var(--vp-c-divider);
  background: var(--vp-c-bg);
  color: var(--vp-c-text-2);
}

.DocSearch-Modal {
  position: fixed;
  inset: 0;
  z-index: var(--vp-z-index-modal);
}

.DocSearch-Overlay {
  position: absolute;
  inset: 0;
  background: rgba(0, 0, 0, 0.5);
}

.DocSearch-Container {
  position: relative;
  width: min(720px, calc(100vw - 32px));
  margin: 10vh auto 0;
  background: var(--vp-c-bg);
  border: 1px solid var(--vp-c-divider);
  border-radius: var(--vp-radius-xl);
  box-shadow: var(--vp-shadow-xl);
  overflow: hidden;
}

.DocSearch-SearchBar {
  display: flex;
  align-items: center;
  gap: 12px;
  padding: 12px 14px;
  border-bottom: 1px solid var(--vp-c-divider);
  background: var(--vp-c-bg-soft);
}

.DocSearch-SearchIcon {
  width: 18px;
  height: 18px;
  color: var(--vp-c-text-3);
  flex: 0 0 auto;
}

.DocSearch-Input {
  flex: 1;
  height: 40px;
  padding: 0 12px;
  border: 1px solid var(--vp-c-border);
  border-radius: var(--vp-radius-md);
  background: var(--vp-c-bg);
  color: var(--vp-c-text-1);
  font-size: 14px;
}

.DocSearch-Input:focus-visible {
  outline: 2px solid var(--vp-c-primary);
  outline-offset: 2px;
}

.DocSearch-Cancel {
  height: 40px;
  padding: 0 12px;
  border: 1px solid var(--vp-c-border);
  border-radius: var(--vp-radius-md);
  background: var(--vp-c-bg);
  color: var(--vp-c-text-2);
  cursor: pointer;
  transition: all var(--vp-transition-color);
}

.DocSearch-Cancel:hover {
  border-color: var(--vp-c-primary);
  color: var(--vp-c-text-1);
  background: var(--vp-c-bg-soft);
}

.DocSearch-Body {
  max-height: 60vh;
  overflow: auto;
}

.DocSearch-Status {
  padding: 18px 16px;
  color: var(--vp-c-text-2);
  font-size: var(--vp-font-size-sm);
}

.DocSearch-Status--error {
  color: #ef4444;
}

.DocSearch-Results {
  list-style: none;
  margin: 0;
  padding: 8px;
}

.DocSearch-Result {
  border-radius: var(--vp-radius-lg);
  transition: background var(--vp-transition-color);
}

.DocSearch-Result.active,
.DocSearch-Result:hover {
  background: var(--vp-c-bg-soft);
}

.DocSearch-Result-Link {
  display: block;
  padding: 12px 12px;
  text-decoration: none;
}

.DocSearch-Result-Title {
  font-weight: 600;
  color: var(--vp-c-text-1);
  font-size: 14px;
}

.DocSearch-Result-Excerpt {
  margin-top: 4px;
  color: var(--vp-c-text-3);
  font-size: 12px;
  line-height: 1.4;
}

@media (max-width: 768px) {
  .DocSearch-Button-Placeholder,
  .DocSearch-Button-Keys {
    display: none;
  }

  .DocSearch-Button {
    width: 40px;
    justify-content: center;
    padding: 0;
  }

  .DocSearch-Container {
    margin-top: 6vh;
  }
}
</style>
214
docs/.vitepress/theme/components/HeroAnimation.vue
Normal file
@@ -0,0 +1,214 @@
<template>
  <div class="hero-animation-container" :class="{ 'is-visible': isVisible }">
    <div class="glow-bg"></div>
    <svg viewBox="0 0 400 320" class="hero-svg" preserveAspectRatio="xMidYMid meet">
      <defs>
        <filter id="glow" x="-20%" y="-20%" width="140%" height="140%">
          <feGaussianBlur stdDeviation="3.5" result="coloredBlur"/>
          <feMerge>
            <feMergeNode in="coloredBlur"/>
            <feMergeNode in="SourceGraphic"/>
          </feMerge>
        </filter>
        <linearGradient id="pathGrad" x1="0%" y1="0%" x2="100%" y2="0%">
          <stop offset="0%" stop-color="var(--vp-c-brand-1)" stop-opacity="0" />
          <stop offset="50%" stop-color="var(--vp-c-brand-1)" stop-opacity="0.5" />
          <stop offset="100%" stop-color="var(--vp-c-brand-1)" stop-opacity="0" />
        </linearGradient>
      </defs>

      <!-- Connection Lines -->
      <g class="data-paths">
        <path v-for="(path, i) in paths" :key="'path-'+i" :d="path" class="connection-path" />
        <circle v-for="(path, i) in paths" :key="'dot-'+i" r="2" class="data-pulse">
          <animateMotion :dur="2 + i * 0.4 + 's'" repeatCount="indefinite" :path="path" />
        </circle>
      </g>

      <!-- Orbit Rings -->
      <g class="orbit-rings">
        <circle cx="200" cy="160" r="130" class="orbit-ring ring-outer" />
        <circle cx="200" cy="160" r="95" class="orbit-ring ring-inner" />
      </g>

      <!-- Agent Nodes -->
      <g v-for="(agent, i) in agents" :key="'agent-'+i" class="agent-node" :style="{ '--delay': i * 0.4 + 's' }">
        <g class="agent-group" :style="{ transform: `translate(${agent.x}px, ${agent.y}px)` }">
          <circle r="8" :fill="agent.color" class="agent-circle" filter="url(#glow)" />
          <circle r="12" :stroke="agent.color" fill="none" class="agent-halo" />
          <text y="22" text-anchor="middle" class="agent-label">{{ agent.name }}</text>
        </g>
      </g>

      <!-- Central Core -->
      <g class="central-core" transform="translate(200, 160)">
        <circle r="40" class="core-bg" />
        <circle r="32" fill="var(--vp-c-brand-1)" filter="url(#glow)" class="core-inner" />
        <text y="8" text-anchor="middle" class="core-text">CCW</text>

        <!-- Scanning Effect -->
        <path d="M-32 0 A32 32 0 0 1 32 0" fill="none" stroke="white" stroke-width="2" stroke-linecap="round" class="core-scanner" />
      </g>
    </svg>
  </div>
</template>

<script setup>
import { ref, onMounted } from 'vue'

const isVisible = ref(false)

onMounted(() => {
  setTimeout(() => {
    isVisible.value = true
  }, 100)
})

const agents = [
  { name: 'Analyze', x: 200, y: 35, color: '#3B82F6' },
  { name: 'Plan', x: 315, y: 110, color: '#10B981' },
  { name: 'Code', x: 285, y: 245, color: '#8B5CF6' },
  { name: 'Test', x: 115, y: 245, color: '#F59E0B' },
  { name: 'Review', x: 85, y: 110, color: '#EF4444' }
]

const paths = [
  'M200,160 L200,35',
  'M200,160 L315,110',
  'M200,160 L285,245',
  'M200,160 L115,245',
  'M200,160 L85,110',
  'M200,35 Q260,35 315,110',
  'M315,110 Q315,180 285,245',
  'M285,245 Q200,285 115,245',
  'M115,245 Q85,180 85,110',
  'M85,110 Q85,35 200,35'
]
</script>

<style scoped>
.hero-animation-container {
  width: 100%;
  max-width: 480px;
  position: relative;
  opacity: 0;
  transform: scale(0.95);
  transition: all 1s cubic-bezier(0.2, 0.8, 0.2, 1);
}

.hero-animation-container.is-visible {
  opacity: 1;
  transform: scale(1);
}

.glow-bg {
  position: absolute;
  top: 50%;
  left: 50%;
  width: 150px;
  height: 150px;
  background: var(--vp-c-brand-1);
  filter: blur(80px);
  opacity: 0.15;
  transform: translate(-50%, -50%);
  pointer-events: none;
}

.hero-svg {
  width: 100%;
  height: auto;
  overflow: visible;
}

.orbit-ring {
  fill: none;
  stroke: var(--vp-c-brand-1);
  stroke-width: 0.5;
  opacity: 0.1;
  stroke-dasharray: 4 4;
}

.ring-outer { animation: rotate 60s linear infinite; transform-origin: 200px 160px; }
.ring-inner { animation: rotate 40s linear infinite reverse; transform-origin: 200px 160px; }

.connection-path {
  fill: none;
  stroke: var(--vp-c-brand-1);
  stroke-width: 0.8;
  opacity: 0.05;
}

.data-pulse {
  fill: var(--vp-c-brand-2);
  filter: drop-shadow(0 0 4px var(--vp-c-brand-2));
}

.agent-group {
  transition: all 0.3s ease;
}

.agent-circle {
  transition: all 0.3s ease;
}

.agent-halo {
  opacity: 0.2;
  animation: agent-pulse 2s ease-in-out infinite;
  transform-origin: center;
}

.agent-label {
  font-size: 10px;
  fill: var(--vp-c-text-2);
  font-weight: 600;
  text-transform: uppercase;
  letter-spacing: 0.05em;
  opacity: 0.7;
}

.core-bg {
  fill: var(--vp-c-bg-soft);
  stroke: var(--vp-c-brand-soft);
  stroke-width: 1;
}

.core-inner {
  opacity: 0.8;
}

.core-text {
  font-size: 14px;
  font-weight: 800;
  fill: white;
  letter-spacing: 0.05em;
}

.core-scanner {
  animation: rotate 3s linear infinite;
  opacity: 0.6;
}

@keyframes rotate {
  from { transform: rotate(0deg); }
  to { transform: rotate(360deg); }
}

@keyframes agent-pulse {
  0%, 100% { transform: scale(1); opacity: 0.2; }
  50% { transform: scale(1.3); opacity: 0.1; }
}

.agent-node {
  animation: agent-float 4s ease-in-out infinite;
  animation-delay: var(--delay);
}

@keyframes agent-float {
  0%, 100% { transform: translateY(0); }
  50% { transform: translateY(-5px); }
}

.hero-animation-container:hover .agent-circle {
  filter: blur(2px) brightness(1.5);
}
</style>
184
docs/.vitepress/theme/components/PageToc.vue
Normal file
@@ -0,0 +1,184 @@
<script setup lang="ts">
import { computed, onBeforeUnmount, onMounted, ref } from 'vue'
import { useData } from 'vitepress'

const { page } = useData()

interface TocItem {
  id: string
  text: string
  level: number
  children?: TocItem[]
}

const toc = computed<TocItem[]>(() => {
  return page.value.headers || []
})

const activeId = ref('')

const onItemClick = (id: string) => {
  activeId.value = id
  const element = document.getElementById(id)
  if (element) {
    element.scrollIntoView({ behavior: 'smooth', block: 'start' })
  }
}

let observer: IntersectionObserver | null = null

onMounted(() => {
  // Update active heading on scroll
  observer = new IntersectionObserver(
    (entries) => {
      entries.forEach((entry) => {
        if (entry.isIntersecting) {
          activeId.value = entry.target.id
        }
      })
    },
    { rootMargin: '-80px 0px -80% 0px' }
  )

  page.value.headers.forEach((header) => {
    const element = document.getElementById(header.id)
    if (element) {
      observer?.observe(element)
    }
  })
})

// Vue ignores a cleanup function returned from onMounted, so the observer
// is disconnected explicitly in onBeforeUnmount.
onBeforeUnmount(() => {
  observer?.disconnect()
})
</script>

<template>
  <nav v-if="toc.length > 0" class="page-toc" aria-label="Page navigation">
    <div class="toc-header">On this page</div>
    <ul class="toc-list">
      <li
        v-for="item in toc"
        :key="item.id"
        :class="['toc-item', `toc-level-${item.level}`, { active: item.id === activeId }]"
      >
        <a
          :href="`#${item.id}`"
          class="toc-link"
          @click.prevent="onItemClick(item.id)"
        >
          {{ item.text }}
        </a>
        <ul v-if="item.children && item.children.length > 0" class="toc-list toc-sublist">
          <li
            v-for="child in item.children"
            :key="child.id"
            :class="['toc-item', `toc-level-${child.level}`, { active: child.id === activeId }]"
          >
            <a
              :href="`#${child.id}`"
              class="toc-link"
              @click.prevent="onItemClick(child.id)"
            >
              {{ child.text }}
            </a>
          </li>
        </ul>
      </li>
    </ul>
  </nav>
</template>

<style scoped>
.page-toc {
  position: sticky;
  top: calc(var(--vp-nav-height) + 24px);
  max-height: calc(100vh - var(--vp-nav-height) - 48px);
  overflow-y: auto;
  padding: 16px;
  background: var(--vp-c-bg-soft);
  border-radius: var(--vp-radius-lg);
  border: 1px solid var(--vp-c-divider);
}

.toc-header {
  font-size: var(--vp-font-size-sm);
  font-weight: 600;
  color: var(--vp-c-text-1);
  margin-bottom: 12px;
  padding-bottom: 8px;
  border-bottom: 1px solid var(--vp-c-divider);
  text-transform: uppercase;
  letter-spacing: 0.05em;
}

.toc-list {
  list-style: none;
  margin: 0;
  padding: 0;
}

.toc-sublist {
  margin-left: 12px;
  padding-left: 12px;
  border-left: 1px solid var(--vp-c-divider);
}

.toc-item {
  margin: 4px 0;
}

.toc-link {
  display: block;
  padding: 4px 8px;
  color: var(--vp-c-text-2);
  font-size: var(--vp-font-size-sm);
  text-decoration: none;
  border-left: 2px solid transparent;
  transition: all var(--vp-transition-color);
  border-radius: 0 var(--vp-radius-sm) var(--vp-radius-sm) 0;
}

.toc-link:hover {
  color: var(--vp-c-primary);
  background: var(--vp-c-bg-soft);
}

.toc-item.active > .toc-link {
  color: var(--vp-c-primary);
  border-left-color: var(--vp-c-primary);
  background: var(--vp-c-bg-mute);
  font-weight: 500;
}

.toc-level-3 .toc-link {
  font-size: 13px;
  padding-left: 16px;
}

.toc-level-4 .toc-link {
  font-size: 12px;
  padding-left: 20px;
}

/* Hide on mobile */
@media (max-width: 1024px) {
  .page-toc {
    display: none;
  }
}

/* Scrollbar styling for TOC */
.page-toc::-webkit-scrollbar {
  width: 4px;
}

.page-toc::-webkit-scrollbar-track {
  background: transparent;
}

.page-toc::-webkit-scrollbar-thumb {
  background: var(--vp-c-divider);
  border-radius: var(--vp-radius-full);
}

.page-toc::-webkit-scrollbar-thumb:hover {
  background: var(--vp-c-text-3);
}
</style>
1090
docs/.vitepress/theme/components/ProfessionalHome.vue
Normal file
File diff suppressed because it is too large
42
docs/.vitepress/theme/components/SkipLink.vue
Normal file
@@ -0,0 +1,42 @@
<script setup lang="ts">
/**
 * Skip Link Component
 * Accessibility feature allowing keyboard users to skip to main content
 */
</script>

<template>
  <a href="#VPContent" class="skip-link">
    Skip to main content
  </a>
</template>

<style scoped>
.skip-link {
  position: absolute;
  top: -100px;
  left: 0;
  padding: 8px 16px;
  background: var(--vp-c-primary);
  color: white;
  text-decoration: none;
  z-index: 9999;
  transition: top 0.3s ease;
  font-size: 14px;
  font-weight: 500;
}

.skip-link:focus {
  top: 0;
}

.skip-link:hover {
  background: var(--vp-c-primary-600);
}

@media print {
  .skip-link {
    display: none;
  }
}
</style>
108
docs/.vitepress/theme/components/ThemeSwitcher.vue
Normal file
@@ -0,0 +1,108 @@
<script setup lang="ts">
import { ref, onMounted } from 'vue'

const currentTheme = ref<'blue' | 'green' | 'orange' | 'purple'>('blue')

const themes = [
  { id: 'blue', name: 'Blue', color: '#3b82f6' },
  { id: 'green', name: 'Green', color: '#10b981' },
  { id: 'orange', name: 'Orange', color: '#f59e0b' },
  { id: 'purple', name: 'Purple', color: '#8b5cf6' }
]

const setTheme = (themeId: typeof currentTheme.value) => {
  currentTheme.value = themeId
  document.documentElement.setAttribute('data-theme', themeId)
  localStorage.setItem('ccw-theme', themeId)
}

onMounted(() => {
  const savedTheme = localStorage.getItem('ccw-theme') as typeof currentTheme.value
  if (savedTheme && themes.find(t => t.id === savedTheme)) {
    setTheme(savedTheme)
  }
})
</script>

<template>
  <div class="theme-switcher">
    <div class="theme-buttons">
      <button
        v-for="theme in themes"
        :key="theme.id"
        :class="['theme-button', { active: currentTheme === theme.id }]"
        :style="{ '--theme-color': theme.color }"
        :aria-label="`Switch to ${theme.name} theme`"
        :title="theme.name"
        @click="setTheme(theme.id)"
      >
        <span class="theme-dot"></span>
      </button>
    </div>
  </div>
</template>

<style scoped>
.theme-switcher {
  display: flex;
  align-items: center;
  gap: 8px;
}

.theme-buttons {
  display: flex;
  gap: 4px;
  padding: 4px;
  background: var(--vp-c-bg-soft);
  border-radius: var(--vp-radius-full);
}

.theme-button {
  position: relative;
  width: 32px;
  height: 32px;
  border: none;
  background: transparent;
  border-radius: var(--vp-radius-full);
  cursor: pointer;
  display: flex;
  align-items: center;
  justify-content: center;
  transition: all var(--vp-transition-color);
}

.theme-button:hover {
  background: var(--vp-c-bg-mute);
}

.theme-button.active {
  background: var(--vp-c-bg);
  box-shadow: var(--vp-shadow-sm);
}

.theme-dot {
  width: 16px;
  height: 16px;
  border-radius: var(--vp-radius-full);
  background: var(--theme-color);
  border: 2px solid transparent;
  transition: all var(--vp-transition-color);
}

.theme-button.active .theme-dot {
  border-color: var(--vp-c-text-1);
  transform: scale(1.1);
}

@media (max-width: 768px) {
  .theme-button {
    width: 36px;
    height: 36px;
  }

  .theme-dot {
    width: 20px;
    height: 20px;
  }
}
</style>
202 docs/.vitepress/theme/components/WorkflowAnimation.vue Normal file
@@ -0,0 +1,202 @@
<template>
  <div class="workflow-animation">
    <div class="workflow-container">
      <div class="workflow-node coordinator">
        <div class="node-icon">🎯</div>
        <div class="node-label">Coordinator</div>
      </div>

      <div class="workflow-paths">
        <svg class="path-svg" viewBox="0 0 400 200">
          <!-- Spec Path -->
          <path class="flow-path path-spec" d="M50,100 Q150,20 250,50" fill="none" stroke="#3B82F6" stroke-width="2"/>
          <circle class="flow-dot dot-spec" r="6" fill="#3B82F6">
            <animateMotion dur="3s" repeatCount="indefinite" path="M50,100 Q150,20 250,50"/>
          </circle>

          <!-- Impl Path -->
          <path class="flow-path path-impl" d="M50,100 Q150,100 250,100" fill="none" stroke="#10B981" stroke-width="2"/>
          <circle class="flow-dot dot-impl" r="6" fill="#10B981">
            <animateMotion dur="2.5s" repeatCount="indefinite" path="M50,100 Q150,100 250,100"/>
          </circle>

          <!-- Test Path -->
          <path class="flow-path path-test" d="M50,100 Q150,180 250,150" fill="none" stroke="#F59E0B" stroke-width="2"/>
          <circle class="flow-dot dot-test" r="6" fill="#F59E0B">
            <animateMotion dur="3.5s" repeatCount="indefinite" path="M50,100 Q150,180 250,150"/>
          </circle>
        </svg>
      </div>

      <div class="workflow-nodes">
        <div class="workflow-node analyst">
          <div class="node-icon">📊</div>
          <div class="node-label">Analyst</div>
        </div>
        <div class="workflow-node writer">
          <div class="node-icon">✍️</div>
          <div class="node-label">Writer</div>
        </div>
        <div class="workflow-node executor">
          <div class="node-icon">⚡</div>
          <div class="node-label">Executor</div>
        </div>
        <div class="workflow-node tester">
          <div class="node-icon">🧪</div>
          <div class="node-label">Tester</div>
        </div>
      </div>
    </div>

    <div class="workflow-legend">
      <div class="legend-item"><span class="dot spec"></span> Spec Phase</div>
      <div class="legend-item"><span class="dot impl"></span> Impl Phase</div>
      <div class="legend-item"><span class="dot test"></span> Test Phase</div>
    </div>
  </div>
</template>

<script setup>
import { onMounted, onUnmounted } from 'vue'

onMounted(() => {
  // Add animation class after mount
  document.querySelector('.workflow-animation')?.classList.add('animate')
})
</script>

<style scoped>
.workflow-animation {
  padding: 2rem;
  background: var(--vp-c-bg-soft);
  border: 1px solid var(--vp-c-divider);
  border-radius: 24px;
  margin: 2rem 0;
  overflow: hidden;
}

.workflow-container {
  display: flex;
  align-items: center;
  justify-content: space-around;
  flex-wrap: wrap;
  gap: 2rem;
  min-height: 200px;
}

.workflow-paths {
  flex: 1.2;
  min-width: 280px;
  max-width: 450px;
}

.path-svg {
  width: 100%;
  height: auto;
}

.flow-path {
  stroke-dasharray: 6, 6;
  opacity: 0.3;
  animation: dash 30s linear infinite;
}

@keyframes dash {
  to {
    stroke-dashoffset: -120;
  }
}

.flow-dot {
  filter: drop-shadow(0 0 4px currentColor);
}

.workflow-nodes {
  display: grid;
  grid-template-columns: repeat(2, 1fr);
  gap: 1rem;
  flex: 0.8;
  min-width: 200px;
}

.workflow-node {
  display: flex;
  flex-direction: column;
  align-items: center;
  padding: 1.25rem;
  background: var(--vp-c-bg);
  border: 1px solid var(--vp-c-divider);
  border-radius: 16px;
  box-shadow: var(--vp-shadow-sm);
  transition: all 0.3s cubic-bezier(0.4, 0, 0.2, 1);
}

.workflow-node:hover {
  transform: translateY(-4px);
  border-color: var(--vp-c-brand-1);
  box-shadow: var(--vp-shadow-md);
}

.node-icon {
  font-size: 2rem;
  margin-bottom: 0.5rem;
}

.node-label {
  font-size: 0.85rem;
  font-weight: 600;
  color: var(--vp-c-text-1);
}

.workflow-node.coordinator {
  grid-column: span 2;
  background: linear-gradient(135deg, var(--vp-c-brand-1), var(--vp-c-brand-2));
  border: none;
  color: white;
}

.workflow-node.coordinator .node-label {
  color: white;
}

.workflow-legend {
  display: flex;
  justify-content: center;
  gap: 2.5rem;
  margin-top: 2rem;
  flex-wrap: wrap;
}

.legend-item {
  display: flex;
  align-items: center;
  gap: 0.75rem;
  font-size: 0.85rem;
  font-weight: 500;
  color: var(--vp-c-text-2);
}

.dot {
  width: 10px;
  height: 10px;
  border-radius: 50%;
}

.dot.spec { background: var(--vp-c-brand-1); }
.dot.impl { background: var(--vp-c-secondary-500); }
.dot.test { background: var(--vp-c-accent-400); }

@media (max-width: 768px) {
  .workflow-animation {
    padding: 1.5rem;
  }

  .workflow-container {
    flex-direction: column;
  }

  .workflow-nodes {
    width: 100%;
  }
}
</style>
27 docs/.vitepress/theme/index.ts Normal file
@@ -0,0 +1,27 @@
import DefaultTheme from 'vitepress/theme'
import ThemeSwitcher from './components/ThemeSwitcher.vue'
import DocSearch from './components/DocSearch.vue'
import DarkModeToggle from './components/DarkModeToggle.vue'
import CopyCodeButton from './components/CopyCodeButton.vue'
import Breadcrumb from './components/Breadcrumb.vue'
import PageToc from './components/PageToc.vue'
import ProfessionalHome from './components/ProfessionalHome.vue'
import Layout from './layouts/Layout.vue'
import './styles/variables.css'
import './styles/custom.css'
import './styles/mobile.css'

export default {
  extends: DefaultTheme,
  Layout,
  enhanceApp({ app, router, siteData }) {
    // Register global components
    app.component('ThemeSwitcher', ThemeSwitcher)
    app.component('DocSearch', DocSearch)
    app.component('DarkModeToggle', DarkModeToggle)
    app.component('CopyCodeButton', CopyCodeButton)
    app.component('Breadcrumb', Breadcrumb)
    app.component('PageToc', PageToc)
    app.component('ProfessionalHome', ProfessionalHome)
  }
}
153 docs/.vitepress/theme/layouts/Layout.vue Normal file
@@ -0,0 +1,153 @@
<script setup lang="ts">
import DefaultTheme from 'vitepress/theme'
import { onBeforeUnmount, onMounted } from 'vue'

let mediaQuery: MediaQueryList | null = null
let systemThemeChangeHandler: (() => void) | null = null
let storageHandler: ((e: StorageEvent) => void) | null = null

function applyTheme() {
  const savedTheme = localStorage.getItem('ccw-theme') || 'blue'
  document.documentElement.setAttribute('data-theme', savedTheme)
}

function applyColorMode() {
  const mode = localStorage.getItem('ccw-color-mode') || 'auto'
  const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches
  const isDark = mode === 'dark' || (mode === 'auto' && prefersDark)
  document.documentElement.classList.toggle('dark', isDark)
}

onMounted(() => {
  applyTheme()
  applyColorMode()

  mediaQuery = window.matchMedia('(prefers-color-scheme: dark)')
  systemThemeChangeHandler = () => {
    const mode = localStorage.getItem('ccw-color-mode') || 'auto'
    if (mode === 'auto') applyColorMode()
  }
  mediaQuery.addEventListener('change', systemThemeChangeHandler)

  storageHandler = (e: StorageEvent) => {
    if (e.key === 'ccw-theme') applyTheme()
    if (e.key === 'ccw-color-mode') applyColorMode()
  }
  window.addEventListener('storage', storageHandler)
})

onBeforeUnmount(() => {
  if (mediaQuery && systemThemeChangeHandler) {
    mediaQuery.removeEventListener('change', systemThemeChangeHandler)
  }
  if (storageHandler) window.removeEventListener('storage', storageHandler)
})
</script>

<template>
  <DefaultTheme.Layout>
    <template #home-hero-after>
      <div class="hero-extensions">
        <div class="hero-stats">
          <div class="stat-item">
            <div class="stat-value">27+</div>
            <div class="stat-label">Built-in Skills</div>
          </div>
          <div class="stat-item">
            <div class="stat-value">10+</div>
            <div class="stat-label">Agent Types</div>
          </div>
          <div class="stat-item">
            <div class="stat-value">4</div>
            <div class="stat-label">Workflow Levels</div>
          </div>
        </div>
      </div>
    </template>

    <template #layout-top>
      <a href="#VPContent" class="skip-link">Skip to main content</a>
    </template>

    <template #nav-bar-content-after>
      <div class="nav-extensions">
        <DocSearch />
        <DarkModeToggle />
        <ThemeSwitcher />
      </div>
    </template>
  </DefaultTheme.Layout>
</template>

<style scoped>
.hero-extensions {
  margin-top: 40px;
  text-align: center;
}

.hero-stats {
  display: flex;
  justify-content: center;
  gap: 48px;
  flex-wrap: wrap;
}

.stat-item {
  text-align: center;
}

.stat-value {
  font-size: 32px;
  font-weight: 700;
  color: var(--vp-c-primary);
}

.stat-label {
  font-size: 14px;
  color: var(--vp-c-text-2);
  margin-top: 4px;
}

.nav-extensions {
  display: flex;
  align-items: center;
  gap: 12px;
  margin-left: auto;
  padding-left: 16px;
}

.skip-link {
  position: absolute;
  top: -100px;
  left: 0;
  padding: 8px 16px;
  background: var(--vp-c-primary);
  color: white;
  text-decoration: none;
  z-index: 9999;
  transition: top 0.3s;
}

.skip-link:focus {
  top: 0;
}

@media (max-width: 768px) {
  .hero-stats {
    gap: 24px;
  }

  .stat-value {
    font-size: 24px;
  }

  .stat-label {
    font-size: 12px;
  }

  .nav-extensions {
    gap: 8px;
    padding-left: 8px;
  }
}
</style>
352 docs/.vitepress/theme/styles/custom.css Normal file
@@ -0,0 +1,352 @@
/**
 * VitePress Custom Styles
 * Overrides and extensions for default VitePress theme
 * Design System: ui-ux-pro-max — dark-mode-first, developer-focused
 */

/* ============================================
 * Global Theme Variables
 * ============================================ */
:root {
  --vp-c-brand: var(--vp-c-primary);
  --vp-c-brand-light: var(--vp-c-primary-300);
  --vp-c-brand-lighter: var(--vp-c-primary-200);
  --vp-c-brand-dark: var(--vp-c-primary-700);
  --vp-c-brand-darker: var(--vp-c-primary-800);

  --vp-home-hero-name-color: var(--vp-c-primary);
  --vp-home-hero-name-background: linear-gradient(120deg, var(--vp-c-primary-500) 30%, var(--vp-c-secondary-500));

  --vp-button-brand-bg: var(--vp-c-primary);
  --vp-button-brand-hover-bg: var(--vp-c-primary-600);
  --vp-button-brand-active-bg: var(--vp-c-primary-700);

  --vp-custom-block-tip-bg: var(--vp-c-primary-50);
  --vp-custom-block-tip-border: var(--vp-c-primary-200);
  --vp-custom-block-tip-text: var(--vp-c-primary-700);

  --vp-custom-block-warning-bg: var(--vp-c-accent-50);
  --vp-custom-block-warning-border: var(--vp-c-accent-200);
  --vp-custom-block-warning-text: var(--vp-c-accent-700);

  --vp-custom-block-danger-bg: #fef2f2;
  --vp-custom-block-danger-border: #fecaca;
  --vp-custom-block-danger-text: #b91c1c;

  /* Layout Width Adjustments */
  --vp-layout-max-width: 1600px;
  --vp-content-width: 1000px;
  --vp-sidebar-width: 272px;
}

.dark {
  --vp-custom-block-tip-bg: rgba(59, 130, 246, 0.1);
  --vp-custom-block-tip-border: rgba(59, 130, 246, 0.3);
  --vp-custom-block-tip-text: var(--vp-c-primary-300);

  --vp-custom-block-warning-bg: rgba(217, 119, 6, 0.1);
  --vp-custom-block-warning-border: rgba(217, 119, 6, 0.3);
  --vp-custom-block-warning-text: var(--vp-c-accent-300);

  --vp-custom-block-danger-bg: rgba(185, 28, 28, 0.1);
  --vp-custom-block-danger-border: rgba(185, 28, 28, 0.3);
  --vp-custom-block-danger-text: #fca5a5;
}

/* ============================================
 * Layout Container Adjustments
 * ============================================ */
.VPDoc .content-container {
  max-width: var(--vp-content-width);
  padding: 0 32px;
}

/* Adjust sidebar and content layout */
.VPDoc {
  padding-left: var(--vp-sidebar-width);
}

/* Right side outline (TOC) adjustments */
.VPDocOutline {
  padding-left: 24px;
}

.VPDocOutline .outline-link {
  font-size: 13px;
  line-height: 1.6;
  padding: 4px 8px;
}

/* ============================================
 * Home Page Override
 * ============================================ */
.VPHome {
  padding-bottom: 0;
}

.VPHomeHero {
  padding: 80px 24px;
  background: linear-gradient(180deg, var(--vp-c-bg-soft) 0%, var(--vp-c-bg) 100%);
}

/* ============================================
 * Documentation Content Typography
 * ============================================ */
.vp-doc h1 {
  font-weight: 800;
  letter-spacing: -0.02em;
  margin-bottom: 1.5rem;
}

.vp-doc h2 {
  font-weight: 700;
  margin-top: 3rem;
  padding-top: 2rem;
  letter-spacing: -0.01em;
  border-top: 1px solid var(--vp-c-divider);
}

.vp-doc h2:first-of-type {
  margin-top: 1.5rem;
  border-top: none;
}

.vp-doc h3 {
  font-weight: 600;
  margin-top: 2.5rem;
}

.vp-doc h4 {
  font-weight: 600;
  margin-top: 2rem;
}

.vp-doc p {
  line-height: 1.8;
  margin: 1.25rem 0;
}

.vp-doc ul,
.vp-doc ol {
  margin: 1.25rem 0;
  padding-left: 1.5rem;
}

.vp-doc li {
  line-height: 1.8;
  margin: 0.5rem 0;
}

.vp-doc li + li {
  margin-top: 0.5rem;
}

/* Better spacing for code blocks in lists */
.vp-doc li > code {
  margin: 0 2px;
}

/* ============================================
 * Command Reference Specific Styles
 * ============================================ */
.vp-doc h3[id^="ccw"],
.vp-doc h3[id^="workflow"],
.vp-doc h3[id^="issue"],
.vp-doc h3[id^="cli"],
.vp-doc h3[id^="memory"] {
  scroll-margin-top: 80px;
  position: relative;
}

/* Add subtle separator between command sections */
.vp-doc hr {
  border: none;
  border-top: 1px solid var(--vp-c-divider);
  margin: 3rem 0;
}

/* ============================================
 * Custom Container Blocks
 * ============================================ */
.custom-container {
  margin: 20px 0;
  padding: 16px 20px;
  border-radius: 12px;
  border-left: 4px solid;
}

.custom-container.info {
  background: var(--vp-c-bg-soft);
  border-color: var(--vp-c-primary);
}

.custom-container.success {
  background: var(--vp-c-secondary-50);
  border-color: var(--vp-c-secondary);
}

.dark .custom-container.success {
  background: rgba(16, 185, 129, 0.1);
}

.custom-container.tip {
  border-radius: 12px;
}

.custom-container.warning {
  border-radius: 12px;
}

.custom-container.danger {
  border-radius: 12px;
}

/* ============================================
 * Code Block Improvements
 * ============================================ */
.vp-code-group {
  margin: 20px 0;
  border-radius: 12px;
  overflow: hidden;
}

.vp-code-group .tabs {
  background: var(--vp-c-bg-soft);
  border-bottom: 1px solid var(--vp-c-divider);
}

.vp-code-group div[class*='language-'] {
  margin: 0;
  border-radius: 0;
}

div[class*='language-'] {
  border-radius: 12px;
  margin: 20px 0;
}

div[class*='language-'] pre {
  line-height: 1.65;
}

/* Inline code */
.vp-doc :not(pre) > code {
  border-radius: 6px;
  padding: 2px 6px;
  font-size: 0.875em;
  font-weight: 500;
}

/* ============================================
 * Table Styling
 * ============================================ */
table {
  border-collapse: collapse;
  width: 100%;
  margin: 20px 0;
  border-radius: 12px;
  overflow: hidden;
}

table th,
table td {
  padding: 12px 16px;
  border: 1px solid var(--vp-c-divider);
  text-align: left;
}

table th {
  background: var(--vp-c-bg-soft);
  font-weight: 600;
  font-size: 0.875rem;
  text-transform: uppercase;
  letter-spacing: 0.03em;
  color: var(--vp-c-text-2);
}

table tr:hover {
  background: var(--vp-c-bg-soft);
}

/* ============================================
 * Sidebar Polish
 * ============================================ */
.VPSidebar .group + .group {
  margin-top: 0.5rem;
  padding-top: 0.5rem;
  border-top: 1px solid var(--vp-c-divider);
}

/* ============================================
 * Scrollbar Styling
 * ============================================ */
::-webkit-scrollbar {
  width: 6px;
  height: 6px;
}

::-webkit-scrollbar-track {
  background: transparent;
}

::-webkit-scrollbar-thumb {
  background: var(--vp-c-surface-2);
  border-radius: var(--vp-radius-full);
}

::-webkit-scrollbar-thumb:hover {
  background: var(--vp-c-surface-3);
}

/* ============================================
 * Link Improvements
 * ============================================ */
a {
  text-decoration: none;
  transition: color 0.2s ease;
}

a:hover {
  text-decoration: underline;
}

/* ============================================
 * Focus States — Accessibility
 * ============================================ */
:focus-visible {
  outline: 2px solid var(--vp-c-primary);
  outline-offset: 2px;
}

/* ============================================
 * Skip Link — Accessibility
 * ============================================ */
.skip-link {
  position: absolute;
  top: -100px;
  left: 0;
  background: var(--vp-c-bg);
  padding: 8px 16px;
  z-index: 9999;
  transition: top 0.3s;
}

.skip-link:focus {
  top: 0;
}

/* ============================================
 * Print Styles
 * ============================================ */
@media print {
  .VPNav,
  .VPSidebar,
  .skip-link {
    display: none;
  }

  .VPContent {
    margin: 0 !important;
    padding: 0 !important;
  }
}
346 docs/.vitepress/theme/styles/mobile.css Normal file
@@ -0,0 +1,346 @@
/**
 * Mobile-Responsive Styles
 * Breakpoints: 320px-768px (mobile), 768px-1024px (tablet), 1024px+ (desktop)
 * WCAG 2.1 AA compliant
 */

/* ============================================
 * Mobile First Approach
 * ============================================ */

/* Base Mobile Styles (320px+) */
@media (max-width: 768px) {
  /* Typography */
  :root {
    --vp-font-size-base: 14px;
    --vp-content-width: 100%;
  }

  /* Container */
  .container {
    padding: 0 16px;
  }

  /* Navigation */
  .VPNav {
    height: 56px;
  }

  .VPNavBar {
    padding: 0 16px;
  }

  /* Sidebar */
  .VPSidebar {
    width: 100%;
    max-width: 320px;
  }

  /* Content */
  .VPContent {
    padding: 16px;
  }

  /* Doc content adjustments */
  .VPDoc .content-container {
    padding: 0 16px;
  }

  /* Hide outline on mobile */
  .VPDocOutline {
    display: none;
  }

  /* Hero Section */
  .VPHomeHero {
    padding: 40px 16px;
  }

  .VPHomeHero h1 {
    font-size: 28px;
    line-height: 1.2;
  }

  .VPHomeHero p {
    font-size: 14px;
  }

  /* Code Blocks */
  div[class*='language-'] {
    margin: 12px -16px;
    border-radius: 0;
  }

  div[class*='language-'] pre {
    padding: 12px 16px;
    font-size: 12px;
  }

  /* Tables - make them scrollable */
  .vp-doc table {
    display: block;
    width: 100%;
    overflow-x: auto;
    -webkit-overflow-scrolling: touch;
  }

  table {
    font-size: 12px;
  }

  table th,
  table td {
    padding: 8px 12px;
  }

  /* Buttons */
  .VPButton {
    padding: 8px 16px;
    font-size: 14px;
  }

  /* Cards */
  .VPFeature {
    padding: 16px;
  }

  /* Touch-friendly tap targets (min 44x44px per WCAG) */
  button,
  a,
  input,
  select,
  textarea {
    min-height: 44px;
    min-width: 44px;
  }

  /* Search */
  .DocSearch {
    width: 100%;
  }

  /* Theme Switcher */
  .theme-switcher {
    padding: 12px;
  }

  /* Breadcrumbs */
  .breadcrumb {
    padding: 8px 0;
    font-size: 12px;
  }

  /* Table of Contents - hidden on mobile */
  .page-toc {
    display: none;
  }

  /* Typography adjustments for mobile */
  .vp-doc h1 {
    font-size: 1.75rem;
    margin-bottom: 1rem;
  }

  .vp-doc h2 {
    font-size: 1.375rem;
    margin-top: 2rem;
    padding-top: 1.5rem;
  }

  .vp-doc h3 {
    font-size: 1.125rem;
    margin-top: 1.5rem;
  }

  .vp-doc p {
    line-height: 1.7;
    margin: 1rem 0;
  }

  .vp-doc ul,
  .vp-doc ol {
    margin: 1rem 0;
    padding-left: 1.25rem;
  }

  .vp-doc li {
    margin: 0.375rem 0;
  }
}

/* ============================================
 * Tablet Styles (768px - 1024px)
 * ============================================ */
@media (min-width: 768px) and (max-width: 1024px) {
  :root {
    --vp-content-width: 760px;
    --vp-sidebar-width: 240px;
  }

  .VPContent {
    padding: 24px;
  }

  .VPDoc .content-container {
    padding: 0 24px;
    max-width: var(--vp-content-width);
  }

  .VPHomeHero {
    padding: 60px 24px;
  }

  .VPHomeHero h1 {
    font-size: 36px;
  }

  div[class*='language-'] {
    margin: 12px 0;
  }

  /* Outline visible but narrower */
  .VPDocOutline {
    width: 200px;
    padding-left: 16px;
  }

  .VPDocOutline .outline-link {
    font-size: 12px;
  }
}

/* ============================================
 * Desktop Styles (1024px+)
 * ============================================ */
@media (min-width: 1024px) {
  :root {
    --vp-layout-max-width: 1600px;
    --vp-content-width: 960px;
    --vp-sidebar-width: 272px;
  }

  .VPContent {
    padding: 32px 48px;
    max-width: var(--vp-layout-max-width);
  }

  .VPDoc .content-container {
    max-width: var(--vp-content-width);
    padding: 0 40px;
  }

  /* Outline - sticky on desktop with good width */
  .VPDocOutline {
    position: sticky;
    top: calc(var(--vp-nav-height) + 24px);
    width: 256px;
    padding-left: 24px;
    max-height: calc(100vh - var(--vp-nav-height) - 48px);
    overflow-y: auto;
  }

  .VPDocOutline .outline-marker {
    display: block;
  }

  .VPDocOutline .outline-link {
    font-size: 13px;
    line-height: 1.6;
    padding: 4px 12px;
    transition: color 0.2s ease;
  }

  .VPDocOutline .outline-link:hover {
    color: var(--vp-c-primary);
  }

  /* Two-column layout for content + TOC */
  .content-with-toc {
    display: grid;
    grid-template-columns: 1fr 280px;
    gap: 32px;
  }
}

/* ============================================
 * Large Desktop (1440px+)
 * ============================================ */
@media (min-width: 1440px) {
  :root {
    --vp-content-width: 1040px;
    --vp-sidebar-width: 280px;
  }

  .VPDoc .content-container {
    padding: 0 48px;
  }

  .VPDocOutline {
    width: 280px;
  }
}

/* ============================================
 * Landscape Orientation
 * ============================================ */
@media (max-height: 500px) and (orientation: landscape) {
  .VPNav {
    height: 48px;
  }

  .VPHomeHero {
    padding: 20px 16px;
  }
}

/* ============================================
 * High DPI Displays
 * ============================================ */
@media (-webkit-min-device-pixel-ratio: 2),
       (min-resolution: 192dpi) {
  /* Optimize images for retina displays */
  img {
    image-rendering: -webkit-optimize-contrast;
    image-rendering: crisp-edges;
  }
}

/* ============================================
 * Reduced Motion (Accessibility)
 * ============================================ */
@media (prefers-reduced-motion: reduce) {
  *,
  *::before,
  *::after {
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
  }
}

/* ============================================
 * Dark Mode Specific
 * ============================================ */
@media (max-width: 768px) {
  .dark {
    --vp-c-bg: #0f172a;
    --vp-c-text-1: #f1f5f9;
  }
}

/* ============================================
 * Print Styles for Mobile
 * ============================================ */
@media print and (max-width: 768px) {
  .VPContent {
    font-size: 10pt;
  }

  h1 {
    font-size: 14pt;
  }

  h2 {
    font-size: 12pt;
  }
}
221 docs/.vitepress/theme/styles/variables.css Normal file
@@ -0,0 +1,221 @@
/**
 * Design Tokens for CCW Documentation
 * Based on ui-ux-pro-max design system
 * 8 themes: 4 colors × 2 modes (light/dark)
 */

:root {
  /* ============================================
   * Color Scheme: Blue (Default)
   * ============================================ */

  /* Primary Colors */
  --vp-c-primary-50: #eff6ff;
  --vp-c-primary-100: #dbeafe;
  --vp-c-primary-200: #bfdbfe;
  --vp-c-primary-300: #93c5fd;
  --vp-c-primary-400: #60a5fa;
  --vp-c-primary-500: #3b82f6;
  --vp-c-primary-600: #2563eb;
  --vp-c-primary-700: #1d4ed8;
  --vp-c-primary-800: #1e40af;
  --vp-c-primary-900: #1e3a8a;
  --vp-c-primary: var(--vp-c-primary-500);
  --vp-c-brand-1: var(--vp-c-primary-500);
  --vp-c-brand-2: var(--vp-c-secondary-500);
  --vp-c-brand-soft: rgba(59, 130, 246, 0.1);

  /* Secondary Colors (Green) */
  --vp-c-secondary-50: #ecfdf5;
  --vp-c-secondary-100: #d1fae5;
  --vp-c-secondary-200: #a7f3d0;
  --vp-c-secondary-300: #6ee7b7;
  --vp-c-secondary-400: #34d399;
  --vp-c-secondary-500: #10b981;
  --vp-c-secondary-600: #059669;
  --vp-c-secondary-700: #047857;
  --vp-c-secondary-800: #065f46;
  --vp-c-secondary-900: #064e3b;
  --vp-c-secondary: var(--vp-c-secondary-500);

  /* Accent Colors */
  --vp-c-accent-50: #fef3c7;
  --vp-c-accent-100: #fde68a;
  --vp-c-accent-200: #fcd34d;
  --vp-c-accent-300: #fbbf24;
  --vp-c-accent-400: #f59e0b;
  --vp-c-accent-500: #d97706;
  --vp-c-accent: var(--vp-c-accent-400);

  /* Background Colors (Light Mode) */
  --vp-c-bg: #ffffff;
  --vp-c-bg-soft: #f9fafb;
  --vp-c-bg-mute: #f3f4f6;
  --vp-c-bg-alt: #ffffff;

  /* Surface Colors */
  --vp-c-surface: #f9fafb;
  --vp-c-surface-1: #f3f4f6;
  --vp-c-surface-2: #e5e7eb;
  --vp-c-surface-3: #d1d5db;

  /* Border Colors */
  --vp-c-border: #e5e7eb;
  --vp-c-border-soft: #f3f4f6;
  --vp-c-divider: #e5e7eb;

  /* Text Colors */
  --vp-c-text-1: #111827;
  --vp-c-text-2: #374151;
  --vp-c-text-3: #6b7280;
  --vp-c-text-4: #9ca3af;
  --vp-c-text-code: #ef4444;

  /* Typography */
  --vp-font-family-base: 'Inter', -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Fira Sans', 'Droid Sans', 'Helvetica Neue', sans-serif;
  --vp-font-family-mono: 'Fira Code', 'Cascadia Code', 'JetBrains Mono', Consolas, 'Courier New', monospace;

  /* Font Sizes */
  --vp-font-size-base: 16px;
  --vp-font-size-sm: 14px;
  --vp-font-size-lg: 18px;
  --vp-font-size-xl: 20px;

  /* Spacing */
  --vp-spacing-xs: 0.25rem;  /* 4px */
  --vp-spacing-sm: 0.5rem;   /* 8px */
  --vp-spacing-md: 1rem;     /* 16px */
  --vp-spacing-lg: 1.5rem;   /* 24px */
  --vp-spacing-xl: 2rem;     /* 32px */
  --vp-spacing-2xl: 3rem;    /* 48px */

  /* Border Radius */
  --vp-radius-sm: 0.25rem;   /* 4px */
  --vp-radius-md: 0.375rem;  /* 6px */
  --vp-radius-lg: 0.5rem;    /* 8px */
  --vp-radius-xl: 0.75rem;   /* 12px */
  --vp-radius-full: 9999px;

  /* Shadows */
  --vp-shadow-sm: 0 1px 2px 0 rgb(0 0 0 / 0.05);
  --vp-shadow-md: 0 4px 6px -1px rgb(0 0 0 / 0.1);
  --vp-shadow-lg: 0 10px 15px -3px rgb(0 0 0 / 0.1);
  --vp-shadow-xl: 0 20px 25px -5px rgb(0 0 0 / 0.1);

  /* Transitions */
  --vp-transition-color: 0.2s ease;
  --vp-transition-transform: 0.2s ease;
  --vp-transition-all: 0.3s cubic-bezier(0.4, 0, 0.2, 1);

  /* Z-Index */
  --vp-z-index-base: 1;
  --vp-z-index-dropdown: 10;
  --vp-z-index-sticky: 20;
  --vp-z-index-fixed: 50;
  --vp-z-index-modal: 100;
  --vp-z-index-toast: 200;
}

/* ============================================
 * Dark Mode
 * ============================================ */
.dark {
  /* Background Colors (Dark Mode) */
|
||||
--vp-c-bg: #111827;
|
||||
--vp-c-bg-soft: #1f2937;
|
||||
--vp-c-bg-mute: #374151;
|
||||
--vp-c-bg-alt: #0f172a;
|
||||
|
||||
/* Surface Colors */
|
||||
--vp-c-surface: #1f2937;
|
||||
--vp-c-surface-1: #374151;
|
||||
--vp-c-surface-2: #4b5563;
|
||||
--vp-c-surface-3: #6b7280;
|
||||
|
||||
/* Border Colors */
|
||||
--vp-c-border: #374151;
|
||||
--vp-c-border-soft: #1f2937;
|
||||
--vp-c-divider: #374151;
|
||||
|
||||
/* Text Colors */
|
||||
--vp-c-text-1: #f9fafb;
|
||||
--vp-c-text-2: #e5e7eb;
|
||||
--vp-c-text-3: #d1d5db;
|
||||
--vp-c-text-4: #9ca3af;
|
||||
--vp-c-text-code: #fca5a5;
|
||||
|
||||
/* Primary Colors (adjusted for dark mode) */
|
||||
--vp-c-primary-50: #1e3a8a;
|
||||
--vp-c-primary-100: #1e40af;
|
||||
--vp-c-primary-200: #1d4ed8;
|
||||
--vp-c-primary-300: #2563eb;
|
||||
--vp-c-primary-400: #3b82f6;
|
||||
--vp-c-primary-500: #60a5fa;
|
||||
--vp-c-primary-600: #93c5fd;
|
||||
--vp-c-primary-700: #bfdbfe;
|
||||
--vp-c-primary-800: #dbeafe;
|
||||
--vp-c-primary-900: #eff6ff;
|
||||
--vp-c-primary: var(--vp-c-primary-400);
|
||||
--vp-c-brand-1: var(--vp-c-primary-400);
|
||||
--vp-c-brand-2: var(--vp-c-secondary-400);
|
||||
--vp-c-brand-soft: rgba(96, 165, 250, 0.2);
|
||||
|
||||
/* Secondary Colors (adjusted for dark mode) */
|
||||
--vp-c-secondary-50: #064e3b;
|
||||
--vp-c-secondary-100: #065f46;
|
||||
--vp-c-secondary-200: #047857;
|
||||
--vp-c-secondary-300: #059669;
|
||||
--vp-c-secondary-400: #10b981;
|
||||
--vp-c-secondary-500: #34d399;
|
||||
--vp-c-secondary-600: #6ee7b7;
|
||||
--vp-c-secondary-700: #a7f3d0;
|
||||
--vp-c-secondary-800: #d1fae5;
|
||||
--vp-c-secondary-900: #ecfdf5;
|
||||
--vp-c-secondary: var(--vp-c-secondary-400);
|
||||
}
|
||||
|
||||
/* ============================================
|
||||
* Color Scheme: Green
|
||||
* ============================================ */
|
||||
[data-theme="green"] {
|
||||
--vp-c-primary: var(--vp-c-secondary-500);
|
||||
}
|
||||
[data-theme="green"].dark {
|
||||
--vp-c-primary: var(--vp-c-secondary-400);
|
||||
}
|
||||
|
||||
/* ============================================
|
||||
* Color Scheme: Orange
|
||||
* ============================================ */
|
||||
[data-theme="orange"] {
|
||||
--vp-c-primary: var(--vp-c-accent-500);
|
||||
}
|
||||
[data-theme="orange"].dark {
|
||||
--vp-c-primary: var(--vp-c-accent-400);
|
||||
}
|
||||
|
||||
/* ============================================
|
||||
* Color Scheme: Purple
|
||||
* ============================================ */
|
||||
[data-theme="purple"] {
|
||||
--vp-c-primary-500: #8b5cf6;
|
||||
--vp-c-primary-600: #7c3aed;
|
||||
--vp-c-primary-700: #6d28d9;
|
||||
--vp-c-primary: var(--vp-c-primary-500);
|
||||
}
|
||||
[data-theme="purple"].dark {
|
||||
--vp-c-primary-400: #a78bfa;
|
||||
--vp-c-primary-500: #8b5cf6;
|
||||
--vp-c-primary: var(--vp-c-primary-400);
|
||||
}
|
||||
|
||||
/* ============================================
|
||||
* Utility Classes
|
||||
* ============================================ */
|
||||
.text-primary { color: var(--vp-c-primary); }
|
||||
.text-secondary { color: var(--vp-c-secondary); }
|
||||
.bg-surface { background-color: var(--vp-c-surface); }
|
||||
.border-primary { border-color: var(--vp-c-primary); }
|
||||
.rounded-md { border-radius: var(--vp-radius-md); }
|
||||
.shadow-md { box-shadow: var(--vp-shadow-md); }
|
||||
.transition { transition: var(--vp-transition-all); }
|
||||
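The color schemes above are selected by the `data-theme` attribute on the root element (with `blue` as the attribute-less `:root` default). A minimal sketch of switching schemes at runtime — the helper names are illustrative, only the `data-theme` values come from the stylesheet:

```javascript
// Scheme names defined by the stylesheet's [data-theme="..."] selectors;
// "blue" is the default :root scheme and needs no attribute.
const SCHEMES = ['blue', 'green', 'orange', 'purple'];

// Returns the data-theme value for a scheme, or null for the default.
function themeAttribute(scheme) {
  if (!SCHEMES.includes(scheme)) {
    throw new Error(`Unknown color scheme: ${scheme}`);
  }
  return scheme === 'blue' ? null : scheme;
}

// Applies a scheme to a root element (e.g. document.documentElement).
function applyScheme(root, scheme) {
  const attr = themeAttribute(scheme);
  if (attr === null) {
    root.removeAttribute('data-theme');
  } else {
    root.setAttribute('data-theme', attr);
  }
}
```

Dark mode composes independently via the `.dark` class, so `applyScheme` never needs to touch it.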
524  docs/agents/builtin.md  Normal file
@@ -0,0 +1,524 @@
# Built-in Agents

CCW includes **21 specialized agents** organized across 5 categories, each designed for specific development tasks. Agents can work independently or be orchestrated together for complex workflows.

## Categories Overview

| Category | Count | Primary Use |
|----------|-------|-------------|
| [CLI](#cli-agents) | 6 | CLI-based interactions, exploration, and planning |
| [Development](#development-agents) | 5 | Code implementation and debugging |
| [Planning](#planning-agents) | 4 | Strategic planning and issue management |
| [Testing](#testing-agents) | 3 | Test generation, execution, and quality assurance |
| [Documentation](#documentation-agents) | 3 | Documentation and design systems |

---

## CLI Agents

### cli-explore-agent

**Purpose**: Specialized CLI exploration with 3 analysis modes

**Capabilities**:
- Quick-scan (Bash only)
- Deep-scan (Bash + Gemini)
- Dependency-map (graph construction)
- 4-phase workflow: Task Understanding → Analysis Execution → Schema Validation → Output Generation

**Tools**: `Bash`, `Read`, `Grep`, `Glob`, `ccw cli (gemini/qwen/codex)`, `ACE search_context`

```javascript
Task({
  subagent_type: "cli-explore-agent",
  prompt: "Analyze authentication module dependencies"
})
```

### cli-discuss-agent

**Purpose**: Multi-CLI collaborative discussion with cross-verification

**Capabilities**:
- 5-phase workflow: Context Prep → CLI Execution → Cross-Verify → Synthesize → Output
- Loads discussion history
- Maintains context across sessions

**Tools**: `Read`, `Grep`, `Glob`, `ccw cli`

**Calls**: `cli-explore-agent` for codebase discovery before discussions

```javascript
Task({
  subagent_type: "cli-discuss-agent",
  prompt: "Discuss architecture patterns for microservices"
})
```

### cli-execution-agent

**Purpose**: Intelligent CLI execution with automated context discovery

**Capabilities**:
- 5-phase workflow: Task Understanding → Context Discovery → Prompt Enhancement → Tool Execution → Output Routing
- Background execution support
- Result polling

**Tools**: `Bash`, `Read`, `Grep`, `Glob`, `ccw cli`, `TaskOutput`

**Calls**: `cli-explore-agent` for discovery before execution

```javascript
Task({
  subagent_type: "cli-execution-agent",
  prompt: "Execute security scan on authentication module"
})
```

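Background execution and result polling can be combined for long-running commands. A hedged sketch — the return shape of `Task` and the `TaskOutput` parameter name are assumptions, not confirmed API:

```javascript
// Launch the scan without blocking the current session
const scan = Task({
  subagent_type: "cli-execution-agent",
  prompt: "Execute security scan on authentication module",
  run_in_background: true
})

// Poll later for the finished result (task id field is illustrative)
TaskOutput({ task_id: scan.id })
```
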
### cli-lite-planning-agent

**Purpose**: Lightweight planning for quick task breakdowns

**Capabilities**:
- Creates simplified task JSONs without complex schema validation
- Best suited for straightforward implementation tasks

**Tools**: `Read`, `Write`, `Bash`, `Grep`

```javascript
Task({
  subagent_type: "cli-lite-planning-agent",
  prompt: "Plan user registration feature"
})
```

### cli-planning-agent

**Purpose**: Full-featured planning for complex implementations

**Capabilities**:
- 6-field schema with context loading
- Flow control and artifact integration
- Comprehensive task JSON generation

**Tools**: `Read`, `Write`, `Bash`, `Grep`, `Glob`, `mcp__ace-tool__search_context`

```javascript
Task({
  subagent_type: "cli-planning-agent",
  prompt: "Plan microservices architecture migration"
})
```

### cli-roadmap-plan-agent

**Purpose**: Strategic planning for roadmap and milestone generation

**Capabilities**:
- Creates long-term project plans
- Generates epics, milestones, and delivery timelines
- Issue creation via ccw

**Tools**: `Read`, `Write`, `Bash`, `Grep`

```javascript
Task({
  subagent_type: "cli-roadmap-plan-agent",
  prompt: "Create Q1 roadmap for payment system"
})
```

---

## Development Agents

### code-developer

**Purpose**: Core code execution for any implementation task

**Capabilities**:
- Adapts to any domain while maintaining quality standards
- Supports analysis, implementation, documentation, research
- Complex multi-step workflows

**Tools**: `Read`, `Edit`, `Write`, `Bash`, `Grep`, `Glob`, `Task`, `mcp__ccw-tools__edit_file`, `mcp__ccw-tools__write_file`

```javascript
Task({
  subagent_type: "code-developer",
  prompt: "Implement user authentication with JWT"
})
```

### tdd-developer

**Purpose**: TDD-aware code execution for Red-Green-Refactor workflows

**Capabilities**:
- Extends code-developer with TDD cycle awareness
- Automatic test-fix iteration
- CLI session resumption

**Tools**: `Read`, `Edit`, `Write`, `Bash`, `Grep`, `Glob`, `ccw cli`

**Extends**: `code-developer`

```javascript
Task({
  subagent_type: "tdd-developer",
  prompt: "Implement payment processing with TDD"
})
```

### context-search-agent

**Purpose**: Specialized context collector for brainstorming workflows

**Capabilities**:
- Analyzes existing codebase
- Identifies patterns
- Generates standardized context packages

**Tools**: `mcp__ace-tool__search_context`, `mcp__ccw-tools__smart_search`, `Read`, `Grep`, `Glob`, `Bash`

```javascript
Task({
  subagent_type: "context-search-agent",
  prompt: "Gather context for API refactoring"
})
```

### debug-explore-agent

**Purpose**: Debugging specialist for code analysis and problem diagnosis

**Capabilities**:
- Hypothesis-driven debugging with NDJSON logging
- CLI-assisted analysis
- Iterative verification
- Traces execution flow, identifies failure points, analyzes state at failure

**Tools**: `Read`, `Grep`, `Bash`, `ccw cli`

**Workflow**: Bug Analysis → Hypothesis Generation → Instrumentation → Log Analysis → Fix Verification

```javascript
Task({
  subagent_type: "debug-explore-agent",
  prompt: "Debug memory leak in connection handler"
})
```

### universal-executor

**Purpose**: Versatile execution for implementing any task efficiently

**Capabilities**:
- Adapts to any domain while maintaining quality standards
- Handles analysis, implementation, documentation, research
- Complex multi-step workflows

**Tools**: `Read`, `Edit`, `Write`, `Bash`, `Grep`, `Glob`, `Task`, `mcp__ace-tool__search_context`, `mcp__exa__web_search_exa`

```javascript
Task({
  subagent_type: "universal-executor",
  prompt: "Implement GraphQL API with authentication"
})
```

---

## Planning Agents

### action-planning-agent

**Purpose**: Pure execution agent for creating implementation plans

**Capabilities**:
- Transforms requirements and brainstorming artifacts into structured plans
- Quantified deliverables and measurable acceptance criteria
- Control flags for execution modes

**Tools**: `Read`, `Write`, `Bash`, `Grep`, `Glob`, `mcp__ace-tool__search_context`, `mcp__ccw-tools__smart_search`

```javascript
Task({
  subagent_type: "action-planning-agent",
  prompt: "Create implementation plan for user dashboard"
})
```

### conceptual-planning-agent

**Purpose**: High-level planning for architectural and conceptual design

**Capabilities**:
- Creates system designs
- Architecture patterns
- Technical strategies

**Tools**: `Read`, `Write`, `Bash`, `Grep`, `ccw cli`

```javascript
Task({
  subagent_type: "conceptual-planning-agent",
  prompt: "Design event-driven architecture for order system"
})
```

### issue-plan-agent

**Purpose**: Issue resolution planning with closed-loop exploration

**Capabilities**:
- Analyzes issues and generates solution plans
- Creates task JSONs with dependencies and acceptance criteria
- 5-phase tasks from exploration to solution

**Tools**: `Read`, `Write`, `Bash`, `Grep`, `mcp__ace-tool__search_context`

```javascript
Task({
  subagent_type: "issue-plan-agent",
  prompt: "Plan resolution for issue #123"
})
```

### issue-queue-agent

**Purpose**: Solution ordering agent for queue formation

**Capabilities**:
- Receives solutions from bound issues
- Uses Gemini for intelligent conflict detection
- Produces ordered execution queue

**Tools**: `Read`, `Write`, `Bash`, `ccw cli (gemini)`, `mcp__ace-tool__search_context`, `mcp__ccw-tools__smart_search`

**Calls**: `issue-plan-agent`

```javascript
Task({
  subagent_type: "issue-queue-agent",
  prompt: "Form execution queue for issues #101, #102, #103"
})
```

---

## Testing Agents

### test-action-planning-agent

**Purpose**: Specialized agent for test planning documents

**Capabilities**:
- Extends action-planning-agent for test planning
- Progressive L0-L3 test layers (Static, Unit, Integration, E2E)
- AI code issue detection (L0.5) with CRITICAL/ERROR/WARNING severity
- Project-specific templates
- Test anti-pattern detection with quality gates

**Tools**: `Read`, `Write`, `Bash`, `Grep`, `Glob`

**Extends**: `action-planning-agent`

```javascript
Task({
  subagent_type: "test-action-planning-agent",
  prompt: "Create test plan for payment module"
})
```

### test-context-search-agent

**Purpose**: Specialized context collector for test generation workflows

**Capabilities**:
- Analyzes test coverage
- Identifies missing tests
- Loads implementation context from source sessions
- Generates standardized test-context packages

**Tools**: `mcp__ccw-tools__codex_lens`, `Read`, `Glob`, `Bash`, `Grep`

```javascript
Task({
  subagent_type: "test-context-search-agent",
  prompt: "Gather test context for authentication module"
})
```

### test-fix-agent

**Purpose**: Execute tests, diagnose failures, and fix code until all tests pass

**Capabilities**:
- Multi-layered test execution (L0-L3)
- Analyzes failures and modifies source code
- Quality gate for passing tests

**Tools**: `Bash`, `Read`, `Edit`, `Write`, `Grep`, `ccw cli`

```javascript
Task({
  subagent_type: "test-fix-agent",
  prompt: "Run tests for user service and fix failures"
})
```

---

## Documentation Agents

### doc-generator

**Purpose**: Documentation generation for technical docs, API references, and code comments

**Capabilities**:
- Synthesizes context from multiple sources
- Produces comprehensive documentation
- `flow_control`-based task execution

**Tools**: `Read`, `Write`, `Bash`, `Grep`, `Glob`

```javascript
Task({
  subagent_type: "doc-generator",
  prompt: "Generate API documentation for REST endpoints"
})
```

### memory-bridge

**Purpose**: Documentation update coordinator for complex projects

**Capabilities**:
- Orchestrates parallel CLAUDE.md updates
- Uses `ccw tool exec update_module_claude`
- Processes every module path

**Tools**: `Bash`, `ccw tool exec`, `TodoWrite`

```javascript
Task({
  subagent_type: "memory-bridge",
  prompt: "Update CLAUDE.md for all modules"
})
```

### ui-design-agent

**Purpose**: UI design token management and prototype generation

**Capabilities**:
- W3C Design Tokens Format compliance
- State-based component definitions (default, hover, focus, active, disabled)
- Complete component library coverage (12+ interactive components)
- Animation-component state integration
- WCAG AA compliance validation
- Token-driven prototype generation

**Tools**: `Read`, `Write`, `Edit`, `Bash`, `mcp__exa__web_search_exa`, `mcp__exa__get_code_context_exa`

```javascript
Task({
  subagent_type: "ui-design-agent",
  prompt: "Generate design tokens for dashboard components"
})
```

---

## Orchestration Patterns

Agents can be combined using these orchestration patterns:

### Inheritance Chain

Agent extends another agent's capabilities:

| Parent | Child | Extension |
|--------|-------|-----------|
| code-developer | tdd-developer | Adds TDD Red-Green-Refactor workflow, test-fix cycle |
| action-planning-agent | test-action-planning-agent | Adds L0-L3 test layers, AI issue detection |

### Sequential Delegation

Agent calls another agent for preprocessing:

| Caller | Callee | Purpose |
|--------|--------|---------|
| cli-discuss-agent | cli-explore-agent | Codebase discovery before discussion |
| cli-execution-agent | cli-explore-agent | Discovery before CLI command execution |

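In practice, sequential delegation is two `Task` calls in order, with the explorer's output folded into the second prompt. A sketch — the shape of the returned result (`summary`) is illustrative, not confirmed API:

```javascript
// Step 1: discovery
const discovery = Task({
  subagent_type: "cli-explore-agent",
  prompt: "Map modules touched by the payment flow"
})

// Step 2: discussion, seeded with the discovery output
Task({
  subagent_type: "cli-discuss-agent",
  prompt: `Discuss refactoring options given this context: ${discovery.summary}`
})
```
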
### Queue Formation

Agent collects outputs from multiple agents and orders them:

| Collector | Source | Purpose |
|-----------|--------|---------|
| issue-queue-agent | issue-plan-agent | Collect solutions, detect conflicts, produce ordered queue |

### Context Loading Chain

Agent generates context packages used by execution agents:

| Context Provider | Consumer | Purpose |
|------------------|----------|---------|
| context-search-agent | code-developer | Provides brainstorming context packages |
| test-context-search-agent | test-fix-agent | Provides test context packages |

### Quality Gate Chain

Sequential execution through validation gates:

```
code-developer (IMPL-001)
  → test-fix-agent (IMPL-001.3 validation)
  → test-fix-agent (IMPL-001.5 review)
  → test-fix-agent (IMPL-002 fix)
```

---

## Agent Selection Guide

| Task | Recommended Agent | Alternative |
|------|------------------|-------------|
| Explore codebase | cli-explore-agent | context-search-agent |
| Implement code | code-developer | tdd-developer |
| Debug issues | debug-explore-agent | cli-execution-agent |
| Plan implementation | cli-planning-agent | action-planning-agent |
| Generate tests | test-action-planning-agent | test-fix-agent |
| Review code | test-fix-agent | doc-generator |
| Create documentation | doc-generator | ui-design-agent |
| UI design | ui-design-agent | - |
| Manage issues | issue-plan-agent | issue-queue-agent |

---

## Tool Dependencies

### Core Tools

All agents have access to: `Read`, `Write`, `Edit`, `Bash`, `Grep`, `Glob`

### MCP Tools

Specialized agents use: `mcp__ace-tool__search_context`, `mcp__ccw-tools__smart_search`, `mcp__ccw-tools__edit_file`, `mcp__ccw-tools__write_file`, `mcp__ccw-tools__codex_lens`, `mcp__exa__web_search_exa`

### CLI Tools

CLI-capable agents use: `ccw cli`, `ccw tool exec`

### Workflow Tools

Coordinating agents use: `Task`, `TaskCreate`, `TaskUpdate`, `TaskList`, `TaskOutput`, `TodoWrite`, `SendMessage`

::: info See Also
- [Agents Overview](./index.md) - Agent system introduction
- [Custom Agents](./custom.md) - Create custom agents
- [Team Skills](../skills/core-skills.md#team-skills) - Multi-agent team skills
:::
263  docs/agents/custom.md  Normal file
@@ -0,0 +1,263 @@
# Custom Agents

Guide to creating and configuring custom CCW agents.

## Agent Structure

```
~/.claude/agents/my-agent/
├── AGENT.md          # Agent definition
├── index.ts          # Agent logic
├── tools/            # Agent-specific tools
└── examples/         # Usage examples
```

## Creating an Agent

### 1. Define Agent

Create `AGENT.md`:

```markdown
---
name: my-agent
type: development
version: 1.0.0
capabilities: [react, typescript, testing]
---

# My Custom Agent

Specialized agent for React component development with TypeScript.

## Capabilities

- Generate React components with hooks
- TypeScript type definitions
- Vitest testing setup
- Tailwind CSS styling

## Usage

\`\`\`javascript
Task({
  subagent_type: "my-agent",
  prompt: "Create a user profile component"
})
\`\`\`
```

### 2. Implement Agent Logic

Create `index.ts`:

```typescript
import type { AgentContext, AgentResult } from '@ccw/types'

export async function execute(
  prompt: string,
  context: AgentContext
): Promise<AgentResult> {
  // Analyze request
  const intent = analyzeIntent(prompt)

  // Execute based on intent
  switch (intent.type) {
    case 'generate-component':
      return await generateComponent(intent.options)
    case 'add-tests':
      return await addTests(intent.options)
    default:
      return await handleGeneral(prompt)
  }
}

function analyzeIntent(prompt: string) {
  // Parse user intent from prompt
  // Return structured intent object
}
```

## Agent Capabilities

### Code Generation

```typescript
async function generateComponent(options: ComponentOptions) {
  return {
    files: [
      {
        path: 'src/components/UserProfile.tsx',
        content: generateReactComponent(options)
      },
      {
        path: 'src/components/UserProfile.test.tsx',
        content: generateTests(options)
      }
    ]
  }
}
```

### Analysis

```typescript
async function analyzeCodebase(context: AgentContext) {
  const files = await context.filesystem.read('src/**/*.ts')
  const patterns = identifyPatterns(files)
  return {
    patterns,
    recommendations: generateRecommendations(patterns)
  }
}
```

### Testing

```typescript
async function generateTests(options: TestOptions) {
  return {
    framework: 'vitest',
    files: [
      {
        path: `${options.file}.test.ts`,
        content: generateTestCode(options)
      }
    ]
  }
}
```

## Agent Tools

Agents can define custom tools:

```typescript
export const tools = {
  'my-tool': {
    description: 'My custom tool',
    parameters: {
      type: 'object',
      properties: {
        input: { type: 'string' }
      }
    },
    execute: async (params) => {
      // Tool implementation
    }
  }
}
```

## Agent Communication

Agents communicate via message bus:

```typescript
// Send message to another agent
await context.messaging.send({
  to: 'tester',
  type: 'task-complete',
  data: { files: generatedFiles }
})

// Receive messages
context.messaging.on('task-complete', async (message) => {
  if (message.from === 'executor') {
    await startTesting(message.data.files)
  }
})
```

## Agent Configuration

Configure agents in `~/.claude/agents/config.json`:

```json
{
  "my-agent": {
    "enabled": true,
    "priority": 10,
    "capabilities": {
      "frameworks": ["react", "vue"],
      "languages": ["typescript", "javascript"],
      "tools": ["vitest", "playwright"]
    },
    "limits": {
      "maxFiles": 100,
      "maxSize": "10MB"
    }
  }
}
```

## Agent Best Practices

### 1. Clear Purpose

Define specific, focused capabilities:

```markdown
# Good: Focused
name: react-component-agent
purpose: Generate React components with TypeScript

# Bad: Too broad
name: fullstack-agent
purpose: Handle everything
```

### 2. Tool Selection

Use appropriate tools for tasks:

```typescript
// File operations
context.filesystem.read(path)
context.filesystem.write(path, content)

// Code analysis
context.codebase.search(query)
context.codebase.analyze(pattern)

// Communication
context.messaging.send(to, type, data)
```

### 3. Error Handling

```typescript
try {
  const result = await executeTask(prompt)
  return { success: true, result }
} catch (error) {
  return {
    success: false,
    error: error.message,
    recovery: suggestRecovery(error)
  }
}
```

## Testing Agents

```typescript
import { describe, it, expect } from 'vitest'
import { execute } from '../index'

describe('my-agent', () => {
  it('should generate component', async () => {
    const result = await execute(
      'Create a UserCard component',
      mockContext
    )
    expect(result.success).toBe(true)
    expect(result.files).toHaveLength(2) // component + test
  })
})
```

::: info See Also
- [Built-in Agents](./builtin.md) - Pre-configured agents
- [Agents Overview](./index.md) - Agent system introduction
:::
315  docs/agents/index.md  Normal file
@@ -0,0 +1,315 @@
# Agents

CCW provides specialized agents for different development workflows.

## What are Agents?

Agents are specialized AI assistants with specific expertise and tools for different aspects of software development. They are invoked via the `Task` tool in skills and workflows.

## Built-in Agents

### Execution Agents

#### code-developer

Pure code execution agent for implementing programming tasks and writing tests.

**Expertise:**
- Feature implementation
- Code generation and modification
- Test writing
- Bug fixes
- All programming languages and frameworks

```javascript
Task({
  subagent_type: "code-developer",
  prompt: "Implement user authentication API",
  run_in_background: false
})
```

#### tdd-developer

TDD-aware execution agent supporting Red-Green-Refactor workflow.

**Expertise:**
- Test-first development
- Red-Green-Refactor cycle
- Test-driven implementation
- Refactoring with test safety

```javascript
Task({
  subagent_type: "tdd-developer",
  prompt: "Execute TDD task IMPL-1 with test-first development",
  run_in_background: false
})
```

#### test-fix-agent

Executes tests, diagnoses failures, and fixes code until tests pass.

**Expertise:**
- Test execution and analysis
- Failure diagnosis
- Automated fixing
- Iterative test-fix cycles

```javascript
Task({
  subagent_type: "test-fix-agent",
  prompt: "Run tests and fix any failures",
  run_in_background: false
})
```

#### universal-executor

Universal executor for general-purpose execution tasks.

**Expertise:**
- General task execution
- Document generation
- Multi-step workflows
- Cross-domain tasks

```javascript
Task({
  subagent_type: "universal-executor",
  prompt: "Generate documentation for the API",
  run_in_background: false
})
```

### Analysis Agents

#### context-search-agent

Intelligent context collector for development tasks.

**Expertise:**
- Codebase exploration
- Pattern discovery
- Context gathering
- File relationship analysis

```javascript
Task({
  subagent_type: "context-search-agent",
  prompt: "Gather context for implementing user authentication",
  run_in_background: false
})
```

#### debug-explore-agent

Hypothesis-driven debugging agent with NDJSON logging.

**Expertise:**
- Root cause analysis
- Hypothesis generation and testing
- Debug logging
- Systematic troubleshooting

```javascript
Task({
  subagent_type: "debug-explore-agent",
  prompt: "Debug the WebSocket connection timeout issue",
  run_in_background: false
})
```

#### cli-explore-agent

CLI-based code exploration agent.

**Expertise:**
- CLI code analysis
- External tool integration
- Shell-based exploration
- Command-line workflows

```javascript
Task({
  subagent_type: "cli-explore-agent",
  prompt: "Explore codebase for authentication patterns",
  run_in_background: false
})
```

### Planning Agents

#### action-planning-agent

Creates implementation plans based on requirements and control flags.

**Expertise:**
- Task breakdown
- Implementation planning
- Dependency analysis
- Priority sequencing

```javascript
Task({
  subagent_type: "action-planning-agent",
  prompt: "Create implementation plan for OAuth2 authentication",
  run_in_background: false
})
```

#### issue-plan-agent

Planning agent specialized for issue resolution.

**Expertise:**
- Issue analysis
- Solution planning
- Task generation
- Impact assessment

```javascript
Task({
  subagent_type: "issue-plan-agent",
  prompt: "Plan solution for GitHub issue #123",
  run_in_background: false
})
```

### Specialized Agents

#### team-worker

Unified team worker agent for role-based collaboration.

**Expertise:**
- Multi-role execution (analyst, writer, planner, executor, tester, reviewer)
|
||||
- Team coordination
|
||||
- Lifecycle management
|
||||
- Inter-role communication
|
||||
|
||||
```javascript
|
||||
Task({
|
||||
subagent_type: "team-worker",
|
||||
description: "Spawn executor worker",
|
||||
team_name: "my-team",
|
||||
name: "executor",
|
||||
run_in_background: true,
|
||||
prompt: "## Role Assignment\nrole: executor\n..."
|
||||
})
|
||||
```
|
||||
|
||||
#### doc-generator
|
||||
|
||||
Documentation generation agent.
|
||||
|
||||
**Expertise:**
|
||||
- API documentation
|
||||
- User guides
|
||||
- Technical writing
|
||||
- Diagram generation
|
||||
|
||||
```javascript
|
||||
Task({
|
||||
subagent_type: "doc-generator",
|
||||
prompt: "Generate documentation for the REST API",
|
||||
run_in_background: false
|
||||
})
|
||||
```
|
||||
|
||||
## Agent Categories
|
||||
|
||||
| Category | Agents | Purpose |
|
||||
|----------|--------|---------|
|
||||
| **Execution** | code-developer, tdd-developer, test-fix-agent, universal-executor | Implement code and run tasks |
|
||||
| **Analysis** | context-search-agent, debug-explore-agent, cli-explore-agent | Explore and analyze code |
|
||||
| **Planning** | action-planning-agent, issue-plan-agent, cli-planning-agent | Create plans and strategies |
|
||||
| **Specialized** | team-worker, doc-generator, ui-design-agent | Domain-specific tasks |
|
||||
|
||||
## Agent Communication
|
||||
|
||||
Agents can communicate and coordinate with each other:
|
||||
|
||||
```javascript
|
||||
// Agent sends message
|
||||
SendMessage({
|
||||
type: "message",
|
||||
recipient: "tester",
|
||||
content: "Feature implementation complete, ready for testing"
|
||||
})
|
||||
|
||||
// Agent receives message via system
|
||||
```
|
||||
|
||||
## Team Workflows
|
||||
|
||||
Multiple agents can work together on complex tasks:
|
||||
|
||||
```
|
||||
[analyst] -> RESEARCH (requirements analysis)
|
||||
|
|
||||
v
|
||||
[writer] -> DRAFT (specification creation)
|
||||
|
|
||||
v
|
||||
[planner] -> PLAN (implementation planning)
|
||||
|
|
||||
+--[executor] -> IMPL (code implementation)
|
||||
| |
|
||||
| v
|
||||
+-----------[tester] -> TEST (testing)
|
||||
|
|
||||
v
|
||||
[reviewer] -> REVIEW (code review)
|
||||
```
|
||||
|
||||
## Using Agents
|
||||
|
||||
### Via Task Tool
|
||||
|
||||
```javascript
|
||||
// Foreground execution
|
||||
Task({
|
||||
subagent_type: "code-developer",
|
||||
prompt: "Implement user dashboard",
|
||||
run_in_background: false
|
||||
})
|
||||
|
||||
// Background execution
|
||||
Task({
|
||||
subagent_type: "code-developer",
|
||||
prompt: "Implement user dashboard",
|
||||
run_in_background: true
|
||||
})
|
||||
```
|
||||
|
||||
### Via Team Skills
|
||||
|
||||
Team skills automatically coordinate multiple agents:
|
||||
|
||||
```javascript
|
||||
Skill({
|
||||
skill: "team-lifecycle",
|
||||
args: "Build user authentication system"
|
||||
})
|
||||
```
|
||||
|
||||
### Configuration
|
||||
|
||||
Agent behavior is configured via role-spec files in team workflows:
|
||||
|
||||
```markdown
|
||||
---
|
||||
role: executor
|
||||
prefix: IMPL
|
||||
inner_loop: true
|
||||
subagents: [explore]
|
||||
---
|
||||
```
|
||||
|
||||
::: info See Also
|
||||
- [Skills](../skills/) - Reusable skill library
|
||||
- [Workflows](../workflows/) - Orchestration system
|
||||
- [Teams](../workflows/teams.md) - Team workflow reference
|
||||
:::
|
||||
889 docs/cli/commands.md Normal file
@@ -0,0 +1,889 @@
# CLI Commands Reference

Complete reference for all **43 CCW commands** organized by category, with **7 workflow chains** for common development scenarios.

## Command Categories

| Category | Commands | Description |
|----------|----------|-------------|
| [Orchestrators](#orchestrators) | 3 | Main workflow orchestration |
| [Workflow](#workflow-commands) | 10 | Project initialization and management |
| [Session](#session-commands) | 6 | Session lifecycle management |
| [Analysis](#analysis-commands) | 4 | Code analysis and debugging |
| [Planning](#planning-commands) | 3 | Brainstorming and planning |
| [Execution](#execution-commands) | 1 | Universal execution engine |
| [UI Design](#ui-design-commands) | 10 | Design token extraction and prototyping |
| [Issue](#issue-commands) | 8 | Issue discovery and resolution |
| [Memory](#memory-commands) | 2 | Memory and context management |
| [CLI](#cli-commands) | 2 | CLI configuration and review |

---

## Orchestrators

### ccw

**Purpose**: Main workflow orchestrator - analyze intent, select workflow, execute command chain

**Description**: Analyzes user intent, selects the appropriate workflow, and executes the command chain in the main process.

**Flags**:
- `-y, --yes` - Skip all confirmations

**Mapped Skills**:
- workflow-lite-plan, workflow-plan, workflow-execute, workflow-tdd
- workflow-test-fix, workflow-multi-cli-plan, review-cycle, brainstorm
- team-planex, team-iterdev, team-lifecycle, team-issue
- team-testing, team-quality-assurance, team-brainstorm, team-uidesign

```bash
ccw -y
```

### ccw-coordinator

**Purpose**: Command orchestration tool with external CLI execution

**Description**: Analyzes requirements, recommends a command chain, and executes it sequentially with state persistence. Uses background tasks with hook callbacks.

**Tools**: `Task`, `AskUserQuestion`, `Read`, `Write`, `Bash`, `Glob`, `Grep`

```bash
ccw-coordinator
```

### flow-create

**Purpose**: Generate workflow templates for meta-skill/flow-coordinator

```bash
flow-create
```

---

## Workflow Commands

### workflow init

**Purpose**: Initialize project-level state with intelligent project analysis

**Description**: Uses cli-explore-agent for intelligent project analysis, generating project-tech.json and specification files.

**Flags**:
- `--regenerate` - Force regeneration
- `--skip-specs` - Skip specification generation

**Output**:
- `.workflow/project-tech.json`
- `.workflow/specs/*.md`

**Delegates to**: `cli-explore-agent`

```bash
workflow init --regenerate
```

### workflow init-specs

**Purpose**: Interactive wizard for creating individual specs or personal constraints

**Flags**:
- `--scope <global|project>` - Scope selection
- `--dimension <specs|personal>` - Dimension selection
- `--category <general|exploration|planning|execution>` - Category selection

```bash
workflow init-specs --scope project --dimension specs
```

### workflow init-guidelines

**Purpose**: Interactive wizard to fill specs/*.md based on project analysis

**Flags**:
- `--reset` - Reset existing guidelines

```bash
workflow init-guidelines --reset
```

### workflow clean

**Purpose**: Intelligent code cleanup with mainline detection

**Description**: Discovers stale artifacts and executes safe cleanup operations.

**Flags**:
- `-y, --yes` - Skip confirmation
- `--dry-run` - Preview without changes

**Delegates to**: `cli-explore-agent`

```bash
workflow clean --dry-run
```

### workflow unified-execute-with-file

**Purpose**: Universal execution engine for any planning/brainstorm/analysis output

**Flags**:
- `-y, --yes` - Skip confirmation
- `-p, --plan <path>` - Plan file path
- `--auto-commit` - Auto-commit after execution

**Execution Methods**: Agent, CLI-Codex, CLI-Gemini, Auto

**Output**: `.workflow/.execution/{session-id}/execution-events.md`

```bash
workflow unified-execute-with-file -p plan.json --auto-commit
```

### workflow brainstorm-with-file

**Purpose**: Interactive brainstorming with multi-CLI collaboration

**Description**: Documents thought evolution with idea expansion.

**Flags**:
- `-y, --yes` - Skip confirmation
- `-c, --continue` - Continue previous session
- `-m, --mode <creative|structured>` - Brainstorming mode

**Delegates to**: `cli-explore-agent`, `Multi-CLI (Gemini/Codex/Claude)`

**Output**: `.workflow/.brainstorm/{session-id}/synthesis.json`

```bash
workflow brainstorm-with-file -m creative
```

### workflow analyze-with-file

**Purpose**: Interactive collaborative analysis with documented discussions

**Flags**:
- `-y, --yes` - Skip confirmation
- `-c, --continue` - Continue previous session

**Delegates to**: `cli-explore-agent`, `Gemini/Codex`

**Output**: `.workflow/.analysis/{session-id}/discussion.md`

```bash
workflow analyze-with-file
```

### workflow debug-with-file

**Purpose**: Interactive hypothesis-driven debugging

**Description**: Documents exploration with Gemini-assisted correction.

**Flags**:
- `-y, --yes` - Skip confirmation

**Output**: `.workflow/.debug/{session-id}/understanding.md`

```bash
workflow debug-with-file
```

### workflow collaborative-plan-with-file

**Purpose**: Collaborative planning with Plan Note

**Description**: Parallel agents fill pre-allocated sections with conflict detection.

**Flags**:
- `-y, --yes` - Skip confirmation
- `--max-agents=5` - Maximum parallel agents

**Output**: `.workflow/.planning/{session-id}/plan-note.md`

```bash
workflow collaborative-plan-with-file --max-agents=5
```
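
The section pre-allocation described above can be sketched in miniature. This is an illustrative model only, not CCW's implementation; the agent names, section names, and the `applyWrites` helper are all made up for the example:

```javascript
// Minimal model of pre-allocated plan-note sections with conflict detection.
// Each agent may only write the sections allocated to it; a write to a
// section owned by another agent is reported as a conflict instead of applied.
const allocation = {
  "agent-1": ["architecture", "data-model"],
  "agent-2": ["api-design", "testing"],
};

function applyWrites(allocation, writes) {
  // Invert the allocation into a section -> owner map.
  const owner = {};
  for (const [agent, sections] of Object.entries(allocation)) {
    for (const s of sections) owner[s] = agent;
  }
  const note = {};
  const conflicts = [];
  for (const { agent, section, text } of writes) {
    if (owner[section] !== agent) {
      conflicts.push({ agent, section }); // out-of-allocation write
    } else {
      note[section] = text;
    }
  }
  return { note, conflicts };
}

const { note, conflicts } = applyWrites(allocation, [
  { agent: "agent-1", section: "architecture", text: "Hexagonal layout" },
  { agent: "agent-2", section: "architecture", text: "Layered layout" }, // conflict
]);
console.log(note, conflicts);
```

Because each section has exactly one owner, parallel agents never race on the same region of the plan note, and any attempt to do so surfaces as an explicit conflict.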

### workflow roadmap-with-file

**Purpose**: Strategic requirement roadmap with iterative decomposition

**Flags**:
- `-y, --yes` - Skip confirmation
- `-c, --continue` - Continue previous session
- `-m, --mode <progressive|direct|auto>` - Decomposition mode

**Output**:
- `.workflow/.roadmap/{session-id}/roadmap.md`
- `.workflow/issues/issues.jsonl`

**Handoff to**: `team-planex`

```bash
workflow roadmap-with-file -m progressive
```

---

## Session Commands

### workflow session start

**Purpose**: Discover existing sessions or start new workflow session

**Flags**:
- `--type <workflow|review|tdd|test|docs>` - Session type
- `--auto|--new` - Auto-discover or force new

**Calls first**: `workflow init`

```bash
workflow session start --type tdd
```

### workflow session resume

**Purpose**: Resume the most recently paused workflow session

```bash
workflow session resume
```

### workflow session list

**Purpose**: List all workflow sessions with status filtering

```bash
workflow session list
```

### workflow session sync

**Purpose**: Quick-sync session work to specs/*.md and project-tech

**Flags**:
- `-y, --yes` - Skip confirmation

**Updates**: `.workflow/specs/*.md`, `.workflow/project-tech.json`

```bash
workflow session sync -y
```

### workflow session solidify

**Purpose**: Crystallize session learnings into permanent project guidelines

**Flags**:
- `-y, --yes` - Skip confirmation
- `--type <convention|constraint|learning|compress>` - Solidification type
- `--category <category>` - Category for guidelines
- `--limit <N>` - Limit for compress mode

```bash
workflow session solidify --type learning
```

### workflow session complete

**Purpose**: Mark active workflow session as complete

**Description**: Archives with lessons learned, auto-calls sync.

**Flags**:
- `-y, --yes` - Skip confirmation
- `--detailed` - Detailed completion report

**Auto-calls**: `workflow session sync -y`

```bash
workflow session complete --detailed
```

---

## Analysis Commands

### workflow integration-test-cycle

**Purpose**: Self-iterating integration test workflow

**Description**: Autonomous test-fix cycles with reflection-driven adjustment.

**Flags**:
- `-y, --yes` - Skip confirmation
- `-c, --continue` - Continue previous session
- `--max-iterations=N` - Maximum iterations

**Output**: `.workflow/.integration-test/{session-id}/reflection-log.md`

```bash
workflow integration-test-cycle --max-iterations=5
```

### workflow refactor-cycle

**Purpose**: Tech debt discovery and self-iterating refactoring

**Flags**:
- `-y, --yes` - Skip confirmation
- `-c, --continue` - Continue previous session
- `--scope=module|project` - Refactoring scope

**Output**: `.workflow/.refactor/{session-id}/reflection-log.md`

```bash
workflow refactor-cycle --scope project
```

---

## Planning Commands

### workflow req-plan-with-file

**Purpose**: Requirement-level progressive roadmap planning with issue creation

**Description**: Decomposes requirements into convergent layers or task sequences.

```bash
workflow req-plan-with-file
```

---

## Execution Commands

### workflow execute

**Purpose**: Coordinate agent execution for workflow tasks

**Description**: Automatic session discovery, parallel task processing, and status tracking.

**Triggers**: `workflow:execute`

```bash
workflow execute
```

---

## UI Design Commands

### workflow ui-design style-extract

**Purpose**: Extract design style from reference images or text prompts

**Flags**:
- `-y, --yes` - Skip confirmation
- `--design-id <id>` - Design identifier
- `--session <id>` - Session identifier
- `--images <glob>` - Image file pattern
- `--prompt <desc>` - Text description
- `--variants <count>` - Number of variants
- `--interactive` - Interactive mode
- `--refine` - Refinement mode

**Modes**: Exploration, Refinement

**Output**: `style-extraction/style-{id}/design-tokens.json`

```bash
workflow ui-design style-extract --images "design/*.png" --variants 3
```

### workflow ui-design layout-extract

**Purpose**: Extract structural layout from reference images or text prompts

**Flags**:
- `-y, --yes` - Skip confirmation
- `--design-id <id>` - Design identifier
- `--session <id>` - Session identifier
- `--images <glob>` - Image file pattern
- `--prompt <desc>` - Text description
- `--targets <list>` - Target components
- `--variants <count>` - Number of variants
- `--device-type <desktop|mobile|tablet|responsive>` - Device type
- `--interactive` - Interactive mode
- `--refine` - Refinement mode

**Delegates to**: `ui-design-agent`

**Output**: `layout-extraction/layout-*.json`

```bash
workflow ui-design layout-extract --prompt "dashboard layout" --device-type responsive
```

### workflow ui-design generate

**Purpose**: Assemble UI prototypes by combining layout templates with design tokens

**Flags**:
- `--design-id <id>` - Design identifier
- `--session <id>` - Session identifier

**Delegates to**: `ui-design-agent`

**Prerequisites**: `workflow ui-design style-extract`, `workflow ui-design layout-extract`

```bash
workflow ui-design generate --design-id dashboard-001
```

### workflow ui-design animation-extract

**Purpose**: Extract animation and transition patterns

**Flags**:
- `-y, --yes` - Skip confirmation
- `--design-id <id>` - Design identifier
- `--session <id>` - Session identifier
- `--images <glob>` - Image file pattern
- `--focus <types>` - Animation types
- `--interactive` - Interactive mode
- `--refine` - Refinement mode

**Delegates to**: `ui-design-agent`

**Output**: `animation-extraction/animation-tokens.json`

```bash
workflow ui-design animation-extract --focus "transition,keyframe"
```

### workflow ui-design import-from-code

**Purpose**: Import design system from code files

**Description**: Automatic file discovery for CSS/JS/HTML/SCSS.

**Flags**:
- `--design-id <id>` - Design identifier
- `--session <id>` - Session identifier
- `--source <path>` - Source path

**Delegates to**: Style Agent, Animation Agent, Layout Agent

**Output**: `style-extraction`, `animation-extraction`, `layout-extraction`

```bash
workflow ui-design import-from-code --source src/styles
```

### workflow ui-design codify-style

**Purpose**: Extract styles from code and generate shareable reference package

**Flags**:
- `--package-name <name>` - Package name
- `--output-dir <path>` - Output directory
- `--overwrite` - Overwrite existing

**Orchestrates**: `workflow ui-design import-from-code`, `workflow ui-design reference-page-generator`

**Output**: `.workflow/reference_style/{package-name}/`

```bash
workflow ui-design codify-style --package-name my-design-system
```

### workflow ui-design reference-page-generator

**Purpose**: Generate multi-component reference pages from design run extraction

**Flags**:
- `--design-run <path>` - Design run path
- `--package-name <name>` - Package name
- `--output-dir <path>` - Output directory

**Output**: `.workflow/reference_style/{package-name}/preview.html`

```bash
workflow ui-design reference-page-generator --design-run .workflow/design-run-001
```

### workflow ui-design design-sync

**Purpose**: Synchronize finalized design system references to brainstorming artifacts

**Flags**:
- `--session <session_id>` - Session identifier
- `--selected-prototypes <list>` - Selected prototypes

**Updates**: Role analysis documents, context-package.json

```bash
workflow ui-design design-sync --session design-001
```

### workflow ui-design explore-auto

**Purpose**: Interactive exploratory UI design with style-centric batch generation

**Flags**:
- `--input <value>` - Input source
- `--targets <list>` - Target components
- `--target-type <page|component>` - Target type
- `--session <id>` - Session identifier
- `--style-variants <count>` - Style variants
- `--layout-variants <count>` - Layout variants

**Orchestrates**: `import-from-code`, `style-extract`, `animation-extract`, `layout-extract`, `generate`

```bash
workflow ui-design explore-auto --input "dashboard" --style-variants 3
```

### workflow ui-design imitate-auto

**Purpose**: UI design workflow with direct code/image input

**Flags**:
- `--input <value>` - Input source
- `--session <id>` - Session identifier

**Orchestrates**: Same as explore-auto

```bash
workflow ui-design imitate-auto --input ./reference.png
```

---

## Issue Commands

### issue new

**Purpose**: Create structured issue from GitHub URL or text description

**Flags**:
- `-y, --yes` - Skip confirmation
- `--priority 1-5` - Issue priority

**Features**: Clarity detection

**Output**: `.workflow/issues/issues.jsonl`

```bash
issue new --priority 3
```

### issue discover

**Purpose**: Discover potential issues from multiple perspectives

**Flags**:
- `-y, --yes` - Skip confirmation
- `--perspectives=bug,ux,...` - Analysis perspectives
- `--external` - Include external research

**Perspectives**: bug, ux, test, quality, security, performance, maintainability, best-practices

**Delegates to**: `cli-explore-agent`

**Output**: `.workflow/issues/discoveries/{discovery-id}/`

```bash
issue discover --perspectives=bug,security,performance
```

### issue discover-by-prompt

**Purpose**: Discover issues from user prompt with Gemini-planned exploration

**Flags**:
- `-y, --yes` - Skip confirmation
- `--scope=src/**` - File scope
- `--depth=standard|deep` - Analysis depth
- `--max-iterations=5` - Maximum iterations

**Delegates to**: `Gemini CLI`, `ACE search`, `multi-agent exploration`

```bash
issue discover-by-prompt --depth deep --scope "src/auth/**"
```

### issue plan

**Purpose**: Batch plan issue resolution using issue-plan-agent

**Flags**:
- `-y, --yes` - Skip confirmation
- `--all-pending` - Plan all pending issues
- `--batch-size 3` - Batch size

**Delegates to**: `issue-plan-agent`

**Output**: `.workflow/issues/solutions/{issue-id}.jsonl`

```bash
issue plan --all-pending --batch-size 5
```

### issue queue

**Purpose**: Form execution queue from bound solutions

**Flags**:
- `-y, --yes` - Skip confirmation
- `--queues <n>` - Number of queues
- `--issue <id>` - Specific issue

**Delegates to**: `issue-queue-agent`

**Output**: `.workflow/issues/queues/QUE-xxx.json`

```bash
issue queue --queues 2
```

### issue execute

**Purpose**: Execute queue with DAG-based parallel orchestration

**Flags**:
- `-y, --yes` - Skip confirmation
- `--queue <queue-id>` - Queue identifier
- `--worktree [<path>]` - Use worktree isolation

**Executors**: Codex (recommended), Gemini, Agent

```bash
issue execute --queue QUE-001 --worktree
```
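
DAG-based parallel orchestration, in miniature: group queued tasks into "waves" where every task's dependencies were completed by earlier waves, so each wave can run in parallel. The sketch below illustrates the general technique only, not CCW's actual scheduler; the task IDs and dependency shape are invented for the example:

```javascript
// Group a dependency DAG into parallel execution waves.
// Map: task -> tasks it depends on.
const queue = {
  "FIX-1": [],
  "FIX-2": [],
  "FIX-3": ["FIX-1"],
  "FIX-4": ["FIX-1", "FIX-2"],
};

function toWaves(dag) {
  const waves = [];
  const done = new Set();
  let remaining = Object.keys(dag);
  while (remaining.length > 0) {
    // A task is ready once all of its dependencies are done.
    const wave = remaining.filter((t) => dag[t].every((d) => done.has(d)));
    if (wave.length === 0) throw new Error("cycle detected in queue DAG");
    wave.forEach((t) => done.add(t));
    remaining = remaining.filter((t) => !done.has(t));
    waves.push(wave);
  }
  return waves;
}

console.log(toWaves(queue)); // [["FIX-1","FIX-2"],["FIX-3","FIX-4"]]
```

Each wave's tasks are independent of one another, so an orchestrator can dispatch them to parallel executors and only synchronize at wave boundaries.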

### issue convert-to-plan

**Purpose**: Convert planning artifacts to issue solutions

**Flags**:
- `-y, --yes` - Skip confirmation
- `--issue <id>` - Issue identifier
- `--supplement` - Supplement existing solution

**Sources**: lite-plan, workflow-session, markdown, json

```bash
issue convert-to-plan --issue 123 --supplement
```

### issue from-brainstorm

**Purpose**: Convert brainstorm session ideas into issue with executable solution

**Flags**:
- `-y, --yes` - Skip confirmation
- `--idea=<index>` - Idea index
- `--auto` - Auto-select best idea

**Input Sources**: `synthesis.json`, `perspectives.json`, `.brainstorming/**`

**Output**: `issues.jsonl`, `solutions/{issue-id}.jsonl`

```bash
issue from-brainstorm --auto
```

---

## Memory Commands

### memory prepare

**Purpose**: Delegate to universal-executor agent for project analysis

**Description**: Returns a JSON core content package for memory loading.

**Flags**:
- `--tool gemini|qwen` - AI tool selection

**Delegates to**: `universal-executor agent`

```bash
memory prepare --tool gemini
```

### memory style-skill-memory

**Purpose**: Generate SKILL memory package from style reference

**Flags**:
- `--regenerate` - Force regeneration

**Input**: `.workflow/reference_style/{package-name}/`

**Output**: `.claude/skills/style-{package-name}/SKILL.md`

```bash
memory style-skill-memory --regenerate
```

---

## CLI Commands

### cli init

**Purpose**: Generate .gemini/ and .qwen/ config directories

**Description**: Creates settings.json and ignore files based on workspace technology detection.

**Flags**:
- `--tool gemini|qwen|all` - Tool selection
- `--output path` - Output path
- `--preview` - Preview without writing

**Output**: `.gemini/`, `.qwen/`, `.geminiignore`, `.qwenignore`

```bash
cli init --tool all --preview
```

### cli codex-review

**Purpose**: Interactive code review using Codex CLI

**Flags**:
- `--uncommitted` - Review uncommitted changes
- `--base <branch>` - Compare to branch
- `--commit <sha>` - Review specific commit
- `--model <model>` - Model selection
- `--title <title>` - Review title

```bash
cli codex-review --base main --title "Security Review"
```

---

## Workflow Chains

Pre-defined command combinations for common development scenarios:

### 1. Project Initialization Chain

**Purpose**: Initialize project state and guidelines

```bash
workflow init
workflow init-specs --scope project
workflow init-guidelines
```

**Output**: `.workflow/project-tech.json`, `.workflow/specs/*.md`

---

### 2. Session Lifecycle Chain

**Purpose**: Complete session management workflow

```bash
workflow session start --type workflow
# ... work on tasks ...
workflow session sync -y
workflow session solidify --type learning
workflow session complete --detailed
```

---

### 3. Issue Workflow Chain

**Purpose**: Full issue discovery to execution cycle

```bash
issue discover --perspectives=bug,security
issue plan --all-pending
issue queue --queues 2
issue execute --queue QUE-001
```

---

### 4. Brainstorm to Issue Chain

**Purpose**: Convert brainstorm to executable issue

```bash
workflow brainstorm-with-file -m creative
issue from-brainstorm --auto
issue queue
issue execute
```

---

### 5. UI Design Full Cycle

**Purpose**: Complete UI design workflow

```bash
workflow ui-design style-extract --images "design/*.png"
workflow ui-design layout-extract --images "design/*.png"
workflow ui-design generate --design-id main-001
```

---

### 6. UI Design from Code Chain

**Purpose**: Extract design system from existing code

```bash
workflow ui-design import-from-code --source src/styles
workflow ui-design reference-page-generator --design-run .workflow/style-extraction
```

---

### 7. Roadmap to Team Execution Chain

**Purpose**: Strategic planning to team execution

```bash
workflow roadmap-with-file -m progressive
# Handoff to team-planex skill
```

---

## Command Dependencies

Some commands have prerequisites or call other commands:

| Command | Depends On |
|---------|------------|
| `workflow session start` | `workflow init` |
| `workflow session complete` | `workflow session sync` |
| `workflow ui-design generate` | `style-extract`, `layout-extract` |
| `workflow ui-design codify-style` | `import-from-code`, `reference-page-generator` |
| `issue from-brainstorm` | `workflow brainstorm-with-file` |
| `issue queue` | `issue plan` |
| `issue execute` | `issue queue` |
| `memory style-skill-memory` | `workflow ui-design codify-style` |
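A dependency table like this one can be turned into a safe execution order mechanically. The sketch below is illustrative only, not part of CCW; it simply topologically sorts a subset of the documented prerequisites:

```javascript
// Order commands so every prerequisite runs before its dependent.
// The map mirrors rows of the dependency table: command -> dependencies.
const deps = {
  "workflow session start": ["workflow init"],
  "workflow session complete": ["workflow session sync"],
  "issue queue": ["issue plan"],
  "issue execute": ["issue queue"],
};

function executionOrder(graph) {
  const order = [];
  const seen = new Set();
  const visit = (cmd) => {
    if (seen.has(cmd)) return;
    seen.add(cmd);
    (graph[cmd] || []).forEach(visit); // emit prerequisites first
    order.push(cmd);
  };
  Object.keys(graph).forEach(visit);
  return order;
}

console.log(executionOrder(deps));
```

Any ordering this depth-first sort produces places `workflow init` before `workflow session start` and `issue plan` before `issue queue` before `issue execute`, which is exactly the constraint the table encodes.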
---
|
||||
|
||||
## Agent Delegations
|
||||
|
||||
Commands delegate work to specialized agents:
|
||||
|
||||
| Agent | Commands |
|
||||
|-------|----------|
|
||||
| `cli-explore-agent` | `workflow init`, `workflow clean`, `workflow brainstorm-with-file`, `workflow analyze-with-file`, `issue discover` |
|
||||
| `universal-executor` | `memory prepare` |
|
||||
| `issue-plan-agent` | `issue plan` |
|
||||
| `issue-queue-agent` | `issue queue` |
|
||||
| `ui-design-agent` | `workflow ui-design layout-extract`, `generate`, `animation-extract` |
|
||||
|
||||
::: info See Also
|
||||
- [CLI Tools Configuration](../guide/cli-tools.md) - Configure CLI tools
|
||||
- [Skills Library](../skills/core-skills.md) - Built-in skills
|
||||
- [Agents](../agents/builtin.md) - Specialized agents
|
||||
:::
|
docs/commands/claude/cli.md (new file, +152 lines)
# CLI Tool Commands

## One-Liner

**CLI tool commands are the bridge to external model invocation** — integrating Gemini, Qwen, Codex and other multi-model capabilities into workflows.

## Core Concepts

| Concept | Description | Configuration |
|---------|-------------|---------------|
| **CLI Tool** | External AI model invocation interface | `cli-tools.json` |
| **Endpoint** | Available model services | gemini, qwen, codex, claude |
| **Mode** | analysis / write / review | Permission level |

## Command List

| Command | Function | Syntax |
|---------|----------|--------|
| [`cli-init`](#cli-init) | Generate configuration directory and settings files | `/cli:cli-init [--tool gemini\|qwen\|all] [--output path] [--preview]` |
| [`codex-review`](#codex-review) | Interactive code review using Codex CLI | `/cli:codex-review [--uncommitted\|--base <branch>\|--commit <sha>] [prompt]` |

## Command Details

### cli-init

**Function**: Generate `.gemini/` and `.qwen/` configuration directories based on workspace tech detection, including settings.json and ignore files.

**Syntax**:
```
/cli:cli-init [--tool gemini|qwen|all] [--output path] [--preview]
```

**Options**:
- `--tool=tool`: gemini, qwen or all
- `--output=path`: Output directory
- `--preview`: Preview mode (don't actually create)

**Generated File Structure**:
```
.gemini/
├── settings.json    # Gemini configuration
└── ignore           # Ignore patterns

.qwen/
├── settings.json    # Qwen configuration
└── ignore           # Ignore patterns
```

**Tech Detection**:

| Detection Item | Generated Config |
|----------------|------------------|
| TypeScript | tsconfig-related config |
| React | React-specific config |
| Vue | Vue-specific config |
| Python | Python-specific config |
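Detection of this kind can be as simple as probing for marker files. An illustrative Python sketch — the specific marker files chosen here are assumptions, and the real `cli-init` implementation may inspect more signals:

```python
import json
from pathlib import Path

def detect_stack(root: str) -> set[str]:
    """Guess the workspace tech stack from common marker files."""
    root_path = Path(root)
    stack = set()
    if (root_path / "tsconfig.json").exists():
        stack.add("TypeScript")
    pkg = root_path / "package.json"
    if pkg.exists():
        deps = json.loads(pkg.read_text()).get("dependencies", {})
        if "react" in deps:
            stack.add("React")
        if "vue" in deps:
            stack.add("Vue")
    if (root_path / "pyproject.toml").exists() or (root_path / "requirements.txt").exists():
        stack.add("Python")
    return stack
```

Each detected entry would then select the corresponding config template from the table above.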
**Examples**:
```bash
# Initialize all tools
/cli:cli-init --tool all

# Initialize specific tool
/cli:cli-init --tool gemini

# Specify output directory
/cli:cli-init --output ./configs

# Preview mode
/cli:cli-init --preview
```

### codex-review

**Function**: Interactive code review using Codex CLI via the ccw endpoint, supporting configurable review targets, models, and custom instructions.

**Syntax**:
```
/cli:codex-review [--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]
```

**Options**:
- `--uncommitted`: Review uncommitted changes
- `--base <branch>`: Compare with a branch
- `--commit <sha>`: Review a specific commit
- `--model <model>`: Specify model
- `--title <title>`: Review title

**Note**: The target flags (`--uncommitted`, `--base`, `--commit`) are mutually exclusive with each other

**Examples**:
```bash
# Review uncommitted changes
/cli:codex-review --uncommitted

# Compare with main branch
/cli:codex-review --base main

# Review specific commit
/cli:codex-review --commit abc123

# With custom instructions
/cli:codex-review --uncommitted "focus on security issues"

# Specify model and title
/cli:codex-review --model gpt-5.2 --title "Authentication module review"
```

## CLI Tool Configuration

### cli-tools.json Structure

```json
{
  "version": "3.3.0",
  "tools": {
    "gemini": {
      "enabled": true,
      "primaryModel": "gemini-2.5-flash",
      "secondaryModel": "gemini-2.5-flash",
      "tags": ["analysis", "Debug"],
      "type": "builtin"
    },
    "qwen": {
      "enabled": true,
      "primaryModel": "coder-model",
      "tags": [],
      "type": "builtin"
    },
    "codex": {
      "enabled": true,
      "primaryModel": "gpt-5.2",
      "tags": [],
      "type": "builtin"
    }
  }
}
```
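A consumer of this file typically filters on the `enabled` flag before dispatching. A minimal Python sketch against an inline sample (real code would read `cli-tools.json` from disk; the lookup path and any schema fields beyond those shown above are unspecified here):

```python
import json

# Sample mirroring the structure above, with codex disabled to show filtering.
CONFIG = """{
  "version": "3.3.0",
  "tools": {
    "gemini": {"enabled": true, "primaryModel": "gemini-2.5-flash", "type": "builtin"},
    "qwen":   {"enabled": true, "primaryModel": "coder-model", "type": "builtin"},
    "codex":  {"enabled": false, "primaryModel": "gpt-5.2", "type": "builtin"}
  }
}"""

def enabled_tools(config_text: str) -> dict[str, str]:
    """Return {tool_name: primaryModel} for every enabled tool."""
    config = json.loads(config_text)
    return {
        name: spec["primaryModel"]
        for name, spec in config.get("tools", {}).items()
        if spec.get("enabled")
    }

print(enabled_tools(CONFIG))  # {'gemini': 'gemini-2.5-flash', 'qwen': 'coder-model'}
```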
### Mode Descriptions

| Mode | Permission | Use Cases |
|------|------------|-----------|
| `analysis` | Read-only | Code review, architecture analysis, pattern discovery |
| `write` | Create/modify/delete | Feature implementation, bug fixes, documentation creation |
| `review` | Git-aware code review | Review uncommitted changes, branch diffs, specific commits |

## Related Documentation

- [CLI Invocation System](../../features/cli.md)
- [Core Orchestration](./core-orchestration.md)
- [Workflow Commands](./workflow.md)

docs/commands/claude/core-orchestration.md (new file, +166 lines)
# Core Orchestration Commands

## One-Liner

**Core orchestration commands are the workflow brain of Claude_dms3** — analyzing task intent, selecting appropriate workflows, and automatically executing command chains.

## Command List

| Command | Function | Syntax |
|---------|----------|--------|
| [`/ccw`](#ccw) | Main workflow orchestrator - intent analysis -> workflow selection -> command chain execution | `/ccw "task description"` |
| [`/ccw-coordinator`](#ccw-coordinator) | Command orchestration tool - chained command execution and state persistence | `/ccw-coordinator "task description"` |

## Command Details

### /ccw

**Function**: Main workflow orchestrator - intent analysis -> workflow selection -> command chain execution

**Syntax**:
```
/ccw "task description"
```

**Options**:
- `--yes` / `-y`: Auto mode, skip confirmation steps

**Workflow**:

```mermaid
graph TD
    A[User Input] --> B[Analyze Intent]
    B --> C{Clarity Score}
    C -->|>=2| D[Select Workflow Directly]
    C -->|<2| E[Clarify Requirements]
    E --> D
    D --> F[Build Command Chain]
    F --> G{User Confirm?}
    G -->|Yes| H[Execute Command Chain]
    G -->|Cancel| I[End]
    H --> J{More Steps?}
    J -->|Yes| F
    J -->|No| K[Complete]
```

**Task Type Detection**:

| Type | Trigger Keywords | Workflow |
|------|------------------|----------|
| **Bug Fix** | urgent, production, critical + fix, bug | lite-fix |
| **Brainstorming** | brainstorm, ideation | brainstorm-with-file |
| **Debug Document** | debug document, hypothesis | debug-with-file |
| **Collaborative Analysis** | analyze document | analyze-with-file |
| **Collaborative Planning** | collaborative plan | collaborative-plan-with-file |
| **Requirements Roadmap** | roadmap | req-plan-with-file |
| **Integration Test** | integration test | integration-test-cycle |
| **Refactoring** | refactor | refactor-cycle |
| **Team Workflow** | team + keywords | corresponding team workflow |
| **TDD** | tdd, test-first | tdd-plan -> execute |
| **Test Fix** | test fix, failing test | test-fix-gen -> test-cycle-execute |
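The detection table reads like a routing function. The sketch below mimics it with literal keyword matching — illustrative only, since the real `/ccw` intent analysis is model-driven and also scores clarity:

```python
# A few routes from the table above; order matters (first match wins).
ROUTES = [
    ({"tdd", "test-first"}, "tdd-plan -> execute"),
    ({"brainstorm", "ideation"}, "brainstorm-with-file"),
    ({"roadmap"}, "req-plan-with-file"),
    ({"refactor"}, "refactor-cycle"),
    ({"fix", "bug"}, "lite-fix"),
]

def select_workflow(task: str) -> str:
    words = set(task.lower().replace(",", " ").split())
    for keywords, workflow in ROUTES:
        if words & keywords:
            return workflow
    return "lite-plan -> lite-execute"  # assumed default: quick implementation

print(select_workflow("fix login failure bug"))        # lite-fix
print(select_workflow("implement payment using TDD"))  # tdd-plan -> execute
```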
**Examples**:

```bash
# Basic usage - auto-select workflow
/ccw "implement user authentication"

# Bug fix
/ccw "fix login failure bug"

# TDD development
/ccw "implement payment using TDD"

# Team collaboration
/ccw "team-planex implement user notification system"
```

### /ccw-coordinator

**Function**: Command orchestration tool - analyze tasks, recommend command chains, sequential execution, state persistence

**Syntax**:
```
/ccw-coordinator "task description"
```

**Minimal Execution Units**:

| Unit Name | Command Chain | Output |
|-----------|---------------|--------|
| **Quick Implementation** | lite-plan -> lite-execute | Working code |
| **Multi-CLI Planning** | multi-cli-plan -> lite-execute | Working code |
| **Bug Fix** | lite-plan (--bugfix) -> lite-execute | Fixed code |
| **Full Plan+Execute** | plan -> execute | Working code |
| **Verified Plan+Execute** | plan -> plan-verify -> execute | Working code |
| **TDD Plan+Execute** | tdd-plan -> execute | Working code |
| **Test Gen+Execute** | test-gen -> execute | Generated tests |
| **Review Cycle** | review-session-cycle -> review-cycle-fix | Fixed code |
| **Issue Workflow** | discover -> plan -> queue -> execute | Completed issue |

**Workflow**:

```mermaid
graph TD
    A[Task Analysis] --> B[Discover Commands]
    B --> C[Recommend Command Chain]
    C --> D{User Confirm?}
    D -->|Yes| E[Sequential Execute]
    D -->|Modify| F[Adjust Command Chain]
    F --> D
    E --> G[Persist State]
    G --> H{More Steps?}
    H -->|Yes| B
    H -->|No| I[Complete]
```

**Examples**:

```bash
# Auto-orchestrate bug fix
/ccw-coordinator "production login failure"

# Auto-orchestrate feature implementation
/ccw-coordinator "add user avatar upload"
```

## Auto Mode

Both commands support the `--yes` flag for auto mode:

```bash
# Auto mode - skip all confirmations
/ccw "implement user authentication" --yes
/ccw-coordinator "fix login bug" --yes
```

**Auto mode behavior**:
- Skip requirement clarification
- Skip user confirmation
- Execute command chain directly

## Related Skills

| Skill | Function |
|-------|----------|
| `workflow-lite-plan` | Lightweight planning workflow |
| `workflow-plan` | Full planning workflow |
| `workflow-execute` | Execution workflow |
| `workflow-tdd` | TDD workflow |
| `review-cycle` | Code review cycle |

## Comparison

| Feature | /ccw | /ccw-coordinator |
|---------|------|------------------|
| **Execution Location** | Main process | External CLI + background tasks |
| **State Persistence** | No | Yes |
| **Hook Callbacks** | Not supported | Supported |
| **Complex Workflows** | Simple chains | Supports parallel, dependencies |
| **Use Cases** | Daily development | Complex projects, team collaboration |

## Related Documentation

- [Workflow Commands](./workflow.md)
- [Session Management](./session.md)
- [CLI Invocation System](../features/cli.md)

docs/commands/claude/index.md (new file, +222 lines)
# Claude Commands

## One-Liner

**Claude Commands is the core command system of Claude_dms3** — invoking various workflows, tools, and collaboration features through slash commands.

## Core Concepts

| Category | Command Count | Description |
|----------|---------------|-------------|
| **Core Orchestration** | 2 | Main workflow orchestrators (ccw, ccw-coordinator) |
| **Workflow** | 20+ | Planning, execution, review, TDD, testing workflows |
| **Session Management** | 6 | Session creation, listing, resuming, completion |
| **Issue Workflow** | 7 | Issue discovery, planning, queue, execution |
| **Memory** | 8 | Memory capture, update, document generation |
| **CLI Tools** | 2 | CLI initialization, Codex review |
| **UI Design** | 10 | UI design prototype generation, style extraction |

## Command Categories

### 1. Core Orchestration Commands

| Command | Function | Difficulty |
|---------|----------|------------|
| [`/ccw`](./core-orchestration.md#ccw) | Main workflow orchestrator - intent analysis -> workflow selection -> command chain execution | Intermediate |
| [`/ccw-coordinator`](./core-orchestration.md#ccw-coordinator) | Command orchestration tool - chained command execution and state persistence | Intermediate |

### 2. Workflow Commands

| Command | Function | Difficulty |
|---------|----------|------------|
| [`/workflow:lite-lite-lite`](./workflow.md#lite-lite-lite) | Ultra-lightweight multi-tool analysis and direct execution | Intermediate |
| [`/workflow:lite-plan`](./workflow.md#lite-plan) | Lightweight interactive planning workflow | Intermediate |
| [`/workflow:lite-execute`](./workflow.md#lite-execute) | Execute tasks based on in-memory plan | Intermediate |
| [`/workflow:lite-fix`](./workflow.md#lite-fix) | Lightweight bug diagnosis and fix | Intermediate |
| [`/workflow:plan`](./workflow.md#plan) | 5-phase planning workflow | Intermediate |
| [`/workflow:execute`](./workflow.md#execute) | Coordinate agent execution of workflow tasks | Intermediate |
| [`/workflow:replan`](./workflow.md#replan) | Interactive workflow replanning | Intermediate |
| [`/workflow:multi-cli-plan`](./workflow.md#multi-cli-plan) | Multi-CLI collaborative planning | Intermediate |
| [`/workflow:review`](./workflow.md#review) | Post-implementation review | Intermediate |
| [`/workflow:clean`](./workflow.md#clean) | Smart code cleanup | Intermediate |
| [`/workflow:init`](./workflow.md#init) | Initialize project state | Intermediate |
| [`/workflow:brainstorm-with-file`](./workflow.md#brainstorm-with-file) | Interactive brainstorming | Intermediate |
| [`/workflow:analyze-with-file`](./workflow.md#analyze-with-file) | Interactive collaborative analysis | Beginner |
| [`/workflow:debug-with-file`](./workflow.md#debug-with-file) | Interactive hypothesis-driven debugging | Intermediate |
| [`/workflow:unified-execute-with-file`](./workflow.md#unified-execute-with-file) | Universal execution engine | Intermediate |

### 3. Session Management Commands

| Command | Function | Difficulty |
|---------|----------|------------|
| [`/workflow:session:start`](./session.md#start) | Discover existing sessions or start new workflow session | Intermediate |
| [`/workflow:session:list`](./session.md#list) | List all workflow sessions | Beginner |
| [`/workflow:session:resume`](./session.md#resume) | Resume most recently paused workflow session | Intermediate |
| [`/workflow:session:complete`](./session.md#complete) | Mark active workflow session as completed | Intermediate |
| [`/workflow:session:solidify`](./session.md#solidify) | Crystallize session learnings into project guidelines | Intermediate |

### 4. Issue Workflow Commands

| Command | Function | Difficulty |
|---------|----------|------------|
| [`/issue:new`](./issue.md#new) | Create structured issue from GitHub URL or text description | Intermediate |
| [`/issue:discover`](./issue.md#discover) | Discover potential issues from multiple perspectives | Intermediate |
| [`/issue:discover-by-prompt`](./issue.md#discover-by-prompt) | Discover issues via user prompt | Intermediate |
| [`/issue:plan`](./issue.md#plan) | Batch plan issue solutions | Intermediate |
| [`/issue:queue`](./issue.md#queue) | Form execution queue | Intermediate |
| [`/issue:execute`](./issue.md#execute) | Execute queue | Intermediate |
| [`/issue:convert-to-plan`](./issue.md#convert-to-plan) | Convert planning artifact to issue solution | Intermediate |

### 5. Memory Commands

| Command | Function | Difficulty |
|---------|----------|------------|
| [`/memory:compact`](./memory.md#compact) | Compress current session memory to structured text | Intermediate |
| [`/memory:tips`](./memory.md#tips) | Quick note-taking | Beginner |
| [`/memory:load`](./memory.md#load) | Load task context via CLI project analysis | Intermediate |
| [`/memory:update-full`](./memory.md#update-full) | Update all CLAUDE.md files | Intermediate |
| [`/memory:update-related`](./memory.md#update-related) | Update CLAUDE.md for git-changed modules | Intermediate |
| [`/memory:docs-full-cli`](./memory.md#docs-full-cli) | Generate full project documentation using CLI | Intermediate |
| [`/memory:docs-related-cli`](./memory.md#docs-related-cli) | Generate documentation for git-changed modules | Intermediate |
| [`/memory:style-skill-memory`](./memory.md#style-skill-memory) | Generate SKILL memory package from style reference | Intermediate |

### 6. CLI Tool Commands

| Command | Function | Difficulty |
|---------|----------|------------|
| [`/cli:cli-init`](./cli.md#cli-init) | Generate configuration directory and settings files | Intermediate |
| [`/cli:codex-review`](./cli.md#codex-review) | Interactive code review using Codex CLI | Intermediate |

### 7. UI Design Commands

| Command | Function | Difficulty |
|---------|----------|------------|
| [`/workflow:ui-design:explore-auto`](./ui-design.md#explore-auto) | Interactive exploratory UI design workflow | Intermediate |
| [`/workflow:ui-design:imitate-auto`](./ui-design.md#imitate-auto) | Direct code/image input UI design | Intermediate |
| [`/workflow:ui-design:style-extract`](./ui-design.md#style-extract) | Extract design styles from reference images or prompts | Intermediate |
| [`/workflow:ui-design:layout-extract`](./ui-design.md#layout-extract) | Extract layout information from reference images | Intermediate |
| [`/workflow:ui-design:animation-extract`](./ui-design.md#animation-extract) | Extract animation and transition patterns | Intermediate |
| [`/workflow:ui-design:codify-style`](./ui-design.md#codify-style) | Extract styles from code and generate shareable reference package | Intermediate |
| [`/workflow:ui-design:generate`](./ui-design.md#generate) | Combine layout templates with design tokens to generate prototypes | Intermediate |

## Auto Mode

Most commands support the `--yes` or `-y` flag to enable auto mode and skip confirmation steps.

```bash
# Standard mode - requires confirmation
/ccw "implement user authentication"

# Auto mode - execute directly without confirmation
/ccw "implement user authentication" --yes
```

## Usage Examples

### Quick Analysis

```bash
# Analyze codebase structure
/ccw "Analyze the authentication module architecture"

# Quick bug diagnosis
/ccw "Diagnose why the login timeout issue occurs"
```

### Planning & Implementation

```bash
# Create implementation plan
/workflow:plan "Add OAuth2 authentication with Google and GitHub providers"

# Execute with auto mode
/workflow:execute --yes
```

### Code Review

```bash
# Review current changes
/cli:codex-review

# Focus on specific area
/cli:codex-review "Focus on security vulnerabilities in auth module"
```

### Session Management

```bash
# List all sessions
/workflow:session:list

# Resume a paused session
/workflow:session:resume "WFS-001"

# Mark session as complete
/workflow:session:complete "WFS-001"
```

### Issue Workflow

```bash
# Discover issues from codebase
/issue:discover

# Create plan for specific issue
/issue:plan "ISSUE-001"

# Execute the fix
/issue:execute --commit
```

### Memory Management

```bash
# Capture current session learnings
/memory:capture "Key insights from authentication refactoring"

# List all memories
/memory:list

# Search memories
/memory:search "authentication patterns"
```

### CLI Tool Invocation

```bash
# Initialize CLI configuration
/cli:cli-init

# Run Gemini analysis
ccw cli -p "Analyze code patterns in src/auth" --tool gemini --mode analysis

# Run with specific rule template
ccw cli -p "Review code quality" --tool gemini --mode analysis --rule analysis-review-code-quality
```

### UI Design Workflow

```bash
# Extract styles from reference image
/workflow:ui-design:style-extract --input "path/to/reference.png"

# Generate prototype
/workflow:ui-design:generate --layout "dashboard" --tokens "design-tokens.json"
```

## Tips

1. **Use Auto Mode Sparingly**: Only use `--yes` or `-y` for routine tasks. Keep manual confirmation for complex decisions.

2. **Session Persistence**: Always complete sessions with `/workflow:session:complete` to preserve learnings.

3. **Memory Capture**: Regularly capture important insights with `/memory:capture` to build project knowledge.

4. **CLI Tool Selection**: Let `/ccw` auto-select the appropriate tool, or explicitly specify with `--tool gemini|qwen|codex`.

## Related Documentation

- [Skills Reference](../skills/)
- [CLI Invocation System](../features/cli.md)
- [Workflow Guide](../guide/ch04-workflow-basics.md)

docs/commands/claude/issue.md (new file, +294 lines)
# Issue Workflow Commands

## One-Liner

**Issue workflow commands are the closed-loop system for issue management** — fully tracking the issue resolution process from discovery through planning to execution.

## Core Concepts

| Concept | Description | Location |
|---------|-------------|----------|
| **Issue** | Structured issue definition | `.workflow/issues/ISS-*.json` |
| **Solution** | Execution plan | `.workflow/solutions/SOL-*.json` |
| **Queue** | Execution queue | `.workflow/queues/QUE-*.json` |
| **Execution State** | Progress tracking | State within queue |

## Command List

| Command | Function | Syntax |
|---------|----------|--------|
| [`new`](#new) | Create structured issue from GitHub URL or text description | `/issue:new [-y] <github-url \| description> [--priority 1-5]` |
| [`discover`](#discover) | Discover potential issues from multiple perspectives | `/issue:discover [-y] <path pattern> [--perspectives=dimensions] [--external]` |
| [`discover-by-prompt`](#discover-by-prompt) | Discover issues via user prompt | `/issue:discover-by-prompt [-y] <prompt> [--scope=src/**]` |
| [`plan`](#plan) | Batch plan issue solutions | `/issue:plan [-y] --all-pending <issue-id>[,...] [--batch-size 3]` |
| [`queue`](#queue) | Form execution queue | `/issue:queue [-y] [--queues N] [--issue id]` |
| [`execute`](#execute) | Execute queue | `/issue:execute [-y] --queue <queue-id> [--worktree [path]]` |
| [`convert-to-plan`](#convert-to-plan) | Convert planning artifact to issue solution | `/issue:convert-to-plan [-y] [--issue id] [--supplement] <source>` |

## Command Details

### new

**Function**: Create a structured issue from a GitHub URL or text description, with requirement clarity detection.

**Syntax**:
```
/issue:new [-y|--yes] <github-url | text description> [--priority 1-5]
```

**Options**:
- `--priority 1-5`: Priority (1=critical, 5=low)

**Clarity Detection**:

| Input Type | Clarity | Behavior |
|------------|---------|----------|
| GitHub URL | 3 | Direct creation |
| Structured text | 2 | Direct creation |
| Long text | 1 | Partial clarification |
| Short text | 0 | Full clarification |
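The clarity tiers above suggest a simple scoring heuristic. A Python sketch of one plausible implementation — the actual `/issue:new` logic is not specified here and may differ:

```python
import re

def clarity_score(text: str) -> int:
    """Map raw input to the 0-3 clarity tiers from the table."""
    if re.match(r"https://github\.com/[^/]+/[^/]+/issues/\d+", text):
        return 3   # GitHub URL: create directly
    if ":" in text and "expected" in text and "actual" in text:
        return 2   # structured "expected vs actual" report
    if len(text.split()) >= 10:
        return 1   # long free text: partial clarification
    return 0       # short text: full clarification

print(clarity_score("https://github.com/owner/repo/issues/123"))  # 3
print(clarity_score("auth has problems"))                         # 0
```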
**Issue Structure**:
```typescript
interface Issue {
  id: string;          // GH-123 or ISS-YYYYMMDD-HHMMSS
  title: string;
  status: 'registered' | 'planned' | 'queued' | 'in_progress' | 'completed' | 'failed';
  priority: number;    // 1-5
  context: string;     // Issue description (single source of truth)
  source: 'github' | 'text' | 'discovery';
  source_url?: string;

  // Binding
  bound_solution_id: string | null;

  // Feedback history
  feedback?: Array<{
    type: 'failure' | 'clarification' | 'rejection';
    stage: string;
    content: string;
    created_at: string;
  }>;
}
```
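Since this structure is persisted as plain JSON under `.workflow/issues/`, it is easy to sanity-check a record before use. A hypothetical validator (field names are taken from the interface above; the validation rules themselves are illustrative):

```python
VALID_STATUSES = {"registered", "planned", "queued", "in_progress", "completed", "failed"}
REQUIRED_FIELDS = ("id", "title", "status", "priority", "context", "source")

def validate_issue(issue: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in issue]
    if issue.get("status") not in VALID_STATUSES:
        errors.append(f"invalid status: {issue.get('status')!r}")
    if not isinstance(issue.get("priority"), int) or not 1 <= issue["priority"] <= 5:
        errors.append("priority must be an integer 1-5")
    return errors

issue = {
    "id": "ISS-20240115-001",
    "title": "Login returns 500",
    "status": "registered",
    "priority": 2,
    "context": "login failed: expected success, actual 500 error",
    "source": "text",
    "bound_solution_id": None,
}
print(validate_issue(issue))  # []
```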
|
||||
|
||||
**Examples**:
|
||||
```bash
|
||||
# Create from GitHub
|
||||
/issue:new https://github.com/owner/repo/issues/123
|
||||
|
||||
# Create from text (structured)
|
||||
/issue:new "login failed: expected success, actual 500 error"
|
||||
|
||||
# Create from text (vague - will ask)
|
||||
/issue:new "auth has problems"
|
||||
|
||||
# Specify priority
|
||||
/issue:new --priority 2 "payment timeout issue"
|
||||
```
|
||||
|
||||
### discover
|
||||
|
||||
**Function**: Discover potential issues from multiple perspectives (Bug, UX, Test, Quality, Security, Performance, Maintainability, Best Practices).
|
||||
|
||||
**Syntax**:
|
||||
```
|
||||
/issue:discover [-y|--yes] <path pattern> [--perspectives=bug,ux,...] [--external]
|
||||
```
|
||||
|
||||
**Options**:
|
||||
- `--perspectives=dimensions`: Analysis dimensions
|
||||
- `bug`: Potential bugs
|
||||
- `ux`: UX issues
|
||||
- `test`: Test coverage
|
||||
- `quality`: Code quality
|
||||
- `security`: Security issues
|
||||
- `performance`: Performance issues
|
||||
- `maintainability`: Maintainability
|
||||
- `best-practices`: Best practices
|
||||
- `--external`: Use Exa external research (security, best practices)
|
||||
|
||||
**Examples**:
|
||||
```bash
|
||||
# Comprehensive scan
|
||||
/issue:discover src/
|
||||
|
||||
# Specific dimensions
|
||||
/issue:discover src/auth/ --perspectives=security,bug
|
||||
|
||||
# With external research
|
||||
/issue:discover src/payment/ --perspectives=security --external
|
||||
```
|
||||
|
||||
### discover-by-prompt
|
||||
|
||||
**Function**: Discover issues via user prompt, using Gemini-planned iterative multi-agent exploration, supporting cross-module comparison.
|
||||
|
||||
**Syntax**:
|
||||
```
|
||||
/issue:discover-by-prompt [-y|--yes] <prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]
|
||||
```
|
||||
|
||||
**Options**:
|
||||
- `--scope=path`: Scan scope
|
||||
- `--depth=depth`: standard or deep
|
||||
- `--max-iterations=N`: Maximum iteration count
|
||||
|
||||
**Examples**:
|
||||
```bash
|
||||
# Standard scan
|
||||
/issue:discover-by-prompt "find auth module issues"
|
||||
|
||||
# Deep scan
|
||||
/issue:discover-by-prompt "analyze API performance bottlenecks" --depth=deep
|
||||
|
||||
# Specify scope
|
||||
/issue:discover-by-prompt "check database query optimization" --scope=src/db/
|
||||
```
|
||||
|
||||
### plan
|
||||
|
||||
**Function**: Batch plan issue solutions, using issue-plan-agent (explore + plan closed loop).
|
||||
|
||||
**Syntax**:
|
||||
```
|
||||
/issue:plan [-y|--yes] --all-pending <issue-id>[,<issue-id>,...] [--batch-size 3]
|
||||
```
|
||||
|
||||
**Options**:
|
||||
- `--all-pending`: Plan all pending issues
|
||||
- `--batch-size=N`: Issues per batch
|
||||
|
||||
**Examples**:
|
||||
```bash
|
||||
# Plan specific issues
|
||||
/issue:plan ISS-20240115-001,ISS-20240115-002
|
||||
|
||||
# Plan all pending issues
|
||||
/issue:plan --all-pending
|
||||
|
||||
# Specify batch size
|
||||
/issue:plan --all-pending --batch-size 5
|
||||
```
|
||||
|
||||
### queue
|
||||
|
||||
**Function**: Form execution queue from bound solutions, using issue-queue-agent (solution level).
|
||||
|
||||
**Syntax**:
|
||||
```
|
||||
/issue:queue [-y|--yes] [--queues <n>] [--issue <id>]
|
||||
```
|
||||
|
||||
**Options**:
|
||||
- `--queues N`: Number of queues to create
|
||||
- `--issue id`: Specific issue
|
||||
|
||||
**Examples**:
|
||||
```bash
|
||||
# Form queue
|
||||
/issue:queue
|
||||
|
||||
# Create multiple queues
|
||||
/issue:queue --queues 3
|
||||
|
||||
# Specific issue
|
||||
/issue:queue --issue ISS-20240115-001
|
||||
```
|
||||
|
||||
### execute
|
||||
|
||||
**Function**: Execute queue, using DAG parallel orchestration (one commit per solution).
|
||||
|
||||
**Syntax**:
|
||||
```
|
||||
/issue:execute [-y|--yes] --queue <queue-id> [--worktree [<existing-path>]]
|
||||
```
|
||||
|
||||
**Options**:
|
||||
- `--queue id`: Queue ID
|
||||
- `--worktree [path]`: Optional worktree path
|
||||
|
||||
**Examples**:
|
||||
```bash
|
||||
# Execute queue
|
||||
/issue:execute --queue QUE-20240115-001
|
||||
|
||||
# Use worktree
|
||||
/issue:execute --queue QUE-20240115-001 --worktree ../feature-branch
|
||||
```
### convert-to-plan

**Function**: Convert a planning artifact (lite-plan, workflow session, markdown) into an issue solution.

**Syntax**:

```
/issue:convert-to-plan [-y|--yes] [--issue <id>] [--supplement] <source>
```

**Options**:
- `--issue id`: Bind to existing issue
- `--supplement`: Supplement mode (add to existing solution)

**Source Types**:
- lite-plan artifact
- workflow session
- Markdown file

**Examples**:

```bash
# Convert from lite-plan
/issue:convert-to-plan .workflow/sessions/WFS-xxx/artifacts/lite-plan.md

# Bind to issue
/issue:convert-to-plan --issue ISS-20240115-001 plan.md

# Supplement mode
/issue:convert-to-plan --supplement additional-plan.md
```

### from-brainstorm

**Function**: Convert brainstorm session ideas into issues and generate executable solutions.

**Syntax**:

```
/issue:from-brainstorm SESSION="session-id" [--idea=<index>] [--auto] [-y|--yes]
```

**Options**:
- `--idea=index`: Specific idea index
- `--auto`: Auto mode

**Examples**:

```bash
# Convert all ideas
/issue:from-brainstorm SESSION="WFS-brainstorm-2024-01-15"

# Convert specific idea
/issue:from-brainstorm SESSION="WFS-brainstorm-2024-01-15" --idea=3

# Auto mode
/issue:from-brainstorm --auto SESSION="WFS-brainstorm-2024-01-15"
```

## Issue Workflow

```mermaid
graph TD
    A[Discover Issue] --> B[Create Issue]
    B --> C[Plan Solution]
    C --> D[Form Execution Queue]
    D --> E[Execute Queue]
    E --> F{Success?}
    F -->|Yes| G[Complete]
    F -->|No| H[Feedback Learning]
    H --> C
```

## Related Documentation

- [Workflow Commands](./workflow.md)
- [Core Orchestration](./core-orchestration.md)
- [Team System](../../features/)
**File**: `docs/commands/claude/memory.md` (new, 262 lines)
# Memory Commands

## One-Liner

**Memory commands are the cross-session knowledge persistence system** — capturing context, updating memory, generating documentation, and making the AI remember the project.

## Core Concepts

| Concept | Description | Location |
|---------|-------------|----------|
| **Memory Package** | Structured project context | MCP core_memory |
| **CLAUDE.md** | Module-level project guide | Each module/directory |
| **Tips** | Quick notes | `MEMORY.md` |
| **Project Documentation** | Generated documentation | `docs/` directory |

## Command List

| Command | Function | Syntax |
|---------|----------|--------|
| [`compact`](#compact) | Compress current session memory to structured text | `/memory:compact [optional: session description]` |
| [`tips`](#tips) | Quick note-taking | `/memory:tips <note content> [--tag tags] [--context context]` |
| [`load`](#load) | Load task context via CLI project analysis | `/memory:load [--tool gemini\|qwen] "task context description"` |
| [`update-full`](#update-full) | Update all CLAUDE.md files | `/memory:update-full [--tool gemini\|qwen\|codex] [--path directory]` |
| [`update-related`](#update-related) | Update CLAUDE.md for git-changed modules | `/memory:update-related [--tool gemini\|qwen\|codex]` |
| [`docs-full-cli`](#docs-full-cli) | Generate full project documentation using CLI | `/memory:docs-full-cli [path] [--tool tool]` |
| [`docs-related-cli`](#docs-related-cli) | Generate documentation for git-changed modules | `/memory:docs-related-cli [--tool tool]` |
| [`style-skill-memory`](#style-skill-memory) | Generate SKILL memory package from style reference | `/memory:style-skill-memory [package-name] [--regenerate]` |

## Command Details

### compact

**Function**: Compress the current session memory into structured text, extracting objectives, plans, files, decisions, constraints, and state, then save it via the MCP core_memory tool.

**Syntax**:

```
/memory:compact [optional: session description]
```

**Extracted Content**:
- Objectives
- Plans
- Files
- Decisions
- Constraints
- State

**Examples**:

```bash
# Basic compression
/memory:compact

# With description
/memory:compact "user authentication implementation session"
```

### tips

**Function**: Quick note-taking command, capturing thoughts, snippets, reminders, and insights for future reference.

**Syntax**:

```
/memory:tips <note content> [--tag <tag1,tag2>] [--context <context>]
```

**Options**:
- `--tag=tags`: Tags (comma-separated)
- `--context=context`: Context information

**Examples**:

```bash
# Basic note
/memory:tips "remember to use rate limiting for API calls"

# With tags
/memory:tips "auth middleware needs to handle token expiry" --tag auth,api

# With context
/memory:tips "use Redis to cache user sessions" --context "login optimization"
```

### load

**Function**: Delegate to the universal-executor agent to analyze the project via the Gemini/Qwen CLI and return a JSON core content package for task context.

**Syntax**:

```
/memory:load [--tool gemini|qwen] "task context description"
```

**Options**:
- `--tool=tool`: CLI tool to use

**Output**: JSON format project context package

**Examples**:

```bash
# Use default tool
/memory:load "user authentication module"

# Specify tool
/memory:load --tool gemini "payment system architecture"
```

### update-full

**Function**: Update all CLAUDE.md files, using layer-based execution (Layer 3->1), batch agent processing (4 modules/agent), and gemini->qwen->codex fallback.

**Syntax**:

```
/memory:update-full [--tool gemini|qwen|codex] [--path <directory>]
```

**Options**:
- `--tool=tool`: CLI tool to use
- `--path=directory`: Specific directory

**Layer Structure**:
- Layer 3: Project-level analysis
- Layer 2: Module-level analysis
- Layer 1: File-level analysis

**Examples**:

```bash
# Update entire project
/memory:update-full

# Update specific directory
/memory:update-full --path src/auth/

# Specify tool
/memory:update-full --tool qwen
```

### update-related

**Function**: Update CLAUDE.md files for git-changed modules, using batch agent execution (4 modules/agent) and gemini->qwen->codex fallback.

**Syntax**:

```
/memory:update-related [--tool gemini|qwen|codex]
```

**Options**:
- `--tool=tool`: CLI tool to use

**Examples**:

```bash
# Default update
/memory:update-related

# Specify tool
/memory:update-related --tool gemini
```

### docs-full-cli

**Function**: Generate full project documentation using CLI (Layer 3->1), with batch agent processing (4 modules/agent), gemini->qwen->codex fallback, and direct parallel execution for <20 modules.

**Syntax**:

```
/memory:docs-full-cli [path] [--tool <gemini|qwen|codex>]
```

**Examples**:

```bash
# Generate entire project documentation
/memory:docs-full-cli

# Generate specific directory documentation
/memory:docs-full-cli src/

# Specify tool
/memory:docs-full-cli --tool gemini
```

### docs-related-cli

**Function**: Generate documentation for git-changed modules using CLI, with batch agent processing (4 modules/agent), gemini->qwen->codex fallback, and direct execution for <15 modules.

**Syntax**:

```
/memory:docs-related-cli [--tool <gemini|qwen|codex>]
```

**Examples**:

```bash
# Default generation
/memory:docs-related-cli

# Specify tool
/memory:docs-related-cli --tool qwen
```

### style-skill-memory

**Function**: Generate a SKILL memory package from a style reference, enabling easy loading and consistent design-system usage.

**Syntax**:

```
/memory:style-skill-memory [package-name] [--regenerate]
```

**Options**:
- `--regenerate`: Regenerate

**Examples**:

```bash
# Generate style memory package
/memory:style-skill-memory my-design-system

# Regenerate
/memory:style-skill-memory my-design-system --regenerate
```

## Memory System Workflow

```mermaid
graph TD
    A[In Session] --> B[Capture Context]
    B --> C{Session Complete?}
    C -->|Yes| D[Compress Memory]
    C -->|No| E[Continue Work]
    D --> F[Save to core_memory]
    F --> G[Update CLAUDE.md]
    G --> H[Generate Documentation]
    H --> I[New Session Starts]
    I --> J[Load Memory Package]
    J --> K[Restore Context]
    K --> A
```
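The compression step in this cycle produces a structured package. The sketch below shows one plausible shape for such a package, using the field names from the `compact` command's extracted-content list — the actual schema stored in MCP core_memory is not specified here, so treat the structure as an assumption.

```python
import json

# Hypothetical memory package mirroring compact's extracted content
# (objectives, plans, files, decisions, constraints, state).
# The real core_memory schema may differ.

def build_memory_package(session_id, **sections):
    allowed = {"objectives", "plans", "files", "decisions", "constraints", "state"}
    unknown = set(sections) - allowed
    if unknown:
        raise ValueError(f"unexpected sections: {sorted(unknown)}")
    # Missing sections default to empty lists so the package shape is stable.
    return {"session_id": session_id, **{k: sections.get(k, []) for k in sorted(allowed)}}

pkg = build_memory_package(
    "WFS-2024-01-15",
    objectives=["implement user authentication"],
    files=["src/auth/login.ts"],
    state=["5/10 tasks complete"],
)
print(json.dumps(pkg, indent=2))
```

A stable, fully-populated shape is what makes the later "Load Memory Package" step cheap: a new session can restore context without guessing which fields exist.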
## CLAUDE.md Structure

```markdown
# Module Name

## One-Liner
Core value description of the module

## Tech Stack
- Framework/library
- Main dependencies

## Key Files
- File path: Description

## Code Conventions
- Naming conventions
- Architecture patterns
- Best practices

## TODO
- Planned features
- Known issues
```

## Related Documentation

- [Memory System](../../features/memory.md)
- [Core Orchestration](./core-orchestration.md)
- [Core Concepts Guide](../../guide/ch03-core-concepts.md)
**File**: `docs/commands/claude/session.md` (new, 256 lines)
# Session Management Commands

## One-Liner

**Session management commands are the workflow state managers** — creating, tracking, resuming, and completing workflow sessions.

## Core Concepts

| Concept | Description | Location |
|---------|-------------|----------|
| **Session ID** | Unique identifier (WFS-YYYY-MM-DD) | `.workflow/active/WFS-xxx/` |
| **Session Type** | workflow, review, tdd, test, docs | Session metadata |
| **Session State** | active, paused, completed | workflow-session.json |
| **Artifacts** | Plans, tasks, TODOs, etc. | Session directory |

## Command List

| Command | Function | Syntax |
|---------|----------|--------|
| [`start`](#start) | Discover existing sessions or start new workflow session | `/workflow:session:start [--type type] [--auto\|--new] [description]` |
| [`list`](#list) | List all workflow sessions | `/workflow:session:list` |
| [`resume`](#resume) | Resume most recently paused workflow session | `/workflow:session:resume` |
| [`complete`](#complete) | Mark active workflow session as completed | `/workflow:session:complete [-y] [--detailed]` |
| [`solidify`](#solidify) | Crystallize session learnings into project guidelines | `/workflow:session:solidify [-y] [--type type] [--category category] "rule"` |

## Command Details

### start

**Function**: Discover existing sessions or start a new workflow session, with intelligent session management and conflict detection.

**Syntax**:

```
/workflow:session:start [--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description]
```

**Options**:
- `--type=type`: Session type
  - `workflow`: Standard implementation (default)
  - `review`: Code review
  - `tdd`: TDD development
  - `test`: Test generation/fix
  - `docs`: Documentation session
- `--auto`: Smart mode (auto detect/create)
- `--new`: Force create new session

**Session Types**:

| Type | Description | Default Source |
|------|-------------|----------------|
| `workflow` | Standard implementation | workflow-plan skill |
| `review` | Code review | review-cycle skill |
| `tdd` | TDD development | workflow-tdd skill |
| `test` | Test generation/fix | workflow-test-fix skill |
| `docs` | Documentation session | memory-manage skill |

**Workflow**:

```mermaid
graph TD
    A[Start] --> B{Project State Exists?}
    B -->|No| C[Call workflow:init]
    C --> D
    B -->|Yes| D{Mode}
    D -->|Default| E[List Active Sessions]
    D -->|auto| F{Active Sessions Count?}
    D -->|new| G[Create New Session]
    F -->|0| G
    F -->|1| H[Use Existing Session]
    F -->|>1| I[User Selects]
    E --> J{User Selects}
    J -->|Existing| K[Return Session ID]
    J -->|New| G
    G --> L[Generate Session ID]
    L --> M[Create Directory Structure]
    M --> N[Initialize Metadata]
    N --> O[Return Session ID]
```

**Examples**:

```bash
# Discovery mode - list active sessions
/workflow:session:start

# Auto mode - smart select/create
/workflow:session:start --auto "implement user authentication"

# New mode - force create new session
/workflow:session:start --new "refactor payment module"

# Specify type
/workflow:session:start --type review "review auth code"
/workflow:session:start --type tdd --auto "implement login feature"
```

### list

**Function**: List all workflow sessions, with state filtering and display of session metadata and progress information.

**Syntax**:

```
/workflow:session:list
```

**Output Format**:

| Session ID | Type | State | Description | Progress |
|------------|------|-------|-------------|----------|
| WFS-2024-01-15 | workflow | active | User authentication | 5/10 |
| WFS-2024-01-14 | review | paused | Code review | 8/8 |
| WFS-2024-01-13 | tdd | completed | TDD development | 12/12 |

**Examples**:

```bash
# List all sessions
/workflow:session:list
```

### resume

**Function**: Resume the most recently paused workflow session, with automatic session discovery and state update.

**Syntax**:

```
/workflow:session:resume
```

**Workflow**:

```mermaid
graph TD
    A[Start] --> B[Find Paused Sessions]
    B --> C{Found Paused Session?}
    C -->|Yes| D[Load Session]
    C -->|No| E[Error Message]
    D --> F[Update State to Active]
    F --> G[Return Session ID]
```

**Examples**:

```bash
# Resume most recently paused session
/workflow:session:resume
```

### complete

**Function**: Mark the active workflow session as completed: archive it, learn from the experience, update the checklist, and remove the active flag.

**Syntax**:

```
/workflow:session:complete [-y|--yes] [--detailed]
```

**Options**:
- `--detailed`: Detailed mode, collect more learnings

**Workflow**:

```mermaid
graph TD
    A[Start] --> B[Confirm Completion]
    B --> C{Detailed Mode?}
    C -->|Yes| D[Collect Detailed Feedback]
    C -->|No| E[Collect Basic Feedback]
    D --> F[Generate Learning Document]
    E --> F
    F --> G[Archive Session]
    G --> H[Update Checklist]
    H --> I[Remove Active Flag]
    I --> J[Complete]
```

**Examples**:

```bash
# Standard completion
/workflow:session:complete

# Detailed completion
/workflow:session:complete --detailed

# Auto mode
/workflow:session:complete -y
```

### solidify

**Function**: Crystallize session learnings and user-defined constraints into permanent project guidelines.

**Syntax**:

```
/workflow:session:solidify [-y|--yes] [--type <convention|constraint|learning>] [--category <category>] "rule or insight"
```

**Options**:
- `--type=type`:
  - `convention`: Code convention
  - `constraint`: Constraint condition
  - `learning`: Experience learning
- `--category=category`: Category name (e.g., `authentication`, `testing`)

**Output Locations**:
- Conventions: `.workflow/specs/conventions/<category>.md`
- Constraints: `.workflow/specs/constraints/<category>.md`
- Learnings: `.workflow/specs/learnings/<category>.md`
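The routing from `--type` to an output file follows directly from the locations listed above. The sketch below illustrates that mapping; the append format (one markdown bullet per rule) is an assumption, not the command's documented file format.

```python
from pathlib import Path
import tempfile

# Sketch of solidify's type -> directory routing, based on the output
# locations above. The bullet-per-rule append format is an assumption.
TYPE_DIRS = {"convention": "conventions", "constraint": "constraints", "learning": "learnings"}

def solidify(root, rule_type, category, rule):
    target = Path(root) / ".workflow" / "specs" / TYPE_DIRS[rule_type] / f"{category}.md"
    target.parent.mkdir(parents=True, exist_ok=True)
    with target.open("a", encoding="utf-8") as f:
        f.write(f"- {rule}\n")
    return target

root = tempfile.mkdtemp()  # throwaway project root for demonstration
path = solidify(root, "convention", "auth", "all auth functions must use rate limiting")
print(path)  # ends in .workflow/specs/conventions/auth.md
```

Appending rather than overwriting lets repeated `solidify` calls accumulate rules per category file.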
**Examples**:

```bash
# Add code convention
/workflow:session:solidify --type=convention --category=auth "all auth functions must use rate limiting"

# Add constraint
/workflow:session:solidify --type=constraint --category=database "no N+1 queries"

# Add learning
/workflow:session:solidify --type=learning --category=api "REST API design lessons learned"
```

## Session Directory Structure

```
.workflow/
├── active/                        # Active sessions
│   └── WFS-2024-01-15/            # Session directory
│       ├── workflow-session.json  # Session metadata
│       ├── tasks/                 # Task definitions
│       ├── artifacts/             # Artifact files
│       └── context/               # Context files
└── archived/                      # Archived sessions
    └── WFS-2024-01-14/
```
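Session discovery (as used by `start` and `list`) reduces to scanning this layout for metadata files. A minimal sketch, assuming the directory and filename conventions shown above (the demo builds a throwaway layout in a temp directory):

```python
import json
import tempfile
from pathlib import Path

# Sketch of active-session discovery against the layout above: each
# .workflow/active/WFS-*/ directory holds a workflow-session.json.

def list_active_sessions(root):
    sessions = []
    for meta in sorted(Path(root).glob(".workflow/active/WFS-*/workflow-session.json")):
        data = json.loads(meta.read_text())
        sessions.append((data["session_id"], data.get("status", "unknown")))
    return sessions

# Build a throwaway layout to demonstrate
root = Path(tempfile.mkdtemp())
d = root / ".workflow" / "active" / "WFS-2024-01-15"
d.mkdir(parents=True)
(d / "workflow-session.json").write_text(
    json.dumps({"session_id": "WFS-2024-01-15", "status": "active"})
)
print(list_active_sessions(root))  # → [('WFS-2024-01-15', 'active')]
```

Keeping archived sessions under a sibling directory means the glob over `active/` never has to filter them out.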
## Session Metadata

```json
{
  "session_id": "WFS-2024-01-15",
  "type": "workflow",
  "status": "active",
  "created_at": "2024-01-15T10:00:00Z",
  "updated_at": "2024-01-15T14:30:00Z",
  "description": "User authentication feature implementation",
  "progress": {
    "total": 10,
    "completed": 5,
    "percentage": 50
  }
}
```
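The `progress` block is derivable from the task counts. A minimal sketch of that derivation, matching the metadata shown above (integer-rounded percentage; the validation rules are an assumption):

```python
# Sketch of deriving the progress block from task counts, matching the
# session metadata format above. Validation rules are an assumption.

def progress(total, completed):
    if total < 0 or completed < 0 or completed > total:
        raise ValueError("invalid task counts")
    pct = round(100 * completed / total) if total else 0
    return {"total": total, "completed": completed, "percentage": pct}

print(progress(10, 5))  # → {'total': 10, 'completed': 5, 'percentage': 50}
```

Guarding the `total == 0` case keeps a freshly created session (no tasks yet) from dividing by zero.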
## Related Documentation

- [Workflow Commands](./workflow.md)
- [Core Orchestration](./core-orchestration.md)
- [Workflow Basics](../../guide/ch04-workflow-basics.md)
**File**: `docs/commands/claude/ui-design.md` (new, 307 lines)
# UI Design Commands

## One-Liner

**UI design commands are the interface prototype generation system** — from style extraction and layout analysis to prototype assembly, covering the full UI design workflow.

## Core Concepts

| Concept | Description | Storage Location |
|----------|-------------|------------------|
| **Design Run** | Design session directory | `.workflow/ui-design-runs/<run-id>/` |
| **Design Tokens** | Style variables | `design-tokens.json` |
| **Layout Templates** | Structure definitions | `layouts/` |
| **Prototypes** | Generated components | `prototypes/` |

## Command List

### Discovery and Extraction

| Command | Function | Syntax |
|---------|----------|--------|
| [`explore-auto`](#explore-auto) | Interactive exploratory UI design workflow | `/workflow:ui-design:explore-auto [--input "value"] [--targets "list"]` |
| [`imitate-auto`](#imitate-auto) | Direct code/image input UI design | `/workflow:ui-design:imitate-auto [--input "value"] [--session id]` |
| [`style-extract`](#style-extract) | Extract design styles from reference images or prompts | `/workflow:ui-design:style-extract [-y] [--design-id id]` |
| [`layout-extract`](#layout-extract) | Extract layout information from reference images | `/workflow:ui-design:layout-extract [-y] [--design-id id]` |
| [`animation-extract`](#animation-extract) | Extract animation and transition patterns | `/workflow:ui-design:animation-extract [-y] [--design-id id]` |

### Import and Export

| Command | Function | Syntax |
|---------|----------|--------|
| [`import-from-code`](#import-from-code) | Import design system from code files | `/workflow:ui-design:import-from-code [--design-id id] [--session id] [--source path]` |
| [`codify-style`](#codify-style) | Extract styles from code and generate shareable reference package | `/workflow:ui-design:codify-style <path> [--package-name name]` |
| [`reference-page-generator`](#reference-page-generator) | Generate multi-component reference page from design run | `/workflow:ui-design:reference-page-generator [--design-run path]` |

### Generation and Sync

| Command | Function | Syntax |
|---------|----------|--------|
| [`generate`](#generate) | Combine layout templates with design tokens to generate prototypes | `/workflow:ui-design:generate [--design-id id] [--session id]` |
| [`design-sync`](#design-sync) | Sync final design system reference to brainstorm artifacts | `/workflow:ui-design:design-sync --session <session_id>` |

## Command Details

### explore-auto

**Function**: Interactive exploratory UI design workflow with style-centric batch generation: creates design variants from prompts/images, supporting parallel execution and user selection.

**Syntax**:

```
/workflow:ui-design:explore-auto [--input "<value>"] [--targets "<list>"] [--target-type "page|component"] [--session <id>] [--style-variants <count>] [--layout-variants <count>]
```

**Options**:
- `--input=value`: Input prompt or image path
- `--targets=list`: Target component list (comma-separated)
- `--target-type=type`: page or component
- `--session=id`: Session ID
- `--style-variants=N`: Number of style variants
- `--layout-variants=N`: Number of layout variants

**Examples**:

```bash
# Page design exploration
/workflow:ui-design:explore-auto --input "modern e-commerce homepage" --target-type page --style-variants 3

# Component design exploration
/workflow:ui-design:explore-auto --input "user card component" --target-type component --layout-variants 5

# Multi-target design
/workflow:ui-design:explore-auto --targets "header,sidebar,footer" --style-variants 2
```

### imitate-auto

**Function**: UI design workflow supporting direct code/image input for design token extraction and prototype generation.

**Syntax**:

```
/workflow:ui-design:imitate-auto [--input "<value>"] [--session <id>]
```

**Options**:
- `--input=value`: Code file path or image path
- `--session=id`: Session ID

**Examples**:

```bash
# Imitate from code
/workflow:ui-design:imitate-auto --input "./src/components/Button.tsx"

# Imitate from image
/workflow:ui-design:imitate-auto --input "./designs/mockup.png"
```

### style-extract

**Function**: Extract design styles from reference images or text prompts using Claude analysis, with variant generation or refine mode.

**Syntax**:

```
/workflow:ui-design:style-extract [-y|--yes] [--design-id <id>] [--session <id>] [--images "<glob>"] [--prompt "<description>"] [--variants <count>] [--interactive] [--refine]
```

**Options**:
- `--images=glob`: Image glob pattern
- `--prompt=description`: Text description
- `--variants=N`: Number of variants
- `--interactive`: Interactive mode
- `--refine`: Refine mode

**Examples**:

```bash
# Extract styles from images
/workflow:ui-design:style-extract --images "./designs/*.png" --variants 3

# Extract from prompt
/workflow:ui-design:style-extract --prompt "dark theme, blue primary, rounded corners"

# Interactive refine
/workflow:ui-design:style-extract --images "reference.png" --refine --interactive
```

### layout-extract

**Function**: Extract structural layout information from reference images or text prompts using Claude analysis, with variant generation or refine mode.

**Syntax**:

```
/workflow:ui-design:layout-extract [-y|--yes] [--design-id <id>] [--session <id>] [--images "<glob>"] [--prompt "<description>"] [--targets "<list>"] [--variants <count>] [--device-type <desktop|mobile|tablet|responsive>] [--interactive] [--refine]
```

**Options**:
- `--device-type=type`: desktop, mobile, tablet or responsive
- Other options same as style-extract

**Examples**:

```bash
# Extract desktop layout
/workflow:ui-design:layout-extract --images "desktop-mockup.png" --device-type desktop

# Extract responsive layout
/workflow:ui-design:layout-extract --prompt "three-column layout, responsive design" --device-type responsive

# Multiple variants
/workflow:ui-design:layout-extract --images "layout.png" --variants 5
```

### animation-extract

**Function**: Extract animation and transition patterns from prompt inference and image references for design system documentation.

**Syntax**:

```
/workflow:ui-design:animation-extract [-y|--yes] [--design-id <id>] [--session <id>] [--images "<glob>"] [--focus "<type>"] [--interactive] [--refine]
```

**Options**:
- `--focus=type`: Specific animation type (e.g., fade, slide, scale)

**Examples**:

```bash
# Extract all animations
/workflow:ui-design:animation-extract --images "./animations/*.gif"

# Extract specific type
/workflow:ui-design:animation-extract --focus "fade,slide" --interactive
```

### import-from-code

**Function**: Import a design system from code files (CSS/JS/HTML/SCSS), using automatic file discovery and parallel agent analysis.

**Syntax**:

```
/workflow:ui-design:import-from-code [--design-id <id>] [--session <id>] [--source <path>]
```

**Options**:
- `--source=path`: Source code directory

**Examples**:

```bash
# Import from project
/workflow:ui-design:import-from-code --source "./src/styles/"

# Specify design ID
/workflow:ui-design:import-from-code --design-id my-design --source "./theme/"
```

### codify-style

**Function**: Orchestrates style extraction from code and generates a shareable reference package, with preview support (automatic file discovery).

**Syntax**:

```
/workflow:ui-design:codify-style <path> [--package-name <name>] [--output-dir <path>] [--overwrite]
```

**Options**:
- `--package-name=name`: Package name
- `--output-dir=path`: Output directory
- `--overwrite`: Overwrite existing files

**Examples**:

```bash
# Generate style package
/workflow:ui-design:codify-style ./src/styles/ --package-name my-design-system

# Specify output directory
/workflow:ui-design:codify-style ./theme/ --output-dir ./design-packages/
```

### reference-page-generator

**Function**: Extract and generate multi-component reference pages and documentation from a design run.

**Syntax**:

```
/workflow:ui-design:reference-page-generator [--design-run <path>] [--package-name <name>] [--output-dir <path>]
```

**Examples**:

```bash
# Generate reference page
/workflow:ui-design:reference-page-generator --design-run .workflow/ui-design-runs/latest/

# Specify package name
/workflow:ui-design:reference-page-generator --package-name component-library
```

### generate

**Function**: Assemble UI prototypes by combining layout templates with design tokens (animation support by default); a pure assembler that generates no new content.

**Syntax**:

```
/workflow:ui-design:generate [--design-id <id>] [--session <id>]
```

**Examples**:

```bash
# Generate prototypes
/workflow:ui-design:generate

# Use specific design
/workflow:ui-design:generate --design-id my-design
```

### design-sync

**Function**: Sync the final design system reference to brainstorm artifacts, preparing for consumption by `/workflow:plan`.

**Syntax**:

```
/workflow:ui-design:design-sync --session <session_id> [--selected-prototypes "<list>"]
```

**Options**:
- `--selected-prototypes=list`: List of selected prototypes

**Examples**:

```bash
# Sync all prototypes
/workflow:ui-design:design-sync --session WFS-design-2024-01-15

# Sync selected prototypes
/workflow:ui-design:design-sync --session WFS-design-2024-01-15 --selected-prototypes "header,button,card"
```

## UI Design Workflow

```mermaid
graph TD
    A[Input] --> B{Input Type}
    B -->|Image/Prompt| C[Extract Styles]
    B -->|Code| D[Import Styles]
    C --> E[Extract Layouts]
    D --> E
    E --> F[Extract Animations]
    F --> G[Design Run]
    G --> H[Generate Prototypes]
    H --> I[Sync to Brainstorm]
    I --> J[Planning Workflow]
```

## Design Run Structure

```
.workflow/ui-design-runs/<run-id>/
├── design-tokens.json   # Design tokens
├── layouts/             # Layout templates
│   ├── header.json
│   ├── footer.json
│   └── ...
├── prototypes/          # Generated prototypes
│   ├── header/
│   ├── button/
│   └── ...
└── reference-pages/     # Reference pages
```
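The `generate` command is described as a pure assembler: it fills layout templates with design-token values without creating new content. A minimal sketch of that idea — the token names and the `{{token}}` template syntax are illustrative, not the actual design-run format:

```python
import re

# Sketch of "pure assembly": substitute design-token values into a
# layout template's {{token}} slots. Token names and the template
# syntax are illustrative, not the real design-run format.

def assemble(template, tokens):
    def substitute(match):
        key = match.group(1)
        if key not in tokens:
            raise KeyError(f"unknown design token: {key}")
        return str(tokens[key])
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

tokens = {"color_primary": "#2563eb", "radius": "8px"}
template = "button { background: {{color_primary}}; border-radius: {{radius}}; }"
print(assemble(template, tokens))
```

Failing loudly on an unknown token keeps the assembler honest: it can never invent a value that is not in `design-tokens.json`.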
## Related Documentation

- [Core Orchestration](./core-orchestration.md)
- [Workflow Commands](./workflow.md)
- [Brainstorming](../../features/)
**File**: `docs/commands/claude/workflow.md` (new, 338 lines)
# Workflow Commands

## One-Liner

**Workflow commands are the execution engine of Claude_dms3** — providing complete workflow support from lightweight tasks to complex projects.

## Command List

### Lightweight Workflows

| Command | Function | Syntax |
|---------|----------|--------|
| [`lite-lite-lite`](#lite-lite-lite) | Ultra-lightweight multi-tool analysis and direct execution | `/workflow:lite-lite-lite [-y] <task>` |
| [`lite-plan`](#lite-plan) | Lightweight interactive planning workflow | `/workflow:lite-plan [-y] [-e] "task"` |
| [`lite-execute`](#lite-execute) | Execute tasks based on in-memory plan | `/workflow:lite-execute [-y] [--in-memory] [task]` |
| [`lite-fix`](#lite-fix) | Lightweight bug diagnosis and fix | `/workflow:lite-fix [-y] [--hotfix] "bug description"` |

### Standard Workflows

| Command | Function | Syntax |
|---------|----------|--------|
| [`plan`](#plan) | 5-phase planning workflow | `/workflow:plan [-y] "description"\|file.md` |
| [`execute`](#execute) | Coordinate agent execution of workflow tasks | `/workflow:execute [-y] [--resume-session=ID]` |
| [`replan`](#replan) | Interactive workflow replanning | `/workflow:replan [-y] [--session ID] [task-id] "requirement"` |

### Collaborative Workflows

| Command | Function | Syntax |
|---------|----------|--------|
| [`multi-cli-plan`](#multi-cli-plan) | Multi-CLI collaborative planning | `/workflow:multi-cli-plan [-y] <task> [--max-rounds=N]` |
| [`brainstorm-with-file`](#brainstorm-with-file) | Interactive brainstorming | `/workflow:brainstorm-with-file [-y] [-c] "idea"` |
| [`analyze-with-file`](#analyze-with-file) | Interactive collaborative analysis | `/workflow:analyze-with-file [-y] [-c] "topic"` |
| [`debug-with-file`](#debug-with-file) | Interactive hypothesis-driven debugging | `/workflow:debug-with-file [-y] "bug description"` |
| [`unified-execute-with-file`](#unified-execute-with-file) | Universal execution engine | `/workflow:unified-execute-with-file [-y] [-p path] [context]` |

### TDD Workflows

| Command | Function | Syntax |
|---------|----------|--------|
| [`tdd-plan`](#tdd-plan) | TDD planning workflow | `/workflow:tdd-plan "feature description"` |
| [`tdd-verify`](#tdd-verify) | Verify TDD workflow compliance | `/workflow:tdd-verify [--session ID]` |

### Test Workflows

| Command | Function | Syntax |
|---------|----------|--------|
| [`test-fix-gen`](#test-fix-gen) | Create test-fix workflow session | `/workflow:test-fix-gen (session-id\|"description"\|file.md)` |
| [`test-gen`](#test-gen) | Create test session from implementation session | `/workflow:test-gen source-session-id` |
| [`test-cycle-execute`](#test-cycle-execute) | Execute test-fix workflow | `/workflow:test-cycle-execute [--resume-session=ID]` |

### Review Workflows

| Command | Function | Syntax |
|---------|----------|--------|
| [`review`](#review) | Post-implementation review | `/workflow:review [--type=type] [--archived] [session-id]` |
| [`review-module-cycle`](#review-module-cycle) | Standalone multi-dimensional code review | `/workflow:review-module-cycle <path> [--dimensions=dimensions]` |
| [`review-session-cycle`](#review-session-cycle) | Session-based review | `/workflow:review-session-cycle [session-id] [--dimensions=dimensions]` |
| [`review-cycle-fix`](#review-cycle-fix) | Auto-fix review findings | `/workflow:review-cycle-fix <export-file\|review-dir>` |

### Specialized Workflows

| Command | Function | Syntax |
|---------|----------|--------|
| [`clean`](#clean) | Smart code cleanup | `/workflow:clean [-y] [--dry-run] ["focus area"]` |
| [`init`](#init) | Initialize project state | `/workflow:init [--regenerate]` |
| [`plan-verify`](#plan-verify) | Verify planning consistency | `/workflow:plan-verify [--session session-id]` |

## Command Details

### lite-lite-lite

**Function**: Ultra-lightweight multi-tool analysis and direct execution. Simple tasks produce no artifacts; complex tasks automatically create planning documents in `.workflow/.scratchpad/`.

**Syntax**:
```
/workflow:lite-lite-lite [-y|--yes] <task description>
```

**Use Cases**:
- Ultra-simple quick tasks
- Code modifications that need no planning documents
- Automatic tool selection

**Examples**:
```bash
# Ultra-simple task
/workflow:lite-lite-lite "fix header styles"

# Auto mode
/workflow:lite-lite-lite -y "update README links"
```

### lite-plan

**Function**: Lightweight interactive planning workflow, supporting in-memory planning, code exploration, and handoff to lite-execute for execution.

**Syntax**:
```
/workflow:lite-plan [-y|--yes] [-e|--explore] "task description" | file.md
```

**Options**:
- `-e, --explore`: Run code exploration first

**Examples**:
```bash
# Basic planning
/workflow:lite-plan "add user avatar feature"

# With exploration
/workflow:lite-plan -e "refactor authentication module"
```

### lite-execute

**Function**: Execute tasks based on an in-memory plan, prompt description, or file content.

**Syntax**:
```
/workflow:lite-execute [-y|--yes] [--in-memory] ["task description" | file-path]
```

**Options**:
- `--in-memory`: Use the in-memory plan

**Examples**:
```bash
# Execute task
/workflow:lite-execute "implement avatar upload API"

# Use in-memory plan
/workflow:lite-execute --in-memory
```

### lite-fix

**Function**: Lightweight bug diagnosis and fix workflow, supporting intelligent severity assessment and an optional hotfix mode.

**Syntax**:
```
/workflow:lite-fix [-y|--yes] [--hotfix] "bug description or issue reference"
```

**Options**:
- `--hotfix`: Hotfix mode (quick fix for production incidents)

**Examples**:
```bash
# Bug fix
/workflow:lite-fix "login returns 500 error"

# Hotfix
/workflow:lite-fix --hotfix "payment gateway timeout"
```

### plan

**Function**: 5-phase planning workflow that outputs IMPL_PLAN.md and task JSON.

**Syntax**:
```
/workflow:plan [-y|--yes] "text description" | file.md
```

**Phases**:
1. Session initialization
2. Context collection
3. Specification loading
4. Task generation
5. Verification/replanning

**Examples**:
```bash
# Plan from description
/workflow:plan "implement user notification system"

# Plan from file
/workflow:plan requirements.md
```

### execute

**Function**: Coordinate agent execution of workflow tasks, supporting automatic session discovery, parallel task processing, and state tracking.

**Syntax**:
```
/workflow:execute [-y|--yes] [--resume-session="session-id"]
```

**Examples**:
```bash
# Execute current session
/workflow:execute

# Resume and execute a session
/workflow:execute --resume-session=WFS-2024-01-15
```

### replan

**Function**: Interactive workflow replanning, supporting session-level artifact updates and scope clarification.

**Syntax**:
```
/workflow:replan [-y|--yes] [--session session-id] [task-id] "requirement" | file.md [--interactive]
```

**Examples**:
```bash
# Replan entire session
/workflow:replan --session=WFS-xxx "add user permission checks"

# Replan specific task
/workflow:replan TASK-001 "change to use RBAC"
```

### multi-cli-plan

**Function**: Multi-CLI collaborative planning workflow, using ACE context collection and iterative cross-validation.

**Syntax**:
```
/workflow:multi-cli-plan [-y|--yes] <task description> [--max-rounds=3] [--tools=gemini,codex] [--mode=parallel|serial]
```

**Options**:
- `--max-rounds=N`: Maximum discussion rounds
- `--tools=tools`: CLI tools to use
- `--mode=mode`: Parallel or serial mode

**Examples**:
```bash
# Multi-CLI planning
/workflow:multi-cli-plan "design microservice architecture"

# Specify tools and rounds
/workflow:multi-cli-plan --tools=gemini,codex --max-rounds=5 "database migration plan"
```

### brainstorm-with-file

**Function**: Interactive brainstorming with multi-CLI collaboration, idea expansion, and documented thinking evolution.

**Syntax**:
```
/workflow:brainstorm-with-file [-y|--yes] [-c|--continue] [-m|--mode creative|structured] "idea or topic"
```

**Options**:
- `-c, --continue`: Continue existing session
- `-m, --mode=mode`: creative or structured

**Examples**:
```bash
# Creative brainstorming
/workflow:brainstorm-with-file --mode creative "user growth strategies"

# Structured brainstorming
/workflow:brainstorm-with-file --mode structured "API versioning approach"
```

### analyze-with-file

**Function**: Interactive collaborative analysis with documented discussion, CLI-assisted exploration, and evolving understanding.

**Syntax**:
```
/workflow:analyze-with-file [-y|--yes] [-c|--continue] "topic or question"
```

**Examples**:
```bash
# Analyze topic
/workflow:analyze-with-file "authentication architecture design"

# Continue discussion
/workflow:analyze-with-file -c
```

### debug-with-file

**Function**: Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction.

**Syntax**:
```
/workflow:debug-with-file [-y|--yes] "bug description or error message"
```

**Examples**:
```bash
# Debug bug
/workflow:debug-with-file "WebSocket connection randomly disconnects"

# Debug error
/workflow:debug-with-file "TypeError: Cannot read property 'id'"
```

### unified-execute-with-file

**Function**: Universal execution engine that consumes any planning/brainstorming/analysis output, supporting minimal progress tracking, multi-agent coordination, and incremental execution.

**Syntax**:
```
/workflow:unified-execute-with-file [-y|--yes] [-p|--plan <path>] [-m|--mode sequential|parallel] ["execution context"]
```

**Examples**:
```bash
# Execute plan
/workflow:unified-execute-with-file -p plan.md

# Parallel execution
/workflow:unified-execute-with-file -m parallel
```

## Workflow Diagram

```mermaid
graph TD
    A[Task Input] --> B{Task Complexity}
    B -->|Simple| C[Lite Workflow]
    B -->|Standard| D[Plan Workflow]
    B -->|Complex| E[Multi-CLI Workflow]

    C --> F[Direct Execution]
    D --> G[Plan] --> H[Execute]
    E --> I[Multi-CLI Discussion] --> G

    F --> J[Complete]
    H --> J
    I --> J
```

## Related Documentation

- [Session Management](./session.md)
- [Core Orchestration](./core-orchestration.md)
- [Workflow Guide](../../guide/ch04-workflow-basics.md)
docs/commands/codex/index.md (new file)
@@ -0,0 +1,56 @@
# Codex Prompts

## One-Liner

**Codex Prompts is the prompt template system used by Codex CLI** — standardized prompt formats ensure consistent code quality and review effectiveness.

## Core Concepts

| Concept | Description | Use Cases |
|----------|-------------|-----------|
| **Prep Prompts** | Project context preparation prompts | Analyze project structure, extract relevant files |
| **Review Prompts** | Code review prompts | Multi-dimensional code quality checks |

## Prompt List

### Prep Series

| Prompt | Function | Use Cases |
|--------|----------|-----------|
| [`memory:prepare`](./prep.md#memory-prepare) | Project context preparation | Prepare structured project context for tasks |

### Review Series

| Prompt | Function | Use Cases |
|--------|----------|-----------|
| [`codex-review`](./review.md#codex-review) | Interactive code review | Code review using Codex CLI |

## Prompt Template Format

All Codex Prompts follow the standard CCW CLI prompt template:

```
PURPOSE: [objective] + [reason] + [success criteria] + [constraints/scope]
TASK: • [step 1] • [step 2] • [step 3]
MODE: review
CONTEXT: [review target description] | Memory: [relevant context]
EXPECTED: [deliverable format] + [quality criteria]
CONSTRAINTS: [focus constraints]
```

## Field Descriptions

| Field | Description | Example |
|-------|-------------|---------|
| **PURPOSE** | Objective and reason | "Identify security vulnerabilities to ensure code safety" |
| **TASK** | Specific steps | "• Scan for injection vulnerabilities • Check authentication logic" |
| **MODE** | Execution mode | analysis, write, review |
| **CONTEXT** | Context information | "@CLAUDE.md @src/auth/**" |
| **EXPECTED** | Output format | "Structured report with severity levels" |
| **CONSTRAINTS** | Constraint conditions | "Focus on actionable suggestions" |
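As a concrete illustration of the template, the snippet below assembles a filled-in prompt in a shell variable; every field value is invented for illustration, and the final `ccw cli` invocation is shown commented since it assumes `ccw` is installed.

```shell
# Build a template-conforming prompt (illustrative field values only).
prompt=$(cat <<'EOF'
PURPOSE: Identify security vulnerabilities to ensure code safety; success = findings ranked by severity
TASK: • Scan for injection vulnerabilities • Check authentication logic
MODE: review
CONTEXT: @CLAUDE.md @src/auth/** | Memory: project conventions
EXPECTED: Structured report with severity levels
CONSTRAINTS: Focus on actionable suggestions
EOF
)
echo "$prompt"
# Then pass it to the CLI (assuming ccw is on PATH):
# ccw cli -p "$prompt" --tool codex --mode review
```

Each of the six fields occupies exactly one line, which keeps the prompt easy to parse and diff.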

## Related Documentation

- [Claude Commands](../claude/)
- [CLI Invocation System](../../features/cli.md)
- [Code Review](../../features/)
docs/commands/codex/prep.md (new file)
@@ -0,0 +1,168 @@
# Prep Prompts

## One-Liner

**Prep prompts are standardized templates for project context preparation** — generating structured project core content packages through agent-driven analysis.

## Core Content Package Structure

```json
{
  "task_context": "Task context description",
  "keywords": ["keyword1", "keyword2"],
  "project_summary": {
    "architecture": "Architecture description",
    "tech_stack": ["tech1", "tech2"],
    "key_patterns": ["pattern1", "pattern2"]
  },
  "relevant_files": [
    {
      "path": "file path",
      "relevance": "relevance description",
      "priority": "high|medium|low"
    }
  ],
  "integration_points": [
    "integration point 1",
    "integration point 2"
  ],
  "constraints": [
    "constraint 1",
    "constraint 2"
  ]
}
```

## memory:prepare

**Function**: Delegates to the universal-executor agent, which analyzes the project via Gemini/Qwen CLI and returns a JSON core content package for task context.

**Syntax**:
```
/memory:prepare [--tool gemini|qwen] "task context description"
```

**Options**:
- `--tool=tool`: Specify CLI tool (default: gemini)
  - `gemini`: Large context window, suitable for complex project analysis
  - `qwen`: Gemini alternative with similar capabilities

**Execution Flow**:

```mermaid
graph TD
    A[Start] --> B[Analyze Project Structure]
    B --> C[Load Documentation]
    C --> D[Extract Keywords]
    D --> E[Discover Files]
    E --> F[CLI Deep Analysis]
    F --> G[Generate Content Package]
    G --> H[Load to Main Thread Memory]
```

**Agent Call Prompt**:
```
## Mission: Prepare Project Memory Context

**Task**: Prepare project memory context for: "{task_description}"
**Mode**: analysis
**Tool Preference**: {tool}

### Step 1: Foundation Analysis
1. Project Structure: get_modules_by_depth.sh
2. Core Documentation: CLAUDE.md, README.md

### Step 2: Keyword Extraction & File Discovery
1. Extract core keywords from task description
2. Discover relevant files using ripgrep and find

### Step 3: Deep Analysis via CLI
Execute Gemini/Qwen CLI for deep analysis

### Step 4: Generate Core Content Package
Return structured JSON with required fields

### Step 5: Return Content Package
Load JSON into main thread memory
```

**Examples**:

```bash
# Basic usage
/memory:prepare "develop user authentication on current frontend"

# Specify tool
/memory:prepare --tool qwen "refactor payment module API"

# Bug fix context
/memory:prepare "fix login validation error"
```

**Returned Content Package**:

```json
{
  "task_context": "develop user authentication on current frontend",
  "keywords": ["frontend", "user", "authentication", "auth", "login"],
  "project_summary": {
    "architecture": "TypeScript + React frontend, Vite build system",
    "tech_stack": ["React", "TypeScript", "Vite", "TailwindCSS"],
    "key_patterns": [
      "State management via Context API",
      "Functional components with Hooks pattern",
      "API calls wrapped in custom hooks"
    ]
  },
  "relevant_files": [
    {
      "path": "src/components/Auth/LoginForm.tsx",
      "relevance": "Existing login form component",
      "priority": "high"
    },
    {
      "path": "src/contexts/AuthContext.tsx",
      "relevance": "Authentication state management context",
      "priority": "high"
    },
    {
      "path": "CLAUDE.md",
      "relevance": "Project development standards",
      "priority": "high"
    }
  ],
  "integration_points": [
    "Must integrate with existing AuthContext",
    "Follow component organization pattern: src/components/[Feature]/",
    "API calls should use src/hooks/useApi.ts wrapper"
  ],
  "constraints": [
    "Maintain backward compatibility",
    "Follow TypeScript strict mode",
    "Use existing UI component library"
  ]
}
```

## Quality Checklist

Before generating the content package, verify:
- [ ] Valid JSON format
- [ ] All required fields complete
- [ ] relevant_files contains 3-10 files
- [ ] project_summary accurately reflects the architecture
- [ ] integration_points clearly specify integration paths
- [ ] keywords accurately extracted (3-8 keywords)
- [ ] Content is concise, avoiding redundancy (< 5KB total)
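Several of the checklist items above are mechanically checkable. The sketch below shows two of them as plain POSIX shell checks against a saved package; the file name `core_package.json` and the miniature package contents are assumptions for the example (`jq` would be the sturdier tool if available).

```shell
# Write a miniature content package to validate (illustrative data).
pkg=core_package.json
cat > "$pkg" <<'EOF'
{
  "task_context": "demo",
  "keywords": ["auth", "login", "frontend"],
  "project_summary": {"architecture": "React SPA", "tech_stack": ["React"], "key_patterns": []},
  "relevant_files": [
    {"path": "src/a.ts", "relevance": "x", "priority": "high"},
    {"path": "src/b.ts", "relevance": "y", "priority": "medium"},
    {"path": "CLAUDE.md", "relevance": "z", "priority": "high"}
  ],
  "integration_points": [],
  "constraints": []
}
EOF

files=$(grep -c '"path":' "$pkg")   # one "path" key per relevant_files entry
size=$(wc -c < "$pkg")              # total package size in bytes

[ "$files" -ge 3 ] && [ "$files" -le 10 ] && echo "relevant_files count: ok"
[ "$size" -lt 5120 ] && echo "size under 5KB: ok"
```

Counting `"path":` keys is a rough proxy for the entry count; a real validator would parse the JSON rather than grep it.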

## Memory Persistence

- **Session Scope**: The content package is valid for the current session
- **Subsequent References**: All subsequent agents/commands can access it
- **Reload Required**: New sessions need to re-run `/memory:prepare`

## Related Documentation

- [Memory Commands](../claude/memory.md)
- [Review Prompts](./review.md)
- [CLI Invocation System](../../features/cli.md)
docs/commands/codex/review.md (new file)
@@ -0,0 +1,197 @@
# Review Prompts

## One-Liner

**Review prompts are standardized templates for code review** — multi-dimensional code quality checks ensuring code meets best practices.

## Review Dimensions

| Dimension | Check Items | Severity |
|-----------|-------------|----------|
| **Correctness** | Logic errors, boundary conditions, type safety | Critical |
| **Security** | Injection vulnerabilities, authentication, input validation | Critical |
| **Performance** | Algorithm complexity, N+1 queries, caching opportunities | High |
| **Maintainability** | SOLID principles, code duplication, naming conventions | Medium |
| **Documentation** | Comment completeness, README updates | Low |

## codex-review

**Function**: Interactive code review using Codex CLI via the ccw endpoint, supporting configurable review targets, models, and custom instructions.

**Syntax**:
```
/cli:codex-review [--uncommitted|--base <branch>|--commit <sha>] [--model <model>] [--title <title>] [prompt]
```

**Parameters**:
- `--uncommitted`: Review staged, unstaged, and untracked changes
- `--base <branch>`: Compare changes against a base branch
- `--commit <sha>`: Review changes introduced by a specific commit
- `--model <model>`: Override default model (gpt-5.2, o3, gpt-4.1, o4-mini)
- `--title <title>`: Optional commit title for the review summary

**Note**: Target flags and prompt are mutually exclusive (see the constraints section)

### Review Focus Selection

| Focus | Template | Key Checks |
|-------|----------|------------|
| **Comprehensive Review** | Universal template | Correctness, style, bugs, documentation |
| **Security Focus** | Security template | Injection, authentication, validation, exposure |
| **Performance Focus** | Performance template | Complexity, memory, queries, caching |
| **Code Quality** | Quality template | SOLID, duplication, naming, tests |

### Prompt Templates

#### Comprehensive Review Template

```
PURPOSE: Comprehensive code review to identify issues, improve quality, and ensure best practices; success = actionable feedback and clear priorities
TASK: • Review code correctness and logic errors • Check coding standards and consistency • Identify potential bugs and edge cases • Evaluate documentation completeness
MODE: review
CONTEXT: {target description} | Memory: Project conventions from CLAUDE.md
EXPECTED: Structured review report with: severity levels (Critical/High/Medium/Low), file:line references, specific improvement suggestions, priority rankings
CONSTRAINTS: Focus on actionable feedback
```

#### Security Focus Template

```
PURPOSE: Security-focused code review to identify vulnerabilities and security risks; success = all security issues documented with fixes
TASK: • Scan for injection vulnerabilities (SQL, XSS, command) • Check authentication and authorization logic • Evaluate input validation and sanitization • Identify sensitive data exposure risks
MODE: review
CONTEXT: {target description} | Memory: Security best practices, OWASP Top 10
EXPECTED: Security report with: vulnerability classification, applicable CVE references, fix code snippets, risk severity matrix
CONSTRAINTS: Security-first analysis | Flag all potential vulnerabilities
```

#### Performance Focus Template

```
PURPOSE: Performance-focused code review to identify bottlenecks and optimization opportunities; success = measurable improvement suggestions
TASK: • Analyze algorithm complexity (Big-O) • Identify memory allocation issues • Check N+1 queries and blocking operations • Evaluate caching opportunities
MODE: review
CONTEXT: {target description} | Memory: Performance patterns and anti-patterns
EXPECTED: Performance report with: complexity analysis, bottleneck identification, optimization suggestions with expected impact, benchmark recommendations
CONSTRAINTS: Performance optimization focus
```

#### Code Quality Template

```
PURPOSE: Code quality review to improve maintainability and readability; success = cleaner, more maintainable code
TASK: • Evaluate SOLID principles compliance • Identify code duplication and abstraction opportunities • Review naming conventions and clarity • Evaluate test coverage impact
MODE: review
CONTEXT: {target description} | Memory: Project coding standards
EXPECTED: Quality report with: principle violations, refactoring suggestions, naming improvements, maintainability score
CONSTRAINTS: Code quality and maintainability focus
```

### Usage Examples

#### Direct Execution (No Interaction)

```bash
# Review uncommitted changes with default settings
/cli:codex-review --uncommitted

# Compare with main branch
/cli:codex-review --base main

# Review specific commit
/cli:codex-review --commit abc123

# Use custom model
/cli:codex-review --uncommitted --model o3

# Security focus review
/cli:codex-review --uncommitted security

# Full options
/cli:codex-review --base main --model o3 --title "Authentication feature" security
```

#### Interactive Mode

```bash
# Start interactive selection (guided flow)
/cli:codex-review
```

### Constraints and Validation

**Important**: Target flags and prompt are mutually exclusive

Codex CLI has a constraint that target flags (`--uncommitted`, `--base`, `--commit`) cannot be used with the `[PROMPT]` positional parameter:

```
error: the argument '--uncommitted' cannot be used with '[PROMPT]'
error: the argument '--base <BRANCH>' cannot be used with '[PROMPT]'
error: the argument '--commit <SHA>' cannot be used with '[PROMPT]'
```

**Valid Combinations**:

| Command | Result |
|---------|--------|
| `codex review "focus on security"` | ✓ Custom prompt, reviews uncommitted (default) |
| `codex review --uncommitted` | ✓ No prompt, uses default review |
| `codex review --base main` | ✓ No prompt, uses default review |
| `codex review --commit abc123` | ✓ No prompt, uses default review |
| `codex review --uncommitted "prompt"` | ✗ Invalid - mutually exclusive |
| `codex review --base main "prompt"` | ✗ Invalid - mutually exclusive |
| `codex review --commit abc123 "prompt"` | ✗ Invalid - mutually exclusive |

**Valid Examples**:
```bash
# ✓ Valid: Prompt only (defaults to reviewing uncommitted)
ccw cli -p "focus on security" --tool codex --mode review

# ✓ Valid: Target flags only (no prompt)
ccw cli --tool codex --mode review --uncommitted
ccw cli --tool codex --mode review --base main
ccw cli --tool codex --mode review --commit abc123

# ✗ Invalid: Target flags with prompt (will fail)
ccw cli -p "review this" --tool codex --mode review --uncommitted
```

## Focus Area Mapping

| User Selection | Prompt Focus | Key Checks |
|----------------|--------------|------------|
| Comprehensive Review | Comprehensive | Correctness, style, bugs, documentation |
| Security Focus | Security-first | Injection, authentication, validation, exposure |
| Performance Focus | Optimization | Complexity, memory, queries, caching |
| Code Quality | Maintainability | SOLID, duplication, naming, tests |

## Error Handling

### No Changes to Review

```
No changes found for review target. Suggestions:
- For --uncommitted: Make some code changes first
- For --base: Ensure branch exists and has diverged
- For --commit: Verify commit SHA exists
```

### Invalid Branch

```bash
# Show available branches
git branch -a --list | head -20
```

### Invalid Commit

```bash
# Show recent commits
git log --oneline -10
```

## Related Documentation

- [Prep Prompts](./prep.md)
- [CLI Tool Commands](../claude/cli.md)
- [Code Review](../../features/)
docs/commands/index.md (new file)
@@ -0,0 +1,34 @@
# Commands Overview

## One-Liner

**Commands is the command system of Claude_dms3** — comprising Claude Code Commands and Codex Prompts, providing a complete command-line toolchain from specification to implementation to testing to review.

## Command Categories

| Category | Description | Path |
|----------|-------------|------|
| **Claude Commands** | Claude Code extension commands | `/commands/claude/` |
| **Codex Prompts** | Codex CLI prompts | `/commands/codex/` |

## Quick Navigation

### Claude Commands

- [Core Orchestration](/commands/claude/) - ccw, ccw-coordinator
- [Workflow](/commands/claude/workflow) - workflow series commands
- [Session Management](/commands/claude/session) - session series commands
- [Issue](/commands/claude/issue) - issue series commands
- [Memory](/commands/claude/memory) - memory series commands
- [CLI](/commands/claude/cli) - cli series commands
- [UI Design](/commands/claude/ui-design) - ui-design series commands

### Codex Prompts

- [Prep](/commands/codex/prep) - prep-cycle, prep-plan
- [Review](/commands/codex/review) - review prompts

## Related Documentation

- [Skills](/skills/) - Skill system
- [Features](/features/spec) - Core features
docs/features/api-settings.md (new file)
@@ -0,0 +1,101 @@
# API Settings

## One-Liner

**API Settings manages AI model endpoint configuration** — Centralizes API keys, base URLs, and model selection for all supported AI backends.

---

## Configuration Locations

| Location | Scope | Priority |
|----------|-------|----------|
| `~/.claude/cli-tools.json` | Global | Base |
| `.claude/settings.json` | Project | Override |
| `.claude/settings.local.json` | Local | Highest |

---

## Supported Backends

| Backend | Type | Models |
|---------|------|--------|
| **Gemini** | Builtin | gemini-2.5-flash, gemini-2.5-pro |
| **Qwen** | Builtin | coder-model |
| **Codex** | Builtin | gpt-4o, gpt-4o-mini |
| **Claude** | Builtin | claude-sonnet, claude-haiku |
| **OpenCode** | Builtin | opencode/glm-4.7-free |

---

## Configuration Example

```json
// ~/.claude/cli-tools.json
{
  "version": "3.3.0",
  "tools": {
    "gemini": {
      "enabled": true,
      "primaryModel": "gemini-2.5-flash",
      "secondaryModel": "gemini-2.5-flash",
      "tags": ["analysis", "debug"],
      "type": "builtin"
    },
    "codex": {
      "enabled": true,
      "primaryModel": "gpt-4o",
      "tags": [],
      "type": "builtin"
    }
  }
}
```
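For scripting against a config shaped like the example above, a quick way to pull out a configured model with plain POSIX tools is sketched below; the local file name `cli-tools.json` and its contents are assumptions for the demo, and `jq` would be the more robust choice for real JSON parsing.

```shell
# Write a minimal config matching the documented layout (demo data).
cat > cli-tools.json <<'EOF'
{
  "version": "3.3.0",
  "tools": {
    "gemini": { "enabled": true, "primaryModel": "gemini-2.5-flash", "type": "builtin" },
    "codex": { "enabled": true, "primaryModel": "gpt-4o", "type": "builtin" }
  }
}
EOF

# Extract the first primaryModel value via a sed capture group.
model=$(sed -n 's/.*"primaryModel": "\([^"]*\)".*/\1/p' cli-tools.json | head -1)
echo "$model"   # gemini-2.5-flash
```

This relies on the one-key-per-line formatting shown here; pretty-printed configs with different whitespace would still match, but minified JSON would not.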

---

## Environment Variables

```bash
# API Keys
LITELLM_API_KEY=your-api-key
LITELLM_API_BASE=https://api.example.com/v1
LITELLM_MODEL=gpt-4o-mini

# Reranker (optional)
RERANKER_API_KEY=your-reranker-key
RERANKER_API_BASE=https://api.siliconflow.cn
RERANKER_PROVIDER=siliconflow
RERANKER_MODEL=BAAI/bge-reranker-v2-m3
```

---

## Model Selection

### Via CLI Flag

```bash
ccw cli -p "Analyze code" --tool gemini --model gemini-2.5-pro
```

### Via Config

```json
{
  "tools": {
    "gemini": {
      "primaryModel": "gemini-2.5-flash",
      "secondaryModel": "gemini-2.5-pro"
    }
  }
}
```

---

## Related Links

- [CLI Call](/features/cli) - Command invocation
- [System Settings](/features/system-settings) - System configuration
- [CodexLens](/features/codexlens) - Code indexing
docs/features/cli.md (new file)
@@ -0,0 +1,104 @@
# CLI Call System

## One-Liner

**The CLI Call System is a unified AI model invocation framework** — Provides a consistent interface for calling multiple AI models (Gemini, Qwen, Codex, Claude) with standardized prompts, modes, and templates.

---

## Pain Points Solved

| Pain Point | Current State | CLI Call Solution |
|------------|---------------|-------------------|
| **Single model limitation** | Can only use one AI model | Multi-model collaboration |
| **Inconsistent prompts** | Different prompt formats per model | Unified prompt template |
| **No mode control** | AI can modify files unexpectedly | analysis/write/review modes |
| **No templates** | Reinvent prompts each time | 30+ pre-built templates |

---

## vs Traditional Methods

| Dimension | Direct API | Individual CLIs | **CCW CLI** |
|-----------|------------|-----------------|-------------|
| Multi-model | Manual switch | Multiple tools | **Unified interface** |
| Prompt format | Per-model | Per-tool | **Standardized template** |
| Permission | Unclear | Unclear | **Explicit modes** |
| Templates | None | None | **30+ templates** |

---

## Core Concepts

| Concept | Description | Usage |
|---------|-------------|-------|
| **Tool** | AI model backend | `--tool gemini/qwen/codex/claude` |
| **Mode** | Permission level | `analysis/write/review` |
| **Template** | Pre-built prompt | `--rule analysis-review-code` |
| **Session** | Conversation continuity | `--resume <id>` |

---

## Usage

### Basic Command

```bash
ccw cli -p "Analyze authentication flow" --tool gemini --mode analysis
```

### With Template

```bash
ccw cli -p "PURPOSE: Security audit
TASK: • Check SQL injection • Verify CSRF tokens
MODE: analysis
CONTEXT: @src/auth/**/*
EXPECTED: Report with severity levels
CONSTRAINTS: Focus on authentication" --tool gemini --mode analysis --rule analysis-assess-security-risks
```

### Session Resume

```bash
ccw cli -p "Continue analysis" --resume
```

---

## Configuration

```json
// ~/.claude/cli-tools.json
{
  "tools": {
    "gemini": {
      "enabled": true,
      "primaryModel": "gemini-2.5-flash",
      "tags": ["analysis", "debug"]
    },
    "qwen": {
      "enabled": true,
      "primaryModel": "coder-model"
    }
  }
}
```
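To illustrate how the `tags` field in the configuration can drive tool selection, here is a minimal sketch. The routing logic is a guess at the behavior for illustration only, not the actual CCW implementation:

```python
# Hypothetical routing: pick the first enabled tool whose "tags" list
# contains the requested tag. Mirrors the cli-tools.json shape above.
tools = {
    "gemini": {"enabled": True, "primaryModel": "gemini-2.5-flash", "tags": ["analysis", "debug"]},
    "qwen": {"enabled": True, "primaryModel": "coder-model", "tags": []},
}

def pick_tool(tag, tools):
    """Return the name of the first enabled tool tagged with `tag`, or None."""
    for name, cfg in tools.items():
        if cfg["enabled"] and tag in cfg["tags"]:
            return name
    return None

print(pick_tool("analysis", tools))  # gemini
```

A tool with an empty `tags` list (like `qwen` here) would only be reached by naming it explicitly, e.g. via `--tool qwen`.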
---

## Available Templates

| Category | Templates |
|----------|-----------|
| Analysis | `analysis-review-code`, `analysis-diagnose-bug`, `analysis-assess-security` |
| Planning | `planning-plan-architecture`, `planning-breakdown-task` |
| Development | `development-implement-feature`, `development-refactor-codebase` |

---

## Related Links

- [Spec System](/features/spec) - Constraint injection
- [Memory System](/features/memory) - Persistent context
- [CodexLens](/features/codexlens) - Code indexing
docs/features/codexlens.md (new file, 115 lines)
# CodexLens Code Indexing

## One-Liner

**CodexLens is a semantic code search engine** — Based on vector databases and LSP integration, it enables AI to understand code semantics rather than just keyword matching.

---

## Pain Points Solved

| Pain Point | Current State | CodexLens Solution |
|------------|---------------|-------------------|
| **Imprecise search** | Keywords can't find semantically related code | Semantic vector search |
| **No context** | Search results lack call chain context | LSP integration provides reference chains |
| **No understanding** | AI doesn't understand code relationships | Static analysis + semantic indexing |
| **Slow navigation** | Manual file traversal | Instant semantic navigation |

---

## vs Traditional Methods

| Dimension | Text Search | IDE Search | **CodexLens** |
|-----------|-------------|------------|---------------|
| Search type | Keyword | Keyword + symbol | **Semantic vector** |
| Context | None | File-level | **Call chain + imports** |
| AI-ready | No | No | **Direct AI consumption** |
| Multi-file | Poor | Good | **Excellent** |

---

## Core Concepts

| Concept | Description | Location |
|---------|-------------|----------|
| **Index** | Vector representation of code | `.codex-lens/index/` |
| **Chunk** | Code segment for embedding | Configurable size |
| **Retrieval** | Hybrid search (vector + keyword) | HybridSearch engine |
| **LSP** | Language Server Protocol integration | Built-in LSP client |

---

## Usage

### Indexing a Project

```bash
ccw index
ccw index --watch  # Continuous indexing
```

### Searching

```bash
ccw search "authentication logic"
ccw search "where is user validation" --top 10
```

### Via MCP Tool

```typescript
// ACE semantic search
mcp__ace-tool__search_context({
  project_root_path: "/path/to/project",
  query: "authentication logic"
})
```
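The "hybrid search (vector + keyword)" idea behind retrieval can be sketched as a weighted blend of two scores per chunk. This is a toy illustration of the concept, with made-up vectors and a naive keyword score, not the HybridSearch engine's actual algorithm:

```python
# Toy hybrid ranking: blend cosine similarity over embeddings with a
# keyword-overlap score. Function names and the alpha weight are illustrative.
import math

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document text."""
    terms = query.lower().split()
    text = doc.lower()
    return sum(t in text for t in terms) / len(terms)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query, query_vec, chunks, alpha=0.7):
    """chunks: list of (text, embedding). Returns texts, best match first."""
    scored = [
        (alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text), text)
        for text, vec in chunks
    ]
    return [text for _, text in sorted(scored, reverse=True)]

chunks = [
    ("def authenticate_user(token): ...", [1.0, 0.0]),
    ("def render_chart(data): ...", [0.0, 1.0]),
]
print(hybrid_rank("authenticate user", [0.9, 0.1], chunks)[0])
```

The blend lets semantically close chunks surface even when exact keywords are absent, while the keyword term keeps literal matches from being drowned out.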
---

## Configuration

```json
// ~/.codexlens/settings.json
{
  "embedding": {
    "backend": "litellm",
    "model": "Qwen/Qwen3-Embedding-8B",
    "use_gpu": false
  },
  "indexing": {
    "static_graph_enabled": true,
    "chunk_size": 512
  }
}
```
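The `chunk_size` knob above controls how source text is split into segments for embedding. A naive fixed-size, overlapping chunker conveys the idea; real CodexLens chunking is presumably syntax-aware, so treat this only as a sketch:

```python
# Naive fixed-size chunking with overlap: each window is at most `size`
# characters, and consecutive windows share `overlap` characters so that
# content at a boundary still appears whole in at least one chunk.
def chunk(text: str, size: int = 512, overlap: int = 64):
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk("x" * 1200, size=512, overlap=64)
print(len(chunks), len(chunks[0]))  # 3 512
```

Larger chunks give the embedding model more context per vector; smaller chunks make retrieval more precise. 512 is a common middle ground.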
---

## Architecture

```
┌─────────────────────────────────────────┐
│                CodexLens                │
├─────────────────────────────────────────┤
│  ┌──────────┐ ┌──────────┐ ┌────────┐   │
│  │ Parsers  │ │ Chunker  │ │  LSP   │   │
│  │(TS/Py/..)│ │          │ │ Client │   │
│  └────┬─────┘ └────┬─────┘ └───┬────┘   │
│       │            │           │        │
│       └────────────┼───────────┘        │
│                    │                    │
│             ┌──────┴──────┐             │
│             │   Hybrid    │             │
│             │   Search    │             │
│             └─────────────┘             │
└─────────────────────────────────────────┘
```

---

## Related Links

- [CLI Call](/features/cli) - AI model invocation
- [Memory System](/features/memory) - Persistent context
- [MCP Tools](/mcp/tools) - MCP integration
docs/features/dashboard.md (new file, 79 lines)
# Dashboard Panel

## One-Liner

**The Dashboard is a VS Code webview-based management interface** — Provides visual access to project configuration, specs, memory, and settings through an intuitive GUI.

---

## Pain Points Solved

| Pain Point | Current State | Dashboard Solution |
|------------|---------------|-------------------|
| **Config complexity** | JSON files hard to edit | Visual form-based editing |
| **No overview** | Can't see project state at a glance | Unified dashboard view |
| **Scattered settings** | Settings in multiple files | Centralized management |
| **No visual feedback** | CLI only, no UI | Interactive webview |

---

## Core Features

| Feature | Description |
|---------|-------------|
| **Project Overview** | Tech stack, dependencies, status |
| **Spec Manager** | View and edit specification files |
| **Memory Viewer** | Browse persistent memories |
| **API Settings** | Configure AI model endpoints |
| **System Settings** | Global and project settings |

---

## Access

Open via the VS Code command palette:
```
CCW: Open Dashboard
```

Or via CLI:
```bash
ccw view
```

---

## Dashboard Sections

### 1. Project Overview
- Technology stack detection
- Dependency status
- Workflow session status

### 2. Spec Manager
- List all specs
- Edit spec content
- Enable/disable specs

### 3. Memory Viewer
- Browse memory entries
- Search memories
- Export/import

### 4. API Settings
- Configure API keys
- Set model endpoints
- Manage rate limits

### 5. System Settings
- Global configuration
- Project overrides
- Hook management

---

## Related Links

- [API Settings](/features/api-settings) - API configuration
- [System Settings](/features/system-settings) - System configuration
- [Spec System](/features/spec) - Specification management
docs/features/memory.md (new file, 80 lines)
# Memory System

## One-Liner

**The Memory System is a cross-session knowledge persistence mechanism** — Stores project context, decisions, and learnings so AI remembers across sessions without re-explanation.

---

## Pain Points Solved

| Pain Point | Current State | Memory System Solution |
|------------|---------------|------------------------|
| **Cross-session amnesia** | New session requires re-explaining project | Persistent memory across sessions |
| **Lost decisions** | Architecture decisions forgotten | Decision log persists |
| **Repeated explanations** | Same context explained multiple times | Memory auto-injection |
| **Knowledge silos** | Each developer maintains own context | Shared team memory |

---

## vs Traditional Methods

| Dimension | CLAUDE.md | Notes | **Memory System** |
|-----------|-----------|-------|-------------------|
| Persistence | Static file | Manual | **Auto-extracted from sessions** |
| Search | Text search | Folder search | **Semantic vector search** |
| Updates | Manual edit | Manual note | **Auto-capture from conversations** |
| Sharing | Git | Manual | **Auto-sync via workflow** |

---

## Core Concepts

| Concept | Description | Location |
|---------|-------------|----------|
| **Core Memory** | Persistent project knowledge | `~/.claude/memory/` |
| **Session Memory** | Current session context | `.workflow/.memory/` |
| **Memory Entry** | Individual knowledge item | JSONL format |
| **Memory Index** | Searchable index | Embedding-based |

---

## Usage

### Viewing Memory

```bash
ccw memory list
ccw memory search "authentication"
```

### Memory Categories

- **Project Context**: Architecture, tech stack, patterns
- **Decisions**: ADRs, design choices
- **Learnings**: What worked, what didn't
- **Conventions**: Coding standards, naming
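Since memory entries are stored in JSONL format (one JSON object per line), reading and filtering them is straightforward. The field names below (`category`, `content`, `tags`) are assumptions for illustration, not the exact CCW schema, and an in-memory buffer stands in for the memory file:

```python
# JSONL round-trip sketch: write one compact JSON object per line, then
# parse each line back and filter by category. io.StringIO stands in for
# a file under ~/.claude/memory/.
import io
import json

entries = [
    {"category": "decision", "content": "Use JWT for auth", "tags": ["auth"]},
    {"category": "convention", "content": "Prefer interfaces over types", "tags": ["ts"]},
]

buf = io.StringIO()
for e in entries:
    buf.write(json.dumps(e) + "\n")   # one entry per line

buf.seek(0)
loaded = [json.loads(line) for line in buf]
decisions = [e for e in loaded if e["category"] == "decision"]
print(decisions[0]["content"])  # Use JWT for auth
```

JSONL's line-per-entry layout is what makes append-only capture and streaming reads cheap compared to a single JSON array.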
---

## Configuration

```json
// ~/.claude/cli-settings.json
{
  "memory": {
    "enabled": true,
    "maxEntries": 1000,
    "autoCapture": true,
    "embeddingModel": "text-embedding-3-small"
  }
}
```

---

## Related Links

- [Spec System](/features/spec) - Constraint injection
- [CLI Call](/features/cli) - Command line invocation
- [CodexLens](/features/codexlens) - Code indexing
docs/features/spec.md (new file, 96 lines)
# Spec System

## One-Liner

**The Spec System is an automatic constraint injection mechanism** — Through specification documents defined in YAML frontmatter, relevant constraints are automatically loaded at the start of AI sessions, ensuring AI follows project coding standards and architectural requirements.

---

## Pain Points Solved

| Pain Point | Current State | Spec System Solution |
|------------|---------------|---------------------|
| **AI ignores standards** | CLAUDE.md written but AI ignores it after 5 turns | Hook auto-injection, every session carries specs |
| **Standards scattered** | Coding conventions in different places, hard to maintain | Unified in `.workflow/specs/*.md` |
| **Context loss** | New session requires re-explaining constraints | Spec auto-loads based on task context |
| **Inconsistent code** | Different developers write different styles | Shared Spec ensures consistency |

---

## vs Traditional Methods

| Dimension | CLAUDE.md | `.cursorrules` | **Spec System** |
|-----------|-----------|----------------|-----------------|
| Injection | Auto-load but easily truncated | Manual load | **Hook auto-injection, task-precise loading** |
| Granularity | One large file | One large file | **Per-module files, combined by task** |
| Cross-session memory | None | None | **Workflow journal persistence** |
| Team sharing | Single person | Single person | **Git versioned Spec library** |

---

## Core Concepts

| Concept | Description | Location |
|---------|-------------|----------|
| **Spec File** | Markdown document with YAML frontmatter | `.workflow/specs/*.md` |
| **Hook** | Script that auto-injects specs into AI context | `.claude/hooks/` |
| **Spec Index** | Registry of all available specs | `.workflow/specs/index.yaml` |
| **Spec Selector** | Logic that chooses relevant specs for a task | Built into CCW |

---

## Usage

### Creating a Spec

```markdown
---
name: coding-standards
description: Project coding standards
triggers:
  - pattern: "**/*.ts"
  - command: "/implement"
  - skill: "code-developer"
applyTo:
  - "src/**"
priority: high
---

# Coding Standards

## TypeScript Guidelines
- Use strict mode
- Prefer interfaces over types
- ...
```

### Spec Loading

Specs are automatically loaded based on:
1. File patterns being edited
2. Commands being executed
3. Skills being invoked
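The pattern-based part of spec selection can be sketched as glob matching of edited files against each spec's trigger patterns. This uses `fnmatch` as a stand-in for whatever matcher the Spec Selector actually implements, with illustrative spec entries:

```python
# Hypothetical spec selection: a spec is triggered when any edited file
# matches any of its glob patterns (mirrors the frontmatter above).
from fnmatch import fnmatch

specs = [
    {"name": "coding-standards", "patterns": ["**/*.ts"], "priority": "high"},
    {"name": "api-conventions", "patterns": ["src/api/**"], "priority": "normal"},
]

def select_specs(edited_files, specs):
    """Return the names of specs triggered by any of the edited files."""
    selected = []
    for spec in specs:
        if any(fnmatch(f, p) for f in edited_files for p in spec["patterns"]):
            selected.append(spec["name"])
    return selected

print(select_specs(["src/api/users.ts"], specs))
```

Editing `src/api/users.ts` triggers both specs here, which is the point of per-module files: only the constraints relevant to the current task get injected.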
---

## Configuration

```yaml
# .workflow/specs/index.yaml
specs:
  - name: coding-standards
    path: ./coding-standards.md
    enabled: true

  - name: api-conventions
    path: ./api-conventions.md
    enabled: true
```

---

## Related Links

- [Memory System](/features/memory) - Persistent context
- [CLI Call](/features/cli) - Command line invocation
- [Dashboard](/features/dashboard) - Visual management
docs/features/system-settings.md (new file, 131 lines)
# System Settings

## One-Liner

**System Settings manages global and project-level configuration** — Controls hooks, agents, skills, and core system behavior.

---

## Configuration Files

| File | Scope | Purpose |
|------|-------|---------|
| `~/.claude/CLAUDE.md` | Global | Global instructions |
| `.claude/CLAUDE.md` | Project | Project instructions |
| `~/.claude/cli-tools.json` | Global | CLI tool config |
| `.claude/settings.json` | Project | Project settings |
| `.claude/settings.local.json` | Local | Local overrides |

---

## Settings Schema

```json
{
  "permissions": {
    "allow": ["Bash(npm install)", "Bash(git status)"],
    "deny": ["Bash(rm -rf)"]
  },
  "env": {
    "ANTHROPIC_API_KEY": "your-key"
  },
  "enableAll": false,
  "autoCheck": true
}
```

---

## Key Settings

### Permissions

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run*)",
      "Read(**)",
      "Edit(**/*.ts)"
    ],
    "deny": [
      "Bash(rm -rf /*)"
    ]
  }
}
```
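A permission rule like `Bash(npm run*)` pairs a tool name with a glob over its argument, and deny rules take precedence over allow rules. The parsing and matching below are a hedged sketch of that behavior, not CCW's actual implementation:

```python
# Deny-first permission check against rules of the form Tool(pattern).
# Rule parsing here is illustrative; real matching may differ.
from fnmatch import fnmatch

def matches(rule: str, tool: str, arg: str) -> bool:
    """Split 'Bash(npm run*)' into tool name and argument glob, then match."""
    name, _, pattern = rule.partition("(")
    return name == tool and fnmatch(arg, pattern.rstrip(")"))

def is_allowed(tool, arg, allow, deny):
    if any(matches(r, tool, arg) for r in deny):
        return False                      # deny wins over allow
    return any(matches(r, tool, arg) for r in allow)

allow = ["Bash(npm run*)", "Read(**)"]
deny = ["Bash(rm -rf /*)"]

print(is_allowed("Bash", "npm run build", allow, deny))  # True
print(is_allowed("Bash", "rm -rf /home", allow, deny))   # False
```

Checking deny before allow is the safe ordering: a broad allow glob can never override an explicit deny.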
### Hooks

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [".claude/hooks/pre-bash.sh"]
      }
    ]
  }
}
```

### MCP Servers

```json
{
  "mcpServers": {
    "ccw-tools": {
      "command": "node",
      "args": ["./mcp-server/dist/index.js"]
    }
  }
}
```

---

## Hook Configuration

Hooks are scripts that run at specific events:

| Event | When | Use Case |
|-------|------|----------|
| `PreToolUse` | Before tool execution | Validation, logging |
| `PostToolUse` | After tool execution | Cleanup, notifications |
| `Notification` | On notifications | Custom handlers |
| `Stop` | On session end | Cleanup |

### Hook Example

```bash
#!/bin/bash
# .claude/hooks/pre-bash.sh
echo "Executing: $1" >> ~/.claude/bash.log
```

---

## Agent Configuration

```markdown
<!-- .claude/agents/my-agent.md -->
---
description: Custom analysis agent
model: claude-sonnet
tools:
  - Read
  - Grep
---

# Agent Instructions
...
```

---

## Related Links

- [API Settings](/features/api-settings) - API configuration
- [CLI Call](/features/cli) - Command invocation
- [Dashboard](/features/dashboard) - Visual management
docs/guide/ch01-what-is-claude-dms3.md (new file, 87 lines)
# What is Claude_dms3

## One-Line Positioning

**Claude_dms3 is an AI-powered development workbench for VS Code** — Through semantic code indexing, multi-model CLI invocation, and team collaboration systems, it enables AI to deeply understand your project and generate high-quality code according to specifications.

> AI capabilities bloom like vines — Claude_dms3 is the trellis that guides AI along your project's architecture, coding standards, and team workflows.

---

## 1.1 Pain Points Solved

| Pain Point | Current State | Claude_dms3 Solution |
|------------|---------------|---------------------|
| **AI doesn't understand the project** | Every new session requires re-explaining project background, tech stack, and coding standards | Memory system persists project context, AI remembers project knowledge across sessions |
| **Difficult code search** | Keyword search can't find semantically related code, don't know where functions are called | CodexLens semantic indexing, supports natural language search and call chain tracing |
| **Single model limitation** | Can only call one AI model, different models excel in different scenarios | CCW unified invocation framework, supports multi-model collaboration (Gemini, Qwen, Codex, Claude) |
| **Chaotic collaboration process** | Team members work independently, inconsistent code styles, standards hard to implement | Team workflow system (PlanEx, IterDev, Lifecycle) ensures standard execution |
| **Standards hard to implement** | CLAUDE.md written but AI doesn't follow, project constraints ignored | Spec + Hook auto-injection, AI forced to follow project standards |

---

## 1.2 vs Traditional Methods

| Dimension | Traditional AI Assistant | **Claude_dms3** |
|-----------|-------------------------|-----------------|
| **Code Search** | Text keyword search | **Semantic vector search + LSP call chain** |
| **AI Invocation** | Single model fixed call | **Multi-model collaboration, optimal model per task** |
| **Project Memory** | Re-explain each session | **Cross-session persistent Memory** |
| **Standard Execution** | Relies on Prompt reminders | **Spec + Hook auto-injection** |
| **Team Collaboration** | Each person for themselves | **Structured workflow system** |
| **Code Quality** | Depends on AI capability | **Multi-dimensional review + auto-fix cycle** |

---

## 1.3 Core Concepts Overview

| Concept | Description | Location/Command |
|---------|-------------|------------------|
| **CodexLens** | Semantic code indexing and search engine | `ccw search` |
| **CCW** | Unified CLI tool invocation framework | `ccw cli` |
| **Memory** | Cross-session knowledge persistence | `ccw memory` |
| **Spec** | Project specification and constraint system | `.workflow/specs/` |
| **Hook** | Auto-triggered context injection scripts | `.claude/hooks/` |
| **Agent** | Specialized AI subprocess for specific roles | `.claude/agents/` |
| **Skill** | Reusable AI capability modules | `.claude/skills/` |
| **Workflow** | Multi-phase development orchestration | `/workflow:*` |

---

## 1.4 Architecture Overview

```
┌─────────────────────────────────────────────────────────────┐
│                  Claude_dms3 Architecture                   │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  ┌─────────────┐    ┌─────────────┐    ┌─────────────┐      │
│  │  CodexLens  │    │     CCW     │    │   Memory    │      │
│  │  (Semantic  │    │  (CLI Call  │    │ (Persistent │      │
│  │   Index)    │    │  Framework) │    │  Context)   │      │
│  └──────┬──────┘    └──────┬──────┘    └──────┬──────┘      │
│         │                  │                  │             │
│         └──────────────────┼──────────────────┘             │
│                            │                                │
│                      ┌─────┴─────┐                          │
│                      │   Spec    │                          │
│                      │  System   │                          │
│                      └─────┬─────┘                          │
│                            │                                │
│         ┌──────────────────┼──────────────────┐             │
│         │                  │                  │             │
│    ┌────┴────┐       ┌─────┴─────┐       ┌────┴────┐        │
│    │  Hooks  │       │  Skills   │       │ Agents  │        │
│    │(Inject) │       │(Reusable) │       │(Roles)  │        │
│    └─────────┘       └───────────┘       └─────────┘        │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

---

## Next Steps

- [Getting Started](/guide/ch02-getting-started) - Install and configure
- [Core Concepts](/guide/ch03-core-concepts) - Understand the fundamentals
- [Workflow Basics](/guide/ch04-workflow-basics) - Start your first workflow
docs/guide/ch02-getting-started.md (new file, 295 lines)
# Getting Started

## One-Line Positioning

**Getting Started is a 5-minute quick-start guide** — Installation, first command, first workflow: quickly experience Claude_dms3's core features.

---

## 2.1 Installation

### 2.1.1 Prerequisites

| Requirement | Version | Description |
| --- | --- | --- |
| **Node.js** | 18+ | Required for CCW modules |
| **Python** | 3.10+ | Required for CodexLens modules |
| **VS Code** | Latest | Extension runtime environment |
| **Git** | Latest | Version control |

### 2.1.2 Clone Project

```bash
# Clone repository
git clone https://github.com/your-repo/claude-dms3.git
cd claude-dms3

# Install dependencies
npm install
```

### 2.1.3 Configure API Keys

Configure API Keys in `~/.claude/settings.json`:

```json
{
  "openai": {
    "apiKey": "sk-xxx"
  },
  "anthropic": {
    "apiKey": "sk-ant-xxx"
  },
  "google": {
    "apiKey": "AIza-xxx"
  }
}
```

::: tip Tip
API Keys can also be configured at the project level in `.claude/settings.json`. Project-level configuration takes priority over global configuration.
:::

---

## 2.2 Initialize Project

### 2.2.1 Start Workflow Session

Open your project in VS Code, then run:

```
/workflow:session:start
```

This creates a new workflow session. All subsequent operations will be performed within this session context.

### 2.2.2 Initialize Project Specs

```
/workflow:init
```

This creates the `project-tech.json` file, recording your project's technology stack information.

### 2.2.3 Populate Project Specs

```
/workflow:init-guidelines
```

Interactively populate project specifications, including coding style, architectural decisions, and other information.

---

## 2.3 First Command

### 2.3.1 Code Analysis

Use the CCW CLI tool to analyze code:

```bash
ccw cli -p "Analyze the code structure and design patterns of this file" --tool gemini --mode analysis
```

**Parameter Description**:
- `-p`: Prompt (task description)
- `--tool gemini`: Use the Gemini model
- `--mode analysis`: Analysis mode (read-only, no file modifications)

### 2.3.2 Code Generation

Use the CCW CLI tool to generate code:

```bash
ccw cli -p "Create a React component implementing a user login form" --tool qwen --mode write
```

**Parameter Description**:
- `--mode write`: Write mode (can create/modify files)

::: danger Warning
`--mode write` will modify files. Ensure your code is committed or backed up.
:::

---

## 2.4 First Workflow

### 2.4.1 Start Planning Workflow

```
/workflow:plan
```

This launches the PlanEx workflow, which includes the following steps:

1. **Analyze Requirements** - Understand user intent
2. **Explore Code** - Search related code and patterns
3. **Generate Plan** - Create structured task list
4. **Execute Tasks** - Execute development according to plan

### 2.4.2 Brainstorming

```
/brainstorm
```

Multi-perspective brainstorming for diverse viewpoints:

| Perspective | Role | Focus |
| --- | --- | --- |
| Product | Product Manager | Market fit, user value |
| Technical | Tech Lead | Feasibility, technical debt |
| Quality | QA Lead | Completeness, testability |
| Risk | Risk Analyst | Risk identification, dependencies |

---

## 2.5 Using Memory

### 2.5.1 View Project Memory

```bash
ccw memory list
```

Displays all project memories, including learnings, decisions, conventions, and issues.

### 2.5.2 Search Related Memory

```bash
ccw memory search "authentication"
```

Semantic search for memories related to "authentication".

### 2.5.3 Add Memory

```
/memory-capture
```

Interactively capture important knowledge points from the current session.

---

## 2.6 Code Search

### 2.6.1 Semantic Search

Use CodexLens search in VS Code:

```bash
# Search via CodexLens MCP endpoint
ccw search "user login logic"
```

### 2.6.2 Call Chain Tracing

Search function definitions and all call locations:

```bash
ccw search --trace "authenticateUser"
```

---

## 2.7 Dashboard Panel

### 2.7.1 Open Dashboard

Run in VS Code:

```
ccw-dashboard.open
```

Or use the Command Palette (Ctrl+Shift+P) and search for "CCW Dashboard".

### 2.7.2 Panel Features

| Feature | Description |
| --- | --- |
| **Tech Stack** | Display frameworks and libraries used |
| **Specs Docs** | Quick view of project specifications |
| **Memory** | Browse and search project memory |
| **Code Search** | Integrated CodexLens semantic search |

---

## 2.8 FAQ

### 2.8.1 API Key Configuration

**Q: Where to configure API Keys?**

A: They can be configured in two locations:
- Global configuration: `~/.claude/settings.json`
- Project configuration: `.claude/settings.json`

Project configuration takes priority over global configuration.
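The "project takes priority over global" lookup amounts to merging the two settings objects with project-level keys winning. A shallow merge sketch conveys the idea; whether CCW merges shallowly or deeply is an assumption here, and the key names are illustrative:

```python
# Shallow merge of global and project settings; project keys win on conflict.
def merge_settings(global_cfg: dict, project_cfg: dict) -> dict:
    merged = dict(global_cfg)
    merged.update(project_cfg)   # project-level values override global ones
    return merged

global_cfg = {"openai": {"apiKey": "sk-global"}, "anthropic": {"apiKey": "sk-ant"}}
project_cfg = {"openai": {"apiKey": "sk-project"}}

print(merge_settings(global_cfg, project_cfg)["openai"]["apiKey"])  # sk-project
```

Note that a shallow merge replaces the whole `openai` object; a deep merge would instead override individual nested keys.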
### 2.8.2 Model Selection

**Q: How to choose the right model?**

A: Select based on task type:
- Code analysis, architecture design → Gemini
- General development → Qwen
- Code review → Codex (GPT)
- Long text understanding → Claude

### 2.8.3 Workflow Selection

**Q: When to use which workflow?**

A: Select based on task objective:
- New feature development → `/workflow:plan`
- Problem diagnosis → `/debug-with-file`
- Code review → `/review-code`
- Refactoring planning → `/refactor-cycle`
- UI development → `/workflow:ui-design`

---

## 2.9 Quick Reference

### Installation Steps

```bash
# 1. Clone project
git clone https://github.com/your-repo/claude-dms3.git
cd claude-dms3

# 2. Install dependencies
npm install

# 3. Configure API Keys
# Edit ~/.claude/settings.json

# 4. Start workflow session
/workflow:session:start

# 5. Initialize project
/workflow:init
```

### Common Commands

| Command | Function |
| --- | --- |
| `/workflow:session:start` | Start session |
| `/workflow:plan` | Planning workflow |
| `/brainstorm` | Brainstorming |
| `/review-code` | Code review |
| `ccw memory list` | View Memory |
| `ccw cli -p "..."` | CLI invocation |

---

## Next Steps

- [Core Concepts](ch03-core-concepts.md) — Deep dive into Commands, Skills, Prompts
- [Workflow Basics](ch04-workflow-basics.md) — Learn various workflow commands
- [Advanced Tips](ch05-advanced-tips.md) — CLI toolchain, multi-model collaboration, memory management optimization
docs/guide/ch03-core-concepts.md (new file, 264 lines)
|
||||
# Core Concepts

## One-Line Positioning

**Core Concepts are the foundation for understanding Claude_dms3** — the three-layer abstraction of Commands, Skills, and Prompts, plus workflow session management and team collaboration patterns.

---

## 3.1 Three-Layer Abstraction

Claude_dms3's command system is divided into three layers of abstraction:

### 3.1.1 Commands - Built-in Commands

**Commands are the atomic operations of Claude_dms3** — predefined, reusable commands that complete specific tasks.

| Category | Count | Description |
| --- | --- | --- |
| **Core Orchestration** | 2 | ccw, ccw-coordinator |
| **CLI Tools** | 2 | cli-init, codex-review |
| **Issue Workflow** | 8 | discover, plan, execute, queue, etc. |
| **Memory** | 2 | prepare, style-skill-memory |
| **Workflow Session** | 6 | start, resume, list, complete, etc. |
| **Workflow Analysis** | 10+ | analyze, brainstorm, debug, refactor, etc. |
| **Workflow UI Design** | 9 | generate, import-from-code, style-extract, etc. |

::: tip Tip
Commands are defined in the `.claude/commands/` directory. Each command is a Markdown file.
:::

### 3.1.2 Skills - Composite Skills

**Skills are combined encapsulations of Commands** — reusable skills for specific scenarios, containing multiple steps and best practices.

| Skill | Function | Trigger |
| --- | --- | --- |
| **brainstorm** | Multi-perspective brainstorming | `/brainstorm` |
| **ccw-help** | CCW command help | `/ccw-help` |
| **command-generator** | Generate Claude commands | `/command-generator` |
| **issue-manage** | Issue management | `/issue-manage` |
| **memory-capture** | Memory compression and capture | `/memory-capture` |
| **memory-manage** | Memory updates | `/memory-manage` |
| **review-code** | Multi-dimensional code review | `/review-code` |
| **review-cycle** | Code review and fix cycle | `/review-cycle` |
| **skill-generator** | Generate Claude skills | `/skill-generator` |
| **skill-tuning** | Skill diagnosis and tuning | `/skill-tuning` |

::: tip Tip
Skills are defined in the `.claude/skills/` directory, each containing a SKILL.md specification file and reference documentation.
:::

### 3.1.3 Prompts - Codex Prompts

**Prompts are instruction templates for the Codex model** — prompt templates optimized for the Codex (GPT) model.

| Prompt | Function |
| --- | --- |
| **prep-cycle** | Prep cycle prompt |
| **prep-plan** | Prep planning prompt |

::: tip Tip
Codex Prompts are defined in the `.codex/prompts/` directory and are optimized specifically for the Codex model.
:::

---
## 3.2 Three-Layer Relationship

```
┌─────────────────────────────────────────────────────┐
│                    User Request                     │
└────────────────────┬────────────────────────────────┘
                     │
                     ▼
┌─────────────────────────────────────────────────────┐
│                 ccw (Orchestrator)                  │
│  Intent Analysis → Workflow Selection → Execution   │
└────────────────────┬────────────────────────────────┘
                     │
         ┌───────────┼───────────┐
         ▼           ▼           ▼
   ┌─────────┐ ┌───────────┐ ┌──────────┐
   │ Command │ │   Skill   │ │  Prompt  │
   │ (Atom)  │ │(Composite)│ │(Template)│
   └─────────┘ └───────────┘ └──────────┘
         │           │           │
         └───────────┼───────────┘
                     ▼
            ┌────────────────┐
            │ AI Model Call  │
            └────────────────┘
```

### 3.2.1 Call Path

1. **User initiates a request** → Enter a command or describe requirements in VS Code
2. **ccw orchestration** → Analyze intent and select the appropriate workflow
3. **Execute Command** → Run atomic command operations
4. **Call Skill** → For complex logic, call composite skills
5. **Use Prompt** → For specific models, use optimized prompts
6. **AI model execution** → Call the configured AI model
7. **Return result** → Format output for the user

---
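The call path can be pictured as a small dispatcher that picks an execution layer for each request. The following is an illustrative sketch only, not the actual Claude_dms3 implementation; the routing heuristics are assumptions made for the example.

```python
# Illustrative sketch of the three-layer dispatch described above.
# NOT the actual ccw implementation; the routing heuristics are hypothetical.

def route_request(request: str) -> str:
    """Pick an execution layer: commands for atomic operations,
    skills for composite triggers, prompts for free-form text."""
    if request.startswith("/workflow:") or request.startswith("/issue"):
        return "command"           # atomic built-in command
    if request.startswith("/"):
        return "skill"             # composite skill trigger like /brainstorm
    return "prompt"                # free-form text goes through a prompt template

print(route_request("/workflow:plan"))    # command
print(route_request("/brainstorm"))       # skill
print(route_request("add rate limiting")) # prompt
```

In the real system the orchestrator performs intent analysis rather than prefix matching; the sketch only shows where each layer sits in the call path.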
## 3.3 Workflow Session Management

### 3.3.1 Session Lifecycle

```
┌─────────┐     ┌─────────┐     ┌─────────┐     ┌─────────┐
│  Start  │────▶│ Resume  │────▶│ Execute │────▶│Complete │
└─────────┘     └─────────┘     └─────────┘     └─────────┘
     │                                               │
     ▼                                               ▼
┌─────────┐                                     ┌─────────┐
│  List   │                                     │Solidify │
└─────────┘                                     └─────────┘
```

### 3.3.2 Session Commands

| Command | Function | Example |
| --- | --- | --- |
| **start** | Start a new session | `/workflow:session:start` |
| **resume** | Resume an existing session | `/workflow:session:resume <session-id>` |
| **list** | List all sessions | `/workflow:session:list` |
| **sync** | Sync session state | `/workflow:session:sync` |
| **complete** | Complete the current session | `/workflow:session:complete` |
| **solidify** | Solidify session results | `/workflow:session:solidify` |

### 3.3.3 Session Directory Structure

```
.workflow/
├── .team/
│   └── TC-<project>-<date>/        # Session directory
│       ├── spec/                   # Session specifications
│       │   ├── discovery-context.json
│       │   └── requirements.md
│       ├── artifacts/              # Session artifacts
│       ├── wisdom/                 # Session wisdom
│       │   ├── learnings.md
│       │   ├── decisions.md
│       │   ├── conventions.md
│       │   └── issues.md
│       └── .team-msg/              # Message bus
```

---
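The lifecycle diagram above can be modeled as a tiny state machine. This is a minimal sketch assuming only the transitions shown in the diagram; the real session manager lives behind the `/workflow:session:*` commands.

```python
# Minimal state machine for the session lifecycle shown above.
# Illustrative only; transition set is read directly off the diagram.
TRANSITIONS = {
    "start": {"resume"},
    "resume": {"execute"},
    "execute": {"complete"},
    "complete": {"solidify"},
}

def advance(state: str, target: str) -> str:
    """Move to `target` if the diagram permits it, else raise."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

state = "start"
for step in ("resume", "execute", "complete", "solidify"):
    state = advance(state, step)
print(state)  # solidify
```

`list` is a read-only query at any point, so it is deliberately left out of the transition table.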
## 3.4 Team Collaboration Patterns

### 3.4.1 Role System

Claude_dms3 supports 8 team workflows, each defining different roles:

| Workflow | Roles | Description |
| --- | --- | --- |
| **PlanEx** | planner, executor | Planning-execution separation |
| **IterDev** | developer, reviewer | Iterative development |
| **Lifecycle** | analyzer, developer, tester, reviewer | Lifecycle coverage |
| **Issue** | discoverer, planner, executor | Issue-driven |
| **Testing** | tester, developer | Test-driven |
| **QA** | qa, developer | Quality assurance |
| **Brainstorm** | facilitator, perspectives | Multi-perspective analysis |
| **UIDesign** | designer, developer | UI design generation |

### 3.4.2 Message Bus

Team members communicate via the message bus:

```
┌────────────┐                     ┌────────────┐
│  Planner   │                     │  Executor  │
└─────┬──────┘                     └──────┬─────┘
      │                                   │
      │          [plan_ready]             │
      ├──────────────────────────────────▶│
      │                                   │
      │         [task_complete]           │
      │◀──────────────────────────────────┤
      │                                   │
      │         [plan_approved]           │
      ├──────────────────────────────────▶│
      │                                   │
```

### 3.4.3 Workflow Selection Guide

| Task Objective | Recommended Workflow | Command |
| --- | --- | --- |
| New feature development | PlanEx | `/workflow:plan` |
| Bug fix | Lifecycle | `/debug-with-file` |
| Code refactoring | IterDev | `/refactor-cycle` |
| Technical decision | Brainstorm | `/brainstorm-with-file` |
| UI development | UIDesign | `/workflow:ui-design` |
| Integration testing | Testing | `/integration-test-cycle` |
| Code review | QA | `/review-cycle` |
| Issue management | Issue | `/issue` series |

---
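The planner/executor handshake above can be sketched as a minimal in-memory message bus. The real bus persists messages under `.team-msg/` in the session directory; this version is purely illustrative and the message field names are assumptions.

```python
# In-memory sketch of the planner/executor handshake shown above.
# The real message bus persists under .team-msg/; field names are hypothetical.
from collections import deque

bus = deque()

def send(sender, recipient, msg_type, payload):
    bus.append({"from": sender, "to": recipient, "type": msg_type, "payload": payload})

def receive(recipient):
    """Pop the oldest message addressed to `recipient`, or None."""
    for i, msg in enumerate(bus):
        if msg["to"] == recipient:
            del bus[i]
            return msg
    return None

send("planner", "executor", "plan_ready", "ISSUE-1 solution ready")
msg = receive("executor")
send("executor", "planner", "task_complete", msg["payload"])
print(receive("planner")["type"])  # task_complete
```

A file-backed bus would replace the deque with one JSON file per message, which is what a `.team-msg/` directory suggests.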
## 3.5 Core Concepts Overview

| Concept | Description | Location/Command |
| --- | --- | --- |
| **Command** | Atomic operation commands | `.claude/commands/` |
| **Skill** | Composite skill encapsulation | `.claude/skills/` |
| **Prompt** | Codex prompt templates | `.codex/prompts/` |
| **Workflow** | Team collaboration process | `/workflow:*` |
| **Session** | Session context management | `/workflow:session:*` |
| **Memory** | Cross-session knowledge persistence | `ccw memory` |
| **Spec** | Project specification constraints | `.workflow/specs/` |
| **CodexLens** | Semantic code indexing | `.codex-lens/` |
| **CCW** | CLI invocation framework | `ccw` directory |

---
## 3.6 Data Flow

```
User Request
     │
     ▼
┌──────────────────┐
│ CCW Orchestrator │ ──▶ Intent Analysis
└──────────────────┘
     │
     ├─▶ Workflow Selection
     │        │
     │        ├─▶ PlanEx
     │        ├─▶ IterDev
     │        ├─▶ Lifecycle
     │        └─▶ ...
     │
     ├─▶ Command Execution
     │        │
     │        ├─▶ Built-in commands
     │        └─▶ Skill calls
     │
     ├─▶ AI Model Invocation
     │        │
     │        ├─▶ Gemini
     │        ├─▶ Qwen
     │        ├─▶ Codex
     │        └─▶ Claude
     │
     └─▶ Result Return
              │
              ├─▶ File modification
              ├─▶ Memory update
              └─▶ Dashboard update
```

---
## Next Steps

- [Workflow Basics](ch04-workflow-basics.md) — Learn the workflow commands
- [Advanced Tips](ch05-advanced-tips.md) — CLI toolchain, multi-model collaboration
- [Best Practices](ch06-best-practices.md) — Team collaboration standards, code review process
---

> File: `docs/guide/ch04-workflow-basics.md` (new file, 328 lines)
# Workflow Basics

## One-Line Positioning

**Workflows are the core of team collaboration** — 8 workflows covering the full development lifecycle, from planning to execution, from analysis to testing.

---

## 4.1 Workflow Overview

| Workflow | Core Command | Use Case | Roles |
| --- | --- | --- | --- |
| **PlanEx** | `/workflow:plan` | New feature development, requirement implementation | planner, executor |
| **IterDev** | `/refactor-cycle` | Code refactoring, technical debt handling | developer, reviewer |
| **Lifecycle** | `/unified-execute-with-file` | Complete development cycle | analyzer, developer, tester, reviewer |
| **Issue** | `/issue:*` | Issue-driven development | discoverer, planner, executor |
| **Testing** | `/integration-test-cycle` | Integration testing, test generation | tester, developer |
| **QA** | `/review-cycle` | Code review and quality assurance | qa, developer |
| **Brainstorm** | `/brainstorm-with-file` | Multi-perspective analysis, technical decisions | facilitator, perspectives |
| **UIDesign** | `/workflow:ui-design` | UI design and code generation | designer, developer |

---
## 4.2 PlanEx - Planning-Execution Workflow

### 4.2.1 One-Line Positioning

**PlanEx is a planning-execution separation workflow** — Plan first, then execute, ensuring tasks are clear before work starts.

### 4.2.2 Launch Method

```
/workflow:plan
```

Or describe the requirement directly:

```
Implement user login functionality, supporting email and phone number login
```

### 4.2.3 Workflow Process

```
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│   Planner    │────▶│   Executor   │────▶│   Reviewer   │
└──────────────┘     └──────────────┘     └──────────────┘
       │                    │                    │
       ▼                    ▼                    ▼
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│ Requirements │     │     Task     │     │     Code     │
│   Analysis   │     │  Execution   │     │    Review    │
│     Task     │     │   Code Gen   │     │   Quality    │
│  Breakdown   │     │  Test Write  │     │   Feedback   │
│   Plan Gen   │     │              │     │              │
└──────────────┘     └──────────────┘     └──────────────┘
```

### 4.2.4 Output Artifacts

| Artifact | Location | Description |
| --- | --- | --- |
| **Requirements Analysis** | `artifacts/requirements.md` | Detailed requirement analysis |
| **Task Plan** | `artifacts/plan.md` | Structured task list |
| **Execution Artifacts** | `artifacts/implementation/` | Code and tests |
| **Wisdom Accumulation** | `wisdom/learnings.md` | Lessons learned |

---
## 4.3 IterDev - Iterative Development Workflow

### 4.3.1 One-Line Positioning

**IterDev is an iterative refactoring workflow** — Discover technical debt, plan refactoring, and improve iteratively.

### 4.3.2 Launch Method

```
/refactor-cycle
```

### 4.3.3 Workflow Process

```
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│   Discover   │────▶│     Plan     │────▶│   Refactor   │
└──────────────┘     └──────────────┘     └──────────────┘
       │                    │                    │
       ▼                    ▼                    ▼
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│     Code     │     │   Refactor   │     │     Code     │
│   Analysis   │     │   Strategy   │     │ Modification │
│   Problem    │     │   Priority   │     │     Test     │
│     ID       │     │     Task     │     │ Verification │
│  Tech Debt   │     │  Breakdown   │     │  Doc Update  │
└──────────────┘     └──────────────┘     └──────────────┘
```

### 4.3.4 Use Cases

| Scenario | Example |
| --- | --- |
| **Code Smells** | Long functions, duplicate code |
| **Architecture Improvement** | Decoupling, modularization |
| **Performance Optimization** | Algorithm optimization, caching strategy |
| **Security Hardening** | Fix security vulnerabilities |
| **Standard Unification** | Code style consistency |

---
## 4.4 Lifecycle - Lifecycle Workflow

### 4.4.1 One-Line Positioning

**Lifecycle is a full-lifecycle coverage workflow** — From analysis to testing to review, a complete closed loop.

### 4.4.2 Launch Method

```
/unified-execute-with-file <file>
```

### 4.4.3 Role Responsibilities

| Role | Responsibility | Output |
| --- | --- | --- |
| **Analyzer** | Analyze requirements, explore code | Analysis report |
| **Developer** | Implement features, write tests | Code + tests |
| **Tester** | Run tests, verify functionality | Test report |
| **Reviewer** | Code review, quality check | Review report |

### 4.4.4 Workflow Process

```
┌─────────┐   ┌─────────┐   ┌─────────┐   ┌─────────┐
│Analyzer │──▶│Developer│──▶│ Tester  │──▶│Reviewer │
└─────────┘   └─────────┘   └─────────┘   └─────────┘
     │             │             │             │
     ▼             ▼             ▼             ▼
Requirement      Code           Test        Quality
 Analysis    Implementation  Verification     Gate
   Code          Unit         Regression     Final
Exploration      Tests          Tests     Confirmation
```

---
## 4.5 Issue - Issue Management Workflow

### 4.5.1 One-Line Positioning

**Issue is an issue-driven development workflow** — From issue discovery to planning to execution, with complete tracking.

### 4.5.2 Issue Commands

| Command | Function | Example |
| --- | --- | --- |
| **discover** | Discover an Issue | `/issue discover https://github.com/xxx/issue/1` |
| **discover-by-prompt** | Create from a prompt | `/issue discover-by-prompt "Login failed"` |
| **from-brainstorm** | Create from a brainstorm | `/issue from-brainstorm` |
| **plan** | Batch-plan Issues | `/issue plan` |
| **queue** | Form an execution queue | `/issue queue` |
| **execute** | Execute the Issue queue | `/issue execute` |

### 4.5.3 Workflow Process

```
┌─────────┐   ┌─────────┐   ┌─────────┐   ┌─────────┐
│Discover │──▶│  Plan   │──▶│  Queue  │──▶│ Execute │
└─────────┘   └─────────┘   └─────────┘   └─────────┘
     │             │             │             │
     ▼             ▼             ▼             ▼
 Identify      Analyze       Priority     Implement
 Problems    Requirements      Sort        Solution
  Define         Plan      Dependencies     Verify
  Scope                                    Results
```

---
## 4.6 Testing - Testing Workflow

### 4.6.1 One-Line Positioning

**Testing is a self-iterating test workflow** — Auto-generate tests and iteratively improve test coverage.

### 4.6.2 Launch Method

```
/integration-test-cycle
```

### 4.6.3 Workflow Process

```
┌─────────┐   ┌─────────┐   ┌─────────┐
│Generate │──▶│ Execute │──▶│ Verify  │
└─────────┘   └─────────┘   └─────────┘
     │             │             │
     ▼             ▼             ▼
Test Cases    Run Tests     Coverage
Mock Data      Failure      Analysis
              Analysis      Gap Fill
```

---
## 4.7 QA - Quality Assurance Workflow

### 4.7.1 One-Line Positioning

**QA is a code review workflow** — 6-dimensional code review that automatically discovers issues.

### 4.7.2 Launch Method

```
/review-cycle
```

### 4.7.3 Review Dimensions

| Dimension | Check Items |
| --- | --- |
| **Correctness** | Correct logic, boundary handling |
| **Performance** | Algorithm efficiency, resource usage |
| **Security** | Injection vulnerabilities, permission checks |
| **Maintainability** | Code clarity, modularity |
| **Test Coverage** | Unit tests, boundary tests |
| **Standard Compliance** | Coding standards, project conventions |

---
## 4.8 Brainstorm - Brainstorming Workflow

### 4.8.1 One-Line Positioning

**Brainstorm is a multi-perspective analysis workflow** — Analyze problems from multiple viewpoints for comprehensive insights.

### 4.8.2 Launch Method

```
/brainstorm-with-file <file>
```

### 4.8.3 Analysis Perspectives

| Perspective | Role | Focus |
| --- | --- | --- |
| **Product** | Product Manager | Market fit, user value |
| **Technical** | Tech Lead | Feasibility, technical debt |
| **Quality** | QA Lead | Completeness, testability |
| **Risk** | Risk Analyst | Risk identification, dependencies |

### 4.8.4 Output Format

```markdown
## Consensus Points
- [Consensus point 1]
- [Consensus point 2]

## Divergences
- [Divergence 1]
  - Perspective A: ...
  - Perspective B: ...
  - Recommendation: ...

## Action Items
- [ ] [Action item 1]
- [ ] [Action item 2]
```

---
## 4.9 UIDesign - UI Design Workflow

### 4.9.1 One-Line Positioning

**UIDesign is a UI design generation workflow** — From design to code, automatically extracting styles and layouts.

### 4.9.2 UI Design Commands

| Command | Function |
| --- | --- |
| **generate** | Generate UI components |
| **import-from-code** | Import styles from code |
| **style-extract** | Extract style specifications |
| **layout-extract** | Extract layout structure |
| **imitate-auto** | Imitate a reference page |
| **codify-style** | Convert styles to code |
| **design-sync** | Sync design changes |

---
## 4.10 Quick Reference

### Workflow Selection Guide

| Requirement | Recommended Workflow | Command |
| --- | --- | --- |
| New feature development | PlanEx | `/workflow:plan` |
| Code refactoring | IterDev | `/refactor-cycle` |
| Complete development | Lifecycle | `/unified-execute-with-file` |
| Issue management | Issue | `/issue:*` |
| Test generation | Testing | `/integration-test-cycle` |
| Code review | QA | `/review-cycle` |
| Multi-perspective analysis | Brainstorm | `/brainstorm-with-file` |
| UI development | UIDesign | `/workflow:ui-design` |

### Session Management Commands

| Command | Function |
| --- | --- |
| `/workflow:session:start` | Start a new session |
| `/workflow:session:resume` | Resume a session |
| `/workflow:session:list` | List sessions |
| `/workflow:session:complete` | Complete a session |
| `/workflow:session:solidify` | Solidify results |

---
## Next Steps

- [Advanced Tips](ch05-advanced-tips.md) — CLI toolchain, multi-model collaboration
- [Best Practices](ch06-best-practices.md) — Team collaboration standards, code review process
---

> File: `docs/guide/ch05-advanced-tips.md` (new file, 331 lines)
# Advanced Tips

## One-Line Positioning

**Advanced Tips are the key to improving efficiency** — Deep use of the CLI toolchain, multi-model collaboration optimization, and memory management best practices.

---
## 5.1 CLI Toolchain Usage

### 5.1.1 CLI Configuration

CLI tool configuration file: `~/.claude/cli-tools.json`

```json
{
  "version": "3.3.0",
  "tools": {
    "gemini": {
      "enabled": true,
      "primaryModel": "gemini-2.5-flash",
      "secondaryModel": "gemini-2.5-flash",
      "tags": ["analysis", "Debug"],
      "type": "builtin"
    },
    "qwen": {
      "enabled": true,
      "primaryModel": "coder-model",
      "secondaryModel": "coder-model",
      "tags": [],
      "type": "builtin"
    },
    "codex": {
      "enabled": true,
      "primaryModel": "gpt-5.2",
      "secondaryModel": "gpt-5.2",
      "tags": [],
      "type": "builtin"
    }
  }
}
```

### 5.1.2 Tag Routing

Models are selected automatically based on the task type:

| Tag | Applicable Model | Task Type |
| --- | --- | --- |
| **analysis** | Gemini | Code analysis, architecture design |
| **Debug** | Gemini | Root cause analysis, problem diagnosis |
| **implementation** | Qwen | Feature development, code generation |
| **review** | Codex | Code review, Git operations |

### 5.1.3 CLI Command Templates

#### Analysis Task

```bash
ccw cli -p "PURPOSE: Identify security vulnerabilities
TASK: • Scan for SQL injection • Check XSS • Verify CSRF
MODE: analysis
CONTEXT: @src/auth/**/*
EXPECTED: Security report with severity grading and fix recommendations
CONSTRAINTS: Focus on auth module" --tool gemini --mode analysis --rule analysis-assess-security-risks
```

#### Implementation Task

```bash
ccw cli -p "PURPOSE: Implement rate limiting
TASK: • Create middleware • Configure routes • Redis backend
MODE: write
CONTEXT: @src/middleware/**/* @src/config/**/*
EXPECTED: Production code + unit tests + integration tests
CONSTRAINTS: Follow existing middleware patterns" --tool qwen --mode write --rule development-implement-feature
```

### 5.1.4 Rule Templates

| Rule | Purpose |
| --- | --- |
| **analysis-diagnose-bug-root-cause** | Bug root cause analysis |
| **analysis-analyze-code-patterns** | Code pattern analysis |
| **analysis-review-architecture** | Architecture review |
| **analysis-assess-security-risks** | Security assessment |
| **development-implement-feature** | Feature implementation |
| **development-refactor-codebase** | Code refactoring |
| **development-generate-tests** | Test generation |

---
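The tag-routing table suggests a simple rule: pick the first enabled tool whose tag list contains the task's tag. The following sketch works under that assumption; the tag assignments for `qwen` and `codex` are filled in from the table (the sample config above leaves them empty), and the actual ccw routing logic may differ.

```python
# Tag-based tool selection sketched from the routing table above.
# Assumes "first enabled tool carrying the tag wins"; qwen/codex tags are
# filled in from the table for illustration, not from the sample config.
CLI_TOOLS = {  # shape follows ~/.claude/cli-tools.json
    "gemini": {"enabled": True, "tags": ["analysis", "Debug"]},
    "qwen":   {"enabled": True, "tags": ["implementation"]},
    "codex":  {"enabled": True, "tags": ["review"]},
}

def route_by_tag(tag: str, default: str = "gemini") -> str:
    for name, cfg in CLI_TOOLS.items():
        if cfg["enabled"] and tag in cfg["tags"]:
            return name
    return default  # fall back when no tool claims the tag

print(route_by_tag("Debug"))           # gemini
print(route_by_tag("implementation"))  # qwen
print(route_by_tag("review"))          # codex
```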
## 5.2 Multi-Model Collaboration

### 5.2.1 Model Selection Guide

| Task | Recommended Model | Reason |
| --- | --- | --- |
| **Code Analysis** | Gemini | Strong at deep code understanding and pattern recognition |
| **Bug Diagnosis** | Gemini | Powerful root cause analysis capability |
| **Feature Development** | Qwen | High code generation efficiency |
| **Code Review** | Codex (GPT) | Good Git integration, standard review format |
| **Long Text** | Claude | Large context window |

### 5.2.2 Collaboration Patterns

#### Serial Collaboration

```bash
# Step 1: Gemini analysis
ccw cli -p "Analyze code architecture" --tool gemini --mode analysis

# Step 2: Qwen implementation
ccw cli -p "Implement feature based on analysis" --tool qwen --mode write

# Step 3: Codex review
ccw cli -p "Review implementation code" --tool codex --mode review
```

#### Parallel Collaboration

Use different `--tool` values to analyze the same problem simultaneously:

```bash
# Terminal 1
ccw cli -p "Analyze from performance perspective" --tool gemini --mode analysis &

# Terminal 2
ccw cli -p "Analyze from security perspective" --tool codex --mode analysis &
```

### 5.2.3 Session Resume

Cross-model session resume:

```bash
# First call
ccw cli -p "Analyze authentication module" --tool gemini --mode analysis

# Resume the session to continue
ccw cli -p "Based on previous analysis, design improvement plan" --tool qwen --mode write --resume
```

---
## 5.3 Memory Management

### 5.3.1 Memory Categories

| Category | Purpose | Example Content |
| --- | --- | --- |
| **learnings** | Learning insights | Experience with new technologies |
| **decisions** | Architecture decisions | Technology selection rationale |
| **conventions** | Coding standards | Naming conventions, patterns |
| **issues** | Known issues | Bugs, limitations, TODOs |

### 5.3.2 Memory Commands

| Command | Function | Example |
| --- | --- | --- |
| **list** | List all memories | `ccw memory list` |
| **search** | Search memories | `ccw memory search "authentication"` |
| **export** | Export a memory | `ccw memory export <id>` |
| **import** | Import a memory | `ccw memory import "..."` |
| **embed** | Generate embeddings | `ccw memory embed <id>` |

### 5.3.3 Memory Best Practices

::: tip Tip
- **Regular cleanup**: Organize Memory weekly and delete outdated content
- **Structure**: Use a standard format for easy search and reuse
- **Context**: Record the background of decisions, not just the conclusions
- **Linking**: Cross-reference related content
:::

### 5.3.4 Memory Template

```markdown
## Title

### Background
- **Problem**: ...
- **Impact**: ...

### Decision
- **Solution**: ...
- **Rationale**: ...

### Result
- **Effect**: ...
- **Lessons Learned**: ...

### Related
- [Related Memory 1](memory-id-1)
- [Related Documentation](link)
```

---
## 5.4 CodexLens Advanced Usage

### 5.4.1 Hybrid Search

Combine vector search and keyword search:

```bash
# Pure vector search
ccw search --mode vector "user authentication"

# Hybrid search (default)
ccw search --mode hybrid "user authentication"

# Pure keyword search
ccw search --mode keyword "authenticate"
```

### 5.4.2 Call Chain Tracing

Trace the complete call chain of a function:

```bash
# Trace up (who calls me)
ccw search --trace-up "authenticateUser"

# Trace down (whom I call)
ccw search --trace-down "authenticateUser"

# Full call chain
ccw search --trace-full "authenticateUser"
```

### 5.4.3 Semantic Search Techniques

| Technique | Example | Effect |
| --- | --- | --- |
| **Functional description** | "handle user login" | Find login-related code |
| **Problem description** | "memory leak locations" | Find potential issues |
| **Pattern description** | "singleton implementation" | Find design patterns |
| **Technical description** | "using React Hooks" | Find related usage |

---
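Conceptually, `--trace-up` and `--trace-down` walk an indexed call graph in opposite directions. A minimal sketch over a toy graph; CodexLens's actual index format is not shown here, and the function names are hypothetical.

```python
# Toy call-graph walk illustrating trace-up / trace-down.
# CodexLens's real index format is not used; the graph is hypothetical.
CALLS = {  # caller -> callees
    "loginHandler": ["authenticateUser"],
    "authenticateUser": ["hashPassword", "findUser"],
}

def trace_down(fn):
    """Functions that `fn` (transitively) calls, breadth-first."""
    out, queue = [], list(CALLS.get(fn, []))
    while queue:
        callee = queue.pop(0)
        out.append(callee)
        queue.extend(CALLS.get(callee, []))
    return out

def trace_up(fn):
    """Direct callers of `fn`."""
    return [caller for caller, callees in CALLS.items() if fn in callees]

print(trace_up("authenticateUser"))    # ['loginHandler']
print(trace_down("authenticateUser"))  # ['hashPassword', 'findUser']
```

A full call chain (`--trace-full`) would simply combine both walks around the target function.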
## 5.5 Hook Auto-Injection

### 5.5.1 Hook Types

| Hook Type | Trigger Time | Purpose |
| --- | --- | --- |
| **pre-command** | Before command execution | Inject specifications, load context |
| **post-command** | After command execution | Save Memory, update state |
| **pre-commit** | Before a Git commit | Code review, standard checks |
| **file-change** | On file change | Auto-format, update index |

### 5.5.2 Hook Configuration

Configure in `.claude/hooks.json`:

```json
{
  "pre-command": [
    {
      "name": "inject-specs",
      "description": "Inject project specifications",
      "command": "cat .workflow/specs/project-constraints.md"
    },
    {
      "name": "load-memory",
      "description": "Load related Memory",
      "command": "ccw memory search \"{query}\""
    }
  ],
  "post-command": [
    {
      "name": "save-memory",
      "description": "Save important decisions",
      "command": "ccw memory import \"{content}\""
    }
  ]
}
```

---
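A pre-command hook presumably has its `{query}` placeholder filled in before the command is run. The following sketch shows that substitution step with a harmless `echo` standing in for the real `ccw`/`cat` commands; how the hook runner actually works is an assumption.

```python
# Sketch of pre-command hook placeholder substitution and execution.
# Uses echo instead of the real ccw/cat commands; illustrative only.
import subprocess

hooks = [
    {"name": "load-memory", "command": "echo searching for: {query}"},
]

def run_pre_command_hooks(query):
    """Fill the {query} placeholder in each hook, run it, collect stdout."""
    outputs = []
    for hook in hooks:
        cmd = hook["command"].format(query=query)
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        outputs.append(result.stdout.strip())
    return outputs

print(run_pre_command_hooks("authentication"))  # ['searching for: authentication']
```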
## 5.6 Performance Optimization

### 5.6.1 Indexing Optimization

| Optimization | Description |
| --- | --- |
| **Incremental indexing** | Only index changed files |
| **Parallel indexing** | Multi-process parallel processing |
| **Caching strategy** | Vector embedding cache |

### 5.6.2 Search Optimization

| Optimization | Description |
| --- | --- |
| **Result caching** | The same query returns cached results |
| **Paginated loading** | Large result sets are paginated |
| **Smart deduplication** | Automatically deduplicate similar results |

---
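Incremental indexing typically means skipping files whose content is unchanged since the last run, for example by comparing content hashes. A generic sketch of that idea; it is not CodexLens's actual mechanism.

```python
# Generic incremental-indexing sketch: re-index only files whose content changed.
# Not CodexLens's actual mechanism; shown for the idea only.
import hashlib

def needs_reindex(path, content, seen):
    """Return True and record the new hash if `content` differs from last run."""
    digest = hashlib.sha256(content).hexdigest()
    if seen.get(path) == digest:
        return False          # unchanged since last index run
    seen[path] = digest
    return True

seen = {}
print(needs_reindex("a.py", b"v1", seen))  # True  (new file)
print(needs_reindex("a.py", b"v1", seen))  # False (unchanged)
print(needs_reindex("a.py", b"v2", seen))  # True  (content changed)
```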
## 5.7 Quick Reference

### CLI Command Cheatsheet

| Command | Function |
| --- | --- |
| `ccw cli -p "..." --tool gemini --mode analysis` | Analysis task |
| `ccw cli -p "..." --tool qwen --mode write` | Implementation task |
| `ccw cli -p "..." --tool codex --mode review` | Review task |
| `ccw memory list` | List memories |
| `ccw memory search "..."` | Search memories |
| `ccw search "..."` | Semantic search |
| `ccw search --trace "..."` | Call chain tracing |

### Rule Template Cheatsheet

| Rule | Purpose |
| --- | --- |
| `analysis-diagnose-bug-root-cause` | Bug analysis |
| `analysis-assess-security-risks` | Security assessment |
| `development-implement-feature` | Feature implementation |
| `development-refactor-codebase` | Code refactoring |
| `development-generate-tests` | Test generation |

---
## Next Steps

- [Best Practices](ch06-best-practices.md) — Team collaboration standards, code review process, documentation maintenance strategy
---

> File: `docs/guide/ch06-best-practices.md` (new file, 330 lines)
# Best Practices

## One-Line Positioning

**Best Practices ensure efficient team collaboration** — Practical experience with standard formulation, code review, documentation maintenance, and team collaboration.

---

## 6.1 Team Collaboration Standards

### 6.1.1 Role Responsibility Definitions

| Role | Responsibility | Skill Requirements |
| --- | --- | --- |
| **Planner** | Requirement analysis, task breakdown | Architectural thinking, communication skills |
| **Developer** | Code implementation, unit testing | Coding ability, testing awareness |
| **Reviewer** | Code review, quality gatekeeping | Code sensitivity, standard understanding |
| **QA** | Quality assurance, test verification | Test design, risk identification |
| **Facilitator** | Coordination, progress tracking | Project management, conflict resolution |
### 6.1.2 Workflow Selection

| Scenario | Recommended Workflow | Rationale |
| --- | --- | --- |
| **New Feature Development** | PlanEx | Planning-execution separation, reduces risk |
| **Bug Fix** | Lifecycle | Complete closed loop, ensures fix quality |
| **Code Refactoring** | IterDev | Iterative improvement, continuous optimization |
| **Technical Decision** | Brainstorm | Multi-perspective analysis, comprehensive consideration |
| **Issue Management** | Issue | Traceable, manageable |
| **UI Development** | UIDesign | Seamless design-to-code transition |
### 6.1.3 Communication Protocols

#### Message Format

```
[<Role>] <Action> <Object>: <Result>

Examples:
[Planner] Task breakdown complete: Generated 5 subtasks
[Developer] Code implementation complete: user-auth.ts, 324 lines
[Reviewer] Code review complete: Found 2 issues, suggested 1 optimization
```
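Because the format is line-oriented, it can be parsed mechanically, e.g. for progress dashboards. A small sketch (the helper names are illustrative, not CCW commands):

```shell
# Parse a "[<Role>] <Action> <Object>: <Result>" status line.
# parse_role / parse_result are illustrative helpers, not CCW commands.
parse_role() {
  # Extract the text between the leading brackets.
  printf '%s\n' "$1" | sed -n 's/^\[\([^]]*\)\].*/\1/p'
}

parse_result() {
  # Everything after the first ": " is the result.
  printf '%s\n' "$1" | sed -n 's/^[^:]*: //p'
}

msg='[Planner] Task breakdown complete: Generated 5 subtasks'
parse_role "$msg"      # -> Planner
parse_result "$msg"    # -> Generated 5 subtasks
```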
#### Status Reporting

| Status | Meaning | Next Action |
| --- | --- | --- |
| **pending** | Waiting to start | Wait for dependencies to complete |
| **in_progress** | In progress | Continue execution |
| **completed** | Completed | Can be depended upon |
| **blocked** | Blocked | Manual intervention required |
---

## 6.2 Code Review Process

### 6.2.1 Review Dimensions

| Dimension | Check Items | Severity |
| --- | --- | --- |
| **Correctness** | Logic correct, boundary handling | HIGH |
| **Performance** | Algorithm efficiency, resource usage | MEDIUM |
| **Security** | Injection vulnerabilities, permission checks | HIGH |
| **Maintainability** | Code clarity, modularity | MEDIUM |
| **Test Coverage** | Unit tests, boundary tests | MEDIUM |
| **Standard Compliance** | Coding standards, project conventions | LOW |
### 6.2.2 Review Process

```
┌─────────┐   ┌─────────┐   ┌─────────┐   ┌─────────┐
│ Submit  │──▶│ Review  │──▶│ Feedback│──▶│   Fix   │
│  Code   │   │  Code   │   │ Comments│   │ Issues  │
└─────────┘   └─────────┘   └─────────┘   └─────────┘
     │             │             │             │
     ▼             ▼             ▼             ▼
  Push PR     Auto Review   Manual Review  Fix Verify
  CI Check    6 Dimensions  Code Walkthrough  Re-review
```
### 6.2.3 Review Checklist

#### Code Correctness
- [ ] Logic correct, no bugs
- [ ] Boundary conditions handled
- [ ] Complete error handling
- [ ] Proper resource cleanup

#### Performance
- [ ] Reasonable algorithm complexity
- [ ] No memory leaks
- [ ] No redundant computation
- [ ] Reasonable caching strategy

#### Security
- [ ] No SQL injection
- [ ] No XSS vulnerabilities
- [ ] Complete permission checks
- [ ] Sensitive data protected

#### Maintainability
- [ ] Clear naming
- [ ] Good modularity
- [ ] Sufficient comments
- [ ] Easy to test

#### Test Coverage
- [ ] Complete unit tests
- [ ] Boundary test coverage
- [ ] Exception case testing
- [ ] Integration test verification

#### Standard Compliance
- [ ] Unified coding style
- [ ] Naming standards followed
- [ ] Project conventions followed
- [ ] Complete documentation
### 6.2.4 Feedback Format

```markdown
## Review Results

### Issues
1. **[HIGH]** SQL injection risk
   - Location: `src/auth/login.ts:45`
   - Recommendation: Use parameterized queries

2. **[MEDIUM]** Performance issue
   - Location: `src/utils/cache.ts:78`
   - Recommendation: Use LRU cache

### Suggestions
1. Naming optimization: `data` → `userData`
2. Module separation: Consider extracting Auth logic independently

### Approval Conditions
- [ ] Fix HIGH issues
- [ ] Consider MEDIUM suggestions
```
---

## 6.3 Documentation Maintenance Strategy

### 6.3.1 Documentation Classification

| Type | Location | Update Frequency | Owner |
| --- | --- | --- | --- |
| **Spec Documents** | `.workflow/specs/` | As needed | Architect |
| **Reference Docs** | `docs/ref/` | Every change | Developer |
| **Guide Docs** | `docs/guide/` | Monthly | Technical Writer |
| **API Docs** | `docs/api/` | Auto-generated | Tools |
| **FAQ** | `docs/faq.md` | Weekly | Support Team |

### 6.3.2 Documentation Update Triggers

| Event | Update Content |
| --- | --- |
| **New Feature** | Add feature documentation and API reference |
| **Spec Change** | Update spec documents and migration guide |
| **Bug Fix** | Update FAQ and known issues |
| **Architecture Change** | Update architecture docs and decision records |
| **Code Review** | Supplement missing comments and docs |

### 6.3.3 Documentation Quality Standards

| Standard | Requirement |
| --- | --- |
| **Accuracy** | Consistent with actual code |
| **Completeness** | Cover all public APIs |
| **Clarity** | Easy to understand, sufficient examples |
| **Timeliness** | Updated promptly, not lagging |
| **Navigability** | Clear structure, easy to find |

---
## 6.4 Memory Management Best Practices

### 6.4.1 Memory Recording Triggers

| Trigger | Record Content |
| --- | --- |
| **Architecture Decisions** | Technology selection, design decisions |
| **Problem Resolution** | Bug root cause, solutions |
| **Experience Summary** | Best practices, gotchas |
| **Standard Conventions** | Coding standards, naming conventions |
| **Known Issues** | Bugs, limitations, TODOs |

### 6.4.2 Memory Format Standards

```markdown
## [Type] Title

### Background
- **Problem**: ...
- **Impact**: ...
- **Context**: ...

### Analysis/Decision
- **Solution**: ...
- **Rationale**: ...
- **Alternatives**: ...

### Result
- **Effect**: ...
- **Data**: ...

### Related
- [Related Memory](memory-id)
- [Related Code](file-link)
- [Related Documentation](doc-link)
```

### 6.4.3 Memory Maintenance

| Maintenance Item | Frequency | Content |
| --- | --- | --- |
| **Deduplication** | Weekly | Merge duplicate memories |
| **Update** | As needed | Update outdated content |
| **Archive** | Monthly | Archive historical memories |
| **Cleanup** | Quarterly | Delete useless memories |
---

## 6.5 Hook Usage Standards

### 6.5.1 Hook Types

| Hook Type | Purpose | Example |
| --- | --- | --- |
| **pre-command** | Inject specifications, load context | Auto-load CLAUDE.md |
| **post-command** | Save Memory, update index | Auto-save decisions |
| **pre-commit** | Code review, standard checks | Auto-run Lint |
| **file-change** | Auto-format, update index | Auto-format code |

### 6.5.2 Hook Configuration Principles

| Principle | Description |
| --- | --- |
| **Minimize** | Only configure necessary Hooks |
| **Idempotent** | Hook execution is repeatable with the same result |
| **Recoverable** | Hook failure doesn't affect the main flow |
| **Observable** | Hook execution is logged |
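These principles can be seen in a toy pre-commit hook. A sketch (the log path and lint command are illustrative assumptions, not CCW requirements):

```shell
# Sketch of a pre-commit hook honoring the principles above.
# LOG and the lint command are illustrative; adapt to your project.
LOG="${HOOK_LOG:-/tmp/ccw-hooks.log}"

log() {  # Observable: every run leaves a trace.
  printf '%s pre-commit: %s\n' "$(date -u +%FT%TZ)" "$1" >> "$LOG"
}

log "start"
# Idempotent: linting the same tree twice yields the same result.
# Recoverable: a hook failure only logs; it never blocks the main flow.
if command -v npx >/dev/null 2>&1; then
  npx --no-install eslint . >/dev/null 2>&1 || log "lint failed or reported issues (not blocking)"
else
  log "lint skipped (npx not found)"
fi
log "done"
```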
---

## 6.6 Team Collaboration Techniques

### 6.6.1 Conflict Resolution

| Conflict Type | Resolution Strategy |
| --- | --- |
| **Standard Conflict** | Team discussion, unify standards |
| **Technical Disagreement** | Brainstorm, data-driven |
| **Schedule Conflict** | Priority sorting, resource adjustment |
| **Quality Conflict** | Set standards, automated checks |

### 6.6.2 Knowledge Sharing

| Method | Frequency | Content |
| --- | --- | --- |
| **Tech Sharing** | Weekly | New technologies, best practices |
| **Code Walkthrough** | Every PR | Code logic, design approach |
| **Documentation Sync** | Monthly | Documentation updates, standard changes |
| **Incident Retrospective** | Every incident | Root cause analysis, improvements |

### 6.6.3 Efficiency Improvement

| Technique | Effect |
| --- | --- |
| **Templating** | Reuse successful patterns |
| **Automation** | Reduce repetitive work |
| **Tooling** | Improve development efficiency |
| **Standardization** | Lower communication cost |
---

## 6.7 Quick Reference

### Workflow Selection Guide

| Scenario | Workflow | Command |
| --- | --- | --- |
| New Feature | PlanEx | `/workflow:plan` |
| Bug Fix | Lifecycle | `/unified-execute-with-file` |
| Refactoring | IterDev | `/refactor-cycle` |
| Decision | Brainstorm | `/brainstorm-with-file` |
| UI Development | UIDesign | `/workflow:ui-design` |
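The same selection logic can be expressed as a tiny lookup for scripting or tooling. A sketch (the keyword mapping is illustrative, not part of CCW):

```shell
# Map a scenario keyword to the recommended workflow (illustrative only).
recommend_workflow() {
  case "$1" in
    feature)  echo "PlanEx" ;;
    bug)      echo "Lifecycle" ;;
    refactor) echo "IterDev" ;;
    decision) echo "Brainstorm" ;;
    ui)       echo "UIDesign" ;;
    *)        echo "unknown" ;;
  esac
}

recommend_workflow bug       # -> Lifecycle
recommend_workflow refactor  # -> IterDev
```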
### Code Review Checklist

- [ ] Correctness check
- [ ] Performance check
- [ ] Security check
- [ ] Maintainability check
- [ ] Test coverage check
- [ ] Standard compliance check

### Memory Maintenance

| Operation | Command |
| --- | --- |
| List memories | `ccw memory list` |
| Search memories | `ccw memory search "..."` |
| Import memory | `ccw memory import "..."` |
| Export memory | `ccw memory export <id>` |

---

## Summary

Claude_dms3 best practices can be summarized as:

1. **Standards First** - Establish clear team standards
2. **Process Assurance** - Use appropriate workflows
3. **Quality Gatekeeping** - Strict code review
4. **Knowledge Accumulation** - Continuously maintain Memory and documentation
5. **Continuous Improvement** - Regular retrospectives and optimization

---

## Related Links

- [What is Claude_dms3](ch01-what-is-claude-dms3.md)
- [Getting Started](ch02-getting-started.md)
- [Core Concepts](ch03-core-concepts.md)
- [Workflow Basics](ch04-workflow-basics.md)
- [Advanced Tips](ch05-advanced-tips.md)
225	docs/guide/claude-md.md	Normal file
@@ -0,0 +1,225 @@
# CLAUDE.md Guide

Configure project-specific instructions for CCW using CLAUDE.md.

## What is CLAUDE.md?

`CLAUDE.md` is a special file that contains project-specific instructions, conventions, and preferences for CCW. It's automatically loaded when CCW operates on your project.

## File Location

Place `CLAUDE.md` in your project root:

```
my-project/
├── CLAUDE.md       # Project instructions
├── package.json
└── src/
```
## File Structure

```markdown
# Project Name

## Overview
Brief description of the project.

## Tech Stack
- Frontend: Framework + libraries
- Backend: Runtime + framework
- Database: Storage solution

## Coding Standards
- Style guide
- Linting rules
- Formatting preferences

## Architecture
- Project structure
- Key patterns
- Important conventions

## Development Guidelines
- How to add features
- Testing requirements
- Documentation standards
```
## Example CLAUDE.md

````markdown
# E-Commerce Platform

## Overview
Multi-tenant e-commerce platform with headless architecture.

## Tech Stack
- Frontend: Vue 3 + TypeScript + Vite
- Backend: Node.js + NestJS
- Database: PostgreSQL + Redis
- Queue: BullMQ

## Coding Standards

### TypeScript
- Use strict mode
- No implicit any
- Explicit return types

### Naming Conventions
- Components: PascalCase (UserProfile.ts)
- Utilities: camelCase (formatDate.ts)
- Constants: UPPER_SNAKE_CASE (API_URL)

### File Structure
```
src/
├── components/    # Vue components
├── composables/   # Vue composables
├── services/      # Business logic
├── types/         # TypeScript types
└── utils/         # Utilities
```

## Architecture

### Layered Architecture
1. **Presentation Layer**: Vue components
2. **Application Layer**: Composables and services
3. **Domain Layer**: Business logic
4. **Infrastructure Layer**: External services

### Key Patterns
- Repository pattern for data access
- Factory pattern for complex objects
- Strategy pattern for payments

## Development Guidelines

### Adding Features
1. Create feature branch from develop
2. Implement feature with tests
3. Update documentation
4. Create PR with template

### Testing
- Unit tests: Vitest
- E2E tests: Playwright
- Coverage: >80%

### Commits
Follow conventional commits:
- feat: New feature
- fix: Bug fix
- docs: Documentation
- refactor: Refactoring
- test: Tests

## Important Notes
- Always use TypeScript strict mode
- Never commit .env files
- Run linter before commit
- Update API docs for backend changes
````
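The conventional-commit rule in the example above can be checked mechanically in a commit-msg hook. A minimal sketch (the helper name and type list are illustrative; extend the regex for your own types):

```shell
# Sketch of a commit-msg validator for conventional commits.
# To use in a repo, call it from .git/hooks/commit-msg.
valid_commit_msg() {
  # Pattern: type(optional-scope): subject; types mirror the CLAUDE.md example.
  printf '%s\n' "$1" | grep -Eq '^(feat|fix|docs|refactor|test)(\([a-z0-9-]+\))?: .+'
}

valid_commit_msg 'feat(auth): add OAuth2 login' && echo OK
valid_commit_msg 'update stuff' || echo 'rejected: not a conventional commit'
```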
## Sections

### Required Sections

| Section | Purpose |
|---------|---------|
| Overview | Project description |
| Tech Stack | Technologies used |
| Coding Standards | Style conventions |
| Architecture | System design |

### Optional Sections

| Section | Purpose |
|---------|---------|
| Testing | Test requirements |
| Deployment | Deploy process |
| Troubleshooting | Common issues |
| References | External docs |
## Best Practices

### 1. Keep It Current

Update CLAUDE.md when:
- Tech stack changes
- New patterns adopted
- Standards updated

### 2. Be Specific

Instead of:
```markdown
## Style
Follow good practices
```

Use:
```markdown
## Style
- Use ESLint with project config
- Max line length: 100
- Use single quotes for strings
```

### 3. Provide Examples

```markdown
## Naming
Components use PascalCase:
- UserProfile.vue ✓
- userProfile.vue ✗
```
## Multiple Projects

For monorepos, use multiple CLAUDE.md files:

```
monorepo/
├── CLAUDE.md                # Root instructions
├── packages/
│   ├── frontend/
│   │   └── CLAUDE.md        # Frontend specific
│   └── backend/
│       └── CLAUDE.md        # Backend specific
```
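To see which instruction files apply in such a checkout, list them. A self-contained sketch that builds the sample layout in a temp directory first:

```shell
# Build the sample monorepo layout in a temp dir, then list CLAUDE.md files.
tmp=$(mktemp -d)
mkdir -p "$tmp/packages/frontend" "$tmp/packages/backend"
touch "$tmp/CLAUDE.md" \
      "$tmp/packages/frontend/CLAUDE.md" \
      "$tmp/packages/backend/CLAUDE.md"

# Every CLAUDE.md, closest to the root first.
(cd "$tmp" && find . -name 'CLAUDE.md' | sort)
rm -rf "$tmp"
```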
## Template

```markdown
# [Project Name]

## Overview
[1-2 sentence description]

## Tech Stack
- [Framework/Language]
- [Key libraries]

## Coding Standards
- [Style guide]
- [Linting]

## Architecture
- [Structure]
- [Patterns]

## Development
- [How to add features]
- [Testing approach]

## Notes
- [Important conventions]
```

::: info See Also
- [Configuration](./cli-tools.md) - CLI tools config
- [Workflows](../workflows/) - Development workflows
:::
272	docs/guide/cli-tools.md	Normal file
@@ -0,0 +1,272 @@
# CLI Tools Configuration

Configure and customize CCW CLI tools for your development workflow.

## Configuration File

CCW CLI tools are configured in `~/.claude/cli-tools.json`:

```json
{
  "version": "3.3.0",
  "tools": {
    "tool-id": {
      "enabled": true,
      "primaryModel": "model-name",
      "secondaryModel": "fallback-model",
      "tags": ["tag1", "tag2"],
      "type": "builtin | api-endpoint | cli-wrapper"
    }
  }
}
```
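Because the file is plain JSON, a quick syntax check catches malformed configs early. A sketch, assuming `python3` is on PATH (any JSON validator works):

```shell
# Validate a cli-tools.json-style file; writes a sample to a temp path first.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "version": "3.3.0",
  "tools": {
    "gemini": { "enabled": true, "primaryModel": "gemini-2.5-flash", "tags": ["analysis"], "type": "builtin" }
  }
}
EOF

# python3 -m json.tool exits nonzero on invalid JSON.
if python3 -m json.tool "$cfg" > /dev/null 2>&1; then
  echo "config OK"
else
  echo "config INVALID" >&2
fi
rm -f "$cfg"
```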
## Tool Types

### Builtin Tools

Full-featured tools with all capabilities:

```json
{
  "gemini": {
    "enabled": true,
    "primaryModel": "gemini-2.5-flash",
    "secondaryModel": "gemini-2.5-pro",
    "tags": ["analysis", "debug"],
    "type": "builtin"
  }
}
```

**Capabilities**: Analysis + Write tools

### API Endpoint Tools

Analysis-only tools for specialized tasks:

```json
{
  "custom-api": {
    "enabled": true,
    "primaryModel": "custom-model",
    "tags": ["specialized-analysis"],
    "type": "api-endpoint"
  }
}
```

**Capabilities**: Analysis only
## CLI Command Format

### Universal Template

```bash
ccw cli -p "PURPOSE: [goal] + [why] + [success criteria]
TASK: • [step 1] • [step 2] • [step 3]
MODE: [analysis|write|review]
CONTEXT: @[file patterns] | Memory: [context]
EXPECTED: [output format]
CONSTRAINTS: [constraints]" --tool <tool-id> --mode <mode> --rule <template>
```

### Required Parameters

| Parameter | Description | Options |
|-----------|-------------|---------|
| `--mode <mode>` | **REQUIRED** - Execution permission level | `analysis` (read-only) \| `write` (create/modify) \| `review` (git-aware review) |
| `-p <prompt>` | **REQUIRED** - Task prompt with structured template | - |

### Optional Parameters

| Parameter | Description | Example |
|-----------|-------------|---------|
| `--tool <tool>` | Explicit tool selection | `--tool gemini` |
| `--rule <template>` | Load rule template for structured prompts | `--rule analysis-review-architecture` |
| `--resume [id]` | Resume previous session | `--resume` or `--resume session-id` |
| `--cd <path>` | Set working directory | `--cd src/auth` |
| `--includeDirs <dirs>` | Include additional directories (comma-separated) | `--includeDirs ../shared,../types` |
| `--model <model>` | Override tool's primary model | `--model gemini-2.5-pro` |
## Tool Selection

### Tag-Based Routing

Tools are selected based on task requirements:

```bash
# Task with "analysis" tag routes to gemini
ccw cli -p "PURPOSE: Debug authentication issue
TASK: • Trace auth flow • Identify failure point
MODE: analysis" --tool gemini --mode analysis

# No tags - uses first enabled tool
ccw cli -p "PURPOSE: Implement feature X
TASK: • Create component • Add tests
MODE: write" --mode write
```

### Explicit Selection

Override automatic selection:

```bash
ccw cli -p "Task description" --tool codex --mode write
```

### Rule Templates

Auto-load structured prompt templates:

```bash
# Architecture review template
ccw cli -p "Analyze system architecture" --mode analysis --rule analysis-review-architecture

# Feature implementation template
ccw cli -p "Add OAuth2 authentication" --mode write --rule development-implement-feature
```
## Model Configuration

### Primary vs Secondary

```json
{
  "codex": {
    "primaryModel": "gpt-5.2",
    "secondaryModel": "gpt-5.2"
  }
}
```

- **primaryModel**: Default model for the tool
- **secondaryModel**: Fallback if the primary fails

### Available Models

| Tool | Available Models |
|------|------------------|
| gemini | gemini-3-pro-preview, gemini-2.5-pro, gemini-2.5-flash, gemini-2.0-flash |
| codex | gpt-5.2 |
| claude | sonnet, haiku |
| qwen | coder-model |
## Tool Tags

Tags enable automatic tool selection:

| Tag | Use Case |
|-----|----------|
| analysis | Code review, architecture analysis |
| debug | Bug diagnosis, troubleshooting |
| implementation | Feature development, code generation |
| documentation | Doc generation, technical writing |
| testing | Test generation, coverage analysis |
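Conceptually, tag-based routing is a lookup from a task's tag to the first enabled tool that carries it. A pure-shell sketch of the idea (the mapping mirrors this guide's example configs; the real resolver lives inside the CCW CLI):

```shell
# Illustrative tag -> tool resolver; mirrors the example configurations
# in this guide, not CCW's actual implementation.
route_tool() {
  case "$1" in
    analysis|debug)        echo "gemini" ;;
    implementation|review) echo "codex" ;;
    documentation)         echo "claude" ;;
    # No matching tag: fall back to the first enabled tool.
    *)                     echo "gemini" ;;
  esac
}

route_tool debug   # -> gemini
route_tool review  # -> codex
```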
## Example Configurations

### Development Setup

```json
{
  "version": "3.3.0",
  "tools": {
    "gemini": {
      "enabled": true,
      "primaryModel": "gemini-2.5-flash",
      "tags": ["development", "debug"],
      "type": "builtin"
    },
    "codex": {
      "enabled": true,
      "primaryModel": "gpt-5.2",
      "tags": ["implementation", "review"],
      "type": "builtin"
    }
  }
}
```

### Cost Optimization

```json
{
  "tools": {
    "gemini": {
      "enabled": true,
      "primaryModel": "gemini-2.0-flash",
      "tags": ["analysis"],
      "type": "builtin"
    }
  }
}
```

### Quality Focus

```json
{
  "tools": {
    "codex": {
      "enabled": true,
      "primaryModel": "gpt-5.2",
      "tags": ["review", "implementation"],
      "type": "builtin"
    },
    "claude": {
      "enabled": true,
      "primaryModel": "sonnet",
      "tags": ["documentation"],
      "type": "builtin"
    }
  }
}
```
## Validation

To verify your configuration, check the config file directly:

```bash
cat ~/.claude/cli-tools.json
```

Or test tool availability:

```bash
ccw cli -p "PURPOSE: Test tool availability
TASK: Verify tool is working
MODE: analysis" --mode analysis
```
## Troubleshooting

### Tool Not Available

```bash
Error: Tool 'custom-tool' not found
```

**Solution**: Check that the tool is enabled in the config:

```json
{
  "custom-tool": {
    "enabled": true
  }
}
```

### Model Not Found

```bash
Error: Model 'invalid-model' not available
```

**Solution**: Use a valid model name from the available models list.

::: info See Also
- [CLI Reference](../cli/commands.md) - CLI usage
- [Modes](#modes) - Execution modes
:::
93	docs/guide/first-workflow.md	Normal file
@@ -0,0 +1,93 @@
# First Workflow: Build a Simple API

Complete your first CCW workflow in 30 minutes. We'll build a simple REST API from specification to implementation.

## What We'll Build

A simple users API with:
- GET /users - List all users
- GET /users/:id - Get user by ID
- POST /users - Create new user
- PUT /users/:id - Update user
- DELETE /users/:id - Delete user

## Prerequisites

- CCW installed ([Installation Guide](./installation.md))
- Node.js >= 18.0.0
- Code editor (VS Code recommended)

## Step 1: Create Project (5 minutes)

```bash
# Create project directory
mkdir user-api
cd user-api

# Initialize npm project
npm init -y

# Install dependencies
npm install express
npm install --save-dev typescript @types/node @types/express
```

## Step 2: Generate Specification (5 minutes)

```bash
# Use CCW to generate API specification
ccw cli -p "Generate a REST API specification for a users resource with CRUD operations" --mode analysis
```

CCW will analyze your request and generate a specification document.

## Step 3: Implement API (15 minutes)

```bash
# Implement the API
ccw cli -p "Implement the users API following the specification with Express and TypeScript" --mode write
```

CCW will:
1. Create the project structure
2. Implement the routes
3. Add type definitions
4. Include error handling

## Step 4: Review Code (5 minutes)

```bash
# Review the implementation
ccw cli -p "Review the users API code for quality, security, and best practices" --mode analysis
```

## Step 5: Test and Run

```bash
# Compile TypeScript
npx tsc

# Run the server
node dist/index.js

# Test the API
curl http://localhost:3000/users
```

## Expected Result

You should have:
- `src/index.ts` - Main server file
- `src/routes/users.ts` - User routes
- `src/types/user.ts` - User types
- `src/middleware/error.ts` - Error handling

## Next Steps

- [CLI Reference](../cli/commands.md) - Learn all CLI commands
- [Skills Library](../skills/core-skills.md) - Explore built-in skills
- [Workflow System](../workflows/4-level.md) - Understand workflow orchestration

::: tip Congratulations! 🎉
You've completed your first CCW workflow. You can now use CCW for more complex projects.
:::
55	docs/guide/getting-started.md	Normal file
@@ -0,0 +1,55 @@
# Getting Started with CCW

Welcome to CCW (Claude Code Workspace) - an advanced AI-powered development environment that helps you write better code faster.

## What is CCW?

CCW is a comprehensive development environment that combines:

- **Main Orchestrator (`/ccw`)**: Intent-aware workflow selection and automatic command routing
- **AI-Powered CLI Tools**: Analyze, review, and implement code with multiple AI backends
- **Specialized Agents**: Code execution, TDD development, testing, debugging, and documentation
- **Workflow Orchestration**: 4-level workflow system from spec to implementation
- **Extensible Skills**: 50+ built-in skills with custom skill support
- **MCP Integration**: Model Context Protocol for enhanced tool integration

## Quick Start

### Installation

```bash
# Install CCW globally
npm install -g claude-code-workflow

# Or use with npx
npx ccw --help
```

### Your First Workflow

Create a simple workflow in under 5 minutes:

```bash
# Main orchestrator - automatically selects workflow based on intent
/ccw "Create a new project"             # Auto-selects appropriate workflow
/ccw "Analyze the codebase structure"   # Auto-selects analysis workflow
/ccw "Add user authentication"          # Auto-selects implementation workflow

# Auto-mode - skip confirmation
/ccw -y "Fix the login timeout issue"   # Execute without confirmation prompts

# Or use specific workflow commands
/workflow:init                               # Initialize project state
/workflow:plan "Add OAuth2 authentication"   # Create implementation plan
/workflow:execute                            # Execute planned tasks
```

## Next Steps

- [Installation Guide](./installation.md) - Detailed installation instructions
- [First Workflow](./first-workflow.md) - 30-minute quickstart tutorial
- [CLI Tools](./cli-tools.md) - Customize your CCW setup

::: tip Need Help?
Check out our [GitHub Discussions](https://github.com/your-repo/ccw/discussions) or join our [Discord community](https://discord.gg/ccw).
:::
191	docs/guide/installation.md	Normal file
@@ -0,0 +1,191 @@
# Installation

Learn how to install and configure CCW on your system.

## Prerequisites

Before installing CCW, make sure you have:

- **Node.js** >= 18.0.0
- **npm** >= 9.0.0 or **yarn** >= 1.22.0
- **Git** for version control features

## Install CCW

### Global Installation (Recommended)

```bash
npm install -g claude-code-workflow
```

### Project-Specific Installation

```bash
# In your project directory
npm install --save-dev claude-code-workflow

# Run with npx
npx ccw [command]
```

### Using Yarn

```bash
# Global
yarn global add claude-code-workflow

# Project-specific
yarn add -D claude-code-workflow
```

## Verify Installation

```bash
ccw --version
# Prints the installed version, e.g. CCW v7.0.5

ccw --help
# Shows all available commands
```
## Configuration
|
||||
|
||||
### CLI Tools Configuration
|
||||
|
||||
Create or edit `~/.claude/cli-tools.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"version": "3.3.0",
|
||||
"tools": {
|
||||
"gemini": {
|
||||
"enabled": true,
|
||||
"primaryModel": "gemini-2.5-flash",
|
||||
"secondaryModel": "gemini-2.5-flash",
|
||||
"tags": ["analysis", "debug"],
|
||||
"type": "builtin"
|
||||
},
|
||||
"codex": {
|
||||
"enabled": true,
|
||||
"primaryModel": "gpt-5.2",
|
||||
"secondaryModel": "gpt-5.2",
|
||||
"tags": [],
|
||||
"type": "builtin"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### CLAUDE.md Instructions

Create `CLAUDE.md` in your project root:

```markdown
# Project Instructions

## Coding Standards
- Use TypeScript for type safety
- Follow ESLint configuration
- Write tests for all new features

## Architecture
- Frontend: Vue 3 + Vite
- Backend: Node.js + Express
- Database: PostgreSQL
```
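To bootstrap the file from the shell, a heredoc works fine. This writes a minimal starter (a subset of the example above) into the current directory:

```shell
# Create a starter CLAUDE.md in the project root.
cat > CLAUDE.md <<'EOF'
# Project Instructions

## Coding Standards
- Use TypeScript for type safety
- Follow ESLint configuration
- Write tests for all new features
EOF
echo "wrote CLAUDE.md ($(wc -l < CLAUDE.md | tr -d ' ') lines)"
# prints: wrote CLAUDE.md (6 lines)
```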
## Updating CCW

```bash
# Update to the latest version
npm update -g claude-code-workflow

# Or pin a specific version (replace <version> with e.g. 1.2.3)
npm install -g claude-code-workflow@<version>
```
## Uninstallation

```bash
npm uninstall -g claude-code-workflow

# Remove configuration (optional; deletes all CCW settings under ~/.claude)
rm -rf ~/.claude
```
## Troubleshooting

### Permission Issues

If you encounter permission errors:

```bash
# Use sudo (not recommended)
sudo npm install -g claude-code-workflow

# Or fix npm permissions (recommended)
# Note: quote $HOME rather than '~' -- a tilde inside quotes is not expanded
mkdir -p "$HOME/.npm-global"
npm config set prefix "$HOME/.npm-global"
export PATH="$HOME/.npm-global/bin:$PATH"
```
### PATH Issues

Add npm's global bin directory to your PATH:

```bash
# For bash (use ~/.zshrc instead of ~/.bashrc on zsh)
echo 'export PATH=$(npm config get prefix)/bin:$PATH' >> ~/.bashrc

# For fish
echo 'set -gx PATH (npm config get prefix)/bin $PATH' >> ~/.config/fish/config.fish
```

::: info Next Steps
After installation, check out the [First Workflow](./first-workflow.md) guide.
:::
## Quick Start Example

After installation, try these commands to verify everything works:

```bash
# 1. Initialize in your project
cd your-project
ccw init

# 2. Try a simple analysis
ccw cli -p "Analyze the project structure" --tool gemini --mode analysis

# 3. Run the main orchestrator
/ccw "Summarize the codebase architecture"

# 4. Check available commands
ccw --help
```

### Expected Output

```
$ ccw --version
CCW v7.0.5

$ ccw init
✔ Created .claude/CLAUDE.md
✔ Created .ccw/workflows/
✔ Configuration complete

$ ccw cli -p "Analyze project" --tool gemini --mode analysis
Analyzing with Gemini...
✔ Analysis complete
```
### Common First-Time Issues

| Issue | Solution |
|-------|----------|
| `ccw: command not found` | Add npm's global bin directory to PATH, or reinstall |
| `Permission denied` | Fix npm permissions (see above); use `sudo` only as a last resort |
| `API key not found` | Configure API keys in `~/.claude/cli-tools.json` |
| `Node version mismatch` | Update to Node.js >= 18.0.0 |
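For the `command not found` case, the usual culprit is that npm's global bin directory is not on `$PATH`. The check below uses illustrative stand-in values; substitute `$(npm config get prefix)/bin` and your real `$PATH`:

```shell
demo_path="/usr/bin:/usr/local/bin"   # stand-in for your real $PATH
bin_dir="/usr/local/npm-global/bin"   # stand-in for $(npm config get prefix)/bin
case ":$demo_path:" in
  *":$bin_dir:"*) result="on PATH" ;;
  *)              result="missing from PATH" ;;
esac
echo "$result"
# prints: missing from PATH
```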
docs/guide/workflows.md (new file, 209 lines)
# CCW Workflows

Understanding and using CCW's workflow system for efficient development.

## What are Workflows?

CCW workflows are orchestrated sequences of tasks that guide a project from initial concept to completed implementation. They ensure consistency, quality, and proper documentation throughout the development lifecycle.

## Workflow Levels

CCW uses a 4-level workflow system:

```
Level 1: SPECIFICATION
        ↓
Level 2: PLANNING
        ↓
Level 3: IMPLEMENTATION
        ↓
Level 4: VALIDATION
```
## Using Workflows

### Starting a Workflow

Begin a new workflow with the `team-lifecycle-v4` skill:

```javascript
Skill(skill="team-lifecycle-v4", args="Build user authentication system")
```

This creates a complete workflow:

1. Specification phase (RESEARCH-001 through QUALITY-001)
2. Planning phase (PLAN-001)
3. Implementation phase (IMPL-001)
4. Validation phase (TEST-001 and REVIEW-001)

### Workflow Execution

The workflow executes automatically:

1. **Specification**: Analyst and writer agents research and document requirements
2. **Checkpoint**: User reviews and approves the specification
3. **Planning**: Planner creates an implementation plan with a task breakdown
4. **Implementation**: Executor writes the code
5. **Validation**: Tester and reviewer validate quality

### Resume Workflow

After a checkpoint, resume the workflow:

```bash
ccw workflow resume
```
## Workflow Tasks

### Specification Tasks

| Task | Agent | Output |
|------|-------|--------|
| RESEARCH-001 | analyst | Discovery context |
| DRAFT-001 | writer | Product brief |
| DRAFT-002 | writer | Requirements (PRD) |
| DRAFT-003 | writer | Architecture design |
| DRAFT-004 | writer | Epics & stories |
| QUALITY-001 | reviewer | Readiness report |

### Implementation Tasks

| Task | Agent | Output |
|------|-------|--------|
| PLAN-001 | planner | Implementation plan |
| IMPL-001 | executor | Source code |
| TEST-001 | tester | Test results |
| REVIEW-001 | reviewer | Code review |
## Custom Workflows

Create custom workflows for your team:

```yaml
# .ccw/workflows/feature-development.yaml
name: Feature Development
description: Standard workflow for new features

levels:
  - name: specification
    tasks:
      - type: research
        agent: analyst
      - type: document
        agent: writer
        documents: [prd, architecture]

  - name: planning
    tasks:
      - type: plan
        agent: planner

  - name: implementation
    tasks:
      - type: implement
        agent: executor
      - type: test
        agent: tester

  - name: validation
    tasks:
      - type: review
        agent: reviewer

checkpoints:
  - after: specification
    action: await_user_approval
  - after: validation
    action: verify_quality_gates
```
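A quick structural sanity check on a workflow file is to count the level definitions. This is a plain-text grep sketch, not a CCW command (sample YAML inline; use the real file path in practice):

```shell
# Count the levels defined in a workflow file.
workflow=$(cat <<'EOF'
levels:
  - name: specification
  - name: planning
  - name: implementation
  - name: validation
EOF
)
echo "$workflow" | grep -c -e '- name:'
# prints: 4
```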
## Workflow Configuration

Configure workflow behavior in `~/.claude/workflows/config.json`:

```json
{
  "defaults": {
    "autoAdvance": true,
    "checkpoints": ["specification", "implementation"],
    "parallel": true
  },
  "agents": {
    "analyst": {
      "timeout": 300000,
      "retries": 3
    }
  }
}
```
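To inspect a setting from the shell, Python's stdlib `json` module avoids hand-parsing. A generic sketch with the defaults from the example above inlined:

```shell
# Read one setting from config.json with python3's stdlib json module.
cfg='{"defaults": {"autoAdvance": true, "parallel": true}}'
auto=$(echo "$cfg" | python3 -c 'import json, sys; print(json.load(sys.stdin)["defaults"]["autoAdvance"])')
echo "autoAdvance = $auto"
# prints: autoAdvance = True
```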
## Best Practices

### 1. Clear Requirements

Start with clear, specific requirements:

```javascript
// Good: specific
Skill(skill="team-lifecycle-v4", args="Build JWT authentication with refresh tokens")

// Bad: vague
Skill(skill="team-lifecycle-v4", args="Add auth")
```

### 2. Checkpoint Reviews

Always review checkpoints:

- Specification checkpoint: validate requirements
- Implementation checkpoint: verify progress

### 3. Feedback Loops

Provide feedback during the workflow:

```bash
# Add feedback during review
ccw workflow feedback --task REVIEW-001 --message "Tests need more edge cases"
```

### 4. Monitor Progress

Track workflow status:

```bash
# Check workflow status
ccw workflow status

# View task details
ccw workflow task IMPL-001
```
## Troubleshooting

### Stalled Workflow

If a workflow stalls:

```bash
# Check for blocked tasks
ccw workflow status --blocked

# Reset stuck tasks
ccw workflow reset --task IMPL-001
```

### Failed Tasks

Retry failed tasks:

```bash
# Retry with a new prompt
ccw workflow retry --task IMPL-001 --prompt "Fix the TypeScript errors"
```

::: info See Also
- [4-Level System](../workflows/4-level.md) - Detailed workflow explanation
- [Best Practices](../workflows/best-practices.md) - Workflow optimization
:::
docs/index.md (new file, 7 lines)
---
layout: page
title: CCW Documentation
titleTemplate: Claude Code Workspace
---

<ProfessionalHome lang="en" />