feat: Add roles for issue resolution pipeline including planner, reviewer, integrator, and implementer

- Implemented `planner` role for solution design and task decomposition using issue-plan-agent.
- Introduced `reviewer` role for solution review, technical feasibility validation, and risk assessment.
- Created `integrator` role for queue formation and conflict detection using issue-queue-agent.
- Added `implementer` role for code implementation and test verification via code-developer.
- Defined message types and role boundaries for each role to ensure clear responsibilities.
- Established a team configuration file to manage roles, pipelines, and collaboration patterns for the issue processing pipeline.
This commit is contained in:
catlog22
2026-02-15 13:51:50 +08:00
parent a897858c6a
commit 80b7dfc817
45 changed files with 6369 additions and 505 deletions


@@ -0,0 +1,389 @@
---
name: team-issue
description: Unified team skill for issue resolution. All roles invoke this skill with --role arg for role-specific execution. Triggers on "team issue".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Task(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team Issue Resolution
Unified team skill for issue processing pipeline. All team members invoke this skill with `--role=xxx` for role-specific execution.
**Scope**: the issue-processing flow (plan → queue → execute). Issue creation/discovery is handled separately by `issue-discover`; CRUD management is handled separately by `issue-manage`.
## Architecture Overview
```
┌───────────────────────────────────────────┐
│ Skill(skill="team-issue") │
│ args="--role=xxx [issue-ids] [--mode=M]" │
└───────────────┬───────────────────────────┘
│ Role Router
┌───────────┼───────────┬───────────┬───────────┬───────────┐
↓ ↓ ↓ ↓ ↓ ↓
┌───────────┐┌──────────┐┌──────────┐┌──────────┐┌──────────┐┌───────────┐
│coordinator││ explorer ││ planner  ││ reviewer ││integrator││implementer│
│orchestrate││EXPLORE-* ││ SOLVE-*  ││ AUDIT-*  ││MARSHAL-* ││ BUILD-*   │
└───────────┘└──────────┘└──────────┘└──────────┘└──────────┘└───────────┘
```
## Command Architecture
```
roles/
├── coordinator.md   # Pipeline orchestration (Phase 1/5 inline, Phase 2-4 core logic)
├── explorer.md      # Context analysis (ACE + cli-explore-agent)
├── planner.md       # Solution design (wraps issue-plan-agent)
├── reviewer.md      # Solution review (technical feasibility + risk assessment)
├── integrator.md    # Queue orchestration (wraps issue-queue-agent)
└── implementer.md   # Code implementation (wraps code-developer)
```
## Role Router
### Input Parsing
Parse `$ARGUMENTS` to extract `--role`:
```javascript
const args = "$ARGUMENTS"
const roleMatch = args.match(/--role[=\s]+(\w+)/)
if (!roleMatch) {
// No --role: this is a coordinator entry point
// Extract issue IDs and mode from args directly
// → Read roles/coordinator.md and execute
Read("roles/coordinator.md")
return
}
const role = roleMatch[1]
const teamName = "issue"
```
### Role Dispatch
```javascript
const VALID_ROLES = {
"coordinator": { file: "roles/coordinator.md", prefix: null },
"explorer": { file: "roles/explorer.md", prefix: "EXPLORE" },
"planner": { file: "roles/planner.md", prefix: "SOLVE" },
"reviewer": { file: "roles/reviewer.md", prefix: "AUDIT" },
"integrator": { file: "roles/integrator.md", prefix: "MARSHAL" },
"implementer": { file: "roles/implementer.md", prefix: "BUILD" }
}
if (!VALID_ROLES[role]) {
throw new Error(`Unknown role: ${role}. Available: ${Object.keys(VALID_ROLES).join(', ')}`)
}
// Read and execute role-specific logic
Read(VALID_ROLES[role].file)
// → Execute the 5-phase process defined in that file
```
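The router above can be exercised outside the skill runtime. The sketch below is a standalone, hypothetical condensation of the two snippets (the `Read(...)` tool call is omitted; `resolveRole` is an illustrative name, not part of the skill API):

```javascript
// Standalone sketch of the role router. VALID_ROLES mirrors the
// dispatch table above; Read() is intentionally left out.
const VALID_ROLES = {
  coordinator: { file: "roles/coordinator.md", prefix: null },
  explorer:    { file: "roles/explorer.md",    prefix: "EXPLORE" },
  planner:     { file: "roles/planner.md",     prefix: "SOLVE" },
  reviewer:    { file: "roles/reviewer.md",    prefix: "AUDIT" },
  integrator:  { file: "roles/integrator.md",  prefix: "MARSHAL" },
  implementer: { file: "roles/implementer.md", prefix: "BUILD" }
};

// Resolve the role file for an argument string; a missing --role
// falls through to the coordinator entry point.
function resolveRole(args) {
  const m = args.match(/--role[=\s]+(\w+)/);
  if (!m) return { role: "coordinator", file: VALID_ROLES.coordinator.file };
  const role = m[1];
  if (!VALID_ROLES[role]) {
    throw new Error(`Unknown role: ${role}. Available: ${Object.keys(VALID_ROLES).join(", ")}`);
  }
  return { role, file: VALID_ROLES[role].file };
}
```

Note that `--role planner` (space-separated) dispatches the same as `--role=planner`, because the pattern accepts `=` or whitespace between flag and value.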
### Available Roles
| Role | Task Prefix | Responsibility | Reuses Agent | Role File |
|------|-------------|----------------|--------------|-----------|
| `coordinator` | N/A | Pipeline orchestration, mode selection, task dispatch | - | [roles/coordinator.md](roles/coordinator.md) |
| `explorer` | EXPLORE-* | Context analysis, impact assessment | cli-explore-agent | [roles/explorer.md](roles/explorer.md) |
| `planner` | SOLVE-* | Solution design, task decomposition | issue-plan-agent | [roles/planner.md](roles/planner.md) |
| `reviewer` | AUDIT-* | Solution review, risk assessment | - (new role) | [roles/reviewer.md](roles/reviewer.md) |
| `integrator` | MARSHAL-* | Conflict detection, queue orchestration | issue-queue-agent | [roles/integrator.md](roles/integrator.md) |
| `implementer` | BUILD-* | Code implementation, result submission | code-developer | [roles/implementer.md](roles/implementer.md) |
## Shared Infrastructure
### Role Isolation Rules
**Core principle**: each role may only perform work within its own area of responsibility.
#### Output Tagging (mandatory)
Every role's output must carry a `[role_name]` prefix:
```javascript
// SendMessage — both content and summary must carry the tag
SendMessage({
content: `## [${role}] ...`,
summary: `[${role}] ...`
})
// team_msg — summary must carry the tag
mcp__ccw-tools__team_msg({
summary: `[${role}] ...`
})
```
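To make the tagging rule concrete, here is a hypothetical helper (not part of the skill API) that enforces the prefix idempotently, so a summary that already carries its tag is not double-tagged:

```javascript
// Hypothetical helper: ensure a summary carries the mandatory [role] tag.
// Shown for illustration only; roles may also inline the prefix directly.
function tagSummary(role, summary) {
  const prefix = `[${role}] `;
  return summary.startsWith(prefix) ? summary : prefix + summary;
}
```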
#### Coordinator Isolation
| Allowed | Forbidden |
|------|------|
| Clarify requirements (AskUserQuestion) | ❌ Write/modify code directly |
| Create task chains (TaskCreate) | ❌ Invoke implementation agents such as issue-plan-agent |
| Dispatch tasks to workers | ❌ Perform analysis/review directly |
| Monitor progress (message bus) | ❌ Bypass workers and complete tasks itself |
| Report results to the user | ❌ Modify source code or artifact files |
#### Worker Isolation
| Allowed | Forbidden |
|------|------|
| Handle tasks with its own prefix | ❌ Handle tasks with other roles' prefixes |
| SendMessage to the coordinator | ❌ Communicate directly with other workers |
| Use tools declared in the Toolbox | ❌ Create tasks for other roles (TaskCreate) |
| Delegate to its reused agent | ❌ Modify resources outside its responsibility |
### Team Configuration
```javascript
const TEAM_CONFIG = {
name: "issue",
sessionDir: ".workflow/.team-plan/issue/",
msgDir: ".workflow/.team-msg/issue/",
issueDataDir: ".workflow/issues/"
}
```
### Message Bus (All Roles)
**Before** every SendMessage, call `mcp__ccw-tools__team_msg` to log the message:
```javascript
mcp__ccw-tools__team_msg({
operation: "log",
team: "issue",
from: role, // current role name
to: "coordinator",
type: "<type>",
summary: "[role] <summary>",
ref: "<file_path>" // optional
})
```
**Message types by role**:
| Role | Types |
|------|-------|
| coordinator | `task_assigned`, `pipeline_update`, `escalation`, `shutdown`, `error` |
| explorer | `context_ready`, `impact_assessed`, `error` |
| planner | `solution_ready`, `multi_solution`, `error` |
| reviewer | `approved`, `rejected`, `concerns`, `error` |
| integrator | `queue_ready`, `conflict_found`, `error` |
| implementer | `impl_complete`, `impl_failed`, `error` |
### CLI Fallback
When the `mcp__ccw-tools__team_msg` MCP tool is unavailable:
```javascript
Bash(`ccw team log --team "issue" --from "${role}" --to "coordinator" --type "<type>" --summary "<summary>" --json`)
```
### Task Lifecycle (All Roles)
```javascript
// Standard task lifecycle every role follows
// Phase 1: Discovery
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith(`${VALID_ROLES[role].prefix}-`) &&
t.owner === role &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
// Phase 2-4: Role-specific (see roles/{role}.md)
// Phase 5: Report + Loop — all output must carry the [role] tag
mcp__ccw-tools__team_msg({ operation: "log", team: "issue", from: role, to: "coordinator", type: "...", summary: `[${role}] ...` })
SendMessage({ type: "message", recipient: "coordinator", content: `## [${role}] ...`, summary: `[${role}] ...` })
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task → back to Phase 1
```
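The Phase 1 filter above is the core claiming rule every worker shares. A standalone sketch, with `TaskList()` replaced by sample data so the predicate can be exercised directly (`claimable` is an illustrative name, not a skill API):

```javascript
// A task is claimable when it matches the role's prefix and owner,
// is pending, and has no unresolved blockers.
function claimable(tasks, role, prefix) {
  return tasks.filter(t =>
    t.subject.startsWith(`${prefix}-`) &&
    t.owner === role &&
    t.status === "pending" &&
    t.blockedBy.length === 0
  );
}

// Sample task list: one claimable SOLVE task, one blocked SOLVE task,
// and one task owned by a different role.
const sample = [
  { id: 1, subject: "SOLVE-001: x", owner: "planner",  status: "pending", blockedBy: [] },
  { id: 2, subject: "SOLVE-002: y", owner: "planner",  status: "pending", blockedBy: [1] },
  { id: 3, subject: "AUDIT-001: z", owner: "reviewer", status: "pending", blockedBy: [] }
];
```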
## Pipeline Modes
```
Quick Mode (1-2 simple issues):
EXPLORE-001 → SOLVE-001 → MARSHAL-001 → BUILD-001
Full Mode (complex issues, with review):
EXPLORE-001 → SOLVE-001 → AUDIT-001 ─┬─(approved)→ MARSHAL-001 → BUILD-001..N(parallel)
└─(rejected)→ SOLVE-fix → AUDIT-002(re-review, max 2x)
Batch Mode (5-100 issues):
EXPLORE-001..N(batch≤5) → SOLVE-001..N(batch≤3) → AUDIT-001(batch) → MARSHAL-001 → BUILD-001..M(DAG parallel)
```
### Mode Auto-Detection
```javascript
function detectMode(issueIds, userMode) {
if (userMode) return userMode // user explicitly specified a mode
const count = issueIds.length
if (count <= 2) {
// Check complexity via issue priority
const issues = issueIds.map(id => JSON.parse(Bash(`ccw issue status ${id} --json`)))
const hasHighPriority = issues.some(i => i.priority >= 4)
return hasHighPriority ? 'full' : 'quick'
}
return count <= 5 ? 'full' : 'batch' // keep consistent with the coordinator's detectMode
}
```
## Coordinator Spawn Template
```javascript
TeamCreate({ team_name: "issue" })
// Explorer
Task({
subagent_type: "general-purpose",
team_name: "issue",
name: "explorer",
prompt: `You are the EXPLORER of team "issue".
When you receive an EXPLORE-* task, invoke Skill(skill="team-issue", args="--role=explorer") to execute it.
Current requirement: ${taskDescription}
Constraints: ${constraints}
## Role Rules (mandatory)
- Only handle tasks with the EXPLORE-* prefix; never perform other roles' work
- All output (SendMessage, team_msg) must carry the [explorer] tag prefix
- Communicate only with the coordinator; never contact other workers directly
- Never use TaskCreate to create tasks for other roles
## Message Bus (mandatory)
Before every SendMessage, call mcp__ccw-tools__team_msg to log it.
Workflow:
1. TaskList → find EXPLORE-* tasks
2. Execute via Skill(skill="team-issue", args="--role=explorer")
3. team_msg log + SendMessage results to the coordinator (with the [explorer] tag)
4. TaskUpdate completed → check for the next task`
})
// Planner
Task({
subagent_type: "general-purpose",
team_name: "issue",
name: "planner",
prompt: `You are the PLANNER of team "issue".
When you receive a SOLVE-* task, invoke Skill(skill="team-issue", args="--role=planner") to execute it.
Current requirement: ${taskDescription}
Constraints: ${constraints}
## Role Rules (mandatory)
- Only handle tasks with the SOLVE-* prefix
- All output must carry the [planner] tag prefix
- Communicate only with the coordinator
## Message Bus (mandatory)
Before every SendMessage, call mcp__ccw-tools__team_msg to log it.
Workflow:
1. TaskList → find SOLVE-* tasks
2. Execute via Skill(skill="team-issue", args="--role=planner")
3. team_msg log + SendMessage results to the coordinator
4. TaskUpdate completed → check for the next task`
})
// Reviewer
Task({
subagent_type: "general-purpose",
team_name: "issue",
name: "reviewer",
prompt: `You are the REVIEWER of team "issue".
When you receive an AUDIT-* task, invoke Skill(skill="team-issue", args="--role=reviewer") to execute it.
Current requirement: ${taskDescription}
Constraints: ${constraints}
## Role Rules (mandatory)
- Only handle tasks with the AUDIT-* prefix
- All output must carry the [reviewer] tag prefix
- Communicate only with the coordinator
- You are the quality gate: review solutions but never modify code
## Message Bus (mandatory)
Before every SendMessage, call mcp__ccw-tools__team_msg to log it.
Workflow:
1. TaskList → find AUDIT-* tasks
2. Execute via Skill(skill="team-issue", args="--role=reviewer")
3. team_msg log + SendMessage results to the coordinator
4. TaskUpdate completed → check for the next task`
})
// Integrator
Task({
subagent_type: "general-purpose",
team_name: "issue",
name: "integrator",
prompt: `You are the INTEGRATOR of team "issue".
When you receive a MARSHAL-* task, invoke Skill(skill="team-issue", args="--role=integrator") to execute it.
Current requirement: ${taskDescription}
Constraints: ${constraints}
## Role Rules (mandatory)
- Only handle tasks with the MARSHAL-* prefix
- All output must carry the [integrator] tag prefix
- Communicate only with the coordinator
## Message Bus (mandatory)
Before every SendMessage, call mcp__ccw-tools__team_msg to log it.
Workflow:
1. TaskList → find MARSHAL-* tasks
2. Execute via Skill(skill="team-issue", args="--role=integrator")
3. team_msg log + SendMessage results to the coordinator
4. TaskUpdate completed → check for the next task`
})
// Implementer
Task({
subagent_type: "general-purpose",
team_name: "issue",
name: "implementer",
prompt: `You are the IMPLEMENTER of team "issue".
When you receive a BUILD-* task, invoke Skill(skill="team-issue", args="--role=implementer") to execute it.
Current requirement: ${taskDescription}
Constraints: ${constraints}
## Role Rules (mandatory)
- Only handle tasks with the BUILD-* prefix
- All output must carry the [implementer] tag prefix
- Communicate only with the coordinator
## Message Bus (mandatory)
Before every SendMessage, call mcp__ccw-tools__team_msg to log it.
Workflow:
1. TaskList → find BUILD-* tasks
2. Execute via Skill(skill="team-issue", args="--role=implementer")
3. team_msg log + SendMessage results to the coordinator
4. TaskUpdate completed → check for the next task`
})
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with available role list |
| Missing --role arg | Default to coordinator role |
| Role file not found | Error with expected path (roles/{name}.md) |
| Task prefix conflict | Log warning, proceed |


@@ -0,0 +1,373 @@
# Role: coordinator
Team coordinator. Orchestrates the issue resolution pipeline: requirement clarification → mode selection → team creation → task chain → dispatch → monitoring → reporting.
## Role Identity
- **Name**: `coordinator`
- **Task Prefix**: N/A (coordinator creates tasks, doesn't receive them)
- **Responsibility**: Orchestration
- **Communication**: SendMessage to all teammates
- **Output Tag**: `[coordinator]`
## Role Boundaries
### MUST
- All output (SendMessage, team_msg, logs) must carry the `[coordinator]` tag
- Handle only requirement clarification, mode selection, task creation/dispatch, progress monitoring, and result reporting
- Create tasks via TaskCreate and assign them to worker roles
- Monitor worker progress and route messages via the message bus
### MUST NOT
- ❌ **Execute any business task directly** (writing code, designing solutions, reviewing, etc.)
- ❌ Directly invoke implementation agents such as issue-plan-agent, issue-queue-agent, or code-developer
- ❌ Directly modify source code or generate artifact files
- ❌ Bypass worker roles to do work that should be delegated
- ❌ Omit the `[coordinator]` tag from any output
> **Core principle**: the coordinator directs; it does not execute. All actual work must be delegated to worker roles via TaskCreate.
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `task_assigned` | coordinator → worker | Task dispatched | Notify a worker of a new task |
| `pipeline_update` | coordinator → user | Progress milestone | Pipeline progress update |
| `escalation` | coordinator → user | Unresolvable issue | Escalate to the user for a decision |
| `shutdown` | coordinator → all | Team dissolved | Team shut down |
## Execution
### Phase 0: Session Resume
```javascript
// Check for existing team session
const existingMsgs = mcp__ccw-tools__team_msg({ operation: "list", team: "issue" })
if (existingMsgs && existingMsgs.length > 0) {
// Resume: check pending tasks and continue coordination loop
// Skip Phase 1-3, go directly to Phase 4
}
```
### Phase 1: Requirement Clarification
Parse `$ARGUMENTS` for issue IDs and mode.
```javascript
const args = "$ARGUMENTS"
// Extract issue IDs (GH-xxx, ISS-xxx formats)
const issueIds = args.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/g) || []
// Extract mode
const modeMatch = args.match(/--mode[=\s]+(quick|full|batch)/)
const explicitMode = modeMatch ? modeMatch[1] : null
// If --all-pending, load all pending issues
if (args.includes('--all-pending')) {
const pendingList = Bash(`ccw issue list --status registered,pending --json`)
const pending = JSON.parse(pendingList)
issueIds.push(...pending.map(i => i.id))
}
if (issueIds.length === 0) {
// Ask user for issue IDs
const answer = AskUserQuestion({
questions: [{
question: "Provide the issue ID(s) to process (multiple IDs separated by commas)",
header: "Issue IDs",
multiSelect: false,
options: [
{ label: "Enter IDs", description: "Manually enter issue IDs (GH-123 or ISS-20260215-120000)" },
{ label: "All pending", description: "Process all issues in registered/pending status" }
]
}]
})
}
// Auto-detect mode
const mode = detectMode(issueIds, explicitMode)
```
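The ID extraction above accepts exactly two formats: `GH-<number>` and `ISS-YYYYMMDD-HHMMSS`. A standalone sketch of that step (`extractIssueIds` is an illustrative name):

```javascript
// The two accepted issue ID formats, as one global pattern.
const ID_RE = /(?:GH-\d+|ISS-\d{8}-\d{6})/g;

// Return every issue ID found in an argument string (empty array if none).
function extractIssueIds(args) {
  return args.match(ID_RE) || [];
}
```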
**Mode Auto-Detection**:
```javascript
function detectMode(issueIds, userMode) {
if (userMode) return userMode
const count = issueIds.length
if (count <= 2) {
const issues = issueIds.map(id => JSON.parse(Bash(`ccw issue status ${id} --json`)))
const hasHighPriority = issues.some(i => i.priority >= 4)
return hasHighPriority ? 'full' : 'quick'
}
return count <= 5 ? 'full' : 'batch'
}
```
### Phase 2: Create Team + Spawn Workers
```javascript
TeamCreate({ team_name: "issue" })
// Spawn workers based on mode
const workersToSpawn = mode === 'quick'
? ['explorer', 'planner', 'integrator', 'implementer'] // No reviewer in quick mode
: ['explorer', 'planner', 'reviewer', 'integrator', 'implementer']
for (const workerName of workersToSpawn) {
Task({
subagent_type: "general-purpose",
team_name: "issue",
name: workerName,
prompt: `You are the ${workerName.toUpperCase()} of team "issue".
When you receive a task, invoke Skill(skill="team-issue", args="--role=${workerName}") to execute it.
Current requirement: process issue(s) ${issueIds.join(', ')}, mode: ${mode}
Constraints: CLI-first data access; all issue operations go through ccw issue commands
## Role Rules (mandatory)
- All output must carry the [${workerName}] tag prefix
- Communicate only with the coordinator
- Before every SendMessage, call mcp__ccw-tools__team_msg to log it
Workflow:
1. TaskList → find the tasks assigned to you
2. Execute via Skill(skill="team-issue", args="--role=${workerName}")
3. team_msg log + SendMessage results to the coordinator
4. TaskUpdate completed → check for the next task`
})
}
```
### Phase 3: Create Task Chain
**Quick Mode**:
```javascript
// EXPLORE → SOLVE → MARSHAL → BUILD
for (const issueId of issueIds) {
const exploreId = TaskCreate({
subject: `EXPLORE-001: Analyze context for ${issueId}`,
description: `Explore codebase context for issue ${issueId}. Load issue via ccw issue status ${issueId} --json, then perform ACE semantic search and impact analysis.`,
activeForm: `Exploring ${issueId}`,
owner: "explorer"
})
const solveId = TaskCreate({
subject: `SOLVE-001: Design solution for ${issueId}`,
description: `Design solution for issue ${issueId} using issue-plan-agent. Context report from EXPLORE-001.`,
activeForm: `Planning ${issueId}`,
owner: "planner",
addBlockedBy: [exploreId]
})
const marshalId = TaskCreate({
subject: `MARSHAL-001: Form queue for ${issueId}`,
description: `Form execution queue for issue ${issueId} solution using issue-queue-agent.`,
activeForm: `Forming queue`,
owner: "integrator",
addBlockedBy: [solveId]
})
TaskCreate({
subject: `BUILD-001: Implement solution for ${issueId}`,
description: `Implement solution for issue ${issueId}. Load via ccw issue detail <item-id>, execute tasks, report via ccw issue done.`,
activeForm: `Implementing ${issueId}`,
owner: "implementer",
addBlockedBy: [marshalId]
})
}
```
**Full Mode** (adds AUDIT between SOLVE and MARSHAL):
```javascript
for (const issueId of issueIds) {
const exploreId = TaskCreate({
subject: `EXPLORE-001: Analyze context for ${issueId}`,
description: `Explore codebase context for issue ${issueId}.`,
activeForm: `Exploring ${issueId}`,
owner: "explorer"
})
const solveId = TaskCreate({
subject: `SOLVE-001: Design solution for ${issueId}`,
description: `Design solution for issue ${issueId} using issue-plan-agent.`,
activeForm: `Planning ${issueId}`,
owner: "planner",
addBlockedBy: [exploreId]
})
const auditId = TaskCreate({
subject: `AUDIT-001: Review solution for ${issueId}`,
description: `Review solution quality, technical feasibility, and risks for issue ${issueId}. Read solution from .workflow/issues/solutions/${issueId}.jsonl.`,
activeForm: `Reviewing ${issueId}`,
owner: "reviewer",
addBlockedBy: [solveId]
})
const marshalId = TaskCreate({
subject: `MARSHAL-001: Form queue for ${issueId}`,
description: `Form execution queue after review approval.`,
activeForm: `Forming queue`,
owner: "integrator",
addBlockedBy: [auditId]
})
TaskCreate({
subject: `BUILD-001: Implement solution for ${issueId}`,
description: `Implement approved solution for issue ${issueId}.`,
activeForm: `Implementing ${issueId}`,
owner: "implementer",
addBlockedBy: [marshalId]
})
}
```
**Batch Mode** (parallel EXPLORE and SOLVE batches):
```javascript
// Group issues into batches
const exploreBatches = chunkArray(issueIds, 5) // max 5 parallel
const solveBatches = chunkArray(issueIds, 3) // max 3 parallel
// Create EXPLORE tasks (parallel within batch)
const exploreTaskIds = []
for (const [batchIdx, batch] of exploreBatches.entries()) {
for (const issueId of batch) {
const id = TaskCreate({
subject: `EXPLORE-${String(exploreTaskIds.length + 1).padStart(3, '0')}: Context for ${issueId}`,
description: `Batch ${batchIdx + 1}: Explore codebase context for issue ${issueId}.`,
activeForm: `Exploring ${issueId}`,
owner: "explorer",
addBlockedBy: batchIdx > 0 ? [exploreTaskIds[exploreTaskIds.length - 1]] : []
})
exploreTaskIds.push(id)
}
}
// Create SOLVE tasks (blocked by corresponding EXPLORE)
const solveTaskIds = []
for (const [i, issueId] of issueIds.entries()) {
const id = TaskCreate({
subject: `SOLVE-${String(i + 1).padStart(3, '0')}: Solution for ${issueId}`,
description: `Design solution for issue ${issueId}.`,
activeForm: `Planning ${issueId}`,
owner: "planner",
addBlockedBy: [exploreTaskIds[i]]
})
solveTaskIds.push(id)
}
// AUDIT as batch review (blocked by all SOLVE tasks)
const auditId = TaskCreate({
subject: `AUDIT-001: Batch review all solutions`,
description: `Review all ${issueIds.length} solutions for quality and conflicts.`,
activeForm: `Batch reviewing`,
owner: "reviewer",
addBlockedBy: solveTaskIds
})
// MARSHAL (blocked by AUDIT)
const marshalId = TaskCreate({
subject: `MARSHAL-001: Form execution queue`,
description: `Form DAG-based execution queue for all approved solutions.`,
activeForm: `Forming queue`,
owner: "integrator",
addBlockedBy: [auditId]
})
// BUILD tasks created dynamically after MARSHAL completes (based on DAG)
```
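`chunkArray` is used above but not defined anywhere in this file; a minimal implementation consistent with that usage (consecutive groups of at most `size` items):

```javascript
// Split a list into consecutive chunks of at most `size` items.
// Used by batch mode to cap parallel EXPLORE (≤5) and SOLVE (≤3) groups.
function chunkArray(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}
```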
### Phase 4: Coordination Loop
Receive teammate messages, dispatch based on type.
| Received Message | Action |
|-----------------|--------|
| `context_ready` from explorer | Unblock SOLVE-* tasks for this issue |
| `solution_ready` from planner | Quick: create MARSHAL-*; Full: create AUDIT-* |
| `multi_solution` from planner | AskUserQuestion for solution selection, then ccw issue bind |
| `approved` from reviewer | Unblock MARSHAL-* task |
| `rejected` from reviewer | Create SOLVE-fix task with feedback (max 2 rounds) |
| `concerns` from reviewer | Log concerns, proceed to MARSHAL (non-blocking) |
| `queue_ready` from integrator | Create BUILD-* tasks based on DAG parallel batches |
| `conflict_found` from integrator | AskUserQuestion for conflict resolution |
| `impl_complete` from implementer | Refresh DAG, create next BUILD-* batch or complete |
| `impl_failed` from implementer | CP-5 escalation: retry / skip / abort |
| `error` from any worker | Assess severity → retry or escalate to user |
**Review-Fix Cycle (CP-2)** — max 2 rounds:
```javascript
let auditRound = 0
const MAX_AUDIT_ROUNDS = 2
// On rejected message:
if (msg.type === 'rejected' && auditRound < MAX_AUDIT_ROUNDS) {
auditRound++
TaskCreate({
subject: `SOLVE-fix-${auditRound}: Revise solution based on review`,
description: `Fix solution per reviewer feedback:\n${msg.data.findings}\n\nThis is revision round ${auditRound}/${MAX_AUDIT_ROUNDS}.`,
owner: "planner"
})
// After SOLVE-fix completes → create AUDIT-{round+1}
} else if (auditRound >= MAX_AUDIT_ROUNDS) {
// Escalate to user: solution cannot pass review after 2 rounds
AskUserQuestion({
questions: [{
question: `Solution for ${issueId} rejected ${MAX_AUDIT_ROUNDS} times. How to proceed?`,
header: "Escalation",
options: [
{ label: "Force approve", description: "Skip review, proceed to execution" },
{ label: "Manual fix", description: "User will fix the solution" },
{ label: "Skip issue", description: "Skip this issue, continue with others" }
]
}]
})
}
```
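The branching above can be summarized as a small decision function. This is a hypothetical condensation for illustration (`nextAction` and its return labels are not part of the skill), mapping a reviewer verdict plus the completed fix-round count to the coordinator's next step:

```javascript
// Decide the coordinator's next step for a reviewer verdict.
// auditRound counts completed SOLVE-fix rounds; maxRounds caps the cycle.
function nextAction(verdict, auditRound, maxRounds = 2) {
  if (verdict === "approved") return "unblock-marshal";   // proceed to MARSHAL
  if (verdict === "concerns") return "log-and-proceed";   // non-blocking
  // rejected: revise while rounds remain, otherwise escalate to the user
  return auditRound < maxRounds ? "create-solve-fix" : "escalate";
}
```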
### Phase 5: Report + Handoff
```javascript
// Summarize results
const summary = {
mode,
issues_processed: issueIds.length,
solutions_approved: approvedCount,
builds_completed: completedBuilds,
builds_failed: failedBuilds
}
// Report to user
mcp__ccw-tools__team_msg({
operation: "log", team: "issue", from: "coordinator",
to: "user", type: "pipeline_update",
summary: `[coordinator] Pipeline complete: ${summary.issues_processed} issues processed`
})
// Ask for next action
AskUserQuestion({
questions: [{
question: "Issue processing complete. Next step:",
header: "Next",
multiSelect: false,
options: [
{ label: "New issue batch", description: "Submit new issue IDs to the current team" },
{ label: "View results", description: "Review implementation results and git changes" },
{ label: "Shut down team", description: "Shut down all teammates and clean up" }
]
}]
})
// New batch → back to Phase 1
// Shut down → shutdown → TeamDelete()
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No issue IDs provided | AskUserQuestion for IDs |
| Issue not found | Skip with warning, continue others |
| Worker unresponsive | Send follow-up, 2x → respawn |
| Review rejected 2+ times | Escalate to user (CP-5 L3) |
| Build failed | Retry once, then escalate |
| All workers error | Shutdown team, report to user |


@@ -0,0 +1,234 @@
# Role: explorer
Issue context analysis, code exploration, dependency identification, and impact assessment. Produces a shared context report for planner and reviewer.
## Role Identity
- **Name**: `explorer`
- **Task Prefix**: `EXPLORE-*`
- **Responsibility**: Context gathering
- **Communication**: SendMessage to coordinator only
- **Output Tag**: `[explorer]`
## Role Boundaries
### MUST
- Only handle tasks with the `EXPLORE-*` prefix
- All output must carry the `[explorer]` tag
- Communicate with the coordinator only, via SendMessage
- Produce a context report for downstream roles (planner, reviewer)
### MUST NOT
- ❌ Design solutions (planner's responsibility)
- ❌ Review solution quality (reviewer's responsibility)
- ❌ Modify any source code
- ❌ Communicate directly with other workers
- ❌ Create tasks for other roles
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `context_ready` | explorer → coordinator | Context analysis complete | Context report ready |
| `impact_assessed` | explorer → coordinator | Impact scope determined | Impact assessment complete |
| `error` | explorer → coordinator | Blocking error | Exploration could not be completed |
## Toolbox
### Subagent Capabilities
| Agent Type | Purpose |
|------------|---------|
| `cli-explore-agent` | Deep codebase exploration with module analysis |
### CLI Capabilities
| CLI Command | Purpose |
|-------------|---------|
| `ccw issue status <id> --json` | Load full issue details |
| `ccw tool exec get_modules_by_depth '{}'` | Get project module structure |
## Execution (5-Phase)
### Phase 1: Task Discovery
```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith('EXPLORE-') &&
t.owner === 'explorer' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```
### Phase 2: Issue Loading & Context Setup
```javascript
// Extract issue ID from task description
const issueIdMatch = task.description.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/)
const issueId = issueIdMatch ? issueIdMatch[0] : null
if (!issueId) {
// Report error
mcp__ccw-tools__team_msg({ operation: "log", team: "issue", from: "explorer", to: "coordinator", type: "error", summary: "[explorer] No issue ID found in task" })
SendMessage({ type: "message", recipient: "coordinator", content: "## [explorer] Error\nNo issue ID in task description", summary: "[explorer] error: no issue ID" })
return
}
// Load issue details
const issueJson = Bash(`ccw issue status ${issueId} --json`)
const issue = JSON.parse(issueJson)
```
### Phase 3: Codebase Exploration & Impact Analysis
```javascript
// Complexity assessment determines exploration depth
function assessComplexity(issue) {
let score = 0
if (/refactor|architect|restructure|module|system/i.test(issue.context)) score += 2
if (/multiple|across|cross/i.test(issue.context)) score += 2
if (/integrate|api|database/i.test(issue.context)) score += 1
if (issue.priority >= 4) score += 1
return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low'
}
const complexity = assessComplexity(issue)
if (complexity === 'Low') {
// Direct ACE search
const results = mcp__ace-tool__search_context({
project_root_path: projectRoot,
query: `${issue.title}. ${issue.context}. Keywords: ${issue.labels?.join(', ') || ''}`
})
// Build context from ACE results
} else {
// Deep exploration via cli-explore-agent
Task({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Explore context for ${issueId}`,
prompt: `
## Issue Context
ID: ${issueId}
Title: ${issue.title}
Description: ${issue.context}
Priority: ${issue.priority}
## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute ACE searches based on issue keywords
3. Read: .workflow/project-tech.json (if exists)
## Exploration Focus
- Identify files directly related to this issue
- Map dependencies and integration points
- Assess impact scope (how many modules/files affected)
- Find existing patterns relevant to the fix
- Check for previous related changes (git log)
## Output
Write findings to: .workflow/.team-plan/issue/context-${issueId}.json
Schema: {
issue_id, relevant_files[], dependencies[], impact_scope,
existing_patterns[], related_changes[], key_findings[],
complexity_assessment, _metadata
}
`
})
}
```
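The `assessComplexity` scoring above gates the exploration path, so it is worth exercising on its own. This repeats the document's function verbatim so it runs standalone (the sample issues are invented for illustration):

```javascript
// Complexity score: +2 for structural keywords, +2 for cross-cutting
// keywords, +1 for integration keywords, +1 for priority >= 4.
// >=4 → High (deep exploration), >=2 → Medium, else Low (direct ACE search).
function assessComplexity(issue) {
  let score = 0;
  if (/refactor|architect|restructure|module|system/i.test(issue.context)) score += 2;
  if (/multiple|across|cross/i.test(issue.context)) score += 2;
  if (/integrate|api|database/i.test(issue.context)) score += 1;
  if (issue.priority >= 4) score += 1;
  return score >= 4 ? 'High' : score >= 2 ? 'Medium' : 'Low';
}
```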
### Phase 4: Context Report Generation
```javascript
// Read exploration results
const contextPath = `.workflow/.team-plan/issue/context-${issueId}.json`
let contextReport
try {
contextReport = JSON.parse(Read(contextPath))
} catch {
// Build minimal report from ACE results
contextReport = {
issue_id: issueId,
relevant_files: [],
key_findings: [],
complexity_assessment: complexity
}
}
// Enrich with issue metadata
contextReport.issue = {
id: issue.id,
title: issue.title,
priority: issue.priority,
status: issue.status,
labels: issue.labels,
feedback: issue.feedback // Previous failure history
}
```
### Phase 5: Report to Coordinator
```javascript
mcp__ccw-tools__team_msg({
operation: "log",
team: "issue",
from: "explorer",
to: "coordinator",
type: "context_ready",
summary: `[explorer] Context ready for ${issueId}: ${contextReport.relevant_files?.length || 0} files, complexity=${complexity}`,
ref: contextPath
})
SendMessage({
type: "message",
recipient: "coordinator",
content: `## [explorer] Context Analysis Results
**Issue**: ${issueId} - ${issue.title}
**Complexity**: ${complexity}
**Files Identified**: ${contextReport.relevant_files?.length || 0}
**Impact Scope**: ${contextReport.impact_scope || 'unknown'}
### Key Findings
${(contextReport.key_findings || []).map(f => `- ${f}`).join('\n')}
### Context Report
Saved to: ${contextPath}`,
summary: `[explorer] EXPLORE complete: ${issueId}`
})
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task
const nextTasks = TaskList().filter(t =>
t.subject.startsWith('EXPLORE-') &&
t.owner === 'explorer' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
// Continue with next task → back to Phase 1
}
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No EXPLORE-* tasks available | Idle, wait for coordinator assignment |
| Issue ID not found in ccw | Notify coordinator with error |
| ACE search returns no results | Fallback to Glob/Grep, report limited context |
| cli-explore-agent failure | Retry once with simplified prompt, then report partial results |
| Context file write failure | Report via SendMessage with inline context |


@@ -0,0 +1,272 @@
# Role: implementer
Code implementation, test verification, and result submission. Internally invokes the code-developer agent to do the actual code writing.
## Role Identity
- **Name**: `implementer`
- **Task Prefix**: `BUILD-*`
- **Responsibility**: Code generation (implementation)
- **Communication**: SendMessage to coordinator only
- **Output Tag**: `[implementer]`
## Role Boundaries
### MUST
- Only handle tasks with the `BUILD-*` prefix
- All output must carry the `[implementer]` tag
- Execute the implementation according to the queued solution plan
- Notify the coordinator after each solution completes
### MUST NOT
- ❌ Modify the solution (planner's responsibility)
- ❌ Review other implementation results (reviewer's responsibility)
- ❌ Modify the execution queue (integrator's responsibility)
- ❌ Communicate directly with other workers
- ❌ Create tasks for other roles
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `impl_complete` | implementer → coordinator | Implementation and tests pass | Implementation complete |
| `impl_failed` | implementer → coordinator | Implementation failed after retries | Implementation failed |
| `error` | implementer → coordinator | Blocking error | Execution error |
## Toolbox
### Subagent Capabilities
| Agent Type | Purpose |
|------------|---------|
| `code-developer` | Pure code execution with test-driven development |
### Direct Capabilities
| Tool | Purpose |
|------|---------|
| `Read` | Read solution plans and queue files |
| `Write` | Write implementation artifacts |
| `Edit` | Edit source code |
| `Bash` | Run tests, git operations |
### CLI Capabilities
| CLI Command | Purpose |
|-------------|---------|
| `ccw issue status <id> --json` | Check issue status |
| `ccw issue solutions <id> --json` | Load the bound solution |
| `ccw issue update <id> --status in-progress` | Update issue status |
| `ccw issue update <id> --status resolved` | Mark the issue resolved |
## Execution (5-Phase)
### Phase 1: Task Discovery
```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith('BUILD-') &&
t.owner === 'implementer' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```
### Phase 2: Load Solution Plan
```javascript
// Extract issue ID from task description
const issueIdMatch = task.description.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/)
const issueId = issueIdMatch ? issueIdMatch[0] : null
if (!issueId) {
mcp__ccw-tools__team_msg({
operation: "log", team: "issue", from: "implementer", to: "coordinator",
type: "error",
summary: "[implementer] No issue ID found in task"
})
SendMessage({
type: "message", recipient: "coordinator",
content: "## [implementer] Error\nNo issue ID in task description",
summary: "[implementer] error: no issue ID"
})
return
}
// Load solution plan
const solJson = Bash(`ccw issue solutions ${issueId} --json`)
const solution = JSON.parse(solJson)
if (!solution.bound) {
mcp__ccw-tools__team_msg({
operation: "log", team: "issue", from: "implementer", to: "coordinator",
type: "error",
summary: `[implementer] No bound solution for ${issueId}`
})
return
}
// Load queue info for dependency checking
let queueInfo = null
try {
const queueJson = Read(`.workflow/issues/queue/execution-queue.json`)
const queue = JSON.parse(queueJson)
queueInfo = queue.queue?.find(q => q.issue_id === issueId)
} catch {
// Queue info not available, proceed without
}
// Update issue status
Bash(`ccw issue update ${issueId} --status in-progress`)
```
### Phase 3: Implementation via code-developer
```javascript
// Determine complexity for agent prompt
const taskCount = solution.bound.task_count || solution.bound.tasks?.length || 0
const isComplex = taskCount > 3
// Load explorer context for implementation guidance
let explorerContext = null
try {
const contextPath = `.workflow/.team-plan/issue/context-${issueId}.json`
explorerContext = JSON.parse(Read(contextPath))
} catch {
// No explorer context
}
// Invoke code-developer agent
const implResult = Task({
subagent_type: "code-developer",
run_in_background: false,
description: `Implement solution for ${issueId}`,
prompt: `
## Issue
ID: ${issueId}
Title: ${solution.bound.title || 'N/A'}
## Solution Plan
${JSON.stringify(solution.bound, null, 2)}
${explorerContext ? `
## Codebase Context (from explorer)
Relevant files: ${explorerContext.relevant_files?.map(f => f.path || f).slice(0, 10).join(', ')}
Existing patterns: ${explorerContext.existing_patterns?.join('; ') || 'N/A'}
Dependencies: ${explorerContext.dependencies?.join(', ') || 'N/A'}
` : ''}
## Implementation Requirements
1. Follow the solution plan tasks in order
2. Write clean, minimal code following existing patterns
3. Run tests after each significant change
4. Ensure all existing tests still pass
5. Do NOT over-engineer — implement exactly what the solution specifies
## Quality Checklist
- [ ] All solution tasks implemented
- [ ] No TypeScript/linting errors
- [ ] Existing tests pass
- [ ] New tests added where appropriate
- [ ] No security vulnerabilities introduced
`
})
```
### Phase 4: Verify & Commit
```javascript
// Verify implementation
const testResult = Bash(`npm test 2>&1 || echo "TEST_FAILED"`)
// String heuristic; adjust the patterns to the project's actual test runner output
const testPassed = !testResult.includes('TEST_FAILED') && !/\bFAIL\b/.test(testResult)
if (!testPassed) {
// Implementation failed — report to coordinator
mcp__ccw-tools__team_msg({
operation: "log", team: "issue", from: "implementer", to: "coordinator",
type: "impl_failed",
summary: `[implementer] Tests failing for ${issueId} after implementation`
})
SendMessage({
type: "message", recipient: "coordinator",
content: `## [implementer] Implementation Failed
**Issue**: ${issueId}
**Status**: Tests failing after implementation
**Test Output**: (truncated)
${testResult.slice(0, 500)}
**Action**: May need manual intervention or solution revision.`,
summary: `[implementer] impl_failed: ${issueId}`
})
TaskUpdate({ taskId: task.id, status: 'completed' })
return
}
// Update issue status to resolved
Bash(`ccw issue update ${issueId} --status resolved`)
```
### Phase 5: Report to Coordinator
```javascript
mcp__ccw-tools__team_msg({
operation: "log",
team: "issue",
from: "implementer",
to: "coordinator",
type: "impl_complete",
summary: `[implementer] Implementation complete for ${issueId}, tests passing`
})
SendMessage({
type: "message",
recipient: "coordinator",
content: `## [implementer] Implementation Complete
**Issue**: ${issueId}
**Solution**: ${solution.bound.id}
**Status**: All tests passing
**Issue Status**: Updated to resolved
### Summary
Implementation completed following the solution plan. All existing tests pass and issue has been marked as resolved.`,
summary: `[implementer] BUILD complete: ${issueId}`
})
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task (parallel BUILD tasks)
const nextTasks = TaskList().filter(t =>
t.subject.startsWith('BUILD-') &&
t.owner === 'implementer' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
// Continue with next task → back to Phase 1
}
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No BUILD-* tasks available | Idle, wait for coordinator |
| Solution plan not found | Report to coordinator with error |
| code-developer agent failure | Retry once, then report impl_failed |
| Tests failing after implementation | Report impl_failed with test output |
| Issue status update failure | Log warning, continue with report |
| Dependency not yet complete | Wait — task is blocked by blockedBy |
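The "retry once" policy in the table above can be sketched as a small wrapper. This is illustrative only: `invoke` stands in for the real `Task(...)` call, and the return shape is an assumption, not the actual tool API.

```javascript
// Minimal retry-once wrapper (sketch, not the real Task() API).
// invoke(attempt) should throw on failure and return a result on success.
function withRetry(invoke, retries = 1) {
  let lastError
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return { ok: true, result: invoke(attempt) }
    } catch (err) {
      lastError = err
    }
  }
  // Out of attempts; the caller then reports impl_failed to the coordinator
  return { ok: false, error: String(lastError) }
}
```

A role would check `ok` and, on `false`, send the `impl_failed` message instead of retrying further.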

# Role: integrator
Queue orchestration, conflict detection, and execution-order optimization. Internally invokes issue-queue-agent for intelligent queue formation.
## Role Identity
- **Name**: `integrator`
- **Task Prefix**: `MARSHAL-*`
- **Responsibility**: Orchestration (queue formation)
- **Communication**: SendMessage to coordinator only
- **Output Tag**: `[integrator]`
## Role Boundaries
### MUST
- Only process tasks with the `MARSHAL-*` prefix
- All output must carry the `[integrator]` tag
- Use issue-queue-agent for queue orchestration
- Ensure every issue has a bound solution before forming the queue
### MUST NOT
- ❌ Modify solutions (planner's responsibility)
- ❌ Review solution quality (reviewer's responsibility)
- ❌ Implement code (implementer's responsibility)
- ❌ Communicate directly with other workers
- ❌ Create tasks for other roles
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `queue_ready` | integrator → coordinator | Queue formed successfully | Queue ready for execution |
| `conflict_found` | integrator → coordinator | File conflicts detected, user input needed | Conflicts require a human decision |
| `error` | integrator → coordinator | Blocking error | Queue formation failed |
## Toolbox
### Subagent Capabilities
| Agent Type | Purpose |
|------------|---------|
| `issue-queue-agent` | Receives solutions from bound issues, uses Gemini for conflict detection, produces ordered execution queue |
### CLI Capabilities
| CLI Command | Purpose |
|-------------|---------|
| `ccw issue status <id> --json` | Load issue details |
| `ccw issue solutions <id> --json` | Verify bound solution |
| `ccw issue list --status planned --json` | List planned issues |
## Execution (5-Phase)
### Phase 1: Task Discovery
```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith('MARSHAL-') &&
t.owner === 'integrator' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```
### Phase 2: Collect Bound Solutions
```javascript
// Extract issue IDs from task description
const issueIds = task.description.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/g) || []
// Verify all issues have bound solutions
const unbound = []
const boundIssues = []
for (const issueId of issueIds) {
const solJson = Bash(`ccw issue solutions ${issueId} --json`)
const sol = JSON.parse(solJson)
if (sol.bound) {
boundIssues.push({ id: issueId, solution: sol.bound })
} else {
unbound.push(issueId)
}
}
if (unbound.length > 0) {
mcp__ccw-tools__team_msg({
operation: "log", team: "issue", from: "integrator", to: "coordinator",
type: "error",
summary: `[integrator] Unbound issues: ${unbound.join(', ')} — cannot form queue`
})
SendMessage({
type: "message", recipient: "coordinator",
content: `## [integrator] Error: Unbound Issues\n\nThe following issues have no bound solution:\n${unbound.map(id => `- ${id}`).join('\n')}\n\nPlanner must create solutions before queue formation.`,
summary: `[integrator] error: ${unbound.length} unbound issues`
})
return
}
```
### Phase 3: Queue Formation via issue-queue-agent
```javascript
// Invoke issue-queue-agent for intelligent queue formation
const agentResult = Task({
subagent_type: "issue-queue-agent",
run_in_background: false,
description: `Form queue for ${issueIds.length} issues`,
prompt: `
## Issues to Queue
Issue IDs: ${issueIds.join(', ')}
## Bound Solutions
${boundIssues.map(bi => `- ${bi.id}: Solution ${bi.solution.id} (${bi.solution.task_count} tasks)`).join('\n')}
## Instructions
1. Load all bound solutions from .workflow/issues/solutions/
2. Analyze file conflicts between solutions using Gemini CLI
3. Determine optimal execution order (DAG-based)
4. Produce ordered execution queue
## Expected Output
Write queue to: .workflow/issues/queue/execution-queue.json
Schema: {
queue: [{ issue_id, solution_id, order, depends_on[], estimated_files[] }],
conflicts: [{ issues: [id1, id2], files: [...], resolution }],
parallel_groups: [{ group: N, issues: [...] }]
}
`
})
// Parse queue result
const queuePath = `.workflow/issues/queue/execution-queue.json`
let queueResult
try {
queueResult = JSON.parse(Read(queuePath))
} catch {
queueResult = null
}
```
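For illustration, a minimal `execution-queue.json` matching the schema above might look like this (all IDs, file paths, and the `resolution` value are made up):

```json
{
  "queue": [
    { "issue_id": "GH-101", "solution_id": "SOL-1", "order": 1, "depends_on": [], "estimated_files": ["src/auth.ts"] },
    { "issue_id": "GH-102", "solution_id": "SOL-2", "order": 2, "depends_on": ["GH-101"], "estimated_files": ["src/auth.ts", "src/session.ts"] }
  ],
  "conflicts": [
    { "issues": ["GH-101", "GH-102"], "files": ["src/auth.ts"], "resolution": "sequential" }
  ],
  "parallel_groups": [
    { "group": 1, "issues": ["GH-101"] },
    { "group": 2, "issues": ["GH-102"] }
  ]
}
```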
### Phase 4: Conflict Resolution
```javascript
if (!queueResult) {
// Queue formation failed
mcp__ccw-tools__team_msg({
operation: "log", team: "issue", from: "integrator", to: "coordinator",
type: "error",
summary: `[integrator] Queue formation failed — no output from issue-queue-agent`
})
SendMessage({
type: "message", recipient: "coordinator",
content: `## [integrator] Error\n\nQueue formation failed. issue-queue-agent produced no output.`,
summary: `[integrator] error: queue formation failed`
})
return
}
// Check for unresolved conflicts
const unresolvedConflicts = (queueResult.conflicts || []).filter(c => c.resolution === 'unresolved')
if (unresolvedConflicts.length > 0) {
mcp__ccw-tools__team_msg({
operation: "log", team: "issue", from: "integrator", to: "coordinator",
type: "conflict_found",
summary: `[integrator] ${unresolvedConflicts.length} unresolved conflicts in queue`
})
SendMessage({
type: "message", recipient: "coordinator",
content: `## [integrator] Conflicts Found
**Unresolved Conflicts**: ${unresolvedConflicts.length}
${unresolvedConflicts.map((c, i) => `### Conflict ${i + 1}
- **Issues**: ${c.issues.join(' vs ')}
- **Files**: ${c.files.join(', ')}
- **Recommendation**: User decision needed — which issue takes priority`).join('\n\n')}
**Action Required**: Coordinator should present conflicts to user for resolution, then re-trigger MARSHAL.`,
summary: `[integrator] conflict_found: ${unresolvedConflicts.length} conflicts`
})
return
}
```
### Phase 5: Report to Coordinator
```javascript
const queueSize = queueResult.queue?.length || 0
const parallelGroups = queueResult.parallel_groups?.length || 1
mcp__ccw-tools__team_msg({
operation: "log",
team: "issue",
from: "integrator",
to: "coordinator",
type: "queue_ready",
summary: `[integrator] Queue ready: ${queueSize} items in ${parallelGroups} parallel groups`,
ref: queuePath
})
SendMessage({
type: "message",
recipient: "coordinator",
content: `## [integrator] Queue Ready
**Queue Size**: ${queueSize} items
**Parallel Groups**: ${parallelGroups}
**Resolved Conflicts**: ${(queueResult.conflicts || []).filter(c => c.resolution !== 'unresolved').length}
### Execution Order
${(queueResult.queue || []).map((q, i) => `${i + 1}. ${q.issue_id} (Solution: ${q.solution_id})${q.depends_on?.length ? ` — depends on: ${q.depends_on.join(', ')}` : ''}`).join('\n')}
### Parallel Groups
${(queueResult.parallel_groups || []).map(g => `- Group ${g.group}: ${g.issues.join(', ')}`).join('\n')}
**Queue File**: ${queuePath}
**Status**: Ready for BUILD phase`,
summary: `[integrator] MARSHAL complete: ${queueSize} items queued`
})
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task
const nextTasks = TaskList().filter(t =>
t.subject.startsWith('MARSHAL-') &&
t.owner === 'integrator' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
// Continue with next task → back to Phase 1
}
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No MARSHAL-* tasks available | Idle, wait for coordinator |
| Issues without bound solutions | Report to coordinator, block queue formation |
| issue-queue-agent failure | Retry once, then report error |
| Unresolved file conflicts | Escalate to coordinator for user decision (CP-5) |
| Single issue (no conflict possible) | Create trivial queue with one entry |
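The DAG-based ordering that issue-queue-agent is asked to produce can be sketched with Kahn's algorithm over the queue items. Field names follow the queue schema above; this sketch assumes `depends_on` only references issues present in the queue.

```javascript
// Kahn's algorithm over queue items; a cycle leaves items unordered.
function orderQueue(items) {
  // indegree = number of not-yet-satisfied dependencies per issue
  const indeg = new Map(items.map(i => [i.issue_id, (i.depends_on || []).length]))
  const ready = items.filter(i => indeg.get(i.issue_id) === 0).map(i => i.issue_id)
  const ordered = []
  while (ready.length > 0) {
    const id = ready.shift()
    ordered.push(id)
    for (const item of items) {
      if ((item.depends_on || []).includes(id)) {
        indeg.set(item.issue_id, indeg.get(item.issue_id) - 1)
        if (indeg.get(item.issue_id) === 0) ready.push(item.issue_id)
      }
    }
  }
  // cyclic=true means a dependency cycle: surface it as a conflict, not a queue
  return { ordered, cyclic: ordered.length !== items.length }
}
```

Issues emitted in the same "wave" of the algorithm are candidates for the same parallel group.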

# Role: planner
Solution design and task decomposition. Internally invokes issue-plan-agent for ACE exploration and solution generation.
## Role Identity
- **Name**: `planner`
- **Task Prefix**: `SOLVE-*`
- **Responsibility**: Orchestration (solution design)
- **Communication**: SendMessage to coordinator only
- **Output Tag**: `[planner]`
## Role Boundaries
### MUST
- Only process tasks with the `SOLVE-*` prefix
- All output must carry the `[planner]` tag
- Use issue-plan-agent for solution design
- Use the explorer's context report to enrich solution context
### MUST NOT
- ❌ Implement code (implementer's responsibility)
- ❌ Review solution quality (reviewer's responsibility)
- ❌ Orchestrate the execution queue (integrator's responsibility)
- ❌ Communicate directly with other workers
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `solution_ready` | planner → coordinator | Solution designed and bound | Single solution ready |
| `multi_solution` | planner → coordinator | Multiple solutions, needs selection | Multiple solutions awaiting selection |
| `error` | planner → coordinator | Blocking error | Solution design failed |
## Toolbox
### Subagent Capabilities
| Agent Type | Purpose |
|------------|---------|
| `issue-plan-agent` | Closed-loop planning: ACE exploration + solution generation + binding |
### CLI Capabilities
| CLI Command | Purpose |
|-------------|---------|
| `ccw issue status <id> --json` | Load issue details |
| `ccw issue bind <id> <sol-id>` | Bind solution to issue |
## Execution (5-Phase)
### Phase 1: Task Discovery
```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith('SOLVE-') &&
t.owner === 'planner' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```
### Phase 2: Context Loading
```javascript
// Extract issue ID
const issueIdMatch = task.description.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/)
const issueId = issueIdMatch ? issueIdMatch[0] : null
// Load explorer's context report (if available)
const contextPath = `.workflow/.team-plan/issue/context-${issueId}.json`
let explorerContext = null
try {
explorerContext = JSON.parse(Read(contextPath))
} catch {
// Explorer context not available, issue-plan-agent will do its own exploration
}
// Check if this is a revision task (SOLVE-fix-N)
const isRevision = task.subject.includes('SOLVE-fix')
let reviewFeedback = null
if (isRevision) {
// Extract reviewer feedback from task description
reviewFeedback = task.description
}
```
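The issue-ID regex above is repeated verbatim in several roles. As a sketch, it can be factored into a shared helper; the two ID formats (`GH-<n>` and `ISS-YYYYMMDD-HHMMSS`) are taken from the patterns already used in this document.

```javascript
// Issue IDs look like GH-123 or ISS-20260215-135150 (per the regexes above).
const ISSUE_ID_RE = /(?:GH-\d+|ISS-\d{8}-\d{6})/g

function extractIssueIds(text) {
  // Deduplicate while preserving first-seen order
  return [...new Set(text.match(ISSUE_ID_RE) || [])]
}
```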
### Phase 3: Solution Generation via issue-plan-agent
```javascript
// Invoke issue-plan-agent
const agentResult = Task({
subagent_type: "issue-plan-agent",
run_in_background: false,
description: `Plan solution for ${issueId}`,
prompt: `
issue_ids: ["${issueId}"]
project_root: "${projectRoot}"
${explorerContext ? `
## Explorer Context (pre-gathered)
Relevant files: ${explorerContext.relevant_files?.map(f => f.path || f).join(', ')}
Key findings: ${explorerContext.key_findings?.join('; ')}
Complexity: ${explorerContext.complexity_assessment}
` : ''}
${reviewFeedback ? `
## Revision Required
Previous solution was rejected by reviewer. Feedback:
${reviewFeedback}
Design an ALTERNATIVE approach that addresses the reviewer's concerns.
` : ''}
`
})
// Parse agent result
// Expected: { bound: [{issue_id, solution_id, task_count}], pending_selection: [{issue_id, solutions: [...]}] }
```
### Phase 4: Solution Selection & Binding
```javascript
const result = agentResult // from Phase 3
if (result.bound && result.bound.length > 0) {
// Single solution auto-bound
const bound = result.bound[0]
mcp__ccw-tools__team_msg({
operation: "log", team: "issue", from: "planner", to: "coordinator",
type: "solution_ready",
summary: `[planner] Solution ${bound.solution_id} bound to ${bound.issue_id} (${bound.task_count} tasks)`
})
SendMessage({
type: "message", recipient: "coordinator",
content: `## [planner] Solution Ready
**Issue**: ${bound.issue_id}
**Solution**: ${bound.solution_id}
**Tasks**: ${bound.task_count}
**Status**: Auto-bound (single solution)
Solution written to: .workflow/issues/solutions/${bound.issue_id}.jsonl`,
summary: `[planner] SOLVE complete: ${bound.issue_id}`
})
} else if (result.pending_selection && result.pending_selection.length > 0) {
// Multiple solutions need user selection
const pending = result.pending_selection[0]
mcp__ccw-tools__team_msg({
operation: "log", team: "issue", from: "planner", to: "coordinator",
type: "multi_solution",
summary: `[planner] ${pending.solutions.length} solutions for ${pending.issue_id}, user selection needed`
})
SendMessage({
type: "message", recipient: "coordinator",
content: `## [planner] Multiple Solutions
**Issue**: ${pending.issue_id}
**Solutions**: ${pending.solutions.length} options
${pending.solutions.map((s, i) => `### Option ${i + 1}: ${s.id}
${s.description}
Tasks: ${s.task_count}`).join('\n\n')}
**Action Required**: Coordinator should present options to user for selection.`,
summary: `[planner] multi_solution: ${pending.issue_id}`
})
}
```
### Phase 5: Report to Coordinator
```javascript
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task
const nextTasks = TaskList().filter(t =>
t.subject.startsWith('SOLVE-') &&
t.owner === 'planner' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
// Continue with next task → back to Phase 1
}
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No SOLVE-* tasks available | Idle, wait for coordinator |
| Issue not found | Notify coordinator with error |
| issue-plan-agent failure | Retry once, then report error |
| Explorer context missing | Proceed without — agent does its own exploration |
| Solution binding failure | Report to coordinator for manual binding |

# Role: reviewer
Solution review, technical feasibility validation, and risk assessment. **A new quality-gate role** that fills the gap where plan → execute previously ran with no review step.
## Role Identity
- **Name**: `reviewer`
- **Task Prefix**: `AUDIT-*`
- **Responsibility**: Read-only analysis (solution review)
- **Communication**: SendMessage to coordinator only
- **Output Tag**: `[reviewer]`
## Role Boundaries
### MUST
- Only process tasks with the `AUDIT-*` prefix
- All output must carry the `[reviewer]` tag
- Communicate with the coordinator only via SendMessage
- Use the explorer's context report to validate solution coverage
- Give an explicit approved / rejected / concerns verdict for every solution
### MUST NOT
- ❌ Modify solutions (planner's responsibility)
- ❌ Modify any source code
- ❌ Orchestrate the execution queue (integrator's responsibility)
- ❌ Communicate directly with other workers
- ❌ Create tasks for other roles
## Message Types
| Type | Direction | Trigger | Description |
|------|-----------|---------|-------------|
| `approved` | reviewer → coordinator | Solution passes all checks | Solution approved |
| `rejected` | reviewer → coordinator | Critical issues found | Solution rejected; revision needed |
| `concerns` | reviewer → coordinator | Minor issues noted | Concerns noted, non-blocking |
| `error` | reviewer → coordinator | Blocking error | Review failed |
## Toolbox
### Direct Capabilities
| Tool | Purpose |
|------|---------|
| `Read` | Read solution files and context reports |
| `Bash` | Run ccw issue commands to inspect issue/solution details |
| `Glob` | Find related files |
| `Grep` | Search code patterns |
| `mcp__ace-tool__search_context` | Semantic search to verify code referenced by the solution |
### CLI Capabilities
| CLI Command | Purpose |
|-------------|---------|
| `ccw issue status <id> --json` | Load issue details |
| `ccw issue solutions <id> --json` | View the bound solution |
## Review Criteria
### Technical Feasibility (weight 40%)
| Criterion | Check |
|-----------|-------|
| File Coverage | Does the solution cover all affected files? |
| Dependency Awareness | Are cascading effects of dependency changes considered? |
| API Compatibility | Is backward compatibility preserved? |
| Pattern Conformance | Does it follow existing code patterns? |
### Risk Assessment (weight 30%)
| Criterion | Check |
|-----------|-------|
| Scope Creep | Does the solution stay within the issue's boundaries? |
| Breaking Changes | Does it introduce breaking changes? |
| Side Effects | Are there unforeseen side effects? |
| Rollback Path | Can it be rolled back if something goes wrong? |
### Completeness (weight 30%)
| Criterion | Check |
|-----------|-------|
| All Tasks Defined | Is the task decomposition complete? |
| Test Coverage | Does it include a test plan? |
| Edge Cases | Are edge cases considered? |
| Documentation | Are key changes documented? |
### Verdict Rules
| Score | Verdict | Action |
|-------|---------|--------|
| ≥ 80% | `approved` | Proceed directly to the MARSHAL phase |
| 60-79% | `concerns` | Suggestions attached; does not block the pipeline |
| < 60% | `rejected` | Planner must revise the solution |
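The thresholds above reduce to a small mapping from an integer percentage score to a verdict, mirroring what Phase 3 computes:

```javascript
// Score → verdict mapping from the Verdict Rules table.
function verdictFor(score) {
  if (score >= 80) return 'approved'
  if (score >= 60) return 'concerns'
  return 'rejected'
}
```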
## Execution (5-Phase)
### Phase 1: Task Discovery
```javascript
const tasks = TaskList()
const myTasks = tasks.filter(t =>
t.subject.startsWith('AUDIT-') &&
t.owner === 'reviewer' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (myTasks.length === 0) return // idle
const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```
### Phase 2: Context & Solution Loading
```javascript
// Extract issue IDs from task description
const issueIds = task.description.match(/(?:GH-\d+|ISS-\d{8}-\d{6})/g) || []
// Load explorer context reports
const contexts = {}
for (const issueId of issueIds) {
const contextPath = `.workflow/.team-plan/issue/context-${issueId}.json`
try {
contexts[issueId] = JSON.parse(Read(contextPath))
} catch {
contexts[issueId] = null // No explorer context
}
}
// Load solution plans
const solutions = {}
for (const issueId of issueIds) {
const solJson = Bash(`ccw issue solutions ${issueId} --json`)
solutions[issueId] = JSON.parse(solJson)
}
```
### Phase 3: Multi-Dimensional Review
```javascript
const reviewResults = []
for (const issueId of issueIds) {
const context = contexts[issueId]
const solution = solutions[issueId]
if (!solution || !solution.bound) {
reviewResults.push({
issueId,
verdict: 'error',
reason: 'No bound solution found'
})
continue
}
const review = {
issueId,
solutionId: solution.bound.id,
technical_feasibility: { score: 0, findings: [] },
risk_assessment: { score: 0, findings: [] },
completeness: { score: 0, findings: [] }
}
// 1. Technical Feasibility — verify solution references real files
if (context && context.relevant_files) {
const solutionFiles = solution.bound.tasks?.flatMap(t => t.files || []) || []
const contextFiles = context.relevant_files.map(f => f.path || f)
const uncovered = contextFiles.filter(f => !solutionFiles.some(sf => sf.includes(f)))
if (uncovered.length === 0) {
review.technical_feasibility.score = 100
} else {
review.technical_feasibility.score = Math.max(40, 100 - uncovered.length * 15)
review.technical_feasibility.findings.push(
`Uncovered files: ${uncovered.join(', ')}`
)
}
} else {
review.technical_feasibility.score = 70 // No context to validate against
review.technical_feasibility.findings.push('Explorer context not available for cross-validation')
}
// 2. Risk Assessment — check for breaking changes, scope
const taskCount = solution.bound.task_count || solution.bound.tasks?.length || 0
if (taskCount > 10) {
review.risk_assessment.score = 50
review.risk_assessment.findings.push(`High task count (${taskCount}) indicates possible scope creep`)
} else {
review.risk_assessment.score = 90
}
// 3. Completeness — check task definitions
if (taskCount > 0) {
review.completeness.score = 85
} else {
review.completeness.score = 30
review.completeness.findings.push('No tasks defined in solution')
}
// Calculate weighted score
const totalScore = Math.round(
review.technical_feasibility.score * 0.4 +
review.risk_assessment.score * 0.3 +
review.completeness.score * 0.3
)
// Determine verdict
let verdict
if (totalScore >= 80) verdict = 'approved'
else if (totalScore >= 60) verdict = 'concerns'
else verdict = 'rejected'
review.total_score = totalScore
review.verdict = verdict
reviewResults.push(review)
}
```
### Phase 4: Compile Review Report
```javascript
// Determine overall verdict
const hasRejected = reviewResults.some(r => r.verdict === 'rejected')
const hasConcerns = reviewResults.some(r => r.verdict === 'concerns')
const overallVerdict = hasRejected ? 'rejected' : hasConcerns ? 'concerns' : 'approved'
// Build feedback for rejected solutions
const rejectedFeedback = reviewResults
.filter(r => r.verdict === 'rejected')
.map(r => `### ${r.issueId} (Score: ${r.total_score}%)
${r.technical_feasibility.findings.map(f => `- [Technical] ${f}`).join('\n')}
${r.risk_assessment.findings.map(f => `- [Risk] ${f}`).join('\n')}
${r.completeness.findings.map(f => `- [Completeness] ${f}`).join('\n')}`)
.join('\n\n')
// Write review report
const reportPath = `.workflow/.team-plan/issue/audit-report.json`
Write(reportPath, JSON.stringify({
timestamp: new Date().toISOString(),
overall_verdict: overallVerdict,
reviews: reviewResults
}, null, 2))
```
### Phase 5: Report to Coordinator
```javascript
// Choose message type based on verdict
const msgType = overallVerdict // 'approved' | 'rejected' | 'concerns'
mcp__ccw-tools__team_msg({
operation: "log",
team: "issue",
from: "reviewer",
to: "coordinator",
type: msgType,
summary: `[reviewer] ${overallVerdict.toUpperCase()}: ${reviewResults.length} solutions reviewed, score avg=${Math.round(reviewResults.reduce((a,r) => a + (r.total_score || 0), 0) / reviewResults.length)}%`,
ref: reportPath
})
SendMessage({
type: "message",
recipient: "coordinator",
content: `## [reviewer] Audit Results — ${overallVerdict.toUpperCase()}
**Overall**: ${overallVerdict}
**Solutions Reviewed**: ${reviewResults.length}
${reviewResults.map(r => `### ${r.issueId}${r.verdict} (${r.total_score}%)
- Technical: ${r.technical_feasibility.score}%
- Risk: ${r.risk_assessment.score}%
- Completeness: ${r.completeness.score}%
${r.verdict === 'rejected' ? `\n**Rejection Reasons**:\n${[...r.technical_feasibility.findings, ...r.risk_assessment.findings, ...r.completeness.findings].map(f => '- ' + f).join('\n')}` : ''}`).join('\n\n')}
${overallVerdict === 'rejected' ? `\n**Action Required**: Coordinator should create SOLVE-fix task for planner to revise rejected solutions.` : ''}
**Report**: ${reportPath}`,
summary: `[reviewer] AUDIT ${overallVerdict}: ${reviewResults.length} solutions`
})
TaskUpdate({ taskId: task.id, status: 'completed' })
// Check for next task
const nextTasks = TaskList().filter(t =>
t.subject.startsWith('AUDIT-') &&
t.owner === 'reviewer' &&
t.status === 'pending' &&
t.blockedBy.length === 0
)
if (nextTasks.length > 0) {
// Continue with next task → back to Phase 1
}
```
## Error Handling
| Scenario | Resolution |
|----------|------------|
| No AUDIT-* tasks available | Idle, wait for coordinator |
| Solution file not found | Check ccw issue solutions, report error if missing |
| Explorer context missing | Proceed with limited review (lower technical score) |
| All solutions rejected | Report to coordinator for CP-2 review-fix cycle |
| Review timeout | Report partial results with available data |

{
"team_name": "issue",
"team_display_name": "Issue Resolution",
"description": "Unified team skill for issue processing pipeline (plan → queue → execute). Issue creation handled by issue-discover, CRUD by issue-manage.",
"version": "1.0.0",
"roles": {
"coordinator": {
"task_prefix": null,
"responsibility": "Pipeline orchestration, mode selection, task chain creation, progress monitoring",
"message_types": ["task_assigned", "pipeline_update", "escalation", "shutdown", "error"]
},
"explorer": {
"task_prefix": "EXPLORE",
"responsibility": "Issue context analysis, codebase exploration, dependency identification, impact assessment",
"message_types": ["context_ready", "impact_assessed", "error"],
"reuses_agent": "cli-explore-agent"
},
"planner": {
"task_prefix": "SOLVE",
"responsibility": "Solution design, task decomposition via issue-plan-agent",
"message_types": ["solution_ready", "multi_solution", "error"],
"reuses_agent": "issue-plan-agent"
},
"reviewer": {
"task_prefix": "AUDIT",
"responsibility": "Solution review, technical feasibility validation, risk assessment",
"message_types": ["approved", "rejected", "concerns", "error"],
"reuses_agent": null
},
"integrator": {
"task_prefix": "MARSHAL",
"responsibility": "Queue formation, conflict detection, execution order optimization via issue-queue-agent",
"message_types": ["queue_ready", "conflict_found", "error"],
"reuses_agent": "issue-queue-agent"
},
"implementer": {
"task_prefix": "BUILD",
"responsibility": "Code implementation, test verification, result submission via code-developer",
"message_types": ["impl_complete", "impl_failed", "error"],
"reuses_agent": "code-developer"
}
},
"pipelines": {
"quick": {
"description": "Simple issues, skip review",
"task_chain": ["EXPLORE-001", "SOLVE-001", "MARSHAL-001", "BUILD-001"]
},
"full": {
"description": "Complex issues, with CP-2 review cycle (max 2 rounds)",
"task_chain": ["EXPLORE-001", "SOLVE-001", "AUDIT-001", "MARSHAL-001", "BUILD-001"]
},
"batch": {
"description": "Batch processing 5-100 issues with parallel exploration and execution",
"task_chain": "EXPLORE-001..N(batch≤5) → SOLVE-001..N(batch≤3) → AUDIT-001 → MARSHAL-001 → BUILD-001..M(DAG parallel)"
}
},
"collaboration_patterns": ["CP-1", "CP-2", "CP-3", "CP-5"],
"session_dirs": {
"base": ".workflow/.team-plan/issue/",
"context": ".workflow/.team-plan/issue/context-{issueId}.json",
"audit": ".workflow/.team-plan/issue/audit-report.json",
"queue": ".workflow/issues/queue/execution-queue.json",
"solutions": ".workflow/issues/solutions/",
"messages": ".workflow/.team-msg/issue/"
}
}

Full-lifecycle:
```
[Spec pipeline] → PLAN-001(blockedBy: DISCUSS-006) → IMPL-001 → TEST-001 + REVIEW-001
```
## Unified Session Directory
All session artifacts are stored under a single session folder:
```
.workflow/.team/TLS-{slug}-{YYYY-MM-DD}/
├── team-session.json # Session state (status, progress, completed_tasks)
├── spec/ # Spec artifacts (analyst, writer, reviewer output)
│ ├── spec-config.json
│ ├── discovery-context.json
│ ├── product-brief.md
│ ├── requirements/ # _index.md + REQ-*.md + NFR-*.md
│ ├── architecture/ # _index.md + ADR-*.md
│ ├── epics/ # _index.md + EPIC-*.md
│ ├── readiness-report.md
│ └── spec-summary.md
├── discussions/ # Discussion records (discussant output)
│ └── discuss-001..006.md
└── plan/ # Plan artifacts (planner output)
├── exploration-{angle}.json
├── explorations-manifest.json
├── plan.json
└── .task/
└── TASK-*.json
```
Messages remain at `.workflow/.team-msg/{team-name}/` (unchanged).
## Session Resume
Coordinator supports `--resume` / `--continue` flags to resume interrupted sessions:
1. Scans `.workflow/.team/TLS-*/team-session.json` for `status: "active"` or `"paused"`
2. Multiple matches → `AskUserQuestion` for user selection
3. Loads session state: `teamName`, `mode`, `sessionFolder`, `completed_tasks`
4. Rebuilds team (`TeamCreate` + worker spawns)
5. Creates only uncompleted tasks in the task chain
6. Jumps to Phase 4 coordination loop
## Coordinator Spawn Template
When coordinator creates teammates:
@@ -237,6 +276,7 @@ Task({
When you receive a PLAN-* task, invoke Skill(skill="team-lifecycle", args="--role=planner") to execute it.
Current requirement: ${taskDescription}
Constraints: ${constraints}
Session: ${sessionFolder}
## Message Bus (required)
Before every SendMessage, first call mcp__ccw-tools__team_msg to log the message.

Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log:
```javascript
// Research complete
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "analyst", to: "coordinator", type: "research_ready", summary: "Research done: 5 exploration dimensions", ref: `${sessionFolder}/discovery-context.json` })
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "analyst", to: "coordinator", type: "research_ready", summary: "Research done: 5 exploration dimensions", ref: `${sessionFolder}/spec/discovery-context.json` })
// Error report
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "analyst", to: "coordinator", type: "error", summary: "Codebase access failed" })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```javascript
// Extract session folder from task description
const sessionMatch = task.description.match(/Session:\s*(.+)/)
const sessionFolder = sessionMatch ? sessionMatch[1].trim() : '.workflow/.spec-team/default'
const sessionFolder = sessionMatch ? sessionMatch[1].trim() : '.workflow/.team/default'
// Parse topic from task description
const topicLines = task.description.split('\n').filter(l => !l.startsWith('Session:') && !l.startsWith('输出:') && l.trim())
const specConfig = {
session_folder: sessionFolder,
discussion_depth: task.description.match(/讨论深度:\s*(.+)/)?.[1] || "standard"
}
Write(`${sessionFolder}/spec-config.json`, JSON.stringify(specConfig, null, 2))
Write(`${sessionFolder}/spec/spec-config.json`, JSON.stringify(specConfig, null, 2))
// Generate discovery-context.json
const discoveryContext = {
@@ -155,7 +155,7 @@ const discoveryContext = {
codebase_context: codebaseContext,
recommendations: { focus_areas: [], risks: [], open_questions: [] }
}
Write(`${sessionFolder}/discovery-context.json`, JSON.stringify(discoveryContext, null, 2))
Write(`${sessionFolder}/spec/discovery-context.json`, JSON.stringify(discoveryContext, null, 2))
```
### Phase 5: Report to Coordinator
@@ -191,8 +191,8 @@ ${(discoveryContext.seed_analysis.target_users || []).map(u => '- ' + u).join('\
${(discoveryContext.seed_analysis.exploration_dimensions || []).map((d, i) => (i+1) + '. ' + d).join('\n')}
### 输出位置
- Config: ${sessionFolder}/spec-config.json
- Context: ${sessionFolder}/discovery-context.json
- Config: ${sessionFolder}/spec/spec-config.json
- Context: ${sessionFolder}/spec/discovery-context.json
研究已就绪,可进入讨论轮次 DISCUSS-001。`,
summary: `研究就绪: ${dimensionCount}维度, ${specConfig.complexity}`

View File

@@ -22,6 +22,74 @@ Team lifecycle coordinator. Orchestrates the full pipeline across three modes: s
## Execution
### Phase 0: Session Resume Check
Before any new session setup, check if resuming an existing session:
```javascript
const args = "$ARGUMENTS"
const isResume = /--resume|--continue/.test(args)
if (isResume) {
// Scan for active/paused sessions
const sessionDirs = Glob({ pattern: '.workflow/.team/TLS-*/team-session.json' })
const resumable = sessionDirs.map(f => {
try {
const session = JSON.parse(Read(f))
if (session.status === 'active' || session.status === 'paused') return session
} catch {}
return null
}).filter(Boolean)
if (resumable.length === 0) {
// No resumable sessions → fall through to Phase 1
} else if (resumable.length === 1) {
var resumedSession = resumable[0]
} else {
// Multiple matches → user selects
AskUserQuestion({
questions: [{
question: "检测到多个可恢复的会话,请选择:",
header: "Resume",
multiSelect: false,
options: resumable.slice(0, 4).map(s => ({
label: s.session_id,
description: `${s.topic} (${s.current_phase}, ${s.status})`
}))
}]
})
var resumedSession = resumable.find(s => s.session_id === userChoice)
}
if (resumedSession) {
// Restore session state
const teamName = resumedSession.team_name
const mode = resumedSession.mode
const sessionFolder = `.workflow/.team/${resumedSession.session_id}`
const taskDescription = resumedSession.topic
// Rebuild team
TeamCreate({ team_name: teamName })
// Spawn workers based on mode (see Phase 2)
// Update session status
resumedSession.status = 'active'
resumedSession.resumed_at = new Date().toISOString()
resumedSession.updated_at = new Date().toISOString()
Write(`${sessionFolder}/team-session.json`, JSON.stringify(resumedSession, null, 2))
// Create only uncompleted tasks from pipeline
const completedTasks = new Set(resumedSession.completed_tasks || [])
const pipeline = resumedSession.mode === 'spec-only' ? SPEC_CHAIN
: resumedSession.mode === 'impl-only' ? IMPL_CHAIN
: [...SPEC_CHAIN, ...IMPL_CHAIN]
// SPEC_CHAIN = ordered spec task prefixes (RESEARCH-001, DISCUSS-001..006, DRAFT-001..004, QUALITY-001)
// IMPL_CHAIN = ordered impl task prefixes (PLAN-001, IMPL-001, TEST-001, REVIEW-001)
const remainingTasks = pipeline.filter(t => !completedTasks.has(t))
// → Skip to Phase 3 with remainingTasks, then Phase 4 coordination loop
}
}
```
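The resume detection above reduces to two small pure steps — filter resumable sessions, then pick one. A minimal runnable sketch (the session objects are illustrative, not the full `team-session.json` schema):

```javascript
// Keep only sessions that can be resumed (status 'active' or 'paused').
function filterResumable(sessions) {
  return sessions.filter(s => s && (s.status === 'active' || s.status === 'paused'))
}

// 0 matches → fall through to Phase 1; 1 match → resume it;
// many matches → resolve via the user's AskUserQuestion choice.
function pickResumed(resumable, userChoice) {
  if (resumable.length === 0) return null
  if (resumable.length === 1) return resumable[0]
  return resumable.find(s => s.session_id === userChoice) || null
}

const sessions = [
  { session_id: 'TLS-auth-2026-02-01', status: 'completed' },
  { session_id: 'TLS-sso-2026-02-10', status: 'paused' },
  { session_id: 'TLS-api-2026-02-12', status: 'active' }
]
const resumable = filterResumable(sessions)
const resumed = pickResumed(resumable, 'TLS-api-2026-02-12')
```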
### Phase 1: Requirement Clarification
Parse `$ARGUMENTS` to extract `--team-name` and task description.
@@ -30,7 +98,7 @@ Parse `$ARGUMENTS` to extract `--team-name` and task description.
const args = "$ARGUMENTS"
const teamNameMatch = args.match(/--team-name[=\s]+([\w-]+)/)
const teamName = teamNameMatch ? teamNameMatch[1] : `lifecycle-${Date.now().toString(36)}`
const taskDescription = args.replace(/--team-name[=\s]+[\w-]+/, '').replace(/--role[=\s]+\w+/, '').trim()
const taskDescription = args.replace(/--team-name[=\s]+[\w-]+/, '').replace(/--role[=\s]+\w+/, '').replace(/--resume|--continue/g, '').trim()
```
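The same parsing can be expressed as a testable function (flag names match the doc; the fallback team name is generated the same way):

```javascript
// Extract --team-name, strip control flags, and treat the remainder
// as the free-text task description.
function parseArgs(args) {
  const teamNameMatch = args.match(/--team-name[=\s]+([\w-]+)/)
  const teamName = teamNameMatch ? teamNameMatch[1] : `lifecycle-${Date.now().toString(36)}`
  const taskDescription = args
    .replace(/--team-name[=\s]+[\w-]+/, '')
    .replace(/--role[=\s]+\w+/, '')
    .replace(/--resume|--continue/g, '')   // global flag so repeated flags are also stripped
    .trim()
  return { teamName, taskDescription }
}

const parsed = parseArgs('--team-name=alpha --resume Fix login redirect loop')
```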
Use AskUserQuestion to collect mode and constraints:
@@ -97,18 +165,42 @@ Simple tasks can skip clarification.
```javascript
TeamCreate({ team_name: teamName })
// Session setup
// Unified session setup
const topicSlug = taskDescription.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const dateStr = new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().substring(0, 10)
const specSessionFolder = `.workflow/.spec-team/${topicSlug}-${dateStr}`
const implSessionFolder = `.workflow/.team-plan/${topicSlug}-${dateStr}`
const sessionId = `TLS-${topicSlug}-${dateStr}`
const sessionFolder = `.workflow/.team/${sessionId}`
// Create unified directory structure
if (mode === 'spec-only' || mode === 'full-lifecycle') {
Bash(`mkdir -p ${specSessionFolder}/discussions`)
Bash(`mkdir -p "${sessionFolder}/spec" "${sessionFolder}/discussions"`)
}
if (mode === 'impl-only' || mode === 'full-lifecycle') {
Bash(`mkdir -p ${implSessionFolder}`)
Bash(`mkdir -p "${sessionFolder}/plan"`)
}
// Create team-session.json
const teamSession = {
session_id: sessionId,
team_name: teamName,
topic: taskDescription,
mode: mode,
status: "active",
created_at: new Date().toISOString(),
updated_at: new Date().toISOString(),
paused_at: null,
resumed_at: null,
completed_at: null,
current_phase: mode === 'impl-only' ? 'plan' : 'spec',
completed_tasks: [],
pipeline_progress: {
spec: mode !== 'impl-only' ? { total: 12, completed: 0 } : null,
impl: mode !== 'spec-only' ? { total: 4, completed: 0 } : null
},
user_preferences: { scope: scope || '', focus: focus || '', discussion_depth: discussionDepth || '' },
messages_team: teamName
}
Write(`${sessionFolder}/team-session.json`, JSON.stringify(teamSession, null, 2))
```
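The session-id derivation is deterministic given the topic and date; a runnable sketch (keeping the UTC+8 offset used above):

```javascript
// Derive TLS-{topic-slug}-{YYYY-MM-DD} from the task description.
function makeSessionId(taskDescription, now = Date.now()) {
  const topicSlug = taskDescription.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
  const dateStr = new Date(now + 8 * 60 * 60 * 1000).toISOString().substring(0, 10)  // UTC+8 date
  return `TLS-${topicSlug}-${dateStr}`
}

const id = makeSessionId('Add OAuth2 login flow', Date.UTC(2026, 1, 15))
// id → 'TLS-add-oauth2-login-flow-2026-02-15'
```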
**Conditional spawn based on mode** (see SKILL.md Coordinator Spawn Template for full prompts):
@@ -129,51 +221,51 @@ Task chain creation depends on the selected mode.
```javascript
// RESEARCH Phase
TaskCreate({ subject: "RESEARCH-001: 主题发现与上下文研究", description: `${taskDescription}\n\nSession: ${specSessionFolder}\n输出: ${specSessionFolder}/spec-config.json + discovery-context.json`, activeForm: "研究中" })
TaskCreate({ subject: "RESEARCH-001: 主题发现与上下文研究", description: `${taskDescription}\n\nSession: ${sessionFolder}\n输出: ${sessionFolder}/spec/spec-config.json + spec/discovery-context.json`, activeForm: "研究中" })
TaskUpdate({ taskId: researchId, owner: "analyst" })
// DISCUSS-001: 范围讨论 (blockedBy RESEARCH-001)
TaskCreate({ subject: "DISCUSS-001: 研究结果讨论 - 范围确认与方向调整", description: `讨论 RESEARCH-001 的发现结果\n\nSession: ${specSessionFolder}\n输入: ${specSessionFolder}/discovery-context.json\n输出: ${specSessionFolder}/discussions/discuss-001-scope.md\n\n讨论维度: 范围确认、方向调整、风险预判、探索缺口`, activeForm: "讨论范围中" })
TaskCreate({ subject: "DISCUSS-001: 研究结果讨论 - 范围确认与方向调整", description: `讨论 RESEARCH-001 的发现结果\n\nSession: ${sessionFolder}\n输入: ${sessionFolder}/spec/discovery-context.json\n输出: ${sessionFolder}/discussions/discuss-001-scope.md\n\n讨论维度: 范围确认、方向调整、风险预判、探索缺口`, activeForm: "讨论范围中" })
TaskUpdate({ taskId: discuss1Id, owner: "discussant", addBlockedBy: [researchId] })
// DRAFT-001: Product Brief (blockedBy DISCUSS-001)
TaskCreate({ subject: "DRAFT-001: 撰写 Product Brief", description: `基于研究和讨论共识撰写产品简报\n\nSession: ${specSessionFolder}\n输入: discovery-context.json + discuss-001-scope.md\n输出: ${specSessionFolder}/product-brief.md\n\n使用多视角分析: 产品/技术/用户`, activeForm: "撰写 Brief 中" })
TaskCreate({ subject: "DRAFT-001: 撰写 Product Brief", description: `基于研究和讨论共识撰写产品简报\n\nSession: ${sessionFolder}\n输入: discovery-context.json + discuss-001-scope.md\n输出: ${sessionFolder}/product-brief.md\n\n使用多视角分析: 产品/技术/用户`, activeForm: "撰写 Brief 中" })
TaskUpdate({ taskId: draft1Id, owner: "writer", addBlockedBy: [discuss1Id] })
// DISCUSS-002: Brief 评审 (blockedBy DRAFT-001)
TaskCreate({ subject: "DISCUSS-002: Product Brief 多视角评审", description: `评审 Product Brief 文档\n\nSession: ${specSessionFolder}\n输入: ${specSessionFolder}/product-brief.md\n输出: ${specSessionFolder}/discussions/discuss-002-brief.md\n\n讨论维度: 产品定位、目标用户、成功指标、竞品差异`, activeForm: "评审 Brief 中" })
TaskCreate({ subject: "DISCUSS-002: Product Brief 多视角评审", description: `评审 Product Brief 文档\n\nSession: ${sessionFolder}\n输入: ${sessionFolder}/spec/product-brief.md\n输出: ${sessionFolder}/discussions/discuss-002-brief.md\n\n讨论维度: 产品定位、目标用户、成功指标、竞品差异`, activeForm: "评审 Brief 中" })
TaskUpdate({ taskId: discuss2Id, owner: "discussant", addBlockedBy: [draft1Id] })
// DRAFT-002: Requirements/PRD (blockedBy DISCUSS-002)
TaskCreate({ subject: "DRAFT-002: 撰写 Requirements/PRD", description: `基于 Brief 和讨论反馈撰写需求文档\n\nSession: ${specSessionFolder}\n输入: product-brief.md + discuss-002-brief.md\n输出: ${specSessionFolder}/requirements/\n\n包含: 功能需求(REQ-*) + 非功能需求(NFR-*) + MoSCoW 优先级`, activeForm: "撰写 PRD 中" })
TaskCreate({ subject: "DRAFT-002: 撰写 Requirements/PRD", description: `基于 Brief 和讨论反馈撰写需求文档\n\nSession: ${sessionFolder}\n输入: product-brief.md + discuss-002-brief.md\n输出: ${sessionFolder}/requirements/\n\n包含: 功能需求(REQ-*) + 非功能需求(NFR-*) + MoSCoW 优先级`, activeForm: "撰写 PRD 中" })
TaskUpdate({ taskId: draft2Id, owner: "writer", addBlockedBy: [discuss2Id] })
// DISCUSS-003: 需求完整性 (blockedBy DRAFT-002)
TaskCreate({ subject: "DISCUSS-003: 需求完整性与优先级讨论", description: `讨论 PRD 需求完整性\n\nSession: ${specSessionFolder}\n输入: ${specSessionFolder}/requirements/_index.md\n输出: ${specSessionFolder}/discussions/discuss-003-requirements.md\n\n讨论维度: 需求遗漏、MoSCoW合理性、验收标准可测性、非功能需求充分性`, activeForm: "讨论需求中" })
TaskCreate({ subject: "DISCUSS-003: 需求完整性与优先级讨论", description: `讨论 PRD 需求完整性\n\nSession: ${sessionFolder}\n输入: ${sessionFolder}/spec/requirements/_index.md\n输出: ${sessionFolder}/discussions/discuss-003-requirements.md\n\n讨论维度: 需求遗漏、MoSCoW合理性、验收标准可测性、非功能需求充分性`, activeForm: "讨论需求中" })
TaskUpdate({ taskId: discuss3Id, owner: "discussant", addBlockedBy: [draft2Id] })
// DRAFT-003: Architecture (blockedBy DISCUSS-003)
TaskCreate({ subject: "DRAFT-003: 撰写 Architecture Document", description: `基于需求和讨论反馈撰写架构文档\n\nSession: ${specSessionFolder}\n输入: requirements/ + discuss-003-requirements.md\n输出: ${specSessionFolder}/architecture/\n\n包含: 架构风格 + 组件图 + 技术选型 + ADR-* + 数据模型`, activeForm: "撰写架构中" })
TaskCreate({ subject: "DRAFT-003: 撰写 Architecture Document", description: `基于需求和讨论反馈撰写架构文档\n\nSession: ${sessionFolder}\n输入: requirements/ + discuss-003-requirements.md\n输出: ${sessionFolder}/architecture/\n\n包含: 架构风格 + 组件图 + 技术选型 + ADR-* + 数据模型`, activeForm: "撰写架构中" })
TaskUpdate({ taskId: draft3Id, owner: "writer", addBlockedBy: [discuss3Id] })
// DISCUSS-004: 技术可行性 (blockedBy DRAFT-003)
TaskCreate({ subject: "DISCUSS-004: 架构决策与技术可行性讨论", description: `讨论架构设计合理性\n\nSession: ${specSessionFolder}\n输入: ${specSessionFolder}/architecture/_index.md\n输出: ${specSessionFolder}/discussions/discuss-004-architecture.md\n\n讨论维度: 技术选型风险、可扩展性、安全架构、ADR替代方案`, activeForm: "讨论架构中" })
TaskCreate({ subject: "DISCUSS-004: 架构决策与技术可行性讨论", description: `讨论架构设计合理性\n\nSession: ${sessionFolder}\n输入: ${sessionFolder}/spec/architecture/_index.md\n输出: ${sessionFolder}/discussions/discuss-004-architecture.md\n\n讨论维度: 技术选型风险、可扩展性、安全架构、ADR替代方案`, activeForm: "讨论架构中" })
TaskUpdate({ taskId: discuss4Id, owner: "discussant", addBlockedBy: [draft3Id] })
// DRAFT-004: Epics & Stories (blockedBy DISCUSS-004)
TaskCreate({ subject: "DRAFT-004: 撰写 Epics & Stories", description: `基于架构和讨论反馈撰写史诗和用户故事\n\nSession: ${specSessionFolder}\n输入: architecture/ + discuss-004-architecture.md\n输出: ${specSessionFolder}/epics/\n\n包含: EPIC-* + STORY-* + 依赖图 + MVP定义 + 执行顺序`, activeForm: "撰写 Epics 中" })
TaskCreate({ subject: "DRAFT-004: 撰写 Epics & Stories", description: `基于架构和讨论反馈撰写史诗和用户故事\n\nSession: ${sessionFolder}\n输入: architecture/ + discuss-004-architecture.md\n输出: ${sessionFolder}/epics/\n\n包含: EPIC-* + STORY-* + 依赖图 + MVP定义 + 执行顺序`, activeForm: "撰写 Epics 中" })
TaskUpdate({ taskId: draft4Id, owner: "writer", addBlockedBy: [discuss4Id] })
// DISCUSS-005: 执行就绪 (blockedBy DRAFT-004)
TaskCreate({ subject: "DISCUSS-005: 执行计划与MVP范围讨论", description: `讨论执行计划就绪性\n\nSession: ${specSessionFolder}\n输入: ${specSessionFolder}/epics/_index.md\n输出: ${specSessionFolder}/discussions/discuss-005-epics.md\n\n讨论维度: Epic粒度、故事估算、MVP范围、执行顺序、依赖风险`, activeForm: "讨论执行计划中" })
TaskCreate({ subject: "DISCUSS-005: 执行计划与MVP范围讨论", description: `讨论执行计划就绪性\n\nSession: ${sessionFolder}\n输入: ${sessionFolder}/spec/epics/_index.md\n输出: ${sessionFolder}/discussions/discuss-005-epics.md\n\n讨论维度: Epic粒度、故事估算、MVP范围、执行顺序、依赖风险`, activeForm: "讨论执行计划中" })
TaskUpdate({ taskId: discuss5Id, owner: "discussant", addBlockedBy: [draft4Id] })
// QUALITY-001: Readiness Check (blockedBy DISCUSS-005)
TaskCreate({ subject: "QUALITY-001: 规格就绪度检查", description: `全文档交叉验证和质量评分\n\nSession: ${specSessionFolder}\n输入: 全部文档\n输出: ${specSessionFolder}/readiness-report.md + spec-summary.md\n\n评分维度: 完整性(20%) + 一致性(20%) + 可追溯性(20%) + 深度(20%) + 需求覆盖率(20%)`, activeForm: "质量检查中" })
TaskCreate({ subject: "QUALITY-001: 规格就绪度检查", description: `全文档交叉验证和质量评分\n\nSession: ${sessionFolder}\n输入: 全部文档\n输出: ${sessionFolder}/spec/readiness-report.md + spec/spec-summary.md\n\n评分维度: 完整性(20%) + 一致性(20%) + 可追溯性(20%) + 深度(20%) + 需求覆盖率(20%)`, activeForm: "质量检查中" })
TaskUpdate({ taskId: qualityId, owner: "reviewer", addBlockedBy: [discuss5Id] })
// DISCUSS-006: 最终签收 (blockedBy QUALITY-001)
TaskCreate({ subject: "DISCUSS-006: 最终签收与交付确认", description: `最终讨论和签收\n\nSession: ${specSessionFolder}\n输入: ${specSessionFolder}/readiness-report.md\n输出: ${specSessionFolder}/discussions/discuss-006-final.md\n\n讨论维度: 质量报告审查、遗留问题处理、交付确认、下一步建议`, activeForm: "最终签收讨论中" })
TaskCreate({ subject: "DISCUSS-006: 最终签收与交付确认", description: `最终讨论和签收\n\nSession: ${sessionFolder}\n输入: ${sessionFolder}/spec/readiness-report.md\n输出: ${sessionFolder}/discussions/discuss-006-final.md\n\n讨论维度: 质量报告审查、遗留问题处理、交付确认、下一步建议`, activeForm: "最终签收讨论中" })
TaskUpdate({ taskId: discuss6Id, owner: "discussant", addBlockedBy: [qualityId] })
```
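The twelve tasks above form one linear blocked-by chain. The generic pattern can be sketched with a stubbed task store standing in for `TaskCreate`/`TaskUpdate` (purely illustrative):

```javascript
// Create tasks in order, each blocked by the previous one.
function buildChain(specs, createTask) {
  let prevId = null
  const ids = []
  for (const spec of specs) {
    const id = createTask(spec, prevId ? [prevId] : [])
    ids.push(id)
    prevId = id
  }
  return ids
}

// Stub store: ids are 1-based insertion indices.
const created = []
const ids = buildChain(
  [{ subject: 'RESEARCH-001' }, { subject: 'DISCUSS-001' }, { subject: 'DRAFT-001' }],
  (spec, blockedBy) => { created.push({ ...spec, blockedBy }); return created.length }
)
```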
@@ -181,11 +273,11 @@ TaskUpdate({ taskId: discuss6Id, owner: "discussant", addBlockedBy: [qualityId]
```javascript
// PLAN-001
TaskCreate({ subject: "PLAN-001: 探索和规划实现", description: `${taskDescription}\n\n写入: ${implSessionFolder}/`, activeForm: "规划中" })
TaskCreate({ subject: "PLAN-001: 探索和规划实现", description: `${taskDescription}\n\nSession: ${sessionFolder}\n写入: ${sessionFolder}/plan/`, activeForm: "规划中" })
TaskUpdate({ taskId: planId, owner: "planner" })
// IMPL-001 (blockedBy PLAN-001)
TaskCreate({ subject: "IMPL-001: 实现已批准的计划", description: `${taskDescription}\n\nPlan: ${implSessionFolder}/plan.json`, activeForm: "实现中" })
TaskCreate({ subject: "IMPL-001: 实现已批准的计划", description: `${taskDescription}\n\nSession: ${sessionFolder}\nPlan: ${sessionFolder}/plan/plan.json`, activeForm: "实现中" })
TaskUpdate({ taskId: implId, owner: "executor", addBlockedBy: [planId] })
// TEST-001 (blockedBy IMPL-001)
@@ -193,7 +285,7 @@ TaskCreate({ subject: "TEST-001: 测试修复循环", description: `${taskDescri
TaskUpdate({ taskId: testId, owner: "tester", addBlockedBy: [implId] })
// REVIEW-001 (blockedBy IMPL-001, parallel with TEST-001)
TaskCreate({ subject: "REVIEW-001: 代码审查与需求验证", description: `${taskDescription}\n\nPlan: ${implSessionFolder}/plan.json`, activeForm: "审查中" })
TaskCreate({ subject: "REVIEW-001: 代码审查与需求验证", description: `${taskDescription}\n\nSession: ${sessionFolder}\nPlan: ${sessionFolder}/plan/plan.json`, activeForm: "审查中" })
TaskUpdate({ taskId: reviewId, owner: "reviewer", addBlockedBy: [implId] })
```
@@ -204,7 +296,7 @@ Create both spec and impl chains, with PLAN-001 blockedBy DISCUSS-006:
```javascript
// [All spec-only tasks as above]
// Then:
TaskCreate({ subject: "PLAN-001: 探索和规划实现", description: `${taskDescription}\n\nSpec: ${specSessionFolder}\n写入: ${implSessionFolder}/`, activeForm: "规划中" })
TaskCreate({ subject: "PLAN-001: 探索和规划实现", description: `${taskDescription}\n\nSession: ${sessionFolder}\n写入: ${sessionFolder}/plan/`, activeForm: "规划中" })
TaskUpdate({ taskId: planId, owner: "planner", addBlockedBy: [discuss6Id] })
// [Rest of impl-only tasks as above]
```
@@ -248,7 +340,7 @@ When receiving `research_ready` from analyst, confirm extracted requirements wit
```javascript
if (msgType === 'research_ready') {
const discoveryContext = JSON.parse(Read(`${specSessionFolder}/discovery-context.json`))
const discoveryContext = JSON.parse(Read(`${sessionFolder}/spec/discovery-context.json`))
const dimensions = discoveryContext.seed_analysis?.exploration_dimensions || []
const constraints = discoveryContext.seed_analysis?.constraints || []
const problemStatement = discoveryContext.seed_analysis?.problem_statement || ''
@@ -271,7 +363,7 @@ if (msgType === 'research_ready') {
// User provides additional requirements via free text
// Merge into discovery-context.json, then unblock DISCUSS-001
discoveryContext.seed_analysis.user_supplements = userInput
Write(`${specSessionFolder}/discovery-context.json`, JSON.stringify(discoveryContext, null, 2))
Write(`${sessionFolder}/spec/discovery-context.json`, JSON.stringify(discoveryContext, null, 2))
} else if (userChoice === '需要重新研究') {
// Reset RESEARCH-001 to pending, notify analyst
TaskUpdate({ taskId: researchId, status: 'pending' })
@@ -341,13 +433,13 @@ if (userChoice === '交付执行') {
})
// 读取 spec 文档
const specConfig = JSON.parse(Read(`${specSessionFolder}/spec-config.json`))
const specSummary = Read(`${specSessionFolder}/spec-summary.md`)
const productBrief = Read(`${specSessionFolder}/product-brief.md`)
const requirementsIndex = Read(`${specSessionFolder}/requirements/_index.md`)
const architectureIndex = Read(`${specSessionFolder}/architecture/_index.md`)
const epicsIndex = Read(`${specSessionFolder}/epics/_index.md`)
const epicFiles = Glob(`${specSessionFolder}/epics/EPIC-*.md`)
const specConfig = JSON.parse(Read(`${sessionFolder}/spec/spec-config.json`))
const specSummary = Read(`${sessionFolder}/spec/spec-summary.md`)
const productBrief = Read(`${sessionFolder}/spec/product-brief.md`)
const requirementsIndex = Read(`${sessionFolder}/spec/requirements/_index.md`)
const architectureIndex = Read(`${sessionFolder}/spec/architecture/_index.md`)
const epicsIndex = Read(`${sessionFolder}/spec/epics/_index.md`)
const epicFiles = Glob(`${sessionFolder}/spec/epics/EPIC-*.md`)
if (handoffChoice === 'lite-plan') {
// 读取首个 MVP Epic → 调用 lite-plan
@@ -368,7 +460,7 @@ if (userChoice === '交付执行') {
// Step A: 构建结构化描述
const structuredDesc = `GOAL: ${specConfig.seed_analysis?.problem_statement || specConfig.topic}
SCOPE: ${specConfig.complexity} complexity
CONTEXT: Generated from spec team session ${specConfig.session_id}. Source: ${specSessionFolder}/`
CONTEXT: Generated from spec team session ${specConfig.session_id}. Source: ${sessionFolder}/`
// Step B: 创建 WFS session
Skill({ skill: "workflow:session:start", args: `--auto "${structuredDesc}"` })
@@ -384,7 +476,7 @@ CONTEXT: Generated from spec team session ${specConfig.session_id}. Source: ${sp
**Source**: spec-team session ${specConfig.session_id}
**Generated**: ${new Date().toISOString()}
**Spec Directory**: ${specSessionFolder}
**Spec Directory**: ${sessionFolder}
## 1. Project Positioning & Goals
${extractSection(productBrief, "Vision")}
@@ -407,11 +499,11 @@ ${extractSection(epicsIndex, "Traceability Matrix")}
## Appendix: Source Documents
| Document | Path | Description |
|----------|------|-------------|
| Product Brief | ${specSessionFolder}/product-brief.md | Vision, goals, scope |
| Requirements | ${specSessionFolder}/requirements/ | _index.md + REQ-*.md + NFR-*.md |
| Architecture | ${specSessionFolder}/architecture/ | _index.md + ADR-*.md |
| Epics | ${specSessionFolder}/epics/ | _index.md + EPIC-*.md |
| Readiness Report | ${specSessionFolder}/readiness-report.md | Quality validation |
| Product Brief | ${sessionFolder}/spec/product-brief.md | Vision, goals, scope |
| Requirements | ${sessionFolder}/spec/requirements/ | _index.md + REQ-*.md + NFR-*.md |
| Architecture | ${sessionFolder}/spec/architecture/ | _index.md + ADR-*.md |
| Epics | ${sessionFolder}/spec/epics/ | _index.md + EPIC-*.md |
| Readiness Report | ${sessionFolder}/spec/readiness-report.md | Quality validation |
`)
// C.2: feature-index.jsonEPIC → Feature 映射)
@@ -501,30 +593,58 @@ function parseYAML(yamlStr) {
}
```
## Session State Tracking
At each key transition, update `team-session.json`:
```javascript
// Helper: update session state
function updateSession(sessionFolder, updates) {
const session = JSON.parse(Read(`${sessionFolder}/team-session.json`))
Object.assign(session, updates, { updated_at: new Date().toISOString() })
Write(`${sessionFolder}/team-session.json`, JSON.stringify(session, null, 2))
}
// On task completion (read current state first, since the updates reference it):
const session = JSON.parse(Read(`${sessionFolder}/team-session.json`))
updateSession(sessionFolder, {
completed_tasks: [...session.completed_tasks, taskPrefix],
pipeline_progress: { ...session.pipeline_progress,
[phase]: { ...session.pipeline_progress[phase], completed: session.pipeline_progress[phase].completed + 1 }
}
})
// On phase transition (spec → plan):
updateSession(sessionFolder, { current_phase: 'plan' })
// On completion:
updateSession(sessionFolder, { status: 'completed', completed_at: new Date().toISOString() })
// On user closes team before completion (resumable later):
updateSession(sessionFolder, { status: 'paused', paused_at: new Date().toISOString() })
```
## Session File Structure
```
# Spec session
.workflow/.spec-team/{topic-slug}-{YYYY-MM-DD}/
├── spec-config.json
├── discovery-context.json
├── product-brief.md
├── requirements/
├── architecture/
├── epics/
├── readiness-report.md
├── spec-summary.md
└── discussions/
└── discuss-001..006.md
# Impl session
.workflow/.team-plan/{task-slug}-{YYYY-MM-DD}/
├── exploration-*.json
├── explorations-manifest.json
├── planning-context.md
├── plan.json
└── .task/
└── TASK-*.json
.workflow/.team/TLS-{slug}-{YYYY-MM-DD}/
├── team-session.json # Session state (resume support)
├── spec/ # Spec artifacts
│ ├── spec-config.json
│ ├── discovery-context.json
│ ├── product-brief.md
│ ├── requirements/ # _index.md + REQ-*.md + NFR-*.md
│ ├── architecture/ # _index.md + ADR-*.md
│ ├── epics/ # _index.md + EPIC-*.md
│ ├── readiness-report.md
│ └── spec-summary.md
├── discussions/ # Discussion records
│ └── discuss-001..006.md
└── plan/ # Plan artifacts
├── exploration-{angle}.json
├── explorations-manifest.json
├── plan.json
└── .task/
└── TASK-*.json
```
## Error Handling

View File

@@ -92,12 +92,12 @@ const roundMatch = task.subject.match(/DISCUSS-(\d+)/)
const roundNumber = roundMatch ? parseInt(roundMatch[1]) : 0
const roundConfig = {
1: { artifact: 'discovery-context.json', type: 'json', outputFile: 'discuss-001-scope.md', perspectives: ['product', 'risk', 'coverage'], label: '范围讨论' },
2: { artifact: 'product-brief.md', type: 'md', outputFile: 'discuss-002-brief.md', perspectives: ['product', 'technical', 'quality', 'coverage'], label: 'Brief评审' },
3: { artifact: 'requirements/_index.md', type: 'md', outputFile: 'discuss-003-requirements.md', perspectives: ['quality', 'product', 'coverage'], label: '需求讨论' },
4: { artifact: 'architecture/_index.md', type: 'md', outputFile: 'discuss-004-architecture.md', perspectives: ['technical', 'risk'], label: '架构讨论' },
5: { artifact: 'epics/_index.md', type: 'md', outputFile: 'discuss-005-epics.md', perspectives: ['product', 'technical', 'quality', 'coverage'], label: 'Epics讨论' },
6: { artifact: 'readiness-report.md', type: 'md', outputFile: 'discuss-006-final.md', perspectives: ['product', 'technical', 'quality', 'risk', 'coverage'], label: '最终签收' }
1: { artifact: 'spec/discovery-context.json', type: 'json', outputFile: 'discuss-001-scope.md', perspectives: ['product', 'risk', 'coverage'], label: '范围讨论' },
2: { artifact: 'spec/product-brief.md', type: 'md', outputFile: 'discuss-002-brief.md', perspectives: ['product', 'technical', 'quality', 'coverage'], label: 'Brief评审' },
3: { artifact: 'spec/requirements/_index.md', type: 'md', outputFile: 'discuss-003-requirements.md', perspectives: ['quality', 'product', 'coverage'], label: '需求讨论' },
4: { artifact: 'spec/architecture/_index.md', type: 'md', outputFile: 'discuss-004-architecture.md', perspectives: ['technical', 'risk'], label: '架构讨论' },
5: { artifact: 'spec/epics/_index.md', type: 'md', outputFile: 'discuss-005-epics.md', perspectives: ['product', 'technical', 'quality', 'coverage'], label: 'Epics讨论' },
6: { artifact: 'spec/readiness-report.md', type: 'md', outputFile: 'discuss-006-final.md', perspectives: ['product', 'technical', 'quality', 'risk', 'coverage'], label: '最终签收' }
}
const config = roundConfig[roundNumber]

View File

@@ -59,7 +59,7 @@ const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
// Extract plan path from task description
const planPathMatch = task.description.match(/\.workflow\/\.team-plan\/[^\s]+\/plan\.json/)
const planPathMatch = task.description.match(/\.workflow\/\.team\/[^\s]+\/plan\/plan\.json/)
const planPath = planPathMatch ? planPathMatch[0] : null
if (!planPath) {

View File

@@ -24,7 +24,7 @@ Before every `SendMessage`, MUST call `mcp__ccw-tools__team_msg` to log:
```javascript
// Plan ready
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to: "coordinator", type: "plan_ready", summary: "Plan ready: 3 tasks, Medium complexity", ref: `${sessionFolder}/plan.json` })
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to: "coordinator", type: "plan_ready", summary: "Plan ready: 3 tasks, Medium complexity", ref: `${sessionFolder}/plan/plan.json` })
// Plan revision
mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to: "coordinator", type: "plan_revision", summary: "Split task-2 into two subtasks per feedback" })
@@ -38,7 +38,7 @@ mcp__ccw-tools__team_msg({ operation: "log", team: teamName, from: "planner", to
When `mcp__ccw-tools__team_msg` MCP is unavailable:
```javascript
Bash(`ccw team log --team "${teamName}" --from "planner" --to "coordinator" --type "plan_ready" --summary "Plan ready: 3 tasks" --ref "${sessionFolder}/plan.json" --json`)
Bash(`ccw team log --team "${teamName}" --from "planner" --to "coordinator" --type "plan_ready" --summary "Plan ready: 3 tasks" --ref "${sessionFolder}/plan/plan.json" --json`)
```
## Execution (5-Phase)
@@ -60,14 +60,30 @@ const task = TaskGet({ taskId: myTasks[0].id })
TaskUpdate({ taskId: task.id, status: 'in_progress' })
```
### Phase 1.5: Load Spec Context (Full-Lifecycle Mode)
```javascript
// Extract session folder from task description (set by coordinator)
const sessionMatch = task.description.match(/Session:\s*(.+)/)
const sessionFolder = sessionMatch ? sessionMatch[1].trim() : `.workflow/.team/default`
const planDir = `${sessionFolder}/plan`
Bash(`mkdir -p ${planDir}`)
// Check if spec directory exists (full-lifecycle mode)
const specDir = `${sessionFolder}/spec`
let specContext = null
try {
const reqIndex = Read(`${specDir}/requirements/_index.md`)
const archIndex = Read(`${specDir}/architecture/_index.md`)
const epicsIndex = Read(`${specDir}/epics/_index.md`)
const specConfig = JSON.parse(Read(`${specDir}/spec-config.json`))
specContext = { reqIndex, archIndex, epicsIndex, specConfig }
} catch { /* impl-only mode has no spec */ }
```
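The `Session:` line is the single handoff point between coordinator and planner; a runnable sketch of the extraction with its fallback:

```javascript
// Pull the session folder out of a task description; fall back to the default.
function extractSessionFolder(description) {
  const m = description.match(/Session:\s*(.+)/)   // '.' stops at end of line
  return m ? m[1].trim() : '.workflow/.team/default'
}

const folder = extractSessionFolder(
  'Fix login\n\nSession: .workflow/.team/TLS-auth-2026-02-15\n写入: plan/'
)
```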
### Phase 2: Multi-Angle Exploration
```javascript
// Session setup
const taskSlug = task.subject.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const dateStr = new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().substring(0, 10)
const sessionFolder = `.workflow/.team-plan/${taskSlug}-${dateStr}`
Bash(`mkdir -p ${sessionFolder}`)
// Complexity assessment
function assessComplexity(desc) {
@@ -110,7 +126,7 @@ if (complexity === 'Low') {
project_root_path: projectRoot,
query: task.description
})
Write(`${sessionFolder}/exploration-${selectedAngles[0]}.json`, JSON.stringify({
Write(`${planDir}/exploration-${selectedAngles[0]}.json`, JSON.stringify({
project_structure: "...",
relevant_files: [],
patterns: [],
@@ -133,11 +149,12 @@ Execute **${angle}** exploration for task planning context.
## Output Location
**Session Folder**: ${sessionFolder}
**Output File**: ${sessionFolder}/exploration-${angle}.json
**Output File**: ${planDir}/exploration-${angle}.json
## Assigned Context
- **Exploration Angle**: ${angle}
- **Task Description**: ${task.description}
- **Spec Context**: ${specContext ? 'Available — use spec/requirements, spec/architecture, spec/epics for informed exploration' : 'Not available (impl-only mode)'}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
## MANDATORY FIRST STEPS
@@ -146,7 +163,7 @@ Execute **${angle}** exploration for task planning context.
3. Read: .workflow/project-tech.json (if exists - technology stack)
## Expected Output
Write JSON to: ${sessionFolder}/exploration-${angle}.json
Write JSON to: ${planDir}/exploration-${angle}.json
Follow explore-json-schema.json structure with ${angle}-focused findings.
**MANDATORY**: Every file in relevant_files MUST have:
@@ -168,10 +185,10 @@ const explorationManifest = {
explorations: selectedAngles.map(angle => ({
angle: angle,
file: `exploration-${angle}.json`,
path: `${sessionFolder}/exploration-${angle}.json`
path: `${planDir}/exploration-${angle}.json`
}))
}
Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2))
Write(`${planDir}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2))
```
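The manifest assembly is a pure mapping over the selected angles; a sketch:

```javascript
// Build the explorations manifest entries from the chosen angles.
function buildManifest(planDir, selectedAngles) {
  return {
    explorations: selectedAngles.map(angle => ({
      angle,
      file: `exploration-${angle}.json`,
      path: `${planDir}/exploration-${angle}.json`
    }))
  }
}

const manifest = buildManifest('.workflow/.team/TLS-x/plan', ['architecture', 'testing'])
```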
### Phase 3: Plan Generation
@@ -182,7 +199,7 @@ const schema = Bash(`cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-ba
if (complexity === 'Low') {
// Direct Claude planning
Bash(`mkdir -p ${sessionFolder}/.task`)
Bash(`mkdir -p ${planDir}/.task`)
// Generate plan.json + .task/TASK-*.json following schemas
} else {
// Use cli-lite-planning-agent for Medium/High
@@ -191,11 +208,16 @@ if (complexity === 'Low') {
run_in_background: false,
description: "Generate detailed implementation plan",
prompt: `Generate implementation plan.
Output: ${sessionFolder}/plan.json + ${sessionFolder}/.task/TASK-*.json
Output: ${planDir}/plan.json + ${planDir}/.task/TASK-*.json
Schema: cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json
Task Description: ${task.description}
Explorations: ${explorationManifest}
Complexity: ${complexity}
${specContext ? `Spec Context:
- Requirements: ${specContext.reqIndex.substring(0, 500)}
- Architecture: ${specContext.archIndex.substring(0, 500)}
- Epics: ${specContext.epicsIndex.substring(0, 500)}
Reference REQ-* IDs, follow ADR decisions, reuse Epic/Story decomposition.` : ''}
Requirements: 2-7 tasks, each with id, title, files[].change, convergence.criteria, depends_on`
})
}
@@ -204,8 +226,8 @@ Requirements: 2-7 tasks, each with id, title, files[].change, convergence.criter
### Phase 4: Submit for Approval
```javascript
const plan = JSON.parse(Read(`${sessionFolder}/plan.json`))
const planTasks = plan.task_ids.map(id => JSON.parse(Read(`${sessionFolder}/.task/${id}.json`)))
const plan = JSON.parse(Read(`${planDir}/plan.json`))
const planTasks = plan.task_ids.map(id => JSON.parse(Read(`${planDir}/.task/${id}.json`)))
const taskCount = plan.task_count || plan.task_ids.length
mcp__ccw-tools__team_msg({
@@ -213,7 +235,7 @@ mcp__ccw-tools__team_msg({
from: "planner", to: "coordinator",
type: "plan_ready",
summary: `Plan就绪: ${taskCount}个task, ${complexity}复杂度`,
ref: `${sessionFolder}/plan.json`
ref: `${planDir}/plan.json`
})
SendMessage({
@@ -232,8 +254,8 @@ ${planTasks.map((t, i) => (i+1) + '. ' + t.title).join('\n')}
${plan.approach}
### Plan Location
${sessionFolder}/plan.json
Task Files: ${sessionFolder}/.task/
${planDir}/plan.json
Task Files: ${planDir}/.task/
Please review and approve or request revisions.`,
summary: `Plan ready: ${taskCount} tasks`
@@ -253,7 +275,7 @@ TaskUpdate({ taskId: task.id, status: 'completed' })
## Session Files
```
.workflow/.team-plan/{task-slug}-{YYYY-MM-DD}/
{sessionFolder}/plan/
├── exploration-{angle}.json
├── explorations-manifest.json
├── planning-context.md
@@ -262,6 +284,8 @@ TaskUpdate({ taskId: task.id, status: 'completed' })
└── TASK-*.json
```
> **Note**: `sessionFolder` is extracted from task description (`Session: .workflow/.team/TLS-xxx`). Plan outputs go to `plan/` subdirectory. In full-lifecycle mode, spec products are available at `../spec/`.
## Error Handling
| Scenario | Resolution |

View File

@@ -73,7 +73,7 @@ const reviewMode = task.subject.startsWith('REVIEW-') ? 'code' : 'spec'
```javascript
if (reviewMode === 'code') {
// Load plan for acceptance criteria
-const planPathMatch = task.description.match(/\.workflow\/\.team-plan\/[^\s]+\/plan\.json/)
+const planPathMatch = task.description.match(/\.workflow\/\.team\/[^\s]+\/plan\/plan\.json/)
let plan = null
if (planPathMatch) {
try { plan = JSON.parse(Read(planPathMatch[0])) } catch {}
@@ -112,18 +112,18 @@ if (reviewMode === 'spec') {
adrs: [], epicsIndex: null, epics: [], discussions: []
}
-try { documents.config = JSON.parse(Read(`${sessionFolder}/spec-config.json`)) } catch {}
-try { documents.discoveryContext = JSON.parse(Read(`${sessionFolder}/discovery-context.json`)) } catch {}
-try { documents.productBrief = Read(`${sessionFolder}/product-brief.md`) } catch {}
-try { documents.requirementsIndex = Read(`${sessionFolder}/requirements/_index.md`) } catch {}
-try { documents.architectureIndex = Read(`${sessionFolder}/architecture/_index.md`) } catch {}
-try { documents.epicsIndex = Read(`${sessionFolder}/epics/_index.md`) } catch {}
+try { documents.config = JSON.parse(Read(`${sessionFolder}/spec/spec-config.json`)) } catch {}
+try { documents.discoveryContext = JSON.parse(Read(`${sessionFolder}/spec/discovery-context.json`)) } catch {}
+try { documents.productBrief = Read(`${sessionFolder}/spec/product-brief.md`) } catch {}
+try { documents.requirementsIndex = Read(`${sessionFolder}/spec/requirements/_index.md`) } catch {}
+try { documents.architectureIndex = Read(`${sessionFolder}/spec/architecture/_index.md`) } catch {}
+try { documents.epicsIndex = Read(`${sessionFolder}/spec/epics/_index.md`) } catch {}
// Load individual documents
-Glob({ pattern: `${sessionFolder}/requirements/REQ-*.md` }).forEach(f => { try { documents.requirements.push(Read(f)) } catch {} })
-Glob({ pattern: `${sessionFolder}/requirements/NFR-*.md` }).forEach(f => { try { documents.requirements.push(Read(f)) } catch {} })
-Glob({ pattern: `${sessionFolder}/architecture/ADR-*.md` }).forEach(f => { try { documents.adrs.push(Read(f)) } catch {} })
-Glob({ pattern: `${sessionFolder}/epics/EPIC-*.md` }).forEach(f => { try { documents.epics.push(Read(f)) } catch {} })
+Glob({ pattern: `${sessionFolder}/spec/requirements/REQ-*.md` }).forEach(f => { try { documents.requirements.push(Read(f)) } catch {} })
+Glob({ pattern: `${sessionFolder}/spec/requirements/NFR-*.md` }).forEach(f => { try { documents.requirements.push(Read(f)) } catch {} })
+Glob({ pattern: `${sessionFolder}/spec/architecture/ADR-*.md` }).forEach(f => { try { documents.adrs.push(Read(f)) } catch {} })
+Glob({ pattern: `${sessionFolder}/spec/epics/EPIC-*.md` }).forEach(f => { try { documents.epics.push(Read(f)) } catch {} })
Glob({ pattern: `${sessionFolder}/discussions/discuss-*.md` }).forEach(f => { try { documents.discussions.push(Read(f)) } catch {} })
const docInventory = {
@@ -473,7 +473,7 @@ ${allSpecIssues.map(i => '- ' + i).join('\n') || 'None'}
## Document Inventory
${Object.entries(docInventory).map(([k, v]) => '- ' + k + ': ' + (v === true ? '✓' : v === false ? '✗' : v)).join('\n')}
`
-Write(`${sessionFolder}/readiness-report.md`, readinessReport)
+Write(`${sessionFolder}/spec/readiness-report.md`, readinessReport)
// Generate spec-summary.md
const specSummary = `---
@@ -503,7 +503,7 @@ ${qualityGate === 'PASS' ? '- Ready for handoff to execution workflows' :
qualityGate === 'REVIEW' ? '- Address review items, then proceed to execution' :
'- Fix critical issues before proceeding'}
`
-Write(`${sessionFolder}/spec-summary.md`, specSummary)
+Write(`${sessionFolder}/spec/spec-summary.md`, specSummary)
}
```
@@ -590,8 +590,8 @@ ${allSpecIssues.map(i => '- ' + i).join('\n') || '无问题'}
${Object.entries(docInventory).map(([k, v]) => '- ' + k + ': ' + (typeof v === 'boolean' ? (v ? '✓' : '✗') : v)).join('\n')}
### 输出位置
-- 就绪报告: ${sessionFolder}/readiness-report.md
-- 执行摘要: ${sessionFolder}/spec-summary.md
+- 就绪报告: ${sessionFolder}/spec/readiness-report.md
+- 执行摘要: ${sessionFolder}/spec/spec-summary.md
${qualityGate === 'PASS' ? '质量达标,可进入最终讨论轮次 DISCUSS-006。' :
qualityGate === 'REVIEW' ? '质量基本达标但有改进空间,建议在讨论中审查。' :

View File

@@ -69,7 +69,7 @@ const sessionFolder = sessionMatch ? sessionMatch[1].trim() : ''
// Load session config
let specConfig = null
-try { specConfig = JSON.parse(Read(`${sessionFolder}/spec-config.json`)) } catch {}
+try { specConfig = JSON.parse(Read(`${sessionFolder}/spec/spec-config.json`)) } catch {}
// Determine document type from task subject
const docType = task.subject.includes('Product Brief') ? 'product-brief'
@@ -91,16 +91,16 @@ try { discussionFeedback = Read(`${sessionFolder}/${discussionFiles[docType]}`)
// Load prior documents progressively
const priorDocs = {}
if (docType !== 'product-brief') {
-try { priorDocs.discoveryContext = Read(`${sessionFolder}/discovery-context.json`) } catch {}
+try { priorDocs.discoveryContext = Read(`${sessionFolder}/spec/discovery-context.json`) } catch {}
}
if (['requirements', 'architecture', 'epics'].includes(docType)) {
-try { priorDocs.productBrief = Read(`${sessionFolder}/product-brief.md`) } catch {}
+try { priorDocs.productBrief = Read(`${sessionFolder}/spec/product-brief.md`) } catch {}
}
if (['architecture', 'epics'].includes(docType)) {
-try { priorDocs.requirementsIndex = Read(`${sessionFolder}/requirements/_index.md`) } catch {}
+try { priorDocs.requirementsIndex = Read(`${sessionFolder}/spec/requirements/_index.md`) } catch {}
}
if (docType === 'epics') {
-try { priorDocs.architectureIndex = Read(`${sessionFolder}/architecture/_index.md`) } catch {}
+try { priorDocs.architectureIndex = Read(`${sessionFolder}/spec/architecture/_index.md`) } catch {}
}
```
@@ -246,8 +246,8 @@ dependencies:
// 填充 template 中所有 section: Vision, Problem Statement, Target Users, Goals, Scope
// 应用 document-standards.md 格式规范
-Write(`${sessionFolder}/product-brief.md`, `${frontmatter}\n\n${filledContent}`)
-outputPath = 'product-brief.md'
+Write(`${sessionFolder}/spec/product-brief.md`, `${frontmatter}\n\n${filledContent}`)
+outputPath = 'spec/product-brief.md'
}
```
@@ -299,7 +299,7 @@ CONSTRAINTS: Every requirement must be specific enough to estimate and test. No
}
// === 生成 requirements/ 目录 ===
-Bash(`mkdir -p "${sessionFolder}/requirements"`)
+Bash(`mkdir -p "${sessionFolder}/spec/requirements"`)
const timestamp = new Date().toISOString()
@@ -330,7 +330,7 @@ ${req.user_story}
## Acceptance Criteria
${req.acceptance_criteria.map((ac, i) => `${i+1}. ${ac}`).join('\n')}
`
-Write(`${sessionFolder}/requirements/REQ-${req.id}-${req.slug}.md`, reqContent)
+Write(`${sessionFolder}/spec/requirements/REQ-${req.id}-${req.slug}.md`, reqContent)
})
// 写入独立 NFR-*.md 文件
@@ -353,7 +353,7 @@ ${nfr.requirement}
## Metric & Target
${nfr.metric} — Target: ${nfr.target}
`
-Write(`${sessionFolder}/requirements/NFR-${nfr.type}-${nfr.id}-${nfr.slug}.md`, nfrContent)
+Write(`${sessionFolder}/spec/requirements/NFR-${nfr.type}-${nfr.id}-${nfr.slug}.md`, nfrContent)
})
// 写入 _index.md汇总 + 链接)
@@ -390,8 +390,8 @@ ${nfReqs.map(n => `| [NFR-${n.type}-${n.id}](NFR-${n.type}-${n.id}-${n.slug}.md)
- **Could**: ${funcReqs.filter(r => r.priority === 'Could').length}
- **Won't**: ${funcReqs.filter(r => r.priority === "Won't").length}
`
-Write(`${sessionFolder}/requirements/_index.md`, indexContent)
-outputPath = 'requirements/_index.md'
+Write(`${sessionFolder}/spec/requirements/_index.md`, indexContent)
+outputPath = 'spec/requirements/_index.md'
}
```
@@ -485,7 +485,7 @@ CONSTRAINTS: Be genuinely critical, not just validating. Focus on actionable imp
}
// === 生成 architecture/ 目录 ===
-Bash(`mkdir -p "${sessionFolder}/architecture"`)
+Bash(`mkdir -p "${sessionFolder}/spec/architecture"`)
const timestamp = new Date().toISOString()
const adrs = parseADRs(geminiArchitectureOutput, codexReviewOutput)
@@ -518,7 +518,7 @@ ${adr.consequences}
## Review Feedback
${adr.reviewFeedback || 'N/A'}
`
-Write(`${sessionFolder}/architecture/ADR-${adr.id}-${adr.slug}.md`, adrContent)
+Write(`${sessionFolder}/spec/architecture/ADR-${adr.id}-${adr.slug}.md`, adrContent)
})
// 写入 _index.md含 Mermaid 组件图 + ER图 + 链接)
@@ -535,8 +535,8 @@ dependencies:
---`
// 包含: system overview, component diagram (Mermaid), tech stack table,
// ADR links table, data model (Mermaid erDiagram), API design, security controls
-Write(`${sessionFolder}/architecture/_index.md`, archIndexContent)
-outputPath = 'architecture/_index.md'
+Write(`${sessionFolder}/spec/architecture/_index.md`, archIndexContent)
+outputPath = 'spec/architecture/_index.md'
}
```
@@ -596,7 +596,7 @@ CONSTRAINTS: Every Must-have requirement must appear in at least one Story. Stor
}
// === 生成 epics/ 目录 ===
-Bash(`mkdir -p "${sessionFolder}/epics"`)
+Bash(`mkdir -p "${sessionFolder}/spec/epics"`)
const timestamp = new Date().toISOString()
const epicsList = parseEpics(cliOutput)
@@ -643,7 +643,7 @@ ${epic.reqs.map(r => `- [${r}](../requirements/${r}.md)`).join('\n')}
## Architecture
${epic.adrs.map(a => `- [${a}](../architecture/${a}.md)`).join('\n')}
`
-Write(`${sessionFolder}/epics/EPIC-${epic.id}-${epic.slug}.md`, epicContent)
+Write(`${sessionFolder}/spec/epics/EPIC-${epic.id}-${epic.slug}.md`, epicContent)
})
// 写入 _index.md含 Mermaid 依赖图 + MVP + 链接)
@@ -660,8 +660,8 @@ dependencies:
---`
// 包含: Epic overview table (with links), dependency Mermaid diagram,
// execution order, MVP scope, traceability matrix
-Write(`${sessionFolder}/epics/_index.md`, epicsIndexContent)
-outputPath = 'epics/_index.md'
+Write(`${sessionFolder}/spec/epics/_index.md`, epicsIndexContent)
+outputPath = 'spec/epics/_index.md'
}
```

View File

@@ -71,8 +71,10 @@
"collaboration_patterns": ["CP-1", "CP-2", "CP-4", "CP-5", "CP-6", "CP-10"],
"session_dirs": {
-"spec": ".workflow/.spec-team/{topic-slug}-{YYYY-MM-DD}/",
-"impl": ".workflow/.team-plan/{task-slug}-{YYYY-MM-DD}/",
+"base": ".workflow/.team/TLS-{slug}-{YYYY-MM-DD}/",
+"spec": "spec/",
+"discussions": "discussions/",
+"plan": "plan/",
"messages": ".workflow/.team-msg/{team-name}/"
}
}

View File

@@ -21,7 +21,7 @@ Unified lightweight planning and execution skill. Routes to lite-plan (planning
┌───────────┐ ┌───────────┐
│ lite-plan │ │lite-execute│
│ Phase 1 │ │ Phase 2 │
-│ Plan+Exec │──handoff─→│ Standalone │
+│ Plan+Exec │─direct──→│ Standalone │
└───────────┘ └───────────┘
```
@@ -44,9 +44,65 @@ function detectMode() {
| `workflow:lite-plan` | plan | [phases/01-lite-plan.md](phases/01-lite-plan.md) | Full planning pipeline (explore → plan → confirm → execute) |
| `workflow:lite-execute` | execute | [phases/02-lite-execute.md](phases/02-lite-execute.md) | Standalone execution (in-memory / prompt / file) |
## Interactive Preference Collection
Before dispatching, collect workflow preferences via AskUserQuestion:
```javascript
if (mode === 'plan') {
const prefResponse = AskUserQuestion({
questions: [
{
question: "是否跳过所有确认步骤(自动模式)?",
header: "Auto Mode",
multiSelect: false,
options: [
{ label: "Interactive (Recommended)", description: "交互模式,包含确认步骤" },
{ label: "Auto", description: "跳过所有确认,自动执行" }
]
},
{
question: "是否强制执行代码探索阶段?",
header: "Exploration",
multiSelect: false,
options: [
{ label: "Auto-detect (Recommended)", description: "智能判断是否需要探索" },
{ label: "Force explore", description: "强制执行代码探索" }
]
}
]
})
workflowPreferences = {
autoYes: prefResponse.autoMode === 'Auto',
forceExplore: prefResponse.exploration === 'Force explore'
}
} else {
// Execute mode (standalone, not in-memory)
const prefResponse = AskUserQuestion({
questions: [
{
question: "是否跳过所有确认步骤(自动模式)?",
header: "Auto Mode",
multiSelect: false,
options: [
{ label: "Interactive (Recommended)", description: "交互模式,包含确认步骤" },
{ label: "Auto", description: "跳过所有确认,自动执行" }
]
}
]
})
workflowPreferences = {
autoYes: prefResponse.autoMode === 'Auto',
forceExplore: false
}
}
```
**workflowPreferences** is passed to phase execution as context variable, referenced as `workflowPreferences.autoYes` and `workflowPreferences.forceExplore` within phases.
## Prompt Enhancement
-Before dispatching to the target phase, enhance context:
+After collecting preferences, enhance context and dispatch:
```javascript
// Step 1: Check for project context files
@@ -61,7 +117,7 @@ if (hasProjectGuidelines) {
console.log('Project guidelines available: .workflow/project-guidelines.json')
}
-// Step 3: Dispatch to phase
+// Step 3: Dispatch to phase (workflowPreferences available as context)
if (mode === 'plan') {
// Read phases/01-lite-plan.md and execute
} else {
@@ -71,20 +127,20 @@ if (mode === 'plan') {
## Execution Flow
-### Plan Mode (workflow:lite-plan)
+### Plan Mode
```
-1. Parse flags (-y/--yes, -e/--explore) and task description
+1. Collect preferences via AskUserQuestion (autoYes, forceExplore)
2. Enhance prompt with project context availability
3. Read phases/01-lite-plan.md
4. Execute lite-plan pipeline (Phase 1-5 within the phase doc)
-5. lite-plan Phase 5 internally calls workflow:lite-execute via Skill handoff
+5. lite-plan Phase 5 directly reads and executes Phase 2 (lite-execute) with executionContext
```
-### Execute Mode (workflow:lite-execute)
+### Execute Mode
```
-1. Parse flags (--in-memory, -y/--yes) and input
+1. Collect preferences via AskUserQuestion (autoYes)
2. Enhance prompt with project context availability
3. Read phases/02-lite-execute.md
4. Execute lite-execute pipeline (input detection → execution → review)
@@ -92,17 +148,10 @@ if (mode === 'plan') {
## Usage
```bash
# Plan mode
/workflow:lite-plan "实现JWT认证" # Interactive
/workflow:lite-plan --yes "实现JWT认证" # Auto mode
/workflow:lite-plan -y -e "优化数据库查询性能" # Auto + force exploration
Plan mode and execute mode are triggered by skill name routing (see Mode Detection). Workflow preferences (auto mode, force explore) are collected interactively via AskUserQuestion before dispatching to phases.
# Execute mode (standalone)
/workflow:lite-execute "Add unit tests for auth" # Prompt description
/workflow:lite-execute plan.json # File input
/workflow:lite-execute --in-memory # Called by lite-plan internally
```
**Plan mode**: Task description provided as arguments → interactive preference collection → planning pipeline
**Execute mode**: Task description, file path, or in-memory context → interactive preference collection → execution pipeline
## Phase Reference Documents

View File

@@ -2,8 +2,6 @@
Complete planning pipeline: task analysis, multi-angle exploration, clarification, adaptive planning, confirmation, and execution handoff.
-**Source**: Converted from `.claude/commands/workflow/lite-plan.md` - all execution detail preserved verbatim.
---
## Overview
@@ -18,23 +16,13 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
- Two-step confirmation: plan display → multi-dimensional input collection
- Execution handoff with complete context to lite-execute
-## Usage
+## Input
-```bash
-/workflow:lite-plan [FLAGS] <TASK_DESCRIPTION>
-# Flags
--y, --yes Skip all confirmations (auto mode)
--e, --explore Force code exploration phase (overrides auto-detection)
-# Arguments
-<task-description> Task description or path to .md file (required)
-# Examples
-/workflow:lite-plan "实现JWT认证" # Interactive mode
-/workflow:lite-plan --yes "实现JWT认证" # Auto mode (no confirmations)
-/workflow:lite-plan -y -e "优化数据库查询性能" # Auto mode + force exploration
```
<task-description> Task description or path to .md file (required)
```
+Workflow preferences (`autoYes`, `forceExplore`) are collected by SKILL.md via AskUserQuestion and passed as `workflowPreferences` context variable.
## Output Artifacts
@@ -56,25 +44,19 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
## Auto Mode Defaults
-When `--yes` or `-y` flag is used:
+When `workflowPreferences.autoYes === true`:
- **Clarification Questions**: Skipped (no clarification phase)
- **Plan Confirmation**: Auto-selected "Allow"
- **Execution Method**: Auto-selected "Auto"
- **Code Review**: Auto-selected "Skip"
-**Flag Parsing**:
-```javascript
-const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
-const forceExplore = $ARGUMENTS.includes('--explore') || $ARGUMENTS.includes('-e')
-```
## Execution Process
```
Phase 1: Task Analysis & Exploration
├─ Parse input (description or .md file)
├─ intelligent complexity assessment (Low/Medium/High)
-├─ Exploration decision (auto-detect or --explore flag)
+├─ Exploration decision (auto-detect or workflowPreferences.forceExplore)
├─ Context protection: If file reading ≥50k chars → force cli-explore-agent
└─ Decision:
├─ needsExploration=true → Launch parallel cli-explore-agents (1-4 based on complexity)
@@ -101,7 +83,7 @@ Phase 4: Confirmation & Selection
Phase 5: Execute
├─ Build executionContext (plan + explorations + clarifications + selections)
-└─ Skill(skill="workflow:lite-execute", args="--in-memory")
+└─ Direct handoff: Read phases/02-lite-execute.md → Execute with executionContext (Mode 1)
```
## Implementation
@@ -125,7 +107,7 @@ bash(`mkdir -p ${sessionFolder} && test -d ${sessionFolder} && echo "SUCCESS: ${
**Exploration Decision Logic**:
```javascript
needsExploration = (
-flags.includes('--explore') || flags.includes('-e') ||
+workflowPreferences.forceExplore ||
task.mentions_specific_files ||
task.requires_codebase_context ||
task.needs_architecture_understanding ||
@@ -368,12 +350,11 @@ explorations.forEach(exp => {
// - Produce dedupedClarifications with unique intents only
const dedupedClarifications = intelligentMerge(allClarifications)
-// Parse --yes flag
-const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
+const autoYes = workflowPreferences.autoYes
if (autoYes) {
// Auto mode: Skip clarification phase
-console.log(`[--yes] Skipping ${dedupedClarifications.length} clarification questions`)
+console.log(`[Auto] Skipping ${dedupedClarifications.length} clarification questions`)
console.log(`Proceeding to planning with exploration results...`)
// Continue to Phase 3
} else if (dedupedClarifications.length > 0) {
@@ -608,14 +589,13 @@ ${taskList.map((t, i) => `${i+1}. ${t.title} (${t.scope || t.files?.[0]?.path ||
**Step 4.2: Collect Confirmation**
```javascript
-// Parse --yes flag
-const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
+const autoYes = workflowPreferences.autoYes
let userSelection
if (autoYes) {
// Auto mode: Use defaults
-console.log(`[--yes] Auto-confirming plan:`)
+console.log(`[Auto] Auto-confirming plan:`)
console.log(` - Confirmation: Allow`)
console.log(` - Execution: Auto`)
console.log(` - Review: Skip`)
@@ -723,7 +703,10 @@ executionContext = {
**Step 5.2: Handoff**
```javascript
-Skill(skill="workflow:lite-execute", args="--in-memory")
+// Direct phase handoff: Read and execute Phase 2 (lite-execute) with in-memory context
+// No Skill routing needed - executionContext is already set in Step 5.1
+Read("phases/02-lite-execute.md")
+// Execute Phase 2 with executionContext (Mode 1: In-Memory Plan)
```
## Session Folder Structure

View File

@@ -2,8 +2,6 @@
Complete execution engine: multi-mode input, task grouping, batch execution, code review, and development index update.
-**Source**: Converted from `.claude/commands/workflow/lite-execute.md` - all execution detail preserved verbatim.
---
## Overview
@@ -20,22 +18,19 @@ Flexible task execution command supporting three input modes: in-memory plan (fr
## Usage
-### Command Syntax
-```bash
-/workflow:lite-execute [FLAGS] <INPUT>
-# Flags
---in-memory Use plan from memory (called by lite-plan)
-# Arguments
+### Input
```
<input> Task description string, or path to file (required)
```
+Mode 1 (In-Memory) is triggered by lite-plan direct handoff when `executionContext` is available.
+Workflow preferences (`autoYes`) are passed from SKILL.md via `workflowPreferences` context variable.
## Input Modes
### Mode 1: In-Memory Plan
-**Trigger**: Called by lite-plan after Phase 4 approval with `--in-memory` flag
+**Trigger**: Called by lite-plan direct handoff after Phase 4 approval (executionContext available)
**Input Source**: `executionContext` global variable set by lite-plan
@@ -61,14 +56,13 @@ Flexible task execution command supporting three input modes: in-memory plan (fr
**User Interaction**:
```javascript
-// Parse --yes flag
-const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
+const autoYes = workflowPreferences.autoYes
let userSelection
if (autoYes) {
// Auto mode: Use defaults
-console.log(`[--yes] Auto-confirming execution:`)
+console.log(`[Auto] Auto-confirming execution:`)
console.log(` - Execution method: Auto`)
console.log(` - Code review: Skip`)
@@ -180,7 +174,7 @@ function getTasks(planObject) {
```
Input Parsing:
└─ Decision (mode detection):
-├─ --in-memory flag → Mode 1: Load executionContext → Skip user selection
+├─ executionContext exists → Mode 1: Load executionContext → Skip user selection
├─ Ends with .md/.json/.txt → Mode 3: Read file → Detect format
│ ├─ Valid plan.json → Use planObject → User selects method + review
│ └─ Not plan.json → Treat as prompt → User selects method + review
@@ -672,7 +666,7 @@ console.log(`✓ Development index: [${category}] ${entry.title}`)
| Error | Cause | Resolution |
|-------|-------|------------|
-| Missing executionContext | --in-memory without context | Error: "No execution context found. Only available when called by lite-plan." |
+| Missing executionContext | In-memory mode without context | Error: "No execution context found. Only available when called by lite-plan." |
| File not found | File path doesn't exist | Error: "File not found: {path}. Check file path." |
| Empty file | File exists but no content | Error: "File is empty: {path}. Provide task description." |
| Invalid Enhanced Task JSON | JSON missing required fields | Warning: "Missing required fields. Treating as plain text." |

View File

@@ -44,9 +44,54 @@ function detectMode() {
| `workflow:multi-cli-plan` | plan | [phases/01-multi-cli-plan.md](phases/01-multi-cli-plan.md) | Multi-CLI collaborative planning (ACE context → discussion → plan → execute) |
| `workflow:lite-execute` | execute | [phases/02-lite-execute.md](phases/02-lite-execute.md) | Standalone execution (in-memory / prompt / file) |
## Interactive Preference Collection
Before dispatching, collect workflow preferences via AskUserQuestion:
```javascript
if (mode === 'plan') {
const prefResponse = AskUserQuestion({
questions: [
{
question: "是否跳过所有确认步骤(自动模式)?",
header: "Auto Mode",
multiSelect: false,
options: [
{ label: "Interactive (Recommended)", description: "交互模式,包含确认步骤" },
{ label: "Auto", description: "跳过所有确认,自动执行" }
]
}
]
})
workflowPreferences = {
autoYes: prefResponse.autoMode === 'Auto'
}
} else {
// Execute mode
const prefResponse = AskUserQuestion({
questions: [
{
question: "是否跳过所有确认步骤(自动模式)?",
header: "Auto Mode",
multiSelect: false,
options: [
{ label: "Interactive (Recommended)", description: "交互模式,包含确认步骤" },
{ label: "Auto", description: "跳过所有确认,自动执行" }
]
}
]
})
workflowPreferences = {
autoYes: prefResponse.autoMode === 'Auto'
}
}
```
**workflowPreferences** is passed to phase execution as context variable, referenced as `workflowPreferences.autoYes` within phases.
## Prompt Enhancement
-Before dispatching to the target phase, enhance context:
+After collecting preferences, enhance context and dispatch:
```javascript
// Step 1: Check for project context files
@@ -61,7 +106,7 @@ if (hasProjectGuidelines) {
console.log('Project guidelines available: .workflow/project-guidelines.json')
}
-// Step 3: Dispatch to phase
+// Step 3: Dispatch to phase (workflowPreferences available as context)
if (mode === 'plan') {
// Read phases/01-multi-cli-plan.md and execute
} else {
@@ -74,17 +119,17 @@ if (mode === 'plan') {
### Plan Mode (workflow:multi-cli-plan)
```
-1. Parse flags (-y/--yes, --max-rounds, --tools, --mode) and task description
+1. Collect preferences via AskUserQuestion (autoYes)
2. Enhance prompt with project context availability
3. Read phases/01-multi-cli-plan.md
4. Execute multi-cli-plan pipeline (Phase 1-5 within the phase doc)
-5. Phase 5 internally calls workflow:lite-execute via Skill handoff
+5. Phase 5 directly reads and executes Phase 2 (lite-execute) with executionContext
```
### Execute Mode (workflow:lite-execute)
```
-1. Parse flags (--in-memory, -y/--yes) and input
+1. Collect preferences via AskUserQuestion (autoYes)
2. Enhance prompt with project context availability
3. Read phases/02-lite-execute.md
4. Execute lite-execute pipeline (input detection → execution → review)
@@ -92,17 +137,10 @@ if (mode === 'plan') {
## Usage
-```bash
-# Multi-CLI plan mode
-/workflow:multi-cli-plan "Implement user authentication"
-/workflow:multi-cli-plan --yes "Add dark mode support"
-/workflow:multi-cli-plan "Refactor payment module" --max-rounds=3 --tools=gemini,codex
+Plan mode and execute mode are triggered by skill name routing (see Mode Detection). Workflow preferences (auto mode) are collected interactively via AskUserQuestion before dispatching to phases.
-# Execute mode (standalone)
-/workflow:lite-execute "Add unit tests for auth" # Prompt description
-/workflow:lite-execute plan.json # File input
-/workflow:lite-execute --in-memory # Called by multi-cli-plan internally
-```
+**Plan mode**: Task description provided as arguments → interactive preference collection → multi-CLI planning pipeline
+**Execute mode**: Task description, file path, or in-memory context → interactive preference collection → execution pipeline
## Phase Reference Documents

View File

@@ -2,11 +2,9 @@
Complete multi-CLI collaborative planning pipeline with ACE context gathering and iterative cross-verification. This phase document preserves the full content of the original `workflow:multi-cli-plan` command.
> **Source**: Converted from `.claude/commands/workflow/multi-cli-plan.md`. Frontmatter moved to SKILL.md.
## Auto Mode
-When `--yes` or `-y`: Auto-approve plan, use recommended solution and execution method (Agent, Skip review).
+When `workflowPreferences.autoYes` is true: Auto-approve plan, use recommended solution and execution method (Agent, Skip review).
# Multi-CLI Collaborative Planning Command
@@ -448,8 +446,9 @@ executionContext = {
**Step 4: Hand off to Execution**:
```javascript
-// Execute to lite-execute with in-memory context
-Skill(skill="workflow:lite-execute", args="--in-memory")
+// Direct phase handoff: Read and execute Phase 2 (lite-execute) with in-memory context
+Read("phases/02-lite-execute.md")
+// Execute Phase 2 with executionContext (Mode 1: In-Memory Plan)
```
## Output File Structure

View File

@@ -1,10 +1,11 @@
# Phase 2: Lite Execute
-Complete execution engine supporting multiple input modes: in-memory plan, prompt description, or file content. This phase document preserves the full content of the original `workflow:lite-execute` command.
+Complete execution engine supporting multiple input modes: in-memory plan, prompt description, or file content.
> **Source**: Converted from `.claude/commands/workflow/lite-execute.md`. Frontmatter moved to SKILL.md.
## Objective
+# Workflow Lite-Execute Command (/workflow:lite-execute)
- Execute implementation tasks from in-memory plan, prompt description, or file content
- Support batch execution with grouped tasks and code review
## Overview
@@ -61,14 +62,14 @@ Flexible task execution command supporting three input modes: in-memory plan (fr
**User Interaction**:
```javascript
-// Parse --yes flag
-const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
+// Reference workflowPreferences (set by SKILL.md via AskUserQuestion)
+const autoYes = workflowPreferences.autoYes
let userSelection
if (autoYes) {
// Auto mode: Use defaults
-console.log(`[--yes] Auto-confirming execution:`)
+console.log(`[Auto] Auto-confirming execution:`)
console.log(` - Execution method: Auto`)
console.log(` - Code review: Skip`)
@@ -753,7 +754,7 @@ Appended to `previousExecutionResults` array for context continuity in multi-exe
## Post-Completion Expansion
-After completion, ask the user whether to expand into issues (test/enhance/refactor/doc); selected dimensions invoke `/issue:new "{summary} - {dimension}"`
+After completion, ask the user whether to expand into issues (test/enhance/refactor/doc); selected dimensions invoke `Skill(skill="issue:new", args="{summary} - {dimension}")`
**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.

View File

@@ -47,9 +47,49 @@ Unified planning skill combining 4-phase planning workflow, plan quality verific
5. **Auto-Continue**: After each phase completes, automatically execute next pending phase
6. **Accumulated State**: planning-notes.md carries context across phases for N+1 decisions
-## Auto Mode
+## Interactive Preference Collection
-When `--yes` or `-y`: Auto-continue all phases (skip confirmations), use recommended conflict resolutions, use safe defaults for replan.
+Before dispatching to phase execution, collect workflow preferences via AskUserQuestion:
```javascript
const prefResponse = AskUserQuestion({
questions: [
{
question: "是否跳过所有确认步骤(自动模式)?",
header: "Auto Mode",
multiSelect: false,
options: [
{ label: "Interactive (Recommended)", description: "交互模式,包含确认步骤" },
{ label: "Auto", description: "跳过所有确认,自动执行" }
]
}
]
})
workflowPreferences = {
autoYes: prefResponse.autoMode === 'Auto'
}
// For replan mode, also collect interactive preference
if (mode === 'replan') {
const replanPref = AskUserQuestion({
questions: [
{
question: "是否使用交互式澄清模式?",
header: "Replan Mode",
multiSelect: false,
options: [
{ label: "Standard (Recommended)", description: "使用安全默认值" },
{ label: "Interactive", description: "通过提问交互式澄清修改范围" }
]
}
]
})
workflowPreferences.interactive = replanPref.replanMode === 'Interactive'
}
```
**workflowPreferences** is passed to phase execution as context variable, referenced as `workflowPreferences.autoYes`, `workflowPreferences.interactive` within phases.
## Mode Detection
@@ -326,18 +366,18 @@ See phase files for detailed update code.
- **If user selects Verify**: Read phases/05-plan-verify.md, execute Phase 5 in-process
- **If user selects Execute**: Skill(skill="workflow-execute")
- **If user selects Review**: Route to /workflow:status
-- **Auto mode (--yes)**: Auto-select "Verify Plan Quality", then auto-continue to execute if PROCEED
+- **Auto mode (workflowPreferences.autoYes)**: Auto-select "Verify Plan Quality", then auto-continue to execute if PROCEED
- Update TodoWrite after each phase
- After each phase, automatically continue to next phase based on TodoList status
### Verify Mode
-- Detect/validate session (from --session flag or auto-detect)
+- Detect/validate session (auto-detect from active sessions)
- Initialize TodoWrite with single verification task
- Execute Phase 5 verification agent
- Present quality gate result and next step options
### Replan Mode
-- Parse flags (--session, --interactive, task-id)
+- Parse task ID from $ARGUMENTS (IMPL-N format, if present)
- Detect operation mode (task vs session)
- Initialize TodoWrite with replan-specific tasks
- Execute Phase 6 through all sub-phases (clarification → impact → backup → apply → verify)

View File

@@ -1,6 +1,6 @@
# Phase 2: Context Gathering
-Gather project context and analyze codebase via context-gather tool.
+Gather project context and analyze codebase via context-search-agent with parallel exploration.
## Objective
@@ -9,42 +9,344 @@ Gather project context and analyze codebase via context-gather tool.
- Detect conflict risk level for Phase 3 decision
- Update planning-notes.md with findings
## Core Philosophy
- **Agent Delegation**: Delegate all discovery to `context-search-agent` for autonomous execution
- **Detection-First**: Check for existing context-package before executing
- **Plan Mode**: Full comprehensive analysis (vs lightweight brainstorm mode)
- **Standardized Output**: Generate `.workflow/active/{session}/.process/context-package.json`
## Execution
-### Step 2.1: Execute Context Gathering
+### Step 2.1: Context-Package Detection
+**Execute First** - Check if valid package already exists:
```javascript
-Skill(skill="workflow:tools:context-gather", args="--session [sessionId] \"[structured-task-description]\"")
const contextPackagePath = `.workflow/active/${sessionId}/.process/context-package.json`;
if (file_exists(contextPackagePath)) {
const existing = Read(contextPackagePath);
// Validate package belongs to current session
if (existing?.metadata?.session_id === sessionId) {
console.log("Valid context-package found for session:", sessionId);
console.log("Stats:", existing.statistics);
console.log("Conflict Risk:", existing.conflict_detection.risk_level);
// Skip execution, store variables and proceed to Step 2.5
contextPath = contextPackagePath;
conflictRisk = existing.conflict_detection.risk_level;
return; // Early exit - skip Steps 2.2-2.4
}
}
```
### Step 2.2: Complexity Assessment & Parallel Explore
**Input**: `sessionId` from Phase 1
**Only execute if Step 2.1 finds no valid package**
```javascript
// 2.2.1 Complexity Assessment
function analyzeTaskComplexity(taskDescription) {
const text = taskDescription.toLowerCase();
if (/architect|refactor|restructure|modular|cross-module/.test(text)) return 'High';
if (/multiple|several|integrate|migrate|extend/.test(text)) return 'Medium';
return 'Low';
}
const ANGLE_PRESETS = {
architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
security: ['security', 'auth-patterns', 'dataflow', 'validation'],
performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
feature: ['patterns', 'integration-points', 'testing', 'dependencies'],
refactor: ['architecture', 'patterns', 'dependencies', 'testing']
};
function selectAngles(taskDescription, complexity) {
const text = taskDescription.toLowerCase();
let preset = 'feature';
if (/refactor|architect|restructure/.test(text)) preset = 'architecture';
else if (/security|auth|permission/.test(text)) preset = 'security';
else if (/performance|slow|optimi/.test(text)) preset = 'performance';
else if (/fix|bug|error|issue/.test(text)) preset = 'bugfix';
const count = complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1);
return ANGLE_PRESETS[preset].slice(0, count);
}
const complexity = analyzeTaskComplexity(task_description);
const selectedAngles = selectAngles(task_description, complexity);
const sessionFolder = `.workflow/active/${sessionId}/.process`;
// 2.2.2 Launch Parallel Explore Agents
const explorationTasks = selectedAngles.map((angle, index) =>
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Explore: ${angle}`,
prompt=`
## Task Objective
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
## Assigned Context
- **Exploration Angle**: ${angle}
- **Task Description**: ${task_description}
- **Session ID**: ${sessionId}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json
## MANDATORY FIRST STEPS (Executed by Agent)
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
## Exploration Strategy (${angle} focus)
**Step 1: Structural Scan** (Bash)
- get_modules_by_depth.sh -> identify modules related to ${angle}
- find/rg -> locate files relevant to ${angle} aspect
- Analyze imports/dependencies from ${angle} perspective
**Step 2: Semantic Analysis** (Gemini CLI)
- How does existing code handle ${angle} concerns?
- What patterns are used for ${angle}?
- Where would new code integrate from ${angle} viewpoint?
**Step 3: Write Output**
- Consolidate ${angle} findings into JSON
- Identify ${angle}-specific clarification needs
## Expected Output
**File**: ${sessionFolder}/exploration-${angle}.json
**Schema Reference**: Use the schema obtained in MANDATORY FIRST STEPS (step 3); follow it exactly
**Required Fields** (all ${angle} focused):
- project_structure: Modules/architecture relevant to ${angle}
- relevant_files: Files affected from ${angle} perspective
**MANDATORY**: Every file MUST use structured object format with ALL required fields:
[{path: "src/file.ts", relevance: 0.85, rationale: "Contains AuthService.login()", role: "modify_target", discovery_source: "bash-scan", key_symbols: ["AuthService", "login"]}]
- **rationale** (required): Specific selection basis tied to ${angle} topic (>10 chars, not generic)
- **role** (required): modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only
- **discovery_source** (recommended): bash-scan|cli-analysis|ace-search|dependency-trace|manual
- **key_symbols** (recommended): Key functions/classes/types in the file relevant to the task
- Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
- patterns: ${angle}-related patterns to follow
- dependencies: Dependencies relevant to ${angle}
- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
- constraints: ${angle}-specific limitations/conventions
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
- _metadata.exploration_angle: "${angle}"
## Success Criteria
- [ ] Schema obtained via cat explore-json-schema.json
- [ ] get_modules_by_depth.sh executed
- [ ] At least 3 relevant files identified with ${angle} rationale
- [ ] Patterns are actionable (code examples, not generic advice)
- [ ] Integration points include file:line locations
- [ ] Constraints are project-specific to ${angle}
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/exploration-${angle}.json
Return: 2-3 sentence summary of ${angle} findings
`
)
);
// 2.2.3 Generate Manifest after all complete
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`).split('\n').filter(f => f.trim());
const explorationManifest = {
session_id: sessionId,
task_description,
timestamp: new Date().toISOString(),
complexity,
exploration_count: selectedAngles.length,
angles_explored: selectedAngles,
explorations: explorationFiles.map(file => {
const data = JSON.parse(Read(file));
return { angle: data._metadata.exploration_angle, file: file.split('/').pop(), path: file, index: data._metadata.exploration_index };
})
};
Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2));
```
### Step 2.3: Invoke Context-Search Agent
**Only execute after Step 2.2 completes**
```javascript
// Load user intent from planning-notes.md (from Phase 1)
const planningNotesPath = `.workflow/active/${sessionId}/planning-notes.md`;
let userIntent = { goal: task_description, key_constraints: "None specified" };
if (file_exists(planningNotesPath)) {
const notesContent = Read(planningNotesPath);
const goalMatch = notesContent.match(/\*\*GOAL\*\*:\s*(.+)/);
const constraintsMatch = notesContent.match(/\*\*KEY_CONSTRAINTS\*\*:\s*(.+)/);
if (goalMatch) userIntent.goal = goalMatch[1].trim();
if (constraintsMatch) userIntent.key_constraints = constraintsMatch[1].trim();
}
Task(
subagent_type="context-search-agent",
run_in_background=false,
description="Gather comprehensive context for plan",
prompt=`
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution with priority sorting
## Session Information
- **Session ID**: ${sessionId}
- **Task Description**: ${task_description}
- **Output Path**: .workflow/active/${sessionId}/.process/context-package.json
## User Intent (from Phase 1 - Planning Notes)
**GOAL**: ${userIntent.goal}
**KEY_CONSTRAINTS**: ${userIntent.key_constraints}
This is the PRIMARY context source - all subsequent analysis must align with user intent.
## Exploration Input (from Step 2.2)
- **Manifest**: ${sessionFolder}/explorations-manifest.json
- **Exploration Count**: ${explorationManifest.exploration_count}
- **Angles**: ${explorationManifest.angles_explored.join(', ')}
- **Complexity**: ${complexity}
## Mission
Execute complete context-search-agent workflow for implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**:
- Read and parse .workflow/project-tech.json. Use its overview section as the foundational project_context.
- Read and parse .workflow/project-guidelines.json. Load conventions, constraints, and learnings into a project_guidelines section.
- If files don't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
3. **Foundation**: Initialize CodexLens, get project structure, load docs
4. **Analysis**: Extract keywords, determine scope, classify complexity
### Phase 2: Multi-Source Context Discovery
Execute all discovery tracks (WITH USER INTENT INTEGRATION):
- **Track -1**: User Intent & Priority Foundation (EXECUTE FIRST)
- Load user intent (GOAL, KEY_CONSTRAINTS) from session input
- Map user requirements to codebase entities (files, modules, patterns)
- Establish baseline priority scores based on user goal alignment
- Output: user_intent_mapping.json with preliminary priority scores
- **Track 0**: Exploration Synthesis (load explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
- **Track 3**: Web examples (use Exa MCP for unfamiliar tech/APIs)
- **Track 4**: Codebase analysis (5-layer discovery: files, content, patterns, deps, config/tests)
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 5-source data**: Merge findings from all sources
- Priority order: User Intent > Archive > Docs > Exploration > Code > Web
- **Prioritize the context from project-tech.json** for architecture and tech stack
3. **Context Priority Sorting**:
a. Combine scores from Track -1 (user intent alignment) + relevance scores + exploration critical_files
b. Classify files into priority tiers:
- **Critical** (score >= 0.85): Directly mentioned in user goal OR exploration critical_files
- **High** (0.70-0.84): Key dependencies, patterns required for goal
- **Medium** (0.50-0.69): Supporting files, indirect dependencies
- **Low** (< 0.50): Contextual awareness only
c. Generate dependency_order: Based on dependency graph + user goal sequence
d. Document sorting_rationale: Explain prioritization logic
4. **Populate project_context**: Directly use the overview from project-tech.json
5. **Populate project_guidelines**: Load from project-guidelines.json
6. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
7. Perform conflict detection with risk assessment
8. **Inject historical conflicts** from archive analysis into conflict_detection
9. **Generate prioritized_context section**:
{
"prioritized_context": {
"user_intent": { "goal": "...", "scope": "...", "key_constraints": ["..."] },
"priority_tiers": {
"critical": [{ "path": "...", "relevance": 0.95, "rationale": "..." }],
"high": [...], "medium": [...], "low": [...]
},
"dependency_order": ["module1", "module2", "module3"],
"sorting_rationale": "Based on user goal alignment, exploration critical files, and dependency graph"
}
}
10. Generate and validate context-package.json with prioritized_context field
## Output Requirements
Complete context-package.json with:
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
- **project_context**: description, technology_stack, architecture, key_components (from project-tech.json)
- **project_guidelines**: {conventions, constraints, quality_rules, learnings} (from project-guidelines.json)
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
- **dependencies**: {internal[], external[]} with dependency graph
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[]}
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights}
- **prioritized_context**: {user_intent, priority_tiers{critical, high, medium, low}, dependency_order[], sorting_rationale}
## Quality Validation
Before completion verify:
- [ ] Valid JSON format with all required fields
- [ ] File relevance accuracy >80%
- [ ] Dependency graph complete (max 2 transitive levels)
- [ ] Conflict risk level calculated correctly
- [ ] No sensitive data exposed
- [ ] Total files <= 50 (prioritize high-relevance)
## Planning Notes Record (REQUIRED)
After completing context-package.json, append to planning-notes.md:
**File**: .workflow/active/${sessionId}/planning-notes.md
**Location**: Under "## Context Findings (Phase 2)" section
**Format**:
### [Context-Search Agent] YYYY-MM-DD
- **Note**: [Brief summary of key findings]
Execute autonomously following agent documentation.
Report completion with statistics.
`
)
```
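The priority-tier thresholds described in Phase 3 (step 3b) can be sketched as a small classifier. This is an illustrative sketch only — `classifyTier` and `buildPriorityTiers` are not part of the agent contract; the thresholds are the ones stated above:

```javascript
// Classify a file into a priority tier using the Phase 3 thresholds:
// Critical >= 0.85 (or flagged in exploration critical_files),
// High 0.70-0.84, Medium 0.50-0.69, Low < 0.50.
function classifyTier(score, isExplorationCritical = false) {
  if (score >= 0.85 || isExplorationCritical) return 'critical';
  if (score >= 0.70) return 'high';
  if (score >= 0.50) return 'medium';
  return 'low';
}

// Group scored files into the prioritized_context.priority_tiers shape.
function buildPriorityTiers(files) {
  const tiers = { critical: [], high: [], medium: [], low: [] };
  for (const f of files) {
    tiers[classifyTier(f.relevance, f.isCritical)].push(f);
  }
  return tiers;
}
```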
### Step 2.4: Output Verification
After agent completes, verify output:
```javascript
// Verify file was created
const outputPath = `.workflow/active/${sessionId}/.process/context-package.json`;
if (!file_exists(outputPath)) {
throw new Error("Agent failed to generate context-package.json");
}
// Store variables for subsequent phases
contextPath = outputPath;
// Verify exploration_results included
const pkg = JSON.parse(Read(outputPath));
if (pkg.exploration_results?.exploration_count > 0) {
console.log(`Exploration results aggregated: ${pkg.exploration_results.exploration_count} angles`);
}
conflictRisk = pkg.conflict_detection?.risk_level || 'low';
```
### TodoWrite Update (Phase 2 in progress - tasks attached)
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "in_progress", "activeForm": "Executing context gathering"},
{"content": " -> Analyze codebase structure", "status": "in_progress", "activeForm": "Analyzing codebase structure"},
{"content": " -> Identify integration points", "status": "pending", "activeForm": "Identifying integration points"},
{"content": " -> Generate context package", "status": "pending", "activeForm": "Generating context package"},
{"content": "Phase 4: Task Generation", "status": "pending", "activeForm": "Executing task generation"}
]
```
**Note**: Phase 2 execution **attaches** context-gather's 3 sub-tasks. The orchestrator **executes** these tasks sequentially.
### TodoWrite Update (Phase 2 completed - tasks collapsed)
```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 4: Task Generation", "status": "pending", "activeForm": "Executing task generation"}
]
```
**Note**: Phase 2 tasks completed and collapsed to summary.
### Step 2.5: Update Planning Notes
After context gathering completes, update planning-notes.md with findings:
## Next Phase
Return to orchestrator. Orchestrator checks `conflictRisk`:
- If `conflictRisk >= medium` -> [Phase 3: Conflict Resolution](03-conflict-resolution.md)
- If `conflictRisk < medium` -> [Phase 4: Task Generation](04-task-generation.md)
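The gate above can be expressed as a one-line routing check (a sketch; the phase filenames match the links above):

```javascript
// Route to Phase 3 only when conflict risk is medium or high;
// otherwise skip straight to Phase 4 task generation.
function nextPhase(conflictRisk) {
  return ['medium', 'high'].includes(conflictRisk)
    ? '03-conflict-resolution.md'
    : '04-task-generation.md';
}
```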


# Phase 3: Conflict Resolution
Detect and resolve conflicts with CLI analysis. This phase is **conditional**.
## Objective
- Detect conflicts between planned changes and existing codebase
- Detect module scenario uniqueness (functional overlaps)
- Present conflicts to user with resolution strategies
- Apply selected resolution strategies
- Update planning-notes.md with conflict decisions
@@ -14,46 +15,336 @@ Detect and resolve conflicts with CLI analysis. This phase is **conditional** -
Only execute when context-package.json indicates `conflict_risk` is "medium" or "high".
If `conflict_risk` is "none" or "low", skip directly to Phase 4.
## Conflict Categories
| Category | Description |
|----------|-------------|
| **Architecture** | Incompatible design patterns, module structure changes |
| **API** | Breaking contract changes, signature modifications |
| **Data Model** | Schema modifications, type breaking changes |
| **Dependency** | Version incompatibilities, setup conflicts |
| **ModuleOverlap** | Functional overlap, scenario boundary ambiguity, duplicate responsibility |
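A conflict entry produced by the analysis might look like the following. This is an illustrative shape only — the authoritative structure is defined in `conflict-resolution-schema.json`, and the module names and scenarios here are hypothetical (user-facing fields use Chinese per the schema requirements below):

```json
{
  "id": "CONFLICT-001",
  "category": "ModuleOverlap",
  "severity": "High",
  "brief": "新通知模块与现有告警服务存在功能重叠",
  "overlap_analysis": {
    "new_module": { "name": "notification", "scenarios": ["email", "webhook"] },
    "existing_modules": [{ "name": "alerting", "scenarios": ["email"] }]
  },
  "strategies": [
    { "name": "合并到现有告警模块", "complexity": "Medium", "risk": "Low" },
    { "name": "保持独立并定义边界契约", "complexity": "Low", "risk": "Medium" }
  ],
  "recommended": 0
}
```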
## Execution
### Step 3.1: Validation
```javascript
// 1. Verify session directory exists
const sessionDir = `.workflow/active/${sessionId}`;
if (!file_exists(sessionDir)) {
throw new Error(`Session directory not found: ${sessionDir}`);
}
// 2. Load context-package.json
const contextPackage = JSON.parse(Read(contextPath));
// 3. Check conflict_risk (skip if none/low)
const conflictRisk = contextPackage.conflict_detection?.risk_level || 'low';
if (conflictRisk === 'none' || conflictRisk === 'low') {
console.log("No significant conflicts detected, proceeding to task generation");
// Skip directly to Phase 4
return;
}
```
### Step 3.2: CLI-Powered Conflict Analysis
**Agent Delegation**:
```javascript
Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
## Context
- Session: ${sessionId}
- Risk: ${conflictRisk}
- Files: ${existing_files_list}
## Exploration Context (from context-package.exploration_results)
- Exploration Count: ${contextPackage.exploration_results?.exploration_count || 0}
- Angles Analyzed: ${JSON.stringify(contextPackage.exploration_results?.angles || [])}
- Pre-identified Conflict Indicators: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.conflict_indicators || [])}
- Critical Files: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.critical_files?.map(f => f.path) || [])}
- All Patterns: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.all_patterns || [])}
- All Integration Points: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.all_integration_points || [])}
## Analysis Steps
### 0. Load Output Schema (MANDATORY)
Execute: cat ~/.ccw/workflows/cli-templates/schemas/conflict-resolution-schema.json
### 1. Load Context
- Read existing files from conflict_detection.existing_files
- Load plan from .workflow/active/${sessionId}/.process/context-package.json
- Load exploration_results and use aggregated_insights for enhanced analysis
- Extract role analyses and requirements
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
Primary (Gemini):
ccw cli -p "
PURPOSE: Detect conflicts between plan and codebase, using exploration insights
TASK:
* Review pre-identified conflict_indicators from exploration results
* Compare architectures (use exploration key_patterns)
* Identify breaking API changes
* Detect data model incompatibilities
* Assess dependency conflicts
* Analyze module scenario uniqueness
- Use exploration integration_points for precise locations
- Cross-validate with exploration critical_files
- Generate clarification questions for boundary definition
MODE: analysis
CONTEXT: @**/*.ts @**/*.js @**/*.tsx @**/*.jsx @.workflow/active/${sessionId}/**/*
EXPECTED: Conflict list with severity ratings, including:
- Validation of exploration conflict_indicators
- ModuleOverlap conflicts with overlap_analysis
- Targeted clarification questions
CONSTRAINTS: Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}
Fallback: Qwen (same prompt) -> Claude (manual analysis)
### 3. Generate Strategies (2-4 per conflict)
Template per conflict:
- Severity: Critical/High/Medium
- Category: Architecture/API/Data/Dependency/ModuleOverlap
- Affected files + impact
- For ModuleOverlap: Include overlap_analysis with existing modules and scenarios
- Options with pros/cons, effort, risk
- For ModuleOverlap strategies: Add clarification_needed questions for boundary definition
- Recommended strategy + rationale
### 4. Return Structured Conflict Data
Output to conflict-resolution.json (generated in Phase 4)
**Schema Reference**: Execute cat ~/.ccw/workflows/cli-templates/schemas/conflict-resolution-schema.json to get full schema
Return JSON following the schema. Key requirements:
- Minimum 2 strategies per conflict, max 4
- All text in Chinese for user-facing fields (brief, name, pros, cons, modification_suggestions)
- modifications.old_content: 20-100 chars for unique Edit tool matching
- modifications.new_content: preserves markdown formatting
- modification_suggestions: 2-5 actionable suggestions for custom handling
### 5. Planning Notes Record (REQUIRED)
After analysis complete, append to planning-notes.md:
**File**: .workflow/active/${sessionId}/planning-notes.md
**Location**: Under "## Conflict Decisions (Phase 3)" section
**Format**:
### [Conflict-Resolution Agent] YYYY-MM-DD
- **Note**: [Brief summary of conflict types, strategies, key decisions]
`)
```
### Step 3.3: Iterative User Interaction
```javascript
const autoYes = workflowPreferences?.autoYes || false;
FOR each conflict:
round = 0, clarified = false, userClarifications = []
WHILE (!clarified && round++ < 10):
// 1. Display conflict info (text output for context)
displayConflictSummary(conflict) // id, brief, severity, overlap_analysis if ModuleOverlap
// 2. Strategy selection
if (autoYes) {
console.log(`[autoYes] Auto-selecting recommended strategy`)
selectedStrategy = conflict.strategies[conflict.recommended || 0]
clarified = true // Skip clarification loop
} else {
AskUserQuestion({
questions: [{
question: formatStrategiesForDisplay(conflict.strategies),
header: "Strategy",
multiSelect: false,
options: [
...conflict.strategies.map((s, i) => ({
label: `${s.name}${i === conflict.recommended ? ' (Recommended)' : ''}`,
description: `${s.complexity} complexity | ${s.risk} risk${s.clarification_needed?.length ? ' | Needs clarification' : ''}`
})),
{ label: "Custom modification", description: `Suggestions: ${conflict.modification_suggestions?.slice(0,2).join('; ')}` }
]
}]
})
// 3. Handle selection
if (userChoice === "Custom modification") {
customConflicts.push({ id, brief, category, suggestions, overlap_analysis })
break
}
selectedStrategy = findStrategyByName(userChoice)
}
// 4. Clarification (if needed) - batched max 4 per call
if (!autoYes && selectedStrategy.clarification_needed?.length > 0) {
for (batch of chunk(selectedStrategy.clarification_needed, 4)) {
AskUserQuestion({
questions: batch.map((q, i) => ({
question: q, header: `Clarify${i+1}`, multiSelect: false,
options: [{ label: "Provide details", description: "Enter answer" }]
}))
})
userClarifications.push(...collectAnswers(batch))
}
// 5. Agent re-analysis
reanalysisResult = Task({
subagent_type: "cli-execution-agent",
run_in_background: false,
prompt: `Conflict: ${conflict.id}, Strategy: ${selectedStrategy.name}
User Clarifications: ${JSON.stringify(userClarifications)}
Output: { uniqueness_confirmed, rationale, updated_strategy, remaining_questions }`
})
if (reanalysisResult.uniqueness_confirmed) {
selectedStrategy = { ...reanalysisResult.updated_strategy, clarifications: userClarifications }
clarified = true
} else {
selectedStrategy.clarification_needed = reanalysisResult.remaining_questions
}
} else {
clarified = true
}
if (clarified) resolvedConflicts.push({ conflict, strategy: selectedStrategy })
END WHILE
END FOR
selectedStrategies = resolvedConflicts.map(r => ({
conflict_id: r.conflict.id, strategy: r.strategy, clarifications: r.strategy.clarifications || []
}))
```
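The `chunk` helper used above for batching clarification questions (max 4 per AskUserQuestion call) is assumed rather than defined; a minimal version might be:

```javascript
// Split an array into fixed-size batches; the last batch may be smaller.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```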
### Step 3.4: Apply Modifications
```javascript
// 1. Extract modifications from resolved strategies
const modifications = [];
selectedStrategies.forEach(item => {
if (item.strategy && item.strategy.modifications) {
modifications.push(...item.strategy.modifications.map(mod => ({
...mod,
conflict_id: item.conflict_id,
clarifications: item.clarifications
})));
}
});
console.log(`Applying ${modifications.length} modifications...`);
// 2. Apply each modification using Edit tool (with fallback to context-package.json)
const appliedModifications = [];
const failedModifications = [];
const fallbackConstraints = []; // For files that don't exist
modifications.forEach((mod, idx) => {
try {
console.log(`[${idx + 1}/${modifications.length}] Modifying ${mod.file}...`);
// Check if target file exists (brainstorm files may not exist in lite workflow)
if (!file_exists(mod.file)) {
console.log(` File not found, writing to context-package.json as constraint`);
fallbackConstraints.push({
source: "conflict-resolution",
conflict_id: mod.conflict_id,
target_file: mod.file,
section: mod.section,
change_type: mod.change_type,
content: mod.new_content,
rationale: mod.rationale
});
return; // Skip to next modification
}
if (mod.change_type === "update") {
Edit({ file_path: mod.file, old_string: mod.old_content, new_string: mod.new_content });
} else if (mod.change_type === "add") {
const fileContent = Read(mod.file);
const updated = insertContentAfterSection(fileContent, mod.section, mod.new_content);
Write(mod.file, updated);
} else if (mod.change_type === "remove") {
Edit({ file_path: mod.file, old_string: mod.old_content, new_string: "" });
}
appliedModifications.push(mod);
console.log(` Success`);
} catch (error) {
console.log(` Failed: ${error.message}`);
failedModifications.push({ ...mod, error: error.message });
}
});
// 3. Generate conflict-resolution.json output file
const resolutionOutput = {
session_id: sessionId,
resolved_at: new Date().toISOString(),
summary: {
total_conflicts: conflicts.length,
resolved_with_strategy: selectedStrategies.length,
custom_handling: customConflicts.length,
fallback_constraints: fallbackConstraints.length
},
resolved_conflicts: selectedStrategies.map(s => ({
conflict_id: s.conflict_id,
strategy_name: s.strategy.name,
strategy_approach: s.strategy.approach,
clarifications: s.clarifications || [],
modifications_applied: s.strategy.modifications?.filter(m =>
appliedModifications.some(am => am.conflict_id === s.conflict_id)
) || []
})),
custom_conflicts: customConflicts.map(c => ({
id: c.id, brief: c.brief, category: c.category,
suggestions: c.suggestions, overlap_analysis: c.overlap_analysis || null
})),
planning_constraints: fallbackConstraints,
failed_modifications: failedModifications
};
const resolutionPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
// 4. Update context-package.json with resolution details
const contextPkg = JSON.parse(Read(contextPath));
contextPkg.conflict_detection.conflict_risk = "resolved";
contextPkg.conflict_detection.resolution_file = resolutionPath;
contextPkg.conflict_detection.resolved_conflicts = selectedStrategies.map(s => s.conflict_id);
contextPkg.conflict_detection.custom_conflicts = customConflicts.map(c => c.id);
contextPkg.conflict_detection.resolved_at = new Date().toISOString();
Write(contextPath, JSON.stringify(contextPkg, null, 2));
// 5. Output custom conflict summary with overlap analysis (if any)
if (customConflicts.length > 0) {
customConflicts.forEach(conflict => {
console.log(`[${conflict.category}] ${conflict.id}: ${conflict.brief}`);
if (conflict.category === 'ModuleOverlap' && conflict.overlap_analysis) {
console.log(`Overlap info: New module: ${conflict.overlap_analysis.new_module.name}`);
}
conflict.suggestions.forEach(s => console.log(` - ${s}`));
});
}
```
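The `insertContentAfterSection` helper used for `change_type === "add"` is assumed to exist; a minimal sketch that inserts text on a new line after a matching markdown heading might be:

```javascript
// Insert `content` immediately after the line matching `section`.
// Throws if the section heading is not found, so the caller can
// record the modification in failedModifications.
function insertContentAfterSection(fileContent, section, content) {
  const lines = fileContent.split('\n');
  const idx = lines.findIndex(line => line.trim() === section.trim());
  if (idx === -1) throw new Error(`Section not found: ${section}`);
  lines.splice(idx + 1, 0, content);
  return lines.join('\n');
}
```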
### TodoWrite Update (Phase 3 in progress, if conflict_risk >= medium)
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3: Conflict Resolution", "status": "in_progress", "activeForm": "Resolving conflicts"},
{"content": " -> Detect conflicts with CLI analysis", "status": "in_progress", "activeForm": "Detecting conflicts"},
{"content": " -> Present conflicts to user", "status": "pending", "activeForm": "Presenting conflicts"},
{"content": " -> Apply resolution strategies", "status": "pending", "activeForm": "Applying resolution strategies"},
{"content": "Phase 4: Task Generation", "status": "pending", "activeForm": "Executing task generation"}
]
```
**Note**: Phase 3 execution **attaches** conflict-resolution's 3 sub-tasks. The orchestrator **executes** these tasks sequentially.
### TodoWrite Update (Phase 3 completed - tasks collapsed)
```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 3: Conflict Resolution", "status": "completed", "activeForm": "Resolving conflicts"},
  {"content": "Phase 4: Task Generation", "status": "pending", "activeForm": "Executing task generation"}
]
```
**Note**: Phase 3 tasks completed and collapsed to summary.
### Step 3.5: Update Planning Notes
After conflict resolution completes (if executed), update planning-notes.md:
```javascript
// If Phase 3 was executed, update planning-notes.md
if (conflictRisk === 'medium' || conflictRisk === 'high') {
const conflictResPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
if (file_exists(conflictResPath)) {
const conflictRes = JSON.parse(Read(conflictResPath));
const resolved = conflictRes.resolved_conflicts || [];
const modifiedArtifacts = conflictRes.modified_artifacts || [];
const planningConstraints = conflictRes.planning_constraints || [];
// Update Phase 3 section
Edit(planningNotesPath, {
old: '## Conflict Decisions (Phase 3)\n(To be filled if conflicts detected)',
new: `## Conflict Decisions (Phase 3)
- **RESOLVED**: ${resolved.map(r => `${r.type} -> ${r.strategy}`).join('; ') || 'None'}
- **MODIFIED_ARTIFACTS**: ${modifiedArtifacts.join(', ') || 'None'}
- **CONSTRAINTS**: ${planningConstraints.join('; ') || 'None'}`
})
// Append Phase 3 constraints to consolidated list
if (planningConstraints.length > 0) {
const currentNotes = Read(planningNotesPath);
const constraintCount = (currentNotes.match(/^\d+\./gm) || []).length;
Edit(planningNotesPath, {
old: '## Consolidated Constraints (Phase 4 Input)',
@@ -109,9 +397,9 @@ ${planningConstraints.map((c, i) => `${constraintCount + i + 1}. [Conflict] ${c}
**Auto-Continue**: Return to user showing conflict resolution results and selected strategies, then auto-continue.
**Auto Mode**: When `workflowPreferences.autoYes` is true, conflict-resolution automatically applies recommended resolution strategies without user confirmation.
### Step 3.6: Memory State Check
Evaluate current context window usage and memory state:

View File

@@ -17,30 +17,395 @@ Generate implementation plan and task JSONs via action-planning-agent.
- Task generation translates high-level role analyses into concrete, actionable work items
- **Intent priority**: Current user prompt > role analysis.md files > guidance-specification.md
## Core Philosophy
- **Planning Only**: Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT implement code
- **Agent-Driven Document Generation**: Delegate plan generation to action-planning-agent
- **NO Redundant Context Sorting**: Context priority sorting is ALREADY completed in context-gather Phase 2/3
- Use `context-package.json.prioritized_context` directly
- DO NOT re-sort files or re-compute priorities
- `priority_tiers` and `dependency_order` are pre-computed and ready-to-use
- **N+1 Parallel Planning**: Auto-detect multi-module projects, enable parallel planning (2+1 or 3+1 mode)
- **Progressive Loading**: Load context incrementally (Core -> Selective -> On-Demand) due to analysis.md file size
- **Memory-First**: Reuse loaded documents from conversation memory
- **Smart Selection**: Load synthesis_output OR guidance + relevant role analyses, NOT all role analyses
## Execution
### Step 4.0: User Configuration (Interactive)
**Auto Mode Check**:
```javascript
const autoYes = workflowPreferences?.autoYes || false;
if (autoYes) {
console.log(`[autoYes] Using defaults: No materials, Agent executor, Codex CLI`)
userConfig = {
supplementaryMaterials: { type: "none", content: [] },
executionMethod: "agent",
preferredCliTool: "codex",
enableResume: true
}
// Skip to Step 4.1
}
```
**CLI Execution Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description. If user specifies "use Codex/Gemini/Qwen for X", CLI tool usage is controlled by `meta.execution_config.method` per task, not by `command` fields in implementation steps.
**User Questions** (skipped if autoYes):
```javascript
if (!autoYes) AskUserQuestion({
questions: [
{
question: "Do you have supplementary materials or guidelines to include?",
header: "Materials",
multiSelect: false,
options: [
{ label: "No additional materials", description: "Use existing context only" },
{ label: "Provide file paths", description: "I'll specify paths to include" },
{ label: "Provide inline content", description: "I'll paste content directly" }
]
},
{
question: "Select execution method for generated tasks:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent (Recommended)", description: "Claude agent executes tasks directly" },
{ label: "Hybrid", description: "Agent orchestrates, calls CLI for complex steps" },
{ label: "CLI Only", description: "All execution via CLI tools (codex/gemini/qwen)" }
]
},
{
question: "If using CLI, which tool do you prefer?",
header: "CLI Tool",
multiSelect: false,
options: [
{ label: "Codex (Recommended)", description: "Best for implementation tasks" },
{ label: "Gemini", description: "Best for analysis and large context" },
{ label: "Qwen", description: "Alternative analysis tool" },
{ label: "Auto", description: "Let agent decide per-task" }
]
}
]
})
```
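The collected answers can be folded into a `userConfig` object mirroring the autoYes defaults above. A hedged sketch — the option labels and the `answers` shape are assumptions drawn from the questions shown, not a fixed API:

```javascript
// Sketch: map AskUserQuestion answers to userConfig (labels assumed).
function toUserConfig(answers) {
  const methodMap = {
    "Agent (Recommended)": "agent",
    "Hybrid": "hybrid",
    "CLI Only": "cli"
  };
  const cliMap = {
    "Codex (Recommended)": "codex",
    "Gemini": "gemini",
    "Qwen": "qwen",
    "Auto": "auto"
  };
  return {
    supplementaryMaterials: { type: "none", content: [] }, // filled later if user provided paths/content
    executionMethod: methodMap[answers["Execution"]] || "agent",
    preferredCliTool: cliMap[answers["CLI Tool"]] || "codex",
    enableResume: true
  };
}
```

Unanswered questions fall back to the same defaults used in auto mode.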
**Input**:
- `sessionId` from Phase 1
- **planning-notes.md**: Consolidated constraints from all phases (Phase 1-3)
- Path: `.workflow/active/[sessionId]/planning-notes.md`
- Contains: User intent, context findings, conflict decisions, consolidated constraints
- **Purpose**: Provides structured, minimal context summary to action-planning-agent
### Step 4.1: Context Preparation & Module Detection
**Command prepares session paths, metadata, detects module structure. Context priority sorting is NOT performed here.**
```javascript
// Session Path Structure:
// .workflow/active/WFS-{session-id}/
// ├── workflow-session.json # Session metadata
// ├── planning-notes.md # Consolidated planning notes
// ├── .process/
// │ └── context-package.json # Context package
// ├── .task/ # Output: Task JSON files
// ├── plan.json # Output: Structured plan overview
// ├── IMPL_PLAN.md # Output: Implementation plan
// └── TODO_LIST.md # Output: TODO list
// Auto Module Detection (determines single vs parallel mode)
function autoDetectModules(contextPackage, projectRoot) {
// Complexity Gate: Only parallelize for High complexity
const complexity = contextPackage.metadata?.complexity || 'Medium';
if (complexity !== 'High') {
return [{ name: 'main', prefix: '', paths: ['.'] }];
}
// Priority 1: Explicit frontend/backend separation
if (exists('src/frontend') && exists('src/backend')) {
return [
{ name: 'frontend', prefix: 'A', paths: ['src/frontend'] },
{ name: 'backend', prefix: 'B', paths: ['src/backend'] }
];
}
// Priority 2: Monorepo structure
if (exists('packages/*') || exists('apps/*')) {
return detectMonorepoModules();
}
// Priority 3: Context-package dependency clustering
const modules = clusterByDependencies(contextPackage.dependencies?.internal);
if (modules.length >= 2) return modules.slice(0, 3);
// Default: Single module (original flow)
return [{ name: 'main', prefix: '', paths: ['.'] }];
}
// Decision Logic:
// complexity !== 'High' -> Force Phase 2A (Single Agent)
// modules.length == 1 -> Phase 2A (Single Agent, original flow)
// modules.length >= 2 && complexity == 'High' -> Phase 2B + Phase 3 (N+1 Parallel)
```
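The complexity gate can be exercised as a runnable sketch. Filesystem checks are stubbed via an injected `exists` callback (an assumption for testability), and the monorepo/clustering branches are omitted:

```javascript
// Simplified autoDetectModules: complexity gate + frontend/backend split only.
function detectModules(contextPackage, exists) {
  const complexity = (contextPackage.metadata && contextPackage.metadata.complexity) || 'Medium';
  // Complexity gate: anything below High stays single-module
  if (complexity !== 'High') return [{ name: 'main', prefix: '', paths: ['.'] }];
  // Priority 1: explicit frontend/backend separation
  if (exists('src/frontend') && exists('src/backend')) {
    return [
      { name: 'frontend', prefix: 'A', paths: ['src/frontend'] },
      { name: 'backend', prefix: 'B', paths: ['src/backend'] }
    ];
  }
  // Default: single module
  return [{ name: 'main', prefix: '', paths: ['.'] }];
}
```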
### Step 4.2A: Single Agent Planning (modules.length == 1)
**Purpose**: Generate IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT code implementation.
```javascript
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description="Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
prompt=`
## TASK OBJECTIVE
Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for workflow session
IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT implementing code.
CRITICAL: Follow the progressive loading strategy defined in the agent specification
## PLANNING NOTES (PHASE 1-3 CONTEXT)
Load: .workflow/active/${sessionId}/planning-notes.md
This document contains:
- User Intent: Original GOAL and KEY_CONSTRAINTS from Phase 1
- Context Findings: Critical files, architecture, and constraints from Phase 2
- Conflict Decisions: Resolved conflicts and planning constraints from Phase 3
- Consolidated Constraints: All constraints from all phases
**USAGE**: Read planning-notes.md FIRST. Use Consolidated Constraints list to guide task sequencing and dependencies.
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/${sessionId}/workflow-session.json
- Planning Notes: .workflow/active/${sessionId}/planning-notes.md
- Context Package: .workflow/active/${sessionId}/.process/context-package.json
Output:
- Task Dir: .workflow/active/${sessionId}/.task/
- IMPL_PLAN: .workflow/active/${sessionId}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/${sessionId}/TODO_LIST.md
## CONTEXT METADATA
Session ID: ${sessionId}
MCP Capabilities: {exa_code, exa_web, code_index}
## FEATURE SPECIFICATIONS (conditional)
If context-package has brainstorm_artifacts.feature_index_path:
Feature Index: [from context-package]
Feature Spec Dir: [from context-package]
Else if .workflow/active/${sessionId}/.brainstorming/feature-specs/ exists:
Feature Index: .workflow/active/${sessionId}/.brainstorming/feature-specs/feature-index.json
Feature Spec Dir: .workflow/active/${sessionId}/.brainstorming/feature-specs/
Use feature-index.json to:
- Map features to implementation tasks (feature_id -> task alignment)
- Reference individual feature spec files (spec_path) for detailed requirements
- Identify cross-cutting concerns that span multiple tasks
- Align task priorities with feature priorities
If the directory does not exist, skip this section.
## USER CONFIGURATION (from Step 4.0)
Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
"agent" ->
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
"cli" ->
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
"hybrid" ->
Per-task decision: Simple tasks (<=3 files) -> "agent", Complex tasks (>3 files) -> "cli"
IMPORTANT: Do NOT add command field to implementation steps. Execution routing is controlled by task-level meta.execution_config.method only.
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2/3. DO NOT re-sort.
Direct usage:
- **user_intent**: Use goal/scope/key_constraints for task alignment
- **priority_tiers.critical**: PRIMARY focus for task generation
- **priority_tiers.high**: SECONDARY focus
- **dependency_order**: Use for task sequencing - already computed
- **sorting_rationale**: Reference for understanding priority decisions
## EXPLORATION CONTEXT (from context-package.exploration_results) - SUPPLEMENT ONLY
If prioritized_context is incomplete, fall back to exploration_results:
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
- Reference aggregated_insights.all_patterns for implementation approach
- Use aggregated_insights.all_integration_points for precise modification locations
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints as task constraints
- Reference resolved_conflicts for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## EXPECTED DELIVERABLES
1. Task JSON Files (.task/IMPL-*.json)
- Unified flat schema (task-schema.json)
- Quantified requirements with explicit counts
- Artifacts integration from context package
- **focus_paths from prioritized_context.priority_tiers (critical + high)**
- Pre-analysis steps (use dependency_order for task sequencing)
- **CLI Execution IDs and strategies (MANDATORY)**
2. Implementation Plan (IMPL_PLAN.md)
- Context analysis and artifact references
- Task breakdown and execution strategy
3. Plan Overview (plan.json)
- Structured plan overview (plan-overview-base-schema)
- Machine-readable task IDs, shared context, metadata
4. TODO List (TODO_LIST.md)
- Hierarchical structure
- Links to task JSONs and summaries
## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **cli_execution.id**: Unique ID (format: {session_id}-{task_id})
- **cli_execution**: Strategy object based on depends_on:
- No deps -> { "strategy": "new" }
- 1 dep (single child) -> { "strategy": "resume", "resume_from": "parent-cli-id" }
- 1 dep (multiple children) -> { "strategy": "fork", "resume_from": "parent-cli-id" }
- N deps -> { "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }
## QUALITY STANDARDS
Hard Constraints:
- Task count <= 18 (hard limit)
- All requirements quantified
- Acceptance criteria measurable
- Artifact references mapped from context package
## PLANNING NOTES RECORD (REQUIRED)
After completing, update planning-notes.md:
## Task Generation (Phase 4)
### [Action-Planning Agent] YYYY-MM-DD
- **Tasks**: [count] ([IDs])
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| [choice] | [why] | [Yes/No] |
### Deferred
- [ ] [item] - [reason]
`
)
```
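The `depends_on` -> `cli_execution` strategy rules above reduce to a small pure function. In this sketch, `childCount` (how many tasks depend on a given parent) is an assumed helper, not part of the task schema:

```javascript
// Hedged sketch of the depends_on -> cli_execution strategy mapping.
function deriveCliExecution(dependsOn, childCount) {
  const deps = dependsOn || [];
  if (deps.length === 0) return { strategy: "new" };
  if (deps.length === 1) {
    const parent = deps[0];
    return childCount(parent) > 1
      ? { strategy: "fork", resume_from: parent }    // parent has multiple children
      : { strategy: "resume", resume_from: parent }; // single child resumes parent
  }
  return { strategy: "merge_fork", merge_from: deps };
}
```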
### Step 4.2B: N Parallel Planning (modules.length >= 2)
**Condition**: `modules.length >= 2` (multi-module detected)
**Purpose**: Launch N action-planning-agents simultaneously, one per module.
```javascript
// Launch N agents in parallel (one per module)
const planningTasks = modules.map(module =>
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description=`Generate ${module.name} module task JSONs`,
prompt=`
## TASK OBJECTIVE
Generate task JSON files for ${module.name} module within workflow session
IMPORTANT: This is PLANNING ONLY - generate task JSONs; do NOT implement code.
IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md are generated by the Phase 3 Coordinator.
## PLANNING NOTES (PHASE 1-3 CONTEXT)
Load: .workflow/active/${sessionId}/planning-notes.md
## MODULE SCOPE
- Module: ${module.name} (${module.type})
- Focus Paths: ${module.paths.join(', ')}
- Task ID Prefix: IMPL-${module.prefix}
- Task Limit: <=6 tasks (hard limit for this module)
- Other Modules: ${otherModules.join(', ')} (reference only)
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/${sessionId}/workflow-session.json
- Planning Notes: .workflow/active/${sessionId}/planning-notes.md
- Context Package: .workflow/active/${sessionId}/.process/context-package.json
Output:
- Task Dir: .workflow/active/${sessionId}/.task/
## CROSS-MODULE DEPENDENCIES
- For dependencies ON other modules: Use placeholder depends_on: ["CROSS::{module}::{pattern}"]
- Example: depends_on: ["CROSS::B::api-endpoint"]
- Phase 3 Coordinator resolves to actual task IDs
## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **cli_execution.id**: Unique ID (format: {session_id}-IMPL-${module.prefix}{seq})
- Cross-module dep -> { "strategy": "cross_module_fork", "resume_from": "CROSS::{module}::{pattern}" }
## QUALITY STANDARDS
- Task count <= 6 for this module
- Focus paths scoped to ${module.paths.join(', ')} only
- Cross-module dependencies use CROSS:: placeholder format
## PLANNING NOTES RECORD (REQUIRED)
### [${module.name}] YYYY-MM-DD
- **Tasks**: [count] ([IDs])
- **CROSS deps**: [placeholders used]
`
)
);
// Execute all in parallel
await Promise.all(planningTasks);
```
### Step 4.3: Integration (+1 Coordinator, Multi-Module Only)
**Condition**: Only executed when `modules.length >= 2`
```javascript
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description="Integrate module tasks and generate unified documents",
prompt=`
## TASK OBJECTIVE
Integrate all module task JSONs, resolve cross-module dependencies, and generate unified IMPL_PLAN.md and TODO_LIST.md
IMPORTANT: This is INTEGRATION ONLY - consolidate existing task JSONs; do NOT create new tasks.
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/${sessionId}/workflow-session.json
- Context Package: .workflow/active/${sessionId}/.process/context-package.json
- Task JSONs: .workflow/active/${sessionId}/.task/IMPL-*.json (from Phase 2B)
Output:
- Updated Task JSONs: .workflow/active/${sessionId}/.task/IMPL-*.json (resolved dependencies)
- IMPL_PLAN: .workflow/active/${sessionId}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/${sessionId}/TODO_LIST.md
## INTEGRATION STEPS
1. Collect all .task/IMPL-*.json, group by module prefix
2. Resolve CROSS:: dependencies -> actual task IDs, update task JSONs
3. Generate IMPL_PLAN.md (multi-module format)
4. Generate TODO_LIST.md (hierarchical format)
## CROSS-MODULE DEPENDENCY RESOLUTION
- Pattern: CROSS::{module}::{pattern} -> IMPL-{module}* matching title/context
- Log unresolved as warnings
## PLANNING NOTES RECORD (REQUIRED)
### [Coordinator] YYYY-MM-DD
- **Total**: [count] tasks
- **Resolved**: [CROSS:: resolutions]
`
)
```
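The CROSS:: resolution step can be sketched as follows. The matching heuristic (module prefix plus case-insensitive title match) and the warning format are assumptions consistent with the pattern described above:

```javascript
// Hypothetical resolver for CROSS::{module}::{pattern} placeholders.
function resolveCrossDeps(tasks) {
  const warnings = [];
  for (const task of tasks) {
    task.depends_on = (task.depends_on || []).map(dep => {
      const m = /^CROSS::([^:]+)::(.+)$/.exec(dep);
      if (!m) return dep; // not a placeholder, keep as-is
      const [, module, pattern] = m;
      const target = tasks.find(t =>
        t.id.startsWith(`IMPL-${module}`) &&
        (t.title || '').toLowerCase().includes(pattern.toLowerCase())
      );
      if (!target) {
        warnings.push(`unresolved: ${dep}`); // log unresolved as warning
        return dep;
      }
      return target.id;
    });
  }
  return warnings;
}
```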
### TodoWrite Update (Phase 4 in progress)
```json
[
@@ -50,8 +415,6 @@ Skill(skill="workflow:tools:task-generate-agent", args="--session [sessionId]")
]
```
**Note**: Single agent task attached. Agent autonomously completes discovery, planning, and output generation internally.
### TodoWrite Update (Phase 4 completed)
```json
@@ -62,9 +425,7 @@ Skill(skill="workflow:tools:task-generate-agent", args="--session [sessionId]")
]
```
**Note**: Agent task completed. No collapse needed (single task).
### Step 4.4: Plan Confirmation (User Decision Gate)
After Phase 4 completes, present user with action choices:
@@ -84,15 +445,15 @@ const userChoice = AskUserQuestion({
options: [
{
label: "Verify Plan Quality (Recommended)",
description: "Run quality verification to catch issues before execution."
},
{
label: "Start Execution",
description: "Begin implementing tasks immediately."
},
{
label: "Review Status Only",
description: "View task breakdown and session status without taking further action."
}
]
}]
@@ -102,7 +463,6 @@ const userChoice = AskUserQuestion({
if (userChoice.answers["Next Action"] === "Verify Plan Quality (Recommended)") {
console.log("\nStarting plan verification...\n");
// Orchestrator reads phases/05-plan-verify.md and executes
} else if (userChoice.answers["Next Action"] === "Start Execution") {
console.log("\nStarting task execution...\n");
Skill(skill="workflow-execute", args="--session " + sessionId);
@@ -112,12 +472,12 @@ if (userChoice.answers["Next Action"] === "Verify Plan Quality (Recommended)") {
}
```
**Auto Mode**: When `workflowPreferences.autoYes` is true, auto-select "Verify Plan Quality", then auto-continue to execute if quality gate is PROCEED.
**Return to Orchestrator**: Based on user's choice:
- **Verify** -> Orchestrator reads phases/05-plan-verify.md and executes Phase 5 in-process
- **Execute** -> Skill(skill="workflow-execute")
- **Review** -> Route to /workflow:status
## Output
@@ -130,6 +490,6 @@ if (userChoice.answers["Next Action"] === "Verify Plan Quality (Recommended)") {
## Next Phase (Conditional)
Based on user's plan confirmation choice:
- If "Verify" -> [Phase 5: Plan Verification](05-plan-verify.md)
- If "Execute" -> Skill(skill="workflow-execute")
- If "Review" -> External: /workflow:status

View File

@@ -332,15 +332,16 @@ console.log(`Report: ${process_dir}/PLAN_VERIFICATION.md\n${recommendation} | C:
### Step 5.6: Next Step Selection
```javascript
// Reference workflowPreferences (set by SKILL.md via AskUserQuestion)
const autoYes = workflowPreferences.autoYes
const canExecute = recommendation !== 'BLOCK_EXECUTION'
// Auto mode
if (autoYes) {
if (canExecute) {
Skill(skill="workflow-execute", args="--resume-session=\"${session_id}\"")
} else {
console.log(`[Auto] BLOCK_EXECUTION - Fix ${critical_count} critical issues first.`)
}
return
}
@@ -371,7 +372,9 @@ const selection = AskUserQuestion({
if (selection.includes("Execute")) {
Skill(skill="workflow-execute", args="--resume-session=\"${session_id}\"")
} else if (selection === "Re-verify") {
// Direct phase re-execution: re-read and execute this phase
Read("phases/05-plan-verify.md")
// Re-execute with current session context
}
```

View File

@@ -12,37 +12,26 @@ Interactive workflow replanning with session-level artifact updates and boundary
## Entry Point
Triggered via Replan Mode routing in SKILL.md.
## Input
### Session Replan Mode
Replan accepts requirements text as arguments. Session and mode preferences are provided via `workflowPreferences` context variables (set by SKILL.md via AskUserQuestion).
```
Input: requirements text (positional argument)
Context: workflowPreferences.autoYes, workflowPreferences.interactive
Session: auto-detected from .workflow/active/ or specified via workflowPreferences.sessionId
```
### Task Replan Mode
```text
# Direct task update
Replan input: IMPL-1 "requirements text"
# Interactive mode (workflowPreferences.interactive = true)
Replan input: IMPL-1 "requirements text"
```
## Language Convention
@@ -53,10 +42,11 @@ Interactive question options use Chinese (user-facing UI text) with English iden
### Input Parsing
**Parse input**:
```javascript
// Reference workflowPreferences (set by SKILL.md via AskUserQuestion)
const sessionFlag = workflowPreferences.sessionId
const interactive = workflowPreferences.interactive
const taskIdMatch = $ARGUMENTS.match(/\b(IMPL-\d+(?:\.\d+)?)\b/)
const taskId = taskIdMatch?.[1]
```
@@ -117,10 +107,11 @@ const taskId = taskIdMatch?.[1]
### Auto Mode Support
When `workflowPreferences.autoYes === true`, the phase skips interactive clarification and uses safe defaults:
```javascript
// Reference workflowPreferences (set by SKILL.md via AskUserQuestion)
const autoYes = workflowPreferences.autoYes
```
**Auto Mode Defaults**:
@@ -130,7 +121,7 @@ const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
- **Dependency Changes**: `no` (preserve existing dependencies)
- **User Confirmation**: Auto-confirm execution
**Note**: `workflowPreferences.interactive` overrides `workflowPreferences.autoYes` (forces interactive mode).
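This precedence rule can be captured as a one-line sketch (function name assumed):

```javascript
// Sketch: interactive forces interactive mode even when autoYes is set.
function effectiveMode(prefs) {
  if (prefs.interactive) return "interactive";
  return prefs.autoYes ? "auto" : "interactive";
}
```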
---
@@ -142,7 +133,7 @@ const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
```javascript
if (autoYes && !interactive) {
// Use defaults and skip to Step 6.3
console.log(`[Auto] Using safe defaults for replan:`)
console.log(` - Scope: tasks_only`)
console.log(` - Changes: update_only`)
console.log(` - Dependencies: preserve existing`)
@@ -264,12 +255,12 @@ interface ImpactAnalysis {
**Step 6.3.3: User Confirmation**
```javascript
// Reference workflowPreferences (set by SKILL.md via AskUserQuestion)
const autoYes = workflowPreferences.autoYes
if (autoYes) {
// Auto mode: Auto-confirm execution
console.log(`[Auto] Auto-confirming replan execution`)
userConfirmation = 'confirm'
// Proceed to Step 6.4
} else {
@@ -480,7 +471,7 @@ Available sessions: [list]
# No changes specified
WARNING: No modifications specified
Provide requirements text or use interactive mode
```
### Task Errors
@@ -537,8 +528,8 @@ Backup preserved, rolling back changes
### Session Replan - Add Feature
```
Replan input: "Add 2FA support"
# Interactive clarification
Q: Modification scope?

View File

@@ -106,6 +106,55 @@ SCOPE: [boundaries]
CONTEXT: [background/constraints]
```
### Pattern 6: Interactive Preference Collection (SKILL.md Responsibility)
Workflow preferences (auto mode, force explore, etc.) MUST be collected via AskUserQuestion in SKILL.md **before** dispatching to phases. Phases reference these as `workflowPreferences.{key}` context variables.
**Anti-Pattern**: Command-line flags (`--yes`, `-e`, `--explore`) parsed within phase files via `$ARGUMENTS.includes(...)`.
```javascript
// CORRECT: In SKILL.md (before phase dispatch)
const prefResponse = AskUserQuestion({
questions: [
{ question: "是否跳过确认?", header: "Auto Mode", options: [
{ label: "Interactive (Recommended)", description: "交互模式" },
{ label: "Auto", description: "跳过所有确认" }
]}
]
})
workflowPreferences = { autoYes: prefResponse.autoMode === 'Auto' }
// CORRECT: In phase files (reference only)
const autoYes = workflowPreferences.autoYes
// WRONG: In phase files (flag parsing)
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
```
### Pattern 7: Direct Phase Handoff
When one phase needs to invoke another phase within the same skill, read and execute the phase document directly. Do NOT use Skill() routing back through SKILL.md.
```javascript
// CORRECT: Direct handoff (executionContext already set)
Read("phases/02-lite-execute.md")
// Execute with executionContext (Mode 1)
// WRONG: Skill routing (unnecessary round-trip)
Skill(skill="workflow:lite-execute", args="--in-memory")
```
### Pattern 8: Phase File Hygiene
Phase files are internal execution documents. They MUST NOT contain:
| Prohibited | Reason | Correct Location |
|------------|--------|------------------|
| Flag parsing (`$ARGUMENTS.includes(...)`) | Preferences collected in SKILL.md | SKILL.md via AskUserQuestion |
| Invocation syntax (`/skill-name "..."`) | Not user-facing docs | Removed or SKILL.md only |
| Conversion provenance (`Source: Converted from...`) | Implementation detail | Removed |
| Skill routing for inter-phase (`Skill(skill="...")`) | Use direct phase read | Direct `Read("phases/...")` |
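A minimal sketch of an automated hygiene check against a subset of these prohibited patterns — the regexes mirror the table rows, and the helper name is an assumption:

```javascript
// Sketch: flag prohibited patterns in phase file content.
const PROHIBITED = [
  { name: "flag parsing", re: /\$ARGUMENTS\.includes/ },
  { name: "conversion provenance", re: /Source:.*Converted from/ },
  { name: "skill routing", re: /Skill\(skill=/ }
];

function checkPhaseHygiene(content) {
  // Returns the names of all prohibited patterns found in the content.
  return PROHIBITED.filter(p => p.re.test(content)).map(p => p.name);
}
```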
## Execution Flow
```
@@ -221,9 +270,14 @@ allowed-tools: {tools}
1. **{Principle}**: {Description}
...
## Interactive Preference Collection
Collect workflow preferences via AskUserQuestion before dispatching to phases:
{AskUserQuestion code with preference derivation → workflowPreferences}
## Auto Mode Defaults
When `workflowPreferences.autoYes === true`: {auto-mode behavior}.
## Execution Flow
@@ -316,4 +370,5 @@ When designing a new workflow skill, answer these questions:
| What's the TodoWrite granularity? | TodoWrite Pattern | Some phases have sub-tasks, others are atomic |
| Is there a planning notes pattern? | Post-Phase Updates | Accumulated state document across phases |
| What's the error recovery? | Error Handling | Retry once then report, vs rollback |
| Does it need preference collection? | Interactive Preference Collection | Collect via AskUserQuestion in SKILL.md, pass as workflowPreferences |
| Does phase N hand off to phase M? | Direct Phase Handoff (Pattern 7) | Read phase doc directly, not Skill() routing |

View File

@@ -304,7 +304,7 @@ const workflowConfig = {
// Features
features: {
hasAutoMode: true, // Interactive preference collection (AskUserQuestion)
hasConditionalPhases: true, // some phases may be skipped
hasTodoWriteSubTasks: true, // phases expand into sub-tasks
hasPlanningNotes: true, // accumulated state document

View File

@@ -267,10 +267,11 @@ Extract from source orchestrator or generate from config:
function generateOrchestratorSections(config, sourceContent) {
const sections = [];
// Interactive Preference Collection + Auto Mode (if feature enabled)
if (config.features.hasAutoMode) {
sections.push(generateInteractivePreferenceCollection(config));
sections.push(extractOrGenerate(sourceContent, 'Auto Mode Defaults',
'## Auto Mode Defaults\n\nWhen `workflowPreferences.autoYes === true`: Auto-continue all phases, use recommended defaults.\n'));
}
// Core Rules
@@ -334,7 +335,69 @@ function generateDefaultErrorHandling() {
}
```
## Step 2.8: Generate Interactive Preference Collection
When the skill has configurable behaviors (auto mode, force options, etc.), generate the AskUserQuestion-based preference collection section for SKILL.md:
```javascript
function generateInteractivePreferenceCollection(config) {
if (!config.features.hasAutoMode && !config.preferenceQuestions?.length) {
return '';
}
let section = '## Interactive Preference Collection\n\n';
section += 'Collect workflow preferences via AskUserQuestion before dispatching to phases:\n\n';
section += '```javascript\n';
section += 'const prefResponse = AskUserQuestion({\n';
section += ' questions: [\n';
// Always include auto mode question if feature enabled
if (config.features.hasAutoMode) {
section += ' {\n';
section += ' question: "是否跳过所有确认步骤(自动模式)?",\n';
section += ' header: "Auto Mode",\n';
section += ' multiSelect: false,\n';
section += ' options: [\n';
section += ' { label: "Interactive (Recommended)", description: "交互模式,包含确认步骤" },\n';
section += ' { label: "Auto", description: "跳过所有确认,自动执行" }\n';
section += ' ]\n';
section += ' },\n';
}
// Add custom preference questions
for (const pq of (config.preferenceQuestions || [])) {
section += ` {\n`;
section += ` question: "${pq.question}",\n`;
section += ` header: "${pq.header}",\n`;
section += ` multiSelect: false,\n`;
section += ` options: [\n`;
for (const opt of pq.options) {
section += ` { label: "${opt.label}", description: "${opt.description}" },\n`;
}
section += ` ]\n`;
section += ` },\n`;
}
section += ' ]\n';
section += '})\n\n';
section += '// Derive workflowPreferences from user selection\n';
section += 'workflowPreferences = {\n';
if (config.features.hasAutoMode) {
section += ' autoYes: prefResponse.autoMode === "Auto",\n';
}
for (const pq of (config.preferenceQuestions || [])) {
    section += `  ${pq.key}: prefResponse.${pq.header.toLowerCase().replace(/\s+/g, '')} === "${pq.activeValue}",\n`;
}
section += '}\n';
section += '```\n\n';
section += '**workflowPreferences** is passed to phase execution as context variable.\n';
section += 'Phases reference as `workflowPreferences.autoYes`, `workflowPreferences.{key}`, etc.\n';
return section;
}
```
## Step 2.9: Assemble SKILL.md
```javascript
function assembleSkillMd(config, sourceContent) {

View File

@@ -25,6 +25,98 @@ Generate phase files in `phases/` directory, preserving full execution detail fr
**Anti-Pattern**: Creating a phase file that says "See original command for details" or "Execute the agent with appropriate parameters" - this defeats the purpose of the skill structure. The phase file must be self-contained.
## Phase File Content Restrictions
Phase files are internal execution documents. They MUST NOT contain the following prohibited content:
| Prohibited Pattern | Detection | Correct Location |
|-------------------|-----------|-----------------|
| Flag parsing (`$ARGUMENTS.includes(...)`) | Grep: `\$ARGUMENTS\.includes` | SKILL.md via AskUserQuestion → `workflowPreferences` |
| Invocation syntax (`/skill-name "..."`) | Grep: `\/\w+[\-:]\w+\s+"` | Removed entirely (phase files are not user-facing) |
| Conversion provenance (`Source: Converted from...`) | Grep: `Source:.*Converted from` | Removed entirely (implementation detail) |
| Skill routing for inter-phase (`Skill(skill="...")`) | Grep: `Skill\(skill=` | Direct `Read("phases/0N-xxx.md")` |
### Preference Reference Pattern
Phase files may **reference** workflow preferences but must NOT **parse** them from arguments:
```javascript
// CORRECT: Reference workflowPreferences (set by SKILL.md)
const autoYes = workflowPreferences.autoYes
const forceExplore = workflowPreferences.forceExplore
// WRONG: Parse from $ARGUMENTS
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const forceExplore = $ARGUMENTS.includes('--explore') || $ARGUMENTS.includes('-e')
```
### Inter-Phase Handoff Pattern
When phase N needs to invoke phase M, use direct phase reading:
```javascript
// CORRECT: Direct handoff (executionContext already set)
Read("phases/02-lite-execute.md")
// Execute with executionContext (Mode 1)
// WRONG: Skill routing (unnecessary round-trip)
Skill(skill="workflow:lite-execute", args="--in-memory")
```
### Content Restriction Enforcement
When extracting from commands (Step 3.2), apply content sanitization after verbatim extraction:
```javascript
function sanitizePhaseContent(content) {
let sanitized = content;
// Remove flag parsing blocks
sanitized = sanitized.replace(
/\/\/.*flag.*parsing[\s\S]*?\n(?=\n)/gi, ''
);
// Remove invocation syntax examples
sanitized = sanitized.replace(
/^.*\/\w+[\-:]\w+\s+"[^"]*".*$/gm, ''
);
// Remove conversion provenance notes
sanitized = sanitized.replace(
/^\*\*Source\*\*:.*Converted from.*$/gm, ''
);
// Replace all $ARGUMENTS.includes patterns with workflowPreferences reference
// Handles any flag name, not just --yes/-y
sanitized = sanitized.replace(
/\$ARGUMENTS\.includes\(['"]--?([^"']+)['"]\)/g,
(match, flagName) => {
// Map common flag names to workflowPreferences keys
const flagMap = {
'yes': 'workflowPreferences.autoYes',
'y': 'workflowPreferences.autoYes',
'explore': 'workflowPreferences.forceExplore',
'e': 'workflowPreferences.forceExplore'
};
return flagMap[flagName] || `workflowPreferences.${flagName}`;
}
);
// Also clean up residual || chains from multi-flag expressions
sanitized = sanitized.replace(
/workflowPreferences\.(\w+)\s*\|\|\s*workflowPreferences\.\1/g,
'workflowPreferences.$1'
);
// Replace Skill() inter-phase routing with direct Read (with or without args)
sanitized = sanitized.replace(
/Skill\(skill=["']([^"']+)["'](?:,\s*args=["']([^"']+)["'])?\)/g,
(match, skillName) => `Read("phases/0N-xxx.md")\n// Execute with context`
);
return sanitized;
}
```
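The flag-rewrite step above can be exercised in isolation. The following is a minimal, self-contained sketch (the flag map is a subset of the one in `sanitizePhaseContent`; the sample input is illustrative):

```javascript
// Standalone sketch of the $ARGUMENTS -> workflowPreferences rewrite.
function rewriteFlagChecks(content) {
  const flagMap = {
    yes: 'workflowPreferences.autoYes',
    y: 'workflowPreferences.autoYes'
  };
  // Replace each $ARGUMENTS.includes('--flag') check with its preference key
  let out = content.replace(
    /\$ARGUMENTS\.includes\(['"]--?([^"']+)['"]\)/g,
    (m, flag) => flagMap[flag] || `workflowPreferences.${flag}`
  );
  // Collapse `a || a` chains left over from multi-flag expressions
  out = out.replace(
    /workflowPreferences\.(\w+)\s*\|\|\s*workflowPreferences\.\1/g,
    'workflowPreferences.$1'
  );
  return out;
}

const sample = "const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')";
console.log(rewriteFlagChecks(sample));
// const autoYes = workflowPreferences.autoYes
```

Both `--yes` and `-y` map to the same key, so the residual `||` chain collapses to a single reference.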
## Step 3.1: Phase File Generation Strategy
```javascript
@@ -77,6 +169,16 @@ function extractPhaseFromCommand(phase, config) {
phaseContent += bodyContent;
// 2.5. Ensure Objective section exists
if (!bodyContent.includes('## Objective')) {
// Insert Objective after phase header, before main content
const objectiveSection = `## Objective\n\n- ${phase.description}\n`;
phaseContent = phaseContent.replace(
`${phase.description}.\n\n${bodyContent}`,
`${phase.description}.\n\n${objectiveSection}\n${bodyContent}`
);
}
// 3. Ensure Output section exists
if (!bodyContent.includes('## Output')) {
phaseContent += '\n## Output\n\n';

View File

@@ -231,7 +231,8 @@ function validateSkillMdSections(config) {
// Conditional sections
const conditionalSections = [
{ name: 'Interactive Preference Collection', pattern: /## Interactive Preference Collection/, condition: config.features.hasAutoMode },
{ name: 'Auto Mode Defaults', pattern: /## Auto Mode Defaults/, condition: config.features.hasAutoMode },
{ name: 'Post-Phase Updates', pattern: /## Post-Phase Updates/, condition: config.features.hasPostPhaseUpdates }
];
@@ -257,7 +258,77 @@ function validateSkillMdSections(config) {
}
```
## Step 4.6: Phase File Hygiene
Scan generated phase files for prohibited content patterns. Phase files are internal execution documents and must not contain user-facing syntax, flag parsing, or inter-phase routing.
```javascript
function validatePhaseFileHygiene(config) {
const skillDir = `.claude/skills/${config.skillName}`;
const results = { errors: [], warnings: [], info: [] };
const prohibitedPatterns = [
{
name: 'Flag parsing ($ARGUMENTS)',
regex: /\$ARGUMENTS\.includes/g,
severity: 'error',
fix: 'Replace with workflowPreferences.{key} reference'
},
{
name: 'Invocation syntax (/skill-name)',
regex: /\/\w+[\-:]\w+\s+["']/g,
severity: 'warning',
fix: 'Remove (phase files are not user-facing docs)'
},
{
name: 'Conversion provenance (Source: Converted from)',
regex: /Source:.*Converted from/g,
severity: 'warning',
fix: 'Remove (implementation detail)'
},
{
name: 'Skill routing for inter-phase (Skill(skill=...))',
regex: /Skill\(skill=/g,
severity: 'error',
fix: 'Replace with direct Read("phases/0N-xxx.md")'
},
{
name: 'CLI flag definitions (--flag)',
regex: /^\s*-\w,\s+--\w+\s+/gm,
severity: 'warning',
fix: 'Move flag definitions to SKILL.md Interactive Preference Collection'
}
];
for (const phase of config.phases) {
const filename = `${String(phase.number).padStart(2, '0')}-${phase.slug}.md`;
const filepath = `${skillDir}/phases/${filename}`;
if (!fileExists(filepath)) continue;
const content = Read(filepath);
for (const pattern of prohibitedPatterns) {
const matches = content.match(pattern.regex);
if (matches && matches.length > 0) {
const msg = `Phase ${phase.number} (${filename}): ${pattern.name} found (${matches.length} occurrence(s)). Fix: ${pattern.fix}`;
if (pattern.severity === 'error') {
results.errors.push(msg);
} else {
results.warnings.push(msg);
}
}
}
}
if (results.errors.length === 0 && results.warnings.length === 0) {
results.info.push('Phase file hygiene: All phase files clean ✓');
}
return results;
}
```
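At its core the hygiene scan is a regex match per pattern per file. A quick standalone illustration (the sample content and pattern subset are hypothetical):

```javascript
// Match a subset of the prohibited patterns against sample phase content.
const prohibited = [
  { name: 'Flag parsing ($ARGUMENTS)', regex: /\$ARGUMENTS\.includes/g, severity: 'error' },
  { name: 'Skill routing (Skill(skill=...))', regex: /Skill\(skill=/g, severity: 'error' }
];

const sampleContent = [
  'const autoYes = $ARGUMENTS.includes("--yes")',
  'Read("phases/02-execute.md")'
].join('\n');

// Collect the names of patterns that occur at least once
const findings = prohibited
  .filter(p => (sampleContent.match(p.regex) || []).length > 0)
  .map(p => p.name);

console.log(findings); // [ 'Flag parsing ($ARGUMENTS)' ]
```

The direct `Read("phases/...")` handoff passes cleanly; only the `$ARGUMENTS` check is flagged.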
## Step 4.7: Aggregate Results and Report
```javascript
function generateValidationReport(config) {
@@ -266,6 +337,7 @@ function generateValidationReport(config) {
const content = validateContentQuality(config);
const dataFlow = validateDataFlow(config);
const sections = validateSkillMdSections(config);
const hygiene = validatePhaseFileHygiene(config);
// Aggregate
const allErrors = [
@@ -273,21 +345,24 @@ function generateValidationReport(config) {
...references.errors,
...content.errors,
...dataFlow.errors,
...sections.errors,
...hygiene.errors
];
const allWarnings = [
...structural.warnings,
...references.warnings,
...content.warnings,
...dataFlow.warnings,
...sections.warnings,
...hygiene.warnings
];
const allInfo = [
...structural.info,
...references.info,
...content.info,
...dataFlow.info,
...sections.info,
...hygiene.info
];
// Quality gate
@@ -325,7 +400,7 @@ ${allInfo.length > 0 ? `Info:\n${allInfo.map(i => ` ${i}`).join('\n')}` : '
}
```
## Step 4.8: Error Recovery
If validation fails, offer recovery options:
@@ -356,7 +431,7 @@ if (report.gate === 'FAIL') {
}
```
## Step 4.9: Integration Summary
```javascript
function displayIntegrationSummary(config) {
@@ -367,21 +442,20 @@ Integration Complete:
Usage:
Trigger: ${config.triggers.map(t => `"${t}"`).join(', ')}
Design Patterns Applied:
✓ Progressive phase loading (Ref: markers)
✓ Phase Reference Documents table
✓ Phase file hygiene (no flag parsing, no invocation syntax)
${config.features.hasTodoWriteSubTasks ? '✓' : '○'} TodoWrite attachment/collapse
${config.features.hasConditionalPhases ? '✓' : '○'} Conditional phase execution
${config.features.hasAutoMode ? '✓' : '○'} Interactive preference collection (AskUserQuestion)
${config.features.hasPostPhaseUpdates ? '✓' : '○'} Post-phase state updates
${config.features.hasPlanningNotes ? '✓' : '○'} Accumulated planning notes
Next Steps:
1. Review SKILL.md orchestrator logic
2. Review each phase file for completeness
3. Test skill invocation with trigger phrase
4. Iterate based on execution results
`);
}

View File

@@ -47,9 +47,31 @@ Unified TDD workflow skill combining TDD planning (Red-Green-Refactor task chain
5. **Auto-Continue**: After each phase completes, automatically execute next pending phase
6. **TDD Iron Law**: NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST - enforced in task structure
## Interactive Preference Collection
Before dispatching to phase execution, collect workflow preferences via AskUserQuestion:
```javascript
const prefResponse = AskUserQuestion({
questions: [
{
question: "Skip all confirmation steps (auto mode)?",
header: "Auto Mode",
multiSelect: false,
options: [
{ label: "Interactive (Recommended)", description: "Interactive mode with confirmation steps" },
{ label: "Auto", description: "Skip all confirmations and execute automatically" }
]
}
]
})
workflowPreferences = {
autoYes: prefResponse.autoMode === 'Auto'
}
```
**workflowPreferences** is passed to phase execution as context variable, referenced as `workflowPreferences.autoYes` within phases.
## Mode Detection
@@ -426,7 +448,7 @@ Similar to workflow-plan, a `planning-notes.md` can accumulate context across ph
- **If user selects Verify**: Read phases/07-tdd-verify.md, execute Phase 7 in-process
- **If user selects Execute**: Skill(skill="workflow-execute")
- **If user selects Review**: Route to /workflow:status
- **Auto mode (workflowPreferences.autoYes)**: Auto-select "Verify TDD Compliance", then auto-continue to execute if APPROVED
- Update TaskCreate/TaskUpdate after each phase
- After each phase, automatically continue to next phase based on TaskList status

View File

@@ -1,53 +1,407 @@
# Phase 2: Context Gathering
Gather project context and analyze codebase via context-search-agent with parallel exploration for TDD planning.
## Objective
- Gather project context using context-search-agent
- Identify critical files, architecture patterns, and constraints
- Detect conflict risk level for Phase 4 decision
- Update planning-notes.md with findings
## Core Philosophy
- **Agent Delegation**: Delegate all discovery to `context-search-agent` for autonomous execution
- **Detection-First**: Check for existing context-package before executing
- **Plan Mode**: Full comprehensive analysis (vs lightweight brainstorm mode)
- **Standardized Output**: Generate `.workflow/active/{session}/.process/context-package.json`
## Execution
### Step 2.1: Context-Package Detection
**Execute First** - Check if valid package already exists:
```javascript
const contextPackagePath = `.workflow/active/${sessionId}/.process/context-package.json`;
if (file_exists(contextPackagePath)) {
const existing = Read(contextPackagePath);
// Validate package belongs to current session
if (existing?.metadata?.session_id === sessionId) {
console.log("Valid context-package found for session:", sessionId);
console.log("Stats:", existing.statistics);
console.log("Conflict Risk:", existing.conflict_detection.risk_level);
// Skip execution, store variables and proceed to Step 2.5
contextPath = contextPackagePath;
conflictRisk = existing.conflict_detection.risk_level;
return; // Early exit - skip Steps 2.2-2.4
}
}
```
### Step 2.2: Complexity Assessment & Parallel Explore
**Input**: `sessionId` from Phase 1
**Only execute if Step 2.1 finds no valid package**
```javascript
// 2.2.1 Complexity Assessment
function analyzeTaskComplexity(taskDescription) {
const text = taskDescription.toLowerCase();
if (/architect|refactor|restructure|modular|cross-module/.test(text)) return 'High';
if (/multiple|several|integrate|migrate|extend/.test(text)) return 'Medium';
return 'Low';
}
const ANGLE_PRESETS = {
architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
security: ['security', 'auth-patterns', 'dataflow', 'validation'],
performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
feature: ['patterns', 'integration-points', 'testing', 'dependencies'],
refactor: ['architecture', 'patterns', 'dependencies', 'testing']
};
function selectAngles(taskDescription, complexity) {
const text = taskDescription.toLowerCase();
let preset = 'feature';
if (/refactor|architect|restructure/.test(text)) preset = 'architecture';
else if (/security|auth|permission/.test(text)) preset = 'security';
else if (/performance|slow|optimi/.test(text)) preset = 'performance';
else if (/fix|bug|error|issue/.test(text)) preset = 'bugfix';
const count = complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1);
return ANGLE_PRESETS[preset].slice(0, count);
}
const complexity = analyzeTaskComplexity(task_description);
const selectedAngles = selectAngles(task_description, complexity);
const sessionFolder = `.workflow/active/${sessionId}/.process`;
// 2.2.2 Launch Parallel Explore Agents
const explorationTasks = selectedAngles.map((angle, index) =>
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Explore: ${angle}`,
prompt=`
## Task Objective
Execute **${angle}** exploration for TDD task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
## Assigned Context
- **Exploration Angle**: ${angle}
- **Task Description**: ${task_description}
- **Session ID**: ${sessionId}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
## Exploration Strategy (${angle} focus)
**Step 1: Structural Scan** (Bash)
- get_modules_by_depth.sh -> identify modules related to ${angle}
- find/rg -> locate files relevant to ${angle} aspect
- Analyze imports/dependencies from ${angle} perspective
**Step 2: Semantic Analysis** (Gemini CLI)
- How does existing code handle ${angle} concerns?
- What patterns are used for ${angle}?
- Where would new code integrate from ${angle} viewpoint?
**Step 3: Write Output**
- Consolidate ${angle} findings into JSON
- Identify ${angle}-specific clarification needs
## Expected Output
**File**: ${sessionFolder}/exploration-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
- project_structure: Modules/architecture relevant to ${angle}
- relevant_files: Files affected from ${angle} perspective
**MANDATORY**: Every file MUST use structured object format with ALL required fields:
[{path: "src/file.ts", relevance: 0.85, rationale: "Contains AuthService.login()", role: "modify_target", discovery_source: "bash-scan", key_symbols: ["AuthService", "login"]}]
- **rationale** (required): Specific selection basis tied to ${angle} topic (>10 chars, not generic)
- **role** (required): modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only
- **discovery_source** (recommended): bash-scan|cli-analysis|ace-search|dependency-trace|manual
- **key_symbols** (recommended): Key functions/classes/types in the file relevant to the task
- Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
- patterns: ${angle}-related patterns to follow
- dependencies: Dependencies relevant to ${angle}
- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
- constraints: ${angle}-specific limitations/conventions
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
- _metadata.exploration_angle: "${angle}"
## Success Criteria
- [ ] Schema obtained via cat explore-json-schema.json
- [ ] get_modules_by_depth.sh executed
- [ ] At least 3 relevant files identified with ${angle} rationale
- [ ] Patterns are actionable (code examples, not generic advice)
- [ ] Integration points include file:line locations
- [ ] Constraints are project-specific to ${angle}
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/exploration-${angle}.json
Return: 2-3 sentence summary of ${angle} findings
`
)
);
// 2.2.3 Generate Manifest after all complete
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`).split('\n').filter(f => f.trim());
const explorationManifest = {
session_id: sessionId,
task_description,
timestamp: new Date().toISOString(),
complexity,
exploration_count: selectedAngles.length,
angles_explored: selectedAngles,
explorations: explorationFiles.map(file => {
const data = JSON.parse(Read(file));
return { angle: data._metadata.exploration_angle, file: file.split('/').pop(), path: file, index: data._metadata.exploration_index };
})
};
Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2));
```
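Run in isolation, the complexity and angle-selection heuristics behave as follows (functions restated verbatim from Step 2.2 so the check is self-contained; the task description is illustrative):

```javascript
// Restated from Step 2.2: keyword-based complexity classification.
function analyzeTaskComplexity(taskDescription) {
  const text = taskDescription.toLowerCase();
  if (/architect|refactor|restructure|modular|cross-module/.test(text)) return 'High';
  if (/multiple|several|integrate|migrate|extend/.test(text)) return 'Medium';
  return 'Low';
}

const ANGLE_PRESETS = {
  architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
  security: ['security', 'auth-patterns', 'dataflow', 'validation'],
  performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
  bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
  feature: ['patterns', 'integration-points', 'testing', 'dependencies'],
  refactor: ['architecture', 'patterns', 'dependencies', 'testing']
};

// Restated from Step 2.2: preset selection plus complexity-scaled angle count.
function selectAngles(taskDescription, complexity) {
  const text = taskDescription.toLowerCase();
  let preset = 'feature';
  if (/refactor|architect|restructure/.test(text)) preset = 'architecture';
  else if (/security|auth|permission/.test(text)) preset = 'security';
  else if (/performance|slow|optimi/.test(text)) preset = 'performance';
  else if (/fix|bug|error|issue/.test(text)) preset = 'bugfix';
  const count = complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1);
  return ANGLE_PRESETS[preset].slice(0, count);
}

const task = 'refactor the payment module architecture';
const complexity = analyzeTaskComplexity(task); // 'High'
console.log(selectAngles(task, complexity));
// [ 'architecture', 'dependencies', 'modularity', 'integration-points' ]
```

A High-complexity refactor task thus fans out to four parallel explore agents, while a simple fix would launch a single `error-handling` exploration.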
### Step 2.3: Invoke Context-Search Agent
**Only execute after Step 2.2 completes**
```javascript
// Load user intent from planning-notes.md (from Phase 1)
const planningNotesPath = `.workflow/active/${sessionId}/planning-notes.md`;
let userIntent = { goal: task_description, key_constraints: "None specified" };
if (file_exists(planningNotesPath)) {
const notesContent = Read(planningNotesPath);
const goalMatch = notesContent.match(/\*\*GOAL\*\*:\s*(.+)/);
const constraintsMatch = notesContent.match(/\*\*KEY_CONSTRAINTS\*\*:\s*(.+)/);
if (goalMatch) userIntent.goal = goalMatch[1].trim();
if (constraintsMatch) userIntent.key_constraints = constraintsMatch[1].trim();
}
Task(
subagent_type="context-search-agent",
run_in_background=false,
description="Gather comprehensive context for TDD plan",
prompt=`
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution with priority sorting
## Session Information
- **Session ID**: ${sessionId}
- **Task Description**: ${task_description}
- **Output Path**: .workflow/active/${sessionId}/.process/context-package.json
## User Intent (from Phase 1 - Planning Notes)
**GOAL**: ${userIntent.goal}
**KEY_CONSTRAINTS**: ${userIntent.key_constraints}
This is the PRIMARY context source - all subsequent analysis must align with user intent.
## Exploration Input (from Step 2.2)
- **Manifest**: ${sessionFolder}/explorations-manifest.json
- **Exploration Count**: ${explorationManifest.exploration_count}
- **Angles**: ${explorationManifest.angles_explored.join(', ')}
- **Complexity**: ${complexity}
## Mission
Execute complete context-search-agent workflow for TDD implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**:
- Read and parse .workflow/project-tech.json. Use its overview section as the foundational project_context.
- Read and parse .workflow/project-guidelines.json. Load conventions, constraints, and learnings into a project_guidelines section.
- If files don't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
3. **Foundation**: Initialize CodexLens, get project structure, load docs
4. **Analysis**: Extract keywords, determine scope, classify complexity
### Phase 2: Multi-Source Context Discovery
Execute all discovery tracks (WITH USER INTENT INTEGRATION):
- **Track -1**: User Intent & Priority Foundation (EXECUTE FIRST)
- Load user intent (GOAL, KEY_CONSTRAINTS) from session input
- Map user requirements to codebase entities (files, modules, patterns)
- Establish baseline priority scores based on user goal alignment
- Output: user_intent_mapping.json with preliminary priority scores
- **Track 0**: Exploration Synthesis (load explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
- **Track 3**: Web examples (use Exa MCP for unfamiliar tech/APIs)
- **Track 4**: Codebase analysis (5-layer discovery: files, content, patterns, deps, config/tests)
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 5-source data**: Merge findings from all sources
- Priority order: User Intent > Archive > Docs > Exploration > Code > Web
- **Prioritize the context from project-tech.json** for architecture and tech stack
3. **Context Priority Sorting**:
a. Combine scores from Track -1 (user intent alignment) + relevance scores + exploration critical_files
b. Classify files into priority tiers:
- **Critical** (score >= 0.85): Directly mentioned in user goal OR exploration critical_files
- **High** (0.70-0.84): Key dependencies, patterns required for goal
- **Medium** (0.50-0.69): Supporting files, indirect dependencies
- **Low** (< 0.50): Contextual awareness only
c. Generate dependency_order: Based on dependency graph + user goal sequence
d. Document sorting_rationale: Explain prioritization logic
4. **Populate project_context**: Directly use the overview from project-tech.json
5. **Populate project_guidelines**: Load from project-guidelines.json
6. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
7. Perform conflict detection with risk assessment
8. **Inject historical conflicts** from archive analysis into conflict_detection
9. **Generate prioritized_context section**:
{
"prioritized_context": {
"user_intent": { "goal": "...", "scope": "...", "key_constraints": ["..."] },
"priority_tiers": {
"critical": [{ "path": "...", "relevance": 0.95, "rationale": "..." }],
"high": [...], "medium": [...], "low": [...]
},
"dependency_order": ["module1", "module2", "module3"],
"sorting_rationale": "Based on user goal alignment, exploration critical files, and dependency graph"
}
}
10. Generate and validate context-package.json with prioritized_context field
## Output Requirements
Complete context-package.json with:
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
- **project_context**: description, technology_stack, architecture, key_components (from project-tech.json)
- **project_guidelines**: {conventions, constraints, quality_rules, learnings} (from project-guidelines.json)
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
- **dependencies**: {internal[], external[]} with dependency graph
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[]}
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights}
- **prioritized_context**: {user_intent, priority_tiers{critical, high, medium, low}, dependency_order[], sorting_rationale}
## Quality Validation
Before completion verify:
- [ ] Valid JSON format with all required fields
- [ ] File relevance accuracy >80%
- [ ] Dependency graph complete (max 2 transitive levels)
- [ ] Conflict risk level calculated correctly
- [ ] No sensitive data exposed
- [ ] Total files <= 50 (prioritize high-relevance)
## Planning Notes Record (REQUIRED)
After completing context-package.json, append to planning-notes.md:
**File**: .workflow/active/${sessionId}/planning-notes.md
**Location**: Under "## Context Findings (Phase 2)" section
**Format**:
### [Context-Search Agent] YYYY-MM-DD
- **Note**: [Brief summary of key findings]
Execute autonomously following agent documentation.
Report completion with statistics.
`
)
```
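The tier thresholds described in the prompt's priority-sorting step reduce to a small classifier. A sketch (the file paths and relevance scores are hypothetical):

```javascript
// Sketch of the priority-tier classification from Phase 3, step 3b.
function classifyTier(score) {
  if (score >= 0.85) return 'critical';
  if (score >= 0.70) return 'high';
  if (score >= 0.50) return 'medium';
  return 'low';
}

const files = [
  { path: 'src/auth/service.ts', relevance: 0.95 },
  { path: 'src/auth/types.ts', relevance: 0.72 },
  { path: 'src/utils/log.ts', relevance: 0.40 }
];

// Group files into the priority_tiers shape used by prioritized_context
const tiers = { critical: [], high: [], medium: [], low: [] };
for (const f of files) tiers[classifyTier(f.relevance)].push(f.path);

console.log(tiers.critical); // [ 'src/auth/service.ts' ]
```

The boundaries are half-open (`0.70` is high, `0.85` is critical), matching the `0.70-0.84` band in the prompt.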
### Step 2.4: Output Verification
After agent completes, verify output:
```javascript
// Verify file was created
const outputPath = `.workflow/active/${sessionId}/.process/context-package.json`;
if (!file_exists(outputPath)) {
throw new Error("Agent failed to generate context-package.json");
}
// Store variables for subsequent phases
contextPath = outputPath;
// Verify exploration_results included
const pkg = JSON.parse(Read(outputPath));
if (pkg.exploration_results?.exploration_count > 0) {
console.log(`Exploration results aggregated: ${pkg.exploration_results.exploration_count} angles`);
}
conflictRisk = pkg.conflict_detection?.risk_level || 'low';
```
### TodoWrite Update (Phase 2 in progress - tasks attached)
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "in_progress", "activeForm": "Executing context gathering"},
{"content": " -> Analyze codebase structure", "status": "in_progress", "activeForm": "Analyzing codebase structure"},
{"content": " -> Identify integration points", "status": "pending", "activeForm": "Identifying integration points"},
{"content": " -> Generate context package", "status": "pending", "activeForm": "Generating context package"},
{"content": "Phase 3: Test Coverage Analysis", "status": "pending", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
### TodoWrite Update (Phase 2 completed - tasks collapsed)
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3: Test Coverage Analysis", "status": "pending", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
### Step 2.5: Update Planning Notes
After context gathering completes, update planning-notes.md with findings:
```javascript
// Read context-package to extract key findings
const contextPackage = JSON.parse(Read(contextPath))
const conflictRisk = contextPackage.conflict_detection?.risk_level || 'low'
const criticalFiles = (contextPackage.exploration_results?.aggregated_insights?.critical_files || [])
.slice(0, 5).map(f => f.path)
const archPatterns = contextPackage.project_context?.architecture_patterns || []
const constraints = contextPackage.exploration_results?.aggregated_insights?.constraints || []
// Append Phase 2 findings to planning-notes.md
Edit(planningNotesPath, {
old: '## Context Findings (Phase 2)\n(To be filled by context-gather)',
new: `## Context Findings (Phase 2)
- **CRITICAL_FILES**: ${criticalFiles.join(', ') || 'None identified'}
- **ARCHITECTURE**: ${archPatterns.join(', ') || 'Not detected'}
- **CONFLICT_RISK**: ${conflictRisk}
- **CONSTRAINTS**: ${constraints.length > 0 ? constraints.join('; ') : 'None'}`
})
// Append Phase 2 constraints to consolidated list
Edit(planningNotesPath, {
old: '## Consolidated Constraints (Phase 4 Input)',
new: `## Consolidated Constraints (Phase 4 Input)
${constraints.map((c, i) => `${i + 2}. [Context] ${c}`).join('\n')}`
})
```
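The `Edit` calls above depend on their `old` anchors matching the notes file exactly; if a prior phase changed the section text, the edit silently fails to apply. A plain string replacement illustrates the mechanism and the failure mode (the sample notes content is hypothetical):

```javascript
// Sketch of the anchored old -> new replacement that Edit performs.
function applyEdit(content, oldText, newText) {
  if (!content.includes(oldText)) throw new Error('Edit anchor not found');
  return content.replace(oldText, newText);
}

const notes = '# Planning Notes\n\n## Context Findings (Phase 2)\n(To be filled by context-gather)';
const updated = applyEdit(
  notes,
  '## Context Findings (Phase 2)\n(To be filled by context-gather)',
  '## Context Findings (Phase 2)\n- **CONFLICT_RISK**: low'
);
console.log(updated.includes('CONFLICT_RISK')); // true
```

Including the placeholder line in the anchor (rather than just the heading) keeps the replacement idempotent: a second run would throw instead of duplicating the findings.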
**Auto-Continue**: Return to user showing Phase 2 results, then auto-continue to Phase 3.
## Output
- **Variable**: `contextPath` (path to context-package.json)
- **Variable**: `conflictRisk` (none/low/medium/high)
- **File**: `context-package.json`
- **TodoWrite**: Mark Phase 2 completed, determine Phase 3 or Phase 4
## Next Phase
Return to orchestrator. Orchestrator continues to [Phase 3: Test Coverage Analysis](03-test-coverage-analysis.md).

View File

@@ -9,27 +9,121 @@ Analyze existing test coverage, detect test framework, and identify coverage gap
- Identify related components and integration points
- Generate test-context-package.json
## Core Philosophy
- **Agent Delegation**: Delegate all test coverage analysis to `test-context-search-agent` for autonomous execution
- **Detection-First**: Check for existing test-context-package before executing
- **Coverage-First**: Analyze existing test coverage before planning new tests
- **Source Context Loading**: Import implementation summaries from source session
- **Standardized Output**: Generate `.workflow/active/{session}/.process/test-context-package.json`
## Execution
### Step 3.1: Test-Context-Package Detection
**Execute First** - Check if valid package already exists:
```javascript
const testContextPath = `.workflow/active/${sessionId}/.process/test-context-package.json`;
if (file_exists(testContextPath)) {
const existing = Read(testContextPath);
// Validate package belongs to current session
if (existing?.metadata?.test_session_id === sessionId) {
console.log("Valid test-context-package found for session:", sessionId);
console.log("Coverage Stats:", existing.test_coverage.coverage_stats);
console.log("Framework:", existing.test_framework.framework);
console.log("Missing Tests:", existing.test_coverage.missing_tests.length);
// Skip execution, store variable and proceed
testContextPath_var = testContextPath;
return; // Early exit - skip Steps 3.2-3.3
}
}
```
### Step 3.2: Invoke Test-Context-Search Agent
**Only execute if Step 3.1 finds no valid package**
```javascript
Task(
subagent_type="test-context-search-agent",
run_in_background=false,
description="Gather test coverage context",
prompt=`
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
## Session Information
- **Test Session ID**: ${sessionId}
- **Output Path**: .workflow/active/${sessionId}/.process/test-context-package.json
## Mission
Execute complete test-context-search-agent workflow for test generation planning:
### Phase 1: Session Validation & Source Context Loading
1. **Detection**: Check for existing test-context-package (early exit if valid)
2. **Test Session Validation**: Load test session metadata, extract source_session reference
3. **Source Context Loading**: Load source session implementation summaries, changed files, tech stack
### Phase 2: Test Coverage Analysis
Execute coverage discovery:
- **Track 1**: Existing test discovery (find *.test.*, *.spec.* files)
- **Track 2**: Coverage gap analysis (match implementation files to test files)
- **Track 3**: Coverage statistics (calculate percentages, identify gaps by module)
### Phase 3: Framework Detection & Packaging
1. Framework identification from package.json/requirements.txt
2. Convention analysis from existing test patterns
3. Generate and validate test-context-package.json
## Output Requirements
Complete test-context-package.json with:
- **metadata**: test_session_id, source_session_id, task_type, complexity
- **source_context**: implementation_summaries, tech_stack, project_patterns
- **test_coverage**: existing_tests[], missing_tests[], coverage_stats
- **test_framework**: framework, version, test_pattern, conventions
- **assets**: implementation_summary[], existing_test[], source_code[] with priorities
- **focus_areas**: Test generation guidance based on coverage gaps
## Quality Validation
Before completion verify:
- [ ] Valid JSON format with all required fields
- [ ] Source session context loaded successfully
- [ ] Test coverage gaps identified
- [ ] Test framework detected (or marked as 'unknown')
- [ ] Coverage percentage calculated correctly
- [ ] Missing tests catalogued with priority
- [ ] Execution time < 30 seconds (< 60s for large codebases)
Execute autonomously following agent documentation.
Report completion with coverage statistics.
`
)
```
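The Track 2 gap analysis described in the agent mission can be sketched as follows. This is a minimal illustration only: the `src/foo.ts -> src/foo.test.ts` naming convention and the stats field names are assumptions, not the agent's actual matching rules.

```javascript
// Sketch: pair implementation files with expected test files and compute
// a coverage percentage. The naming convention below is an assumption.
function analyzeCoverageGaps(implFiles, testFiles) {
  const testSet = new Set(testFiles);
  const missingTests = implFiles.filter(f => {
    const expected = f.replace(/\.(ts|js|tsx|jsx)$/, '.test.$1');
    return !testSet.has(expected);
  });
  const covered = implFiles.length - missingTests.length;
  return {
    missing_tests: missingTests,
    coverage_stats: {
      total_files: implFiles.length,
      covered_files: covered,
      coverage_percentage: implFiles.length
        ? Math.round((covered / implFiles.length) * 100)
        : 0
    }
  };
}
```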
### Step 3.3: Output Verification
After agent completes, verify output:
```javascript
// Verify file was created
const outputPath = `.workflow/active/${sessionId}/.process/test-context-package.json`;
if (!file_exists(outputPath)) {
throw new Error("Agent failed to generate test-context-package.json");
}
// Load and display summary
const testContext = JSON.parse(Read(outputPath));
console.log("Test context package generated successfully");
console.log("Coverage:", testContext.test_coverage.coverage_stats.coverage_percentage + "%");
console.log("Tests to generate:", testContext.test_coverage.missing_tests.length);
// Store variable for subsequent phases
testContextPath_var = outputPath;
```
### TodoWrite Update (Phase 3 agent executed - tasks attached)
@@ -38,17 +132,17 @@ Skill(skill="workflow:tools:test-context-gather", args="--session [sessionId]")
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3: Test Coverage Analysis", "status": "in_progress", "activeForm": "Executing test coverage analysis"},
{"content": " -> Detect test framework and conventions", "status": "in_progress", "activeForm": "Detecting test framework"},
{"content": " -> Analyze existing test coverage", "status": "pending", "activeForm": "Analyzing test coverage"},
{"content": " -> Identify coverage gaps", "status": "pending", "activeForm": "Identifying coverage gaps"},
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**Note**: Agent execution **attaches** test-context-search's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached -> **Execute Phase 3.1-3.3** sequentially
### TodoWrite Update (Phase 3 completed - tasks collapsed)
@@ -74,5 +168,5 @@ Skill(skill="workflow:tools:test-context-gather", args="--session [sessionId]")
## Next Phase
Based on `conflictRisk` from Phase 2:
- If conflictRisk >= medium -> [Phase 4: Conflict Resolution](04-conflict-resolution.md)
- If conflictRisk < medium -> Skip to [Phase 5: TDD Task Generation](05-tdd-task-generation.md)
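The risk-based routing above can be expressed as a small helper (a sketch; the phase file names are taken from this document):

```javascript
// Map conflictRisk to the next phase per the rule above:
// >= medium -> Phase 4 (Conflict Resolution), otherwise skip to Phase 5.
function nextPhaseAfterCoverage(conflictRisk) {
  const rank = { none: 0, low: 1, medium: 2, high: 3 };
  return (rank[conflictRisk] ?? 0) >= rank.medium
    ? '04-conflict-resolution'
    : '05-tdd-task-generation';
}
```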

View File

@@ -1,41 +1,337 @@
# Phase 4: Conflict Resolution (Conditional)
Detect and resolve conflicts with CLI analysis. This phase is **conditional** - only executes when `conflict_risk >= medium`.
## Objective
- Detect conflicts between planned changes and existing codebase
- Detect module scenario uniqueness (functional overlaps)
- Present conflicts to user with resolution strategies
- Apply selected resolution strategies
- Update planning-notes.md with conflict decisions
## Trigger Condition
**Only execute when**: `context-package.json` indicates `conflict_risk` is "medium" or "high"
**Skip Behavior**: If conflict_risk is "none" or "low", skip directly to Phase 5. Display: "No significant conflicts detected, proceeding to TDD task generation"
## Conflict Categories
| Category | Description |
|----------|-------------|
| **Architecture** | Incompatible design patterns, module structure changes |
| **API** | Breaking contract changes, signature modifications |
| **Data Model** | Schema modifications, type breaking changes |
| **Dependency** | Version incompatibilities, setup conflicts |
| **ModuleOverlap** | Functional overlap, scenario boundary ambiguity, duplicate responsibility |
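For reference, a conflict entry in these categories might look like the following. This is a hypothetical shape inferred from the strategy template and auto-selection logic later in this phase; the exact field names live in `conflict-resolution-schema.json`, which is not shown here.

```javascript
// Hypothetical conflict record with the 2-4 strategies the schema requires.
const sampleConflict = {
  id: 'CONF-001',
  category: 'API',          // Architecture | API | Data Model | Dependency | ModuleOverlap
  severity: 'High',
  brief: 'Signature change breaks existing callers',
  strategies: [
    { name: 'Adapter layer', complexity: 'low', risk: 'low' },
    { name: 'Versioned endpoint', complexity: 'medium', risk: 'medium' }
  ],
  recommended: 0            // index into strategies[]
};

// Helper mirroring Step 4.3's autoYes path: pick the recommended strategy.
function pickRecommended(conflict) {
  return conflict.strategies[conflict.recommended ?? 0];
}
```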
## Execution
### Step 4.1: Validation
```javascript
// 1. Verify session directory exists
const sessionDir = `.workflow/active/${sessionId}`;
if (!file_exists(sessionDir)) {
throw new Error(`Session directory not found: ${sessionDir}`);
}
// 2. Load context-package.json
const contextPackage = JSON.parse(Read(contextPath));
// 3. Check conflict_risk (skip if none/low)
const conflictRisk = contextPackage.conflict_detection?.risk_level || 'low';
if (conflictRisk === 'none' || conflictRisk === 'low') {
console.log("No significant conflicts detected, proceeding to TDD task generation");
// Skip directly to Phase 5
return;
}
```
### Step 4.2: CLI-Powered Conflict Analysis
**Agent Delegation**:
```javascript
Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
## Context
- Session: ${sessionId}
- Risk: ${conflictRisk}
- Files: ${existing_files_list}
## Exploration Context (from context-package.exploration_results)
- Exploration Count: ${contextPackage.exploration_results?.exploration_count || 0}
- Angles Analyzed: ${JSON.stringify(contextPackage.exploration_results?.angles || [])}
- Pre-identified Conflict Indicators: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.conflict_indicators || [])}
- Critical Files: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.critical_files?.map(f => f.path) || [])}
- All Patterns: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.all_patterns || [])}
- All Integration Points: ${JSON.stringify(contextPackage.exploration_results?.aggregated_insights?.all_integration_points || [])}
## Analysis Steps
### 0. Load Output Schema (MANDATORY)
Execute: cat ~/.ccw/workflows/cli-templates/schemas/conflict-resolution-schema.json
### 1. Load Context
- Read existing files from conflict_detection.existing_files
- Load plan from .workflow/active/${sessionId}/.process/context-package.json
- Load exploration_results and use aggregated_insights for enhanced analysis
- Extract role analyses and requirements
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
Primary (Gemini):
ccw cli -p "
PURPOSE: Detect conflicts between plan and codebase, using exploration insights
TASK:
* Review pre-identified conflict_indicators from exploration results
* Compare architectures (use exploration key_patterns)
* Identify breaking API changes
* Detect data model incompatibilities
* Assess dependency conflicts
* Analyze module scenario uniqueness
- Use exploration integration_points for precise locations
- Cross-validate with exploration critical_files
- Generate clarification questions for boundary definition
MODE: analysis
CONTEXT: @**/*.ts @**/*.js @**/*.tsx @**/*.jsx @.workflow/active/${sessionId}/**/*
EXPECTED: Conflict list with severity ratings, including:
- Validation of exploration conflict_indicators
- ModuleOverlap conflicts with overlap_analysis
- Targeted clarification questions
CONSTRAINTS: Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
" --tool gemini --mode analysis --rule analysis-code-patterns --cd {project_root}
Fallback: Qwen (same prompt) -> Claude (manual analysis)
### 3. Generate Strategies (2-4 per conflict)
Template per conflict:
- Severity: Critical/High/Medium
- Category: Architecture/API/Data/Dependency/ModuleOverlap
- Affected files + impact
- For ModuleOverlap: Include overlap_analysis with existing modules and scenarios
- Options with pros/cons, effort, risk
- For ModuleOverlap strategies: Add clarification_needed questions for boundary definition
- Recommended strategy + rationale
### 4. Return Structured Conflict Data
Output to conflict-resolution.json (generated in Phase 4)
**Schema Reference**: Execute cat ~/.ccw/workflows/cli-templates/schemas/conflict-resolution-schema.json to get full schema
Return JSON following the schema. Key requirements:
- Minimum 2 strategies per conflict, max 4
- All text in Chinese for user-facing fields (brief, name, pros, cons, modification_suggestions)
- modifications.old_content: 20-100 chars for unique Edit tool matching
- modifications.new_content: preserves markdown formatting
- modification_suggestions: 2-5 actionable suggestions for custom handling
### 5. Planning Notes Record (REQUIRED)
After analysis complete, append to planning-notes.md:
**File**: .workflow/active/${sessionId}/planning-notes.md
**Location**: Under "## Conflict Decisions (Phase 3)" section
**Format**:
### [Conflict-Resolution Agent] YYYY-MM-DD
- **Note**: [Brief summary of conflict types, strategies, key decisions]
`)
```
### Step 4.3: Iterative User Interaction
```javascript
const autoYes = workflowPreferences?.autoYes || false;
FOR each conflict:
round = 0, clarified = false, userClarifications = []
WHILE (!clarified && round++ < 10):
// 1. Display conflict info (text output for context)
displayConflictSummary(conflict) // id, brief, severity, overlap_analysis if ModuleOverlap
// 2. Strategy selection
if (autoYes) {
console.log(`[autoYes] Auto-selecting recommended strategy`)
selectedStrategy = conflict.strategies[conflict.recommended || 0]
clarified = true // Skip clarification loop
} else {
AskUserQuestion({
questions: [{
question: formatStrategiesForDisplay(conflict.strategies),
header: "Strategy",
multiSelect: false,
options: [
...conflict.strategies.map((s, i) => ({
label: `${s.name}${i === conflict.recommended ? ' (Recommended)' : ''}`,
description: `${s.complexity} complexity | ${s.risk} risk${s.clarification_needed?.length ? ' | Needs clarification' : ''}`
})),
{ label: "Custom modification", description: `Suggestions: ${conflict.modification_suggestions?.slice(0,2).join('; ')}` }
]
}]
})
// 3. Handle selection
if (userChoice === "Custom modification") {
customConflicts.push({ id, brief, category, suggestions, overlap_analysis })
break
}
selectedStrategy = findStrategyByName(userChoice)
}
// 4. Clarification (if needed) - batched max 4 per call
if (!autoYes && selectedStrategy.clarification_needed?.length > 0) {
for (batch of chunk(selectedStrategy.clarification_needed, 4)) {
AskUserQuestion({
questions: batch.map((q, i) => ({
question: q, header: `Clarify${i+1}`, multiSelect: false,
options: [{ label: "Provide details", description: "Enter answer" }]
}))
})
userClarifications.push(...collectAnswers(batch))
}
// 5. Agent re-analysis
reanalysisResult = Task({
subagent_type: "cli-execution-agent",
run_in_background: false,
prompt: `Conflict: ${conflict.id}, Strategy: ${selectedStrategy.name}
User Clarifications: ${JSON.stringify(userClarifications)}
Output: { uniqueness_confirmed, rationale, updated_strategy, remaining_questions }`
})
if (reanalysisResult.uniqueness_confirmed) {
selectedStrategy = { ...reanalysisResult.updated_strategy, clarifications: userClarifications }
clarified = true
} else {
selectedStrategy.clarification_needed = reanalysisResult.remaining_questions
}
} else {
clarified = true
}
if (clarified) resolvedConflicts.push({ conflict, strategy: selectedStrategy })
END WHILE
END FOR
selectedStrategies = resolvedConflicts.map(r => ({
conflict_id: r.conflict.id, strategy: r.strategy, clarifications: r.strategy.clarifications || []
}))
```
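Step 4.3 batches clarification questions with a `chunk` helper that is used but not defined above. A minimal version, assuming plain array slicing:

```javascript
// Split an array into consecutive groups of at most `size` items,
// matching the "batched max 4 per call" rule in Step 4.3.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```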
### Step 4.4: Apply Modifications
```javascript
// 1. Extract modifications from resolved strategies
const modifications = [];
selectedStrategies.forEach(item => {
if (item.strategy && item.strategy.modifications) {
modifications.push(...item.strategy.modifications.map(mod => ({
...mod,
conflict_id: item.conflict_id,
clarifications: item.clarifications
})));
}
});
console.log(`Applying ${modifications.length} modifications...`);
// 2. Apply each modification using Edit tool (with fallback to context-package.json)
const appliedModifications = [];
const failedModifications = [];
const fallbackConstraints = []; // For files that don't exist
modifications.forEach((mod, idx) => {
try {
console.log(`[${idx + 1}/${modifications.length}] Modifying ${mod.file}...`);
// Check if target file exists (brainstorm files may not exist in lite workflow)
if (!file_exists(mod.file)) {
console.log(` File not found, writing to context-package.json as constraint`);
fallbackConstraints.push({
source: "conflict-resolution",
conflict_id: mod.conflict_id,
target_file: mod.file,
section: mod.section,
change_type: mod.change_type,
content: mod.new_content,
rationale: mod.rationale
});
return; // Skip to next modification
}
if (mod.change_type === "update") {
Edit({ file_path: mod.file, old_string: mod.old_content, new_string: mod.new_content });
} else if (mod.change_type === "add") {
const fileContent = Read(mod.file);
const updated = insertContentAfterSection(fileContent, mod.section, mod.new_content);
Write(mod.file, updated);
} else if (mod.change_type === "remove") {
Edit({ file_path: mod.file, old_string: mod.old_content, new_string: "" });
}
appliedModifications.push(mod);
console.log(` Success`);
} catch (error) {
console.log(` Failed: ${error.message}`);
failedModifications.push({ ...mod, error: error.message });
}
});
// 3. Generate conflict-resolution.json output file
const resolutionOutput = {
session_id: sessionId,
resolved_at: new Date().toISOString(),
summary: {
total_conflicts: conflicts.length,
resolved_with_strategy: selectedStrategies.length,
custom_handling: customConflicts.length,
fallback_constraints: fallbackConstraints.length
},
resolved_conflicts: selectedStrategies.map(s => ({
conflict_id: s.conflict_id,
strategy_name: s.strategy.name,
strategy_approach: s.strategy.approach,
clarifications: s.clarifications || [],
modifications_applied: s.strategy.modifications?.filter(m =>
appliedModifications.some(am => am.conflict_id === s.conflict_id)
) || []
})),
custom_conflicts: customConflicts.map(c => ({
id: c.id, brief: c.brief, category: c.category,
suggestions: c.suggestions, overlap_analysis: c.overlap_analysis || null
})),
planning_constraints: fallbackConstraints,
failed_modifications: failedModifications
};
const resolutionPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
// 4. Update context-package.json with resolution details
const contextPkg = JSON.parse(Read(contextPath));
contextPkg.conflict_detection.conflict_risk = "resolved";
contextPkg.conflict_detection.resolution_file = resolutionPath;
contextPkg.conflict_detection.resolved_conflicts = selectedStrategies.map(s => s.conflict_id);
contextPkg.conflict_detection.custom_conflicts = customConflicts.map(c => c.id);
contextPkg.conflict_detection.resolved_at = new Date().toISOString();
Write(contextPath, JSON.stringify(contextPkg, null, 2));
// 5. Output custom conflict summary with overlap analysis (if any)
if (customConflicts.length > 0) {
customConflicts.forEach(conflict => {
console.log(`[${conflict.category}] ${conflict.id}: ${conflict.brief}`);
if (conflict.category === 'ModuleOverlap' && conflict.overlap_analysis) {
console.log(`Overlap info: New module: ${conflict.overlap_analysis.new_module.name}`);
}
conflict.suggestions.forEach(s => console.log(` - ${s}`));
});
}
```
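The `add` branch above calls an `insertContentAfterSection` helper that is not defined in this document. One plausible implementation, assuming markdown-style section headings:

```javascript
// Insert new content immediately after a matching section heading line.
// Falls back to appending at the end if the section is not found.
function insertContentAfterSection(fileContent, section, newContent) {
  const lines = fileContent.split('\n');
  const idx = lines.findIndex(l => l.trim() === section.trim());
  if (idx === -1) return fileContent + '\n' + newContent;
  lines.splice(idx + 1, 0, newContent);
  return lines.join('\n');
}
```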
### TodoWrite Update (Phase 4 in progress - tasks attached, if conflict_risk >= medium)
```json
[
@@ -43,18 +339,14 @@ Skill(skill="workflow:tools:conflict-resolution", args="--session [sessionId] --
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 4: Conflict Resolution", "status": "in_progress", "activeForm": "Executing conflict resolution"},
{"content": " -> Detect conflicts with CLI analysis", "status": "in_progress", "activeForm": "Detecting conflicts"},
{"content": " -> Present conflicts to user", "status": "pending", "activeForm": "Presenting conflicts"},
{"content": " -> Apply resolution strategies", "status": "pending", "activeForm": "Applying resolution strategies"},
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**Note**: Agent execution **attaches** conflict-resolution's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached -> **Execute Phase 4.1-4.4** sequentially
### TodoWrite Update (Phase 4 completed - tasks collapsed)
```json
@@ -68,26 +360,65 @@ Skill(skill="workflow:tools:conflict-resolution", args="--session [sessionId] --
]
```
**Note**: Phase 4 tasks completed and collapsed to summary.
### Step 4.5: Update Planning Notes
After conflict resolution completes (if executed), update planning-notes.md:
```javascript
if (conflictRisk === 'medium' || conflictRisk === 'high') {
const conflictResPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
if (file_exists(conflictResPath)) {
const conflictRes = JSON.parse(Read(conflictResPath));
const resolved = conflictRes.resolved_conflicts || [];
const modifiedArtifacts = conflictRes.modified_artifacts || [];
const planningConstraints = conflictRes.planning_constraints || [];
// Update Phase 4 section
Edit(planningNotesPath, {
old: '## Conflict Decisions (Phase 4)\n(To be filled if conflicts detected)',
new: `## Conflict Decisions (Phase 4)
- **RESOLVED**: ${resolved.map(r => `${r.type} -> ${r.strategy}`).join('; ') || 'None'}
- **MODIFIED_ARTIFACTS**: ${modifiedArtifacts.join(', ') || 'None'}
- **CONSTRAINTS**: ${planningConstraints.join('; ') || 'None'}`
})
// Append Phase 4 constraints to consolidated list
if (planningConstraints.length > 0) {
const currentNotes = Read(planningNotesPath);
const constraintCount = (currentNotes.match(/^\d+\./gm) || []).length;
Edit(planningNotesPath, {
old: '## Consolidated Constraints (Phase 5 Input)',
new: `## Consolidated Constraints (Phase 5 Input)
${planningConstraints.map((c, i) => `${constraintCount + i + 1}. [Conflict] ${c}`).join('\n')}`
})
}
}
}
```
**Auto-Continue**: Return to user showing conflict resolution results and selected strategies, then auto-continue.
**Auto Mode**: When `workflowPreferences.autoYes` is true, conflict-resolution automatically applies recommended resolution strategies without user confirmation.
### Step 4.6: Memory State Check
After Phase 4, evaluate current context window usage and memory state:
- If memory usage is high (>110K tokens or approaching context limits):
```javascript
Skill(skill="compact")
```
- This optimizes memory before proceeding to Phase 5
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
- Ensures optimal performance and prevents context overflow
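The memory gate above can be written as a one-line check (the threshold is the 110K-token figure stated here; how usage is estimated is an assumption):

```javascript
// Trigger Skill("compact") when estimated context usage crosses the threshold.
function shouldCompact(estimatedTokens, limit = 110000) {
  return estimatedTokens > limit;
}
```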
## Output
- **File**: `conflict-resolution.json` (if conflicts resolved)
- **TodoWrite**: Mark Phase 4 completed, Phase 5 in_progress
## Next Phase

View File

@@ -8,12 +8,351 @@ Generate TDD tasks with Red-Green-Refactor cycles via action-planning-agent.
- Each task contains internal Red-Green-Refactor cycle
- Include Phase 0 user configuration (execution method, CLI tool preference)
## Core Philosophy
- **Agent-Driven**: Delegate execution to action-planning-agent for autonomous operation
- **Two-Phase Flow**: Discovery (context gathering) -> Output (document generation)
- **Memory-First**: Reuse loaded documents from conversation memory
- **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
- **Semantic CLI Selection**: CLI tool usage determined from user's task description, not flags
- **Path Clarity**: All `focus_paths` prefer absolute paths or clear relative paths from project root
- **TDD-First**: Every feature starts with a failing test (Red phase)
- **Feature-Complete Tasks**: Each task contains complete Red-Green-Refactor cycle
- **Quantification-Enforced**: All test cases, coverage requirements, and implementation scope MUST include explicit counts and enumerations
## Task Strategy
### Optimized Task Structure
- **1 feature = 1 task** containing complete TDD cycle internally
- Each task executes Red-Green-Refactor phases sequentially
- Task count = Feature count (typically 5 features = 5 tasks)
### When to Use Subtasks
- Feature complexity >2500 lines or >6 files per TDD cycle
- Multiple independent sub-features needing parallel execution
- Strong technical dependency blocking (e.g., API before UI)
- Different tech stacks or domains within feature
### Task Limits
- **Maximum 18 tasks** (hard limit for TDD workflows)
- **Feature-based**: Complete functional units with internal TDD cycles
- **Hierarchy**: Flat (<=5 simple features) | Two-level (6-10 for complex features with sub-features)
- **Re-scope**: If >18 tasks needed, break project into multiple TDD workflow sessions
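The limits above can be checked mechanically. A sketch, using the <=5 flat / 6-10 two-level ranges and the 18-task hard cap stated here:

```javascript
// Decide task hierarchy from feature count and enforce the 18-task hard cap.
function planTaskStructure(featureCount) {
  if (featureCount > 18) {
    return { ok: false, reason: 're-scope into multiple TDD workflow sessions' };
  }
  return { ok: true, hierarchy: featureCount <= 5 ? 'flat' : 'two-level' };
}
```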
## Execution
### Phase 0: User Configuration (Interactive)
**Purpose**: Collect user preferences before TDD task generation.
```javascript
AskUserQuestion({
questions: [
{
question: "Do you have supplementary materials or guidelines to include?",
header: "Materials",
multiSelect: false,
options: [
{ label: "No additional materials", description: "Use existing context only" },
{ label: "Provide file paths", description: "I'll specify paths to include" },
{ label: "Provide inline content", description: "I'll paste content directly" }
]
},
{
question: "Select execution method for generated TDD tasks:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent (Recommended)", description: "Claude agent executes Red-Green-Refactor cycles directly" },
{ label: "Hybrid", description: "Agent orchestrates, calls CLI for complex steps (Red/Green phases)" },
{ label: "CLI Only", description: "All TDD cycles via CLI tools (codex/gemini/qwen)" }
]
},
{
question: "If using CLI, which tool do you prefer?",
header: "CLI Tool",
multiSelect: false,
options: [
{ label: "Codex (Recommended)", description: "Best for TDD Red-Green-Refactor cycles" },
{ label: "Gemini", description: "Best for analysis and large context" },
{ label: "Qwen", description: "Alternative analysis tool" },
{ label: "Auto", description: "Let agent decide per-task" }
]
}
]
})
```
**Handle Materials Response**:
```javascript
if (userConfig.materials === "Provide file paths") {
const pathsResponse = AskUserQuestion({
questions: [{
question: "Enter file paths to include (comma-separated or one per line):",
header: "Paths",
multiSelect: false,
options: [
{ label: "Enter paths", description: "Provide paths in text input" }
]
}]
})
userConfig.supplementaryPaths = parseUserPaths(pathsResponse)
}
```
**Build userConfig**:
```javascript
const userConfig = {
supplementaryMaterials: {
type: "none|paths|inline",
content: [...],
},
executionMethod: "agent|hybrid|cli",
preferredCliTool: "codex|gemini|qwen|auto",
enableResume: true // Always enable resume for CLI executions
}
```
**Auto Mode**: When `--yes` or `-y`: Skip user questions, use defaults (no materials, Agent executor).
---
### Phase 1: Context Preparation & Discovery
**Memory-First Rule**: Skip file loading if documents already in conversation memory
**Progressive Loading Strategy**: Load context incrementally:
- **Core**: session metadata + context-package.json (always load)
- **Selective**: synthesis_output OR (guidance + relevant role analyses) - NOT all role analyses
- **On-Demand**: conflict resolution (if conflict_risk >= medium), test context
**Session Path Structure** (provided to agent):
```
.workflow/active/WFS-{session-id}/
├── workflow-session.json # Session metadata
├── .process/
│ ├── context-package.json # Context package with artifact catalog
│ ├── test-context-package.json # Test coverage analysis
│ └── conflict-resolution.json # Conflict resolution (if exists)
├── .task/ # Output: Task JSON files
│ ├── IMPL-1.json
│ ├── IMPL-2.json
│ └── ...
├── plan.json # Output: Structured plan overview (TDD variant)
├── IMPL_PLAN.md # Output: TDD implementation plan
└── TODO_LIST.md # Output: TODO list with TDD phases
```
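The layout above can be captured in a small path helper so later steps reference the same locations (paths exactly as listed; the helper name itself is an assumption):

```javascript
// Build the session-relative paths used throughout Phase 5.
function sessionPaths(sessionId) {
  const root = `.workflow/active/${sessionId}`;
  return {
    metadata: `${root}/workflow-session.json`,
    contextPackage: `${root}/.process/context-package.json`,
    testContext: `${root}/.process/test-context-package.json`,
    conflictResolution: `${root}/.process/conflict-resolution.json`,
    taskDir: `${root}/.task/`,
    planJson: `${root}/plan.json`,
    implPlan: `${root}/IMPL_PLAN.md`,
    todoList: `${root}/TODO_LIST.md`
  };
}
```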
**Discovery Actions**:
1. **Load Session Context** (if not in memory)
```javascript
if (!memory.has("workflow-session.json")) {
Read(`.workflow/active/${sessionId}/workflow-session.json`)
}
```
2. **Load Context Package** (if not in memory)
```javascript
if (!memory.has("context-package.json")) {
Read(`.workflow/active/${sessionId}/.process/context-package.json`)
}
```
3. **Load Test Context Package** (if not in memory)
```javascript
if (!memory.has("test-context-package.json")) {
Read(`.workflow/active/${sessionId}/.process/test-context-package.json`)
}
```
4. **Extract & Load Role Analyses** (from context-package.json)
```javascript
const roleAnalysisPaths = contextPackage.brainstorm_artifacts.role_analyses
.flatMap(role => role.files.map(f => f.path));
roleAnalysisPaths.forEach(path => Read(path));
```
5. **Load Conflict Resolution** (if exists)
```javascript
if (contextPackage.conflict_detection?.resolution_file) {
Read(contextPackage.conflict_detection.resolution_file)
} else if (contextPackage.brainstorm_artifacts?.conflict_resolution?.exists) {
Read(contextPackage.brainstorm_artifacts.conflict_resolution.path)
}
```
6. **Code Analysis with Native Tools** (optional)
```bash
find . -name "*test*" -type f
rg "describe|it\(|test\(" -g "*.ts"
```
7. **MCP External Research** (optional)
```javascript
mcp__exa__get_code_context_exa(
query="TypeScript TDD best practices Red-Green-Refactor",
tokensNum="dynamic"
)
```
---
### Step 5.1: Execute TDD Task Generation (Agent Invocation)
```javascript
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description="Generate TDD planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
prompt=`
## TASK OBJECTIVE
Generate TDD implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for workflow session
IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT implementing code.
CRITICAL: Follow the progressive loading strategy (load analysis.md files incrementally due to file size):
- **Core**: session metadata + context-package.json (always)
- **Selective**: synthesis_output OR (guidance + relevant role analyses) - NOT all
- **On-Demand**: conflict resolution (if conflict_risk >= medium), test context
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/${sessionId}/workflow-session.json
- Context Package: .workflow/active/${sessionId}/.process/context-package.json
- Test Context: .workflow/active/${sessionId}/.process/test-context-package.json
Output:
- Task Dir: .workflow/active/${sessionId}/.task/
- IMPL_PLAN: .workflow/active/${sessionId}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/${sessionId}/TODO_LIST.md
## CONTEXT METADATA
Session ID: ${sessionId}
Workflow Type: TDD
MCP Capabilities: {exa_code, exa_web, code_index}
## USER CONFIGURATION (from Phase 0)
Execution Method: ${userConfig.executionMethod}
Preferred CLI Tool: ${userConfig.preferredCliTool}
Supplementary Materials: ${userConfig.supplementaryMaterials}
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
"agent" ->
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
Agent executes Red-Green-Refactor phases directly
"cli" ->
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" ->
Per-task decision: Analyze TDD cycle complexity, set method to "agent" OR "cli" per task
- Simple cycles (<=5 test cases, <=3 files) -> method: "agent"
- Complex cycles (>5 test cases, >3 files, integration tests) -> method: "cli"
CLI tool: userConfig.preferredCliTool, enable_resume: true
IMPORTANT: Do NOT add command field to implementation steps. Execution routing is controlled by task-level meta.execution_config.method only.
## EXPLORATION CONTEXT (from context-package.exploration_results)
- Load exploration_results from context-package.json
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
- Reference aggregated_insights.all_patterns for implementation approach
- Use aggregated_insights.all_integration_points for precise modification locations
- Use conflict_indicators for risk-aware task sequencing
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints as task constraints (for brainstorm-less workflows)
- Reference resolved_conflicts for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## TEST CONTEXT INTEGRATION
- Load test-context-package.json for existing test patterns and coverage analysis
- Extract test framework configuration (Jest/Pytest/etc.)
- Identify existing test conventions and patterns
- Map coverage gaps to TDD Red phase test targets
## TDD DOCUMENT GENERATION TASK
### TDD-Specific Requirements Summary
#### Task Structure Philosophy
- **1 feature = 1 task** containing complete TDD cycle internally
- Each task executes Red-Green-Refactor phases sequentially
- Task count = Feature count (typically 5 features = 5 tasks)
- Subtasks only when complexity >2500 lines or >6 files per cycle
- **Maximum 18 tasks** (hard limit for TDD workflows)
#### TDD Cycle Mapping
- **Simple features**: IMPL-N with internal Red-Green-Refactor phases
- **Complex features**: IMPL-N (container) + IMPL-N.M (subtasks)
- Each cycle includes: test_count, test_cases array, implementation_scope, expected_coverage
#### Required Outputs Summary
##### 1. TDD Task JSON Files (.task/IMPL-*.json)
- **Location**: .workflow/active/${sessionId}/.task/
- **Schema**: Unified flat schema (task-schema.json) with TDD-specific metadata
- meta.tdd_workflow: true (REQUIRED)
- meta.max_iterations: 3 (Green phase test-fix cycle limit)
- cli_execution.id: Unique CLI execution ID (format: {session_id}-{task_id})
- cli_execution: Strategy object (new|resume|fork|merge_fork)
- tdd_cycles: Array with quantified test cases and coverage
- focus_paths: Absolute or clear relative paths (enhanced with exploration critical_files)
- implementation: Exactly 3 steps with tdd_phase field
1. Red Phase (tdd_phase: "red"): Write failing tests
2. Green Phase (tdd_phase: "green"): Implement to pass tests
3. Refactor Phase (tdd_phase: "refactor"): Improve code quality
- pre_analysis: Include exploration integration_points analysis
- meta.execution_config: Set per userConfig.executionMethod (agent/cli/hybrid)
##### 2. IMPL_PLAN.md (TDD Variant)
- **Location**: .workflow/active/${sessionId}/IMPL_PLAN.md
- **Template**: ~/.ccw/workflows/cli-templates/prompts/workflow/impl-plan-template.txt
- **TDD-Specific Frontmatter**: workflow_type="tdd", tdd_workflow=true, feature_count, task_breakdown
##### 3. TODO_LIST.md
- **Location**: .workflow/active/${sessionId}/TODO_LIST.md
- **Format**: Hierarchical task list with internal TDD phase indicators (Red -> Green -> Refactor)
### CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **cli_execution.id**: Unique ID for CLI execution (format: {session_id}-{task_id})
- **cli_execution**: Strategy object based on depends_on:
- No deps -> { "strategy": "new" }
- 1 dep (single child) -> { "strategy": "resume", "resume_from": "parent-cli-id" }
- 1 dep (multiple children) -> { "strategy": "fork", "resume_from": "parent-cli-id" }
- N deps -> { "strategy": "merge_fork", "resume_from": ["id1", "id2", ...] }
### Quantification Requirements (MANDATORY)
**Core Rules**:
1. **Explicit Test Case Counts**: Red phase specifies exact number with enumerated list
2. **Quantified Coverage**: Acceptance includes measurable percentage (e.g., ">=85%")
3. **Detailed Implementation Scope**: Green phase enumerates files, functions, line counts
4. **Enumerated Refactoring Targets**: Refactor phase lists specific improvements with counts
**TDD Phase Formats**:
- **Red Phase**: "Write N test cases: [test1, test2, ...]"
- **Green Phase**: "Implement N functions in file lines X-Y: [func1() X1-Y1, func2() X2-Y2, ...]"
- **Refactor Phase**: "Apply N refactorings: [improvement1 (details), improvement2 (details), ...]"
- **Acceptance**: "All N tests pass with >=X% coverage: verify by [test command]"
## SUCCESS CRITERIA
- All planning documents generated successfully:
- Task JSONs valid and saved to .task/ directory with cli_execution.id
- IMPL_PLAN.md created with complete TDD structure
- TODO_LIST.md generated matching task JSONs
- CLI execution strategies assigned based on task dependencies
- Return completion status with document count and task breakdown summary
`
)
```
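The `depends_on` to `cli_execution` mapping required above can be sketched as a small helper. This is an illustrative implementation, not the agent's actual code; `childCounts` (how many tasks depend on each parent) is an assumed input:

```javascript
// Map a task's depends_on list to a cli_execution strategy object.
// childCounts: { parentTaskId: number of tasks that depend on it }
function cliExecutionFor(dependsOn, childCounts) {
  if (dependsOn.length === 0) return { strategy: "new" };
  if (dependsOn.length === 1) {
    const parent = dependsOn[0];
    // A parent with multiple children must fork; a single child resumes.
    const strategy = (childCounts[parent] || 0) > 1 ? "fork" : "resume";
    return { strategy, resume_from: parent };
  }
  // N dependencies: merge contexts forked from several parents.
  return { strategy: "merge_fork", resume_from: dependsOn };
}
```

The helper makes the branching explicit: strategy selection depends only on dependency count and sibling count, never on task content.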
**Note**: Phase 0 now includes:
- User configuration applied:
- If executionMethod == "cli" or "hybrid": command field added to steps
- CLI tool preference reflected in execution guidance
- Task count <=18 (compliance with hard limit)
### Red Flag Detection (Non-Blocking Warnings)
- Task count >18: `WARNING: Task count exceeds hard limit - request re-scope`
- Missing cli_execution.id: `WARNING: Task lacks CLI execution ID for resume support`
- Missing test-fix-cycle: `WARNING: Green phase lacks auto-revert configuration`
- Generic task names: `WARNING: Vague task names suggest unclear TDD cycles`
- Missing focus_paths: `WARNING: Task lacks clear file scope for implementation`
**Action**: Log warnings to `.workflow/active/[sessionId]/.process/tdd-warnings.log` (non-blocking)
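As a sketch, the red-flag scan could look like the following. Field names mirror the schema described above but are illustrative, not canonical:

```javascript
// Non-blocking red-flag scan over one generated task object.
// Returns warning strings; caller appends them to tdd-warnings.log.
function redFlags(task, taskCount) {
  const warnings = [];
  if (taskCount > 18)
    warnings.push("Task count exceeds hard limit - request re-scope");
  if (!task.cli_execution || !task.cli_execution.id)
    warnings.push("Task lacks CLI execution ID for resume support");
  if (!task.focus_paths || task.focus_paths.length === 0)
    warnings.push("Task lacks clear file scope for implementation");
  if (/^(task|feature|impl)\s*\d*$/i.test(task.name || ""))
    warnings.push("Vague task names suggest unclear TDD cycles");
  return warnings;
}
```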
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 5: TDD Task Generation", "status": "in_progress", "activeForm": "Executing TDD task generation"},
{"content": " Discovery - analyze TDD requirements", "status": "in_progress", "activeForm": "Analyzing TDD requirements"},
{"content": " Planning - design Red-Green-Refactor cycles", "status": "pending", "activeForm": "Designing TDD cycles"},
{"content": " Output - generate IMPL tasks with internal TDD phases", "status": "pending", "activeForm": "Generating TDD tasks"},
{"content": " -> Discovery - analyze TDD requirements", "status": "in_progress", "activeForm": "Analyzing TDD requirements"},
{"content": " -> Planning - design Red-Green-Refactor cycles", "status": "pending", "activeForm": "Designing TDD cycles"},
{"content": " -> Output - generate IMPL tasks with internal TDD phases", "status": "pending", "activeForm": "Generating TDD tasks"},
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**Note**: Agent execution **attaches** task-generate-tdd's 3 tasks. Orchestrator **executes** these tasks. Each generated IMPL task will contain internal Red-Green-Refactor cycle.
**Next Action**: Tasks attached -> **Execute Phase 5.1-5.3** sequentially
### TodoWrite Update (Phase 5 completed - tasks collapsed)
**Note**: Phase 5 tasks completed and collapsed to summary. Each generated IMPL task contains complete Red-Green-Refactor cycle internally.
## TDD Task Structure Reference
**Quick Reference**:
- Each TDD task contains complete Red-Green-Refactor cycle
- Task ID format: `IMPL-N` (simple) or `IMPL-N.M` (complex subtasks)
- Required metadata:
- `meta.tdd_workflow: true`
- `meta.max_iterations: 3`
- `cli_execution.id: "{session_id}-{task_id}"`
- `cli_execution: { "strategy": "new|resume|fork|merge_fork", ... }`
- `tdd_cycles` array with quantified test cases and coverage:
```javascript
tdd_cycles: [
{
test_count: 5,
test_cases: ["case1", "case2"],
implementation_scope: "...",
expected_coverage: ">=85%"
}
]
```
- `focus_paths` use absolute or clear relative paths
- `implementation`: Exactly 3 steps with `tdd_phase` field ("red", "green", "refactor")
- `pre_analysis`: includes exploration integration_points analysis
- **meta.execution_config**: Set per `userConfig.executionMethod` (agent/cli/hybrid)
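Putting the reference together, a minimal illustrative task object might look like this. All values are examples only, not canonical schema output:

```javascript
// Example shape of a generated TDD task (illustrative values).
const exampleTask = {
  id: "IMPL-1",
  name: "Queue ID parser",
  meta: {
    tdd_workflow: true,
    max_iterations: 3,
    execution_config: { method: "agent", cli_tool: null, enable_resume: false }
  },
  cli_execution: { id: "WFS-20260215-IMPL-1", strategy: "new" },
  tdd_cycles: [{
    test_count: 3,
    test_cases: ["rejects empty input", "parses valid id", "handles duplicates"],
    implementation_scope: "src/queue/parser.ts",
    expected_coverage: ">=85%"
  }],
  focus_paths: ["src/queue/parser.ts", "tests/queue/parser.test.ts"],
  implementation: [
    { tdd_phase: "red", step: "Write 3 failing test cases" },
    { tdd_phase: "green", step: "Implement parser until all tests pass" },
    { tdd_phase: "refactor", step: "Extract validation helper, keep tests green" }
  ]
};
```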
## Output Files Structure
```
.workflow/active/{session-id}/
├── plan.json # Structured plan overview (TDD variant)
├── IMPL_PLAN.md # Unified plan with TDD Implementation Tasks section
├── TODO_LIST.md # Progress tracking with internal TDD phase indicators
├── .task/
│ ├── IMPL-1.json # Complete TDD task (Red-Green-Refactor internally)
│ ├── IMPL-2.json # Complete TDD task
│ ├── IMPL-3.json # Complex feature container (if needed)
│ ├── IMPL-3.1.json # Complex feature subtask (if needed)
│ ├── IMPL-3.2.json # Complex feature subtask (if needed)
│ └── ...
└── .process/
├── conflict-resolution.json # Conflict resolution results (if conflict_risk >= medium)
├── test-context-package.json # Test coverage analysis
├── context-package.json # Input from context-gather
└── tdd-warnings.log # Non-blocking warnings
```
## Validation Rules
### Task Completeness
- Every IMPL-N must contain complete TDD workflow in `implementation`
- Each task must have 3 steps with `tdd_phase`: "red", "green", "refactor"
- Every task must have `meta.tdd_workflow: true`
### Dependency Enforcement
- Sequential features: IMPL-N depends_on ["IMPL-(N-1)"] if needed
- Complex feature subtasks: IMPL-N.M depends_on ["IMPL-N.(M-1)"] or parent dependencies
- No circular dependencies allowed
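The no-circular-dependencies rule can be checked with a standard depth-first search over the dependency map; this is a generic sketch, not tied to the actual validator:

```javascript
// Detect circular dependencies in a { taskId: [dependsOnIds...] } map.
function hasCycle(deps) {
  const state = {}; // undefined = unvisited, 1 = on current DFS stack, 2 = done
  const visit = (id) => {
    if (state[id] === 1) return true;  // back edge: cycle found
    if (state[id] === 2) return false; // already fully explored
    state[id] = 1;
    for (const dep of deps[id] || []) if (visit(dep)) return true;
    state[id] = 2;
    return false;
  };
  return Object.keys(deps).some(visit);
}
```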
### Task Limits
- Maximum 18 total tasks (simple + subtasks) - hard limit for TDD workflows
- Flat hierarchy (<=5 tasks) or two-level (6-18 tasks with containers)
- Re-scope requirements if >18 tasks needed
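These limits translate into a simple check over generated task IDs. A sketch, assuming `IMPL-N` vs `IMPL-N.M` naming as described above:

```javascript
// Validate TDD task-count limits before accepting a generated plan.
function checkTaskLimits(taskIds) {
  const issues = [];
  if (taskIds.length > 18)
    issues.push("more than 18 tasks - re-scope required");
  const hasSubtasks = taskIds.some(id => /^IMPL-\d+\.\d+$/.test(id));
  if (!hasSubtasks && taskIds.length > 5)
    issues.push("more than 5 flat tasks - use containers with subtasks");
  return issues;
}
```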
## Output
- **File**: `plan.json` (structured plan overview)


}
```
**Auto Mode**: When `workflowPreferences.autoYes` is true, auto-select "Verify TDD Compliance", then auto-continue to execute if quality gate is APPROVED.
## Output


## Operating Constraints
**ORCHESTRATOR MODE**:
- This phase coordinates sub-steps and inline TDD coverage analysis
- MAY write output files: TDD_COMPLIANCE_REPORT.md (primary report), .process/*.json (intermediate artifacts)
- MUST NOT modify source task files or implementation code
- MUST NOT create or delete tasks in the workflow
### Step 7.3: Coverage & Cycle Analysis
**Execute TDD Coverage Analysis**:
#### Phase 3a: Extract Test Tasks
```bash
# Find TEST task files and extract focus_paths
find .workflow/active/{session_id}/.task/ -name 'TEST-*.json' -exec jq -r '.context.focus_paths[]' {} \;
```
**Output**: List of test directories/files from all TEST tasks
#### Phase 3b: Run Test Suite
```bash
# Auto-detect test framework from project
if [ -f "package.json" ] && grep -q "jest\|mocha\|vitest" package.json; then
TEST_CMD="npm test -- --coverage --json"
elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
TEST_CMD="pytest --cov --json-report"
elif [ -f "Cargo.toml" ]; then
TEST_CMD="cargo test -- --test-threads=1 --nocapture"
elif [ -f "go.mod" ]; then
TEST_CMD="go test -coverprofile=coverage.out -json ./..."
else
TEST_CMD="echo 'No supported test framework found'"
fi
# Execute test suite with coverage
$TEST_CMD > .workflow/active/{session_id}/.process/test-results.json
```
**Output**: test-results.json with coverage data
#### Phase 3c: Parse Coverage Data
```bash
jq '.coverage' .workflow/active/{session_id}/.process/test-results.json > .workflow/active/{session_id}/.process/coverage-report.json
```
**Extract**:
- Line coverage percentage
- Branch coverage percentage
- Function coverage percentage
- Uncovered lines/branches
#### Phase 3d: Verify TDD Cycle
For each TDD chain (TEST-N.M -> IMPL-N.M -> REFACTOR-N.M):
**1. Red Phase Verification**
```bash
cat .workflow/active/{session_id}/.summaries/TEST-N.M-summary.md
```
Verify:
- Tests were created
- Tests failed initially
- Failure messages were clear
**2. Green Phase Verification**
```bash
cat .workflow/active/{session_id}/.summaries/IMPL-N.M-summary.md
```
Verify:
- Implementation was completed
- Tests now pass
- Implementation was minimal
**3. Refactor Phase Verification**
```bash
cat .workflow/active/{session_id}/.summaries/REFACTOR-N.M-summary.md
```
Verify:
- Refactoring was completed
- Tests still pass
- Code quality improved
#### TDD Cycle Verification Algorithm
```
For each feature N:
1. Load TEST-N.M-summary.md
IF summary missing:
Mark: "Red phase incomplete"
SKIP to next feature
CHECK: Contains "test" AND "fail"
IF NOT found:
Mark: "Red phase verification failed"
ELSE:
Mark: "Red phase [PASS]"
2. Load IMPL-N.M-summary.md
IF summary missing:
Mark: "Green phase incomplete"
SKIP to next feature
CHECK: Contains "pass" OR "green"
IF NOT found:
Mark: "Green phase verification failed"
ELSE:
Mark: "Green phase [PASS]"
3. Load REFACTOR-N.M-summary.md
IF summary missing:
Mark: "Refactor phase incomplete"
CONTINUE (refactor is optional)
CHECK: Contains "refactor" AND "pass"
IF NOT found:
Mark: "Refactor phase verification failed"
ELSE:
Mark: "Refactor phase [PASS]"
4. Calculate chain score:
- Red + Green + Refactor all [PASS] = 100%
- Red + Green [PASS], Refactor missing = 80%
- Red [PASS], Green missing = 40%
- All missing = 0%
```
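The chain-scoring step above maps directly onto a small function. A sketch, where each flag is assumed to come from the per-phase summary checks just described:

```javascript
// Score one TDD chain from per-phase verification results.
// Each flag is true when the corresponding summary passed its checks.
function chainScore({ red, green, refactor }) {
  if (!red) return 0;       // Red missing or failed: chain unverified
  if (!green) return 40;    // Red only
  if (!refactor) return 80; // Red + Green pass, Refactor missing (optional)
  return 100;               // complete Red-Green-Refactor cycle
}
```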
#### Phase 3e: Generate Analysis Report
Create `.workflow/active/{session_id}/.process/tdd-cycle-report.md`:
```markdown
# TDD Cycle Analysis - {Session ID}
## Coverage Metrics
- **Line Coverage**: {percentage}%
- **Branch Coverage**: {percentage}%
- **Function Coverage**: {percentage}%
## Coverage Details
### Covered
- {covered_lines} lines
- {covered_branches} branches
- {covered_functions} functions
### Uncovered
- Lines: {uncovered_line_numbers}
- Branches: {uncovered_branch_locations}
## TDD Cycle Verification
### Feature 1: {Feature Name}
**Chain**: TEST-1.1 -> IMPL-1.1 -> REFACTOR-1.1
- [PASS] **Red Phase**: Tests created and failed initially
- [PASS] **Green Phase**: Implementation made tests pass
- [PASS] **Refactor Phase**: Refactoring maintained green tests
[Repeat for all features]
## TDD Compliance Summary
- **Total Chains**: {N}
- **Complete Cycles**: {N}
- **Incomplete Cycles**: {N}
- **Compliance Score**: {score}/100
## Gaps Identified
- {gap descriptions}
## Recommendations
- {improvement suggestions}
```
#### Coverage Metrics Calculation
```bash
line_coverage=$(jq '.coverage.lineCoverage' test-results.json)
branch_coverage=$(jq '.coverage.branchCoverage' test-results.json)
function_coverage=$(jq '.coverage.functionCoverage' test-results.json)
overall_score=$(echo "scale=1; ($line_coverage + $branch_coverage + $function_coverage) / 3" | bc)
```
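Note that `bc` truncates integer division unless a `scale` is set. The same aggregate in a float-safe form (assuming the three metrics are already parsed as numbers) looks like:

```javascript
// Average three coverage metrics, rounded to one decimal place.
function overallScore(line, branch, fn) {
  return Math.round(((line + branch + fn) / 3) * 10) / 10;
}
```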
**Parse Output Files**:
|-------|-------|------------|
| Coverage tool missing | No test framework | Configure testing first |
| Tests fail to run | Code errors | Fix errors before verify |
| Coverage analysis fails | Test framework or coverage tool error | Check test framework configuration |
## Output
.workflow/active/WFS-{session-id}/
├── TDD_COMPLIANCE_REPORT.md # Comprehensive compliance report ⭐
└── .process/
├── test-results.json # From coverage analysis (Step 7.3)
├── coverage-report.json # From coverage analysis (Step 7.3)
└── tdd-cycle-report.md # From coverage analysis (Step 7.3)
```
## Next Steps Decision Table


/workflow:test-cycle-execute --max-iterations=15
```
## Interactive Preference Collection
Before dispatching to phase execution, collect workflow preferences via AskUserQuestion:
```javascript
const prefResponse = AskUserQuestion({
questions: [
{
question: "是否跳过所有确认步骤(自动模式)?",
header: "Auto Mode",
multiSelect: false,
options: [
{ label: "Interactive (Recommended)", description: "交互模式,包含确认步骤" },
{ label: "Auto", description: "跳过所有确认,自动执行" }
]
}
]
})
workflowPreferences = {
autoYes: prefResponse.autoMode === 'Auto'
}
```
**workflowPreferences** is passed to phase execution as context variable, referenced as `workflowPreferences.autoYes` within phases.
## Execution Flow


### Step 1.2: Gather Test Context
Two modes are available depending on whether a source session exists:
---
### Mode A: Session Mode (gather from source session)
Collect test coverage context using test-context-search-agent and package it into a standardized test-context JSON.
#### Core Philosophy
- **Agent Delegation**: Delegate all test coverage analysis to `test-context-search-agent` for autonomous execution
- **Detection-First**: Check for existing test-context-package before executing
- **Coverage-First**: Analyze existing test coverage before planning new tests
- **Source Context Loading**: Import implementation summaries from source session
- **Standardized Output**: Generate `.workflow/active/{test_session_id}/.process/test-context-package.json`
#### Step A.1: Test-Context-Package Detection
**Execute First** - Check if valid package already exists:
```javascript
const testContextPath = `.workflow/active/${test_session_id}/.process/test-context-package.json`;
if (file_exists(testContextPath)) {
const existing = Read(testContextPath);
// Validate package belongs to current test session
if (existing?.metadata?.test_session_id === test_session_id) {
console.log("Valid test-context-package found for session:", test_session_id);
console.log("Coverage Stats:", existing.test_coverage.coverage_stats);
console.log("Framework:", existing.test_framework.framework);
console.log("Missing Tests:", existing.test_coverage.missing_tests.length);
return existing; // Skip execution, return existing
} else {
console.warn("Invalid test_session_id in existing package, re-generating...");
}
}
```
#### Step A.2: Invoke Test-Context-Search Agent
**Only execute if Step A.1 finds no valid package**
```javascript
Task(
subagent_type="test-context-search-agent",
run_in_background=false,
description="Gather test coverage context",
prompt=`
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
## Session Information
- **Test Session ID**: ${test_session_id}
- **Output Path**: .workflow/active/${test_session_id}/.process/test-context-package.json
## Mission
Execute complete test-context-search-agent workflow for test generation planning:
### Phase 1: Session Validation & Source Context Loading
1. **Detection**: Check for existing test-context-package (early exit if valid)
2. **Test Session Validation**: Load test session metadata, extract source_session reference
3. **Source Context Loading**: Load source session implementation summaries, changed files, tech stack
### Phase 2: Test Coverage Analysis
Execute coverage discovery:
- **Track 1**: Existing test discovery (find *.test.*, *.spec.* files)
- **Track 2**: Coverage gap analysis (match implementation files to test files)
- **Track 3**: Coverage statistics (calculate percentages, identify gaps by module)
### Phase 3: Framework Detection & Packaging
1. Framework identification from package.json/requirements.txt
2. Convention analysis from existing test patterns
3. Generate and validate test-context-package.json
## Output Requirements
Complete test-context-package.json with:
- **metadata**: test_session_id, source_session_id, task_type, complexity
- **source_context**: implementation_summaries, tech_stack, project_patterns
- **test_coverage**: existing_tests[], missing_tests[], coverage_stats
- **test_framework**: framework, version, test_pattern, conventions
- **assets**: implementation_summary[], existing_test[], source_code[] with priorities
- **focus_areas**: Test generation guidance based on coverage gaps
## Quality Validation
Before completion verify:
- [ ] Valid JSON format with all required fields
- [ ] Source session context loaded successfully
- [ ] Test coverage gaps identified
- [ ] Test framework detected (or marked as 'unknown')
- [ ] Coverage percentage calculated correctly
- [ ] Missing tests catalogued with priority
- [ ] Execution time < 30 seconds (< 60s for large codebases)
Execute autonomously following agent documentation.
Report completion with coverage statistics.
`
)
```
#### Step A.3: Output Verification
After agent completes, verify output:
```javascript
// Verify file was created
const outputPath = `.workflow/active/${test_session_id}/.process/test-context-package.json`;
if (!file_exists(outputPath)) {
throw new Error("Agent failed to generate test-context-package.json");
}
// Load and display summary
const testContext = Read(outputPath);
console.log("Test context package generated successfully");
console.log("Coverage:", testContext.test_coverage.coverage_stats.coverage_percentage + "%");
console.log("Tests to generate:", testContext.test_coverage.missing_tests.length);
```
---
### Mode B: Prompt Mode (gather from codebase)
Intelligently collects project context using context-search-agent based on the task description and packages it into standardized JSON.
#### Core Philosophy
- **Agent Delegation**: Delegate all discovery to `context-search-agent` for autonomous execution
- **Detection-First**: Check for existing context-package before executing
- **Plan Mode**: Full comprehensive analysis (vs lightweight brainstorm mode)
- **Standardized Output**: Generate `.workflow/active/{session}/.process/context-package.json`
#### Step B.1: Context-Package Detection
**Execute First** - Check if valid package already exists:
```javascript
const contextPackagePath = `.workflow/active/${session_id}/.process/context-package.json`;
if (file_exists(contextPackagePath)) {
const existing = Read(contextPackagePath);
// Validate package belongs to current session
if (existing?.metadata?.session_id === session_id) {
console.log("Valid context-package found for session:", session_id);
console.log("Stats:", existing.statistics);
console.log("Conflict Risk:", existing.conflict_detection.risk_level);
return existing; // Skip execution, return existing
} else {
console.warn("Invalid session_id in existing package, re-generating...");
}
}
```
#### Step B.2: Complexity Assessment & Parallel Explore
**Only execute if Step B.1 finds no valid package**
```javascript
// B.2.1 Complexity Assessment
function analyzeTaskComplexity(taskDescription) {
const text = taskDescription.toLowerCase();
if (/architect|refactor|restructure|modular|cross-module/.test(text)) return 'High';
if (/multiple|several|integrate|migrate|extend/.test(text)) return 'Medium';
return 'Low';
}
const ANGLE_PRESETS = {
architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
security: ['security', 'auth-patterns', 'dataflow', 'validation'],
performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
feature: ['patterns', 'integration-points', 'testing', 'dependencies'],
refactor: ['architecture', 'patterns', 'dependencies', 'testing']
};
function selectAngles(taskDescription, complexity) {
const text = taskDescription.toLowerCase();
let preset = 'feature';
if (/refactor|architect|restructure/.test(text)) preset = 'architecture';
else if (/security|auth|permission/.test(text)) preset = 'security';
else if (/performance|slow|optimi/.test(text)) preset = 'performance';
else if (/fix|bug|error|issue/.test(text)) preset = 'bugfix';
const count = complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1);
return ANGLE_PRESETS[preset].slice(0, count);
}
const complexity = analyzeTaskComplexity(task_description);
const selectedAngles = selectAngles(task_description, complexity);
const sessionFolder = `.workflow/active/${session_id}/.process`;
// B.2.2 Launch Parallel Explore Agents
const explorationTasks = selectedAngles.map((angle, index) =>
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Explore: ${angle}`,
prompt=`
## Task Objective
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
## Assigned Context
- **Exploration Angle**: ${angle}
- **Task Description**: ${task_description}
- **Session ID**: ${session_id}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
## Exploration Strategy (${angle} focus)
**Step 1: Structural Scan** (Bash)
- get_modules_by_depth.sh -> identify modules related to ${angle}
- find/rg -> locate files relevant to ${angle} aspect
- Analyze imports/dependencies from ${angle} perspective
**Step 2: Semantic Analysis** (Gemini CLI)
- How does existing code handle ${angle} concerns?
- What patterns are used for ${angle}?
- Where would new code integrate from ${angle} viewpoint?
**Step 3: Write Output**
- Consolidate ${angle} findings into JSON
- Identify ${angle}-specific clarification needs
## Expected Output
**File**: ${sessionFolder}/exploration-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
- project_structure: Modules/architecture relevant to ${angle}
- relevant_files: Files affected from ${angle} perspective
**MANDATORY**: Every file MUST use structured object format with ALL required fields:
[{path: "src/file.ts", relevance: 0.85, rationale: "Contains AuthService.login() - entry point for JWT token generation", role: "modify_target", discovery_source: "bash-scan", key_symbols: ["AuthService", "login"]}]
- **rationale** (required): Specific selection basis tied to ${angle} topic (>10 chars, not generic)
- **role** (required): modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only
- **discovery_source** (recommended): bash-scan|cli-analysis|ace-search|dependency-trace|manual
- **key_symbols** (recommended): Key functions/classes/types in the file relevant to the task
- Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
- patterns: ${angle}-related patterns to follow
- dependencies: Dependencies relevant to ${angle}
- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
- constraints: ${angle}-specific limitations/conventions
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
- _metadata.exploration_angle: "${angle}"
## Success Criteria
- [ ] Schema obtained via cat explore-json-schema.json
- [ ] get_modules_by_depth.sh executed
- [ ] At least 3 relevant files identified with ${angle} rationale
- [ ] Patterns are actionable (code examples, not generic advice)
- [ ] Integration points include file:line locations
- [ ] Constraints are project-specific to ${angle}
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/exploration-${angle}.json
Return: 2-3 sentence summary of ${angle} findings
`
)
);
// B.2.3 Generate Manifest after all complete
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`).split('\n').filter(f => f.trim());
const explorationManifest = {
session_id,
task_description,
timestamp: new Date().toISOString(),
complexity,
exploration_count: selectedAngles.length,
angles_explored: selectedAngles,
explorations: explorationFiles.map(file => {
const data = JSON.parse(Read(file));
return { angle: data._metadata.exploration_angle, file: file.split('/').pop(), path: file, index: data._metadata.exploration_index };
})
};
Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2));
```
#### Step B.3: Invoke Context-Search Agent
**Only execute after Step B.2 completes**
```javascript
// Load user intent from planning-notes.md (from Phase 1)
const planningNotesPath = `.workflow/active/${session_id}/planning-notes.md`;
let userIntent = { goal: task_description, key_constraints: "None specified" };
if (file_exists(planningNotesPath)) {
const notesContent = Read(planningNotesPath);
const goalMatch = notesContent.match(/\*\*GOAL\*\*:\s*(.+)/);
const constraintsMatch = notesContent.match(/\*\*KEY_CONSTRAINTS\*\*:\s*(.+)/);
if (goalMatch) userIntent.goal = goalMatch[1].trim();
if (constraintsMatch) userIntent.key_constraints = constraintsMatch[1].trim();
}
Task(
subagent_type="context-search-agent",
run_in_background=false,
description="Gather comprehensive context for plan",
prompt=`
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution with priority sorting
## Session Information
- **Session ID**: ${session_id}
- **Task Description**: ${task_description}
- **Output Path**: .workflow/active/${session_id}/.process/context-package.json
## User Intent (from Phase 1 - Planning Notes)
**GOAL**: ${userIntent.goal}
**KEY_CONSTRAINTS**: ${userIntent.key_constraints}
This is the PRIMARY context source - all subsequent analysis must align with user intent.
## Exploration Input (from Step B.2)
- **Manifest**: ${sessionFolder}/explorations-manifest.json
- **Exploration Count**: ${explorationManifest.exploration_count}
- **Angles**: ${explorationManifest.angles_explored.join(', ')}
- **Complexity**: ${complexity}
## Mission
Execute complete context-search-agent workflow for implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**:
- Read and parse .workflow/project-tech.json. Use its overview section as the foundational project_context.
- Read and parse .workflow/project-guidelines.json. Load conventions, constraints, and learnings into a project_guidelines section.
- If files don't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
3. **Foundation**: Initialize CodexLens, get project structure, load docs
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
### Phase 2: Multi-Source Context Discovery
Execute all discovery tracks (WITH USER INTENT INTEGRATION):
- **Track -1**: User Intent & Priority Foundation (EXECUTE FIRST)
- Load user intent (GOAL, KEY_CONSTRAINTS) from session input
- Map user requirements to codebase entities (files, modules, patterns)
- Establish baseline priority scores based on user goal alignment
- Output: user_intent_mapping.json with preliminary priority scores
- **Track 0**: Exploration Synthesis (load explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
- **Track 3**: Web examples (use Exa MCP for unfamiliar tech/APIs)
- **Track 4**: Codebase analysis (5-layer discovery: files, content, patterns, deps, config/tests)
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 5-source data** (including Track -1): Merge findings from all sources
- Priority order: User Intent > Archive > Docs > Exploration > Code > Web
- **Prioritize the context from project-tech.json** for architecture and tech stack unless code analysis reveals it's outdated
3. **Context Priority Sorting**:
a. Combine scores from Track -1 (user intent alignment) + relevance scores + exploration critical_files
b. Classify files into priority tiers:
- **Critical** (score >= 0.85): Directly mentioned in user goal OR exploration critical_files
- **High** (0.70-0.84): Key dependencies, patterns required for goal
- **Medium** (0.50-0.69): Supporting files, indirect dependencies
- **Low** (< 0.50): Contextual awareness only
c. Generate dependency_order: Based on dependency graph + user goal sequence
d. Document sorting_rationale: Explain prioritization logic
4. **Populate project_context**: Directly use the overview from project-tech.json
5. **Populate project_guidelines**: Load from project-guidelines.json
6. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
7. Perform conflict detection with risk assessment
8. **Inject historical conflicts** from archive analysis into conflict_detection
9. **Generate prioritized_context section**:
{
"prioritized_context": {
"user_intent": { "goal": "...", "scope": "...", "key_constraints": ["..."] },
"priority_tiers": {
"critical": [{ "path": "...", "relevance": 0.95, "rationale": "..." }],
"high": [...], "medium": [...], "low": [...]
},
"dependency_order": ["module1", "module2", "module3"],
"sorting_rationale": "Based on user goal alignment, exploration critical files, and dependency graph"
}
}
10. Generate and validate context-package.json with prioritized_context field
## Output Requirements
Complete context-package.json with:
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
- **project_context**: description, technology_stack, architecture, key_components (from project-tech.json)
- **project_guidelines**: {conventions, constraints, quality_rules, learnings} (from project-guidelines.json)
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
- **dependencies**: {internal[], external[]} with dependency graph
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[]}
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights}
- **prioritized_context**: {user_intent, priority_tiers{critical, high, medium, low}, dependency_order[], sorting_rationale}
## Quality Validation
Before completion verify:
- [ ] Valid JSON format with all required fields
- [ ] File relevance accuracy >80%
- [ ] Dependency graph complete (max 2 transitive levels)
- [ ] Conflict risk level calculated correctly
- [ ] No sensitive data exposed
- [ ] Total files <= 50 (prioritize high-relevance)
Execute autonomously following agent documentation.
Report completion with statistics.
`
)
```
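The priority-tier thresholds in the prompt above can be sketched as a small classifier. This is illustrative only; the file entries and field names are assumptions, not part of the agent contract:

```javascript
// Illustrative sketch of the priority-tier classification described in the
// prompt: critical >= 0.85, high 0.70-0.84, medium 0.50-0.69, low < 0.50.
function classifyTiers(files) {
  const tiers = { critical: [], high: [], medium: [], low: [] };
  for (const f of files) {
    if (f.relevance >= 0.85) tiers.critical.push(f);
    else if (f.relevance >= 0.70) tiers.high.push(f);
    else if (f.relevance >= 0.50) tiers.medium.push(f);
    else tiers.low.push(f);
  }
  return tiers;
}

// Hypothetical file entries for demonstration
const tiers = classifyTiers([
  { path: "src/auth.ts", relevance: 0.95 },
  { path: "src/utils.ts", relevance: 0.72 },
  { path: "docs/notes.md", relevance: 0.40 },
]);
console.log(tiers.critical.length, tiers.high.length, tiers.low.length); // 1 1 1
```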
#### Step B.4: Output Verification
After agent completes, verify output:
```javascript
// Verify file was created
const outputPath = `.workflow/active/${session_id}/.process/context-package.json`;
if (!file_exists(outputPath)) {
throw new Error("Agent failed to generate context-package.json");
}
// Verify exploration_results included
const pkg = JSON.parse(Read(outputPath));
if (pkg.exploration_results?.exploration_count > 0) {
  console.log(`Exploration results aggregated: ${pkg.exploration_results.exploration_count} angles`);
} else {
  console.warn("context-package.json is missing exploration_results - verify Step B.2 outputs");
}
```
---
**Input**: `testSessionId` from Phase 1
**Parse Output**:
```json
[
{"content": "Phase 1: Test Generation", "status": "in_progress"},
{"content": " -> Create test session", "status": "completed"},
{"content": " -> Gather test context", "status": "in_progress"},
{"content": " -> Load source/codebase context", "status": "in_progress"},
{"content": " -> Analyze test coverage", "status": "pending"},
{"content": " -> Generate context package", "status": "pending"},
{"content": " -> Test analysis (Gemini)", "status": "pending"},
{"content": " -> Generate test tasks", "status": "pending"},
{"content": "Phase 2: Test Cycle Execution", "status": "pending"}
]
```
```json
[
{"content": "Phase 1: Test Generation", "status": "in_progress"},
{"content": " -> Create test session", "status": "completed"},
{"content": " -> Gather test context", "status": "completed"},
{"content": " -> Test analysis (Gemini)", "status": "pending"},
{"content": " -> Generate test tasks", "status": "pending"},
{"content": "Phase 2: Test Cycle Execution", "status": "pending"}
]
```
- Generate multi-layered test requirements (L0-L3)
- Scan for AI code issues
## Core Philosophy
- **Coverage-Driven**: Focus on identified test gaps from context analysis
- **Pattern-Based**: Learn from existing tests and project conventions
- **Gemini-Powered**: Use Gemini for test requirement analysis and strategy design
- **Single-Round Analysis**: Comprehensive test analysis in one execution
- **No Code Generation**: Strategy and planning only, actual test generation happens in task execution
## Core Responsibilities
- Coordinate test analysis workflow using cli-execution-agent
- Validate test-context-package.json prerequisites
- Execute Gemini analysis via agent for test strategy generation
- Validate agent outputs (gemini-test-analysis.md, TEST_ANALYSIS_RESULTS.md)
## Execution
### Step 1.3: Test Generation Analysis
#### Phase 1: Context Preparation
**Command prepares session context and validates prerequisites.**
1. **Session Validation**
- Load `.workflow/active/{test_session_id}/workflow-session.json`
- Verify test session type is "test-gen"
- Extract source session reference
2. **Context Package Validation**
- Read `test-context-package.json`
- Validate required sections: metadata, source_context, test_coverage, test_framework
- Extract coverage gaps and framework details
3. **Strategy Determination**
- **Simple** (1-3 files): Single Gemini analysis
- **Medium** (4-6 files): Comprehensive analysis
- **Complex** (>6 files): Modular analysis approach
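The strategy buckets above can be sketched as a simple selector (a hedged sketch; the real determination may weigh more than file count):

```javascript
// Maps file count to analysis strategy per the documented buckets:
// 1-3 simple, 4-6 medium, >6 complex.
function analysisStrategy(fileCount) {
  if (fileCount <= 3) return "simple";   // single Gemini analysis
  if (fileCount <= 6) return "medium";   // comprehensive analysis
  return "complex";                      // modular analysis approach
}

console.log(analysisStrategy(2), analysisStrategy(5), analysisStrategy(9)); // simple medium complex
```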
#### Phase 2: Test Analysis Execution
**Purpose**: Analyze test coverage gaps and generate comprehensive test strategy.
```javascript
Task(
subagent_type="cli-execution-agent",
run_in_background=false,
description="Analyze test coverage gaps and generate test strategy",
prompt=`
## TASK OBJECTIVE
Analyze test requirements and generate comprehensive test generation strategy using Gemini CLI
## EXECUTION CONTEXT
Session: {test_session_id}
Source Session: {source_session_id}
Working Dir: .workflow/active/{test_session_id}/.process
Template: ~/.ccw/workflows/cli-templates/prompts/test/test-concept-analysis.txt
## EXECUTION STEPS
1. Execute Gemini analysis:
ccw cli -p "..." --tool gemini --mode write --rule test-test-concept-analysis --cd .workflow/active/{test_session_id}/.process
2. Generate TEST_ANALYSIS_RESULTS.md:
Synthesize gemini-test-analysis.md into standardized format for task generation
Include: coverage assessment, test framework, test requirements, generation strategy, implementation targets
## EXPECTED OUTPUTS
1. gemini-test-analysis.md - Raw Gemini analysis
2. TEST_ANALYSIS_RESULTS.md - Standardized test requirements document
## QUALITY VALIDATION
- Both output files exist and are complete
- All required sections present in TEST_ANALYSIS_RESULTS.md
- Test requirements are actionable and quantified
- Test scenarios cover happy path, errors, edge cases
- Dependencies and mocks clearly identified
`
)
```
**Output Files**:
- `.workflow/active/{test_session_id}/.process/gemini-test-analysis.md`
- `.workflow/active/{test_session_id}/.process/TEST_ANALYSIS_RESULTS.md`
#### Phase 3: Output Validation
- Verify `gemini-test-analysis.md` exists and is complete
- Validate `TEST_ANALYSIS_RESULTS.md` generated by agent
- Check required sections present
- Confirm test requirements are actionable
**Input**:
- `testSessionId` from Phase 1
- Quality Assurance Criteria
- Success Criteria
**Note**: Detailed specifications for project types, L0-L3 layers, and AI issue detection are defined in `/workflow:tools:test-concept-enhanced`.
## Error Handling
### Validation Errors
| Error | Resolution |
|-------|------------|
| Missing context package | Run test-context-gather first |
| No coverage gaps | Skip test generation, proceed to execution |
| No test framework detected | Configure test framework |
| Invalid source session | Complete implementation first |
### Execution Errors
| Error | Recovery |
|-------|----------|
| Gemini timeout | Reduce scope, analyze by module |
| Output incomplete | Retry with focused analysis |
| No output file | Check directory permissions |
**Fallback Strategy**: Generate basic TEST_ANALYSIS_RESULTS.md from context package if Gemini fails
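A minimal sketch of that fallback path, assuming a context package exposing `test_framework` and `test_coverage.gaps` fields (field names are illustrative, not guaranteed by the actual package schema):

```javascript
// Builds a basic TEST_ANALYSIS_RESULTS document directly from the context
// package when Gemini analysis fails. Field names are assumptions.
function fallbackAnalysis(pkg) {
  const gaps = (pkg.test_coverage && pkg.test_coverage.gaps) || [];
  return [
    "# TEST_ANALYSIS_RESULTS (fallback)",
    `Framework: ${pkg.test_framework || "unknown"}`,
    "## Coverage Gaps",
    ...gaps.map((g) => `- ${g}`),
  ].join("\n");
}

const md = fallbackAnalysis({ test_framework: "jest", test_coverage: { gaps: ["src/auth.ts"] } });
console.log(md.includes("jest"), md.includes("src/auth.ts")); // true true
```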
## Output
# Phase 4: Test Task Generate (test-task-generate)
Generate test task JSONs via test-action-planning-agent.
## Objective
### Step 1.4: Generate Test Tasks
#### Phase 1: Context Preparation
**Purpose**: Assemble test session paths, load test analysis context, and create test-planning-notes.md.
**Execution Steps**:
1. Parse `--session` flag to get test session ID
2. Load `workflow-session.json` for session metadata
3. Verify `TEST_ANALYSIS_RESULTS.md` exists (from test-concept-enhanced)
4. Load `test-context-package.json` for coverage data
5. Create `test-planning-notes.md` with initial context
**After Phase 1**: Initialize test-planning-notes.md
```javascript
// Create test-planning-notes.md with N+1 context support
const testPlanningNotesPath = `.workflow/active/${testSessionId}/test-planning-notes.md`
const sessionMetadata = JSON.parse(Read(`.workflow/active/${testSessionId}/workflow-session.json`))
const testAnalysis = Read(`.workflow/active/${testSessionId}/.process/TEST_ANALYSIS_RESULTS.md`)
const sourceSessionId = sessionMetadata.source_session_id || 'N/A'
// Extract key info from TEST_ANALYSIS_RESULTS.md
const projectType = testAnalysis.match(/Project Type:\s*(.+)/)?.[1] || 'Unknown'
const testFramework = testAnalysis.match(/Test Framework:\s*(.+)/)?.[1] || 'Unknown'
const coverageTarget = testAnalysis.match(/Coverage Target:\s*(.+)/)?.[1] || '80%'
Write(testPlanningNotesPath, `# Test Planning Notes
**Session**: ${testSessionId}
**Source Session**: ${sourceSessionId}
**Created**: ${new Date().toISOString()}
## Test Intent (Phase 1)
- **PROJECT_TYPE**: ${projectType}
- **TEST_FRAMEWORK**: ${testFramework}
- **COVERAGE_TARGET**: ${coverageTarget}
- **SOURCE_SESSION**: ${sourceSessionId}
---
## Context Findings (Phase 1)
### Files with Coverage Gaps
(Extracted from TEST_ANALYSIS_RESULTS.md)
### Test Framework & Conventions
- Framework: ${testFramework}
- Coverage Target: ${coverageTarget}
---
## Gemini Enhancement (Phase 1.5)
(To be filled by Gemini analysis)
### Enhanced Test Suggestions
- **L1 (Unit)**: (Pending)
- **L2.1 (Integration)**: (Pending)
- **L2.2 (API Contracts)**: (Pending)
- **L2.4 (External APIs)**: (Pending)
- **L2.5 (Failure Modes)**: (Pending)
### Gemini Analysis Summary
(Pending enrichment)
---
## Consolidated Test Requirements (Phase 2 Input)
1. [Context] ${testFramework} framework conventions
2. [Context] ${coverageTarget} coverage target
---
## Task Generation (Phase 2)
(To be filled by test-action-planning-agent)
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
### Deferred
- [ ] (For N+1)
`)
```
---
#### Phase 1.5: Gemini Test Enhancement
**Purpose**: Enrich test specifications with comprehensive test suggestions and record to test-planning-notes.md.
**Execution Steps**:
1. Load TEST_ANALYSIS_RESULTS.md from `.workflow/active/{test-session-id}/.process/`
2. Invoke `cli-execution-agent` with Gemini for test enhancement analysis
3. Use template: `~/.ccw/workflows/cli-templates/prompts/test-suggestions-enhancement.txt`
4. Gemini generates enriched test suggestions across L1-L3 layers -> gemini-enriched-suggestions.md
5. Record enriched suggestions to test-planning-notes.md (Gemini Enhancement section)
```javascript
Task(
subagent_type="cli-execution-agent",
run_in_background=false,
description="Enhance test specifications with Gemini analysis",
prompt=`
## Task Objective
Analyze TEST_ANALYSIS_RESULTS.md and generate enriched test suggestions using Gemini CLI
## Input Files
- Read: .workflow/active/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md
- Extract: Project type, test framework, coverage gaps, identified files
## Gemini Analysis Execution
Execute Gemini with comprehensive test enhancement prompt:
ccw cli -p "[comprehensive test prompt]" --tool gemini --mode analysis --rule analysis-test-strategy-enhancement --cd .workflow/active/{test-session-id}/.process
## Expected Output
Generate gemini-enriched-suggestions.md with structured test enhancements:
- L1 (Unit Tests): Edge cases, boundaries, error paths
- L2.1 (Integration): Module interactions, dependency injection
- L2.2 (API Contracts): Request/response, validation, error responses
- L2.4 (External APIs): Mock strategies, failure scenarios, timeouts
- L2.5 (Failure Modes): Exception handling, error propagation, recovery
## Validation
- gemini-enriched-suggestions.md created and complete
- Suggestions are actionable and specific (not generic)
- All L1-L3 layers covered
`
)
```
**Output**: gemini-enriched-suggestions.md (complete Gemini analysis)
**After Phase 1.5**: Update test-planning-notes.md with Gemini enhancement findings
```javascript
// Read enriched suggestions from gemini-enriched-suggestions.md
const enrichedSuggestionsPath = `.workflow/active/${testSessionId}/.process/gemini-enriched-suggestions.md`
const enrichedSuggestions = Read(enrichedSuggestionsPath)
// Update Phase 1.5 section in test-planning-notes.md with full enriched suggestions
Edit(testPlanningNotesPath, {
old: '## Gemini Enhancement (Phase 1.5)\n(To be filled by Gemini analysis)\n\n### Enhanced Test Suggestions\n- **L1 (Unit)**: (Pending)\n- **L2.1 (Integration)**: (Pending)\n- **L2.2 (API Contracts)**: (Pending)\n- **L2.4 (External APIs)**: (Pending)\n- **L2.5 (Failure Modes)**: (Pending)\n\n### Gemini Analysis Summary\n(Pending enrichment)',
new: `## Gemini Enhancement (Phase 1.5)
**Analysis Timestamp**: ${new Date().toISOString()}
**Template**: test-suggestions-enhancement.txt
**Output File**: .process/gemini-enriched-suggestions.md
### Enriched Test Suggestions (Complete Gemini Analysis)
${enrichedSuggestions}
### Gemini Analysis Summary
- **Status**: Enrichment complete
- **Layers Covered**: L1, L2.1, L2.2, L2.4, L2.5
- **Focus Areas**: API contracts, integration patterns, error scenarios, edge cases
- **Output Stored**: Full analysis in gemini-enriched-suggestions.md`
})
// Append Gemini constraints to consolidated test requirements
const geminiConstraints = [
'[Gemini] Implement all suggested L1 edge cases and boundary tests',
'[Gemini] Apply L2.1 module interaction patterns from analysis',
'[Gemini] Follow L2.2 API contract test matrix from analysis',
'[Gemini] Use L2.4 external API mock strategies from analysis',
'[Gemini] Cover L2.5 error scenarios from analysis'
]
const currentNotes = Read(testPlanningNotesPath)
// Number the Gemini constraints after the existing numbered requirements
const constraintCount = (currentNotes.match(/^\d+\./gm) || []).length
// Match the full heading + existing items so they are not duplicated on edit
Edit(testPlanningNotesPath, {
  old: `## Consolidated Test Requirements (Phase 2 Input)
1. [Context] ${testFramework} framework conventions
2. [Context] ${coverageTarget} coverage target`,
  new: `## Consolidated Test Requirements (Phase 2 Input)
1. [Context] ${testFramework} framework conventions
2. [Context] ${coverageTarget} coverage target
${geminiConstraints.map((c, i) => `${i + constraintCount + 1}. ${c}`).join('\n')}`
})
```
---
#### Phase 2: Test Document Generation (Agent)
**Agent Specialization**: This invokes `@test-action-planning-agent` - a specialized variant of action-planning-agent with:
- Progressive L0-L3 test layers (Static, Unit, Integration, E2E)
- AI code issue detection (L0.5) with severity levels
- Project type templates (React, Node API, CLI, Library, Monorepo)
- Test anti-pattern detection with quality gates
- Layer completeness thresholds and coverage targets
**See**: `.claude/agents/test-action-planning-agent.md` for complete test specifications.
```javascript
Task(
subagent_type="test-action-planning-agent",
run_in_background=false,
description="Generate test planning documents",
prompt=`
## TASK OBJECTIVE
Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for test workflow session
IMPORTANT: This is TEST PLANNING ONLY - you are generating planning documents, NOT executing tests.
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{test-session-id}/workflow-session.json
- TEST_ANALYSIS_RESULTS: .workflow/active/{test-session-id}/.process/TEST_ANALYSIS_RESULTS.md (REQUIRED)
- Test Planning Notes: .workflow/active/{test-session-id}/test-planning-notes.md (REQUIRED - contains Gemini enhancement findings)
- Test Context Package: .workflow/active/{test-session-id}/.process/test-context-package.json
- Context Package: .workflow/active/{test-session-id}/.process/context-package.json
- Enriched Suggestions: .workflow/active/{test-session-id}/.process/gemini-enriched-suggestions.md (for reference)
- Source Session Summaries: .workflow/active/{source-session-id}/.summaries/IMPL-*.md (if exists)
Output:
- Task Dir: .workflow/active/{test-session-id}/.task/
- IMPL_PLAN: .workflow/active/{test-session-id}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/{test-session-id}/TODO_LIST.md
## CONTEXT METADATA
Session ID: {test-session-id}
Workflow Type: test_session
Source Session: {source-session-id} (if exists)
MCP Capabilities: {exa_code, exa_web, code_index}
## CONSOLIDATED CONTEXT
**From test-planning-notes.md**:
- Test Intent: Project type, test framework, coverage target
- Context Findings: Coverage gaps, file analysis
- Gemini Enhancement: Complete enriched test suggestions (L1-L3 layers)
* Full analysis embedded in planning-notes.md
* API contracts, integration patterns, error scenarios
- Consolidated Requirements: Combined constraints from all phases
## YOUR SPECIFICATIONS
You are @test-action-planning-agent. Your complete test specifications are defined in:
.claude/agents/test-action-planning-agent.md
This includes:
- Progressive Test Layers (L0-L3) with L0.1-L0.5, L1.1-L1.5, L2.1-L2.5, L3.1-L3.4
- AI Code Issue Detection (L0.5) with 7 categories and severity levels
- Project Type Detection & Templates (6 project types)
- Test Anti-Pattern Detection (5 categories)
- Layer Completeness & Quality Metrics (thresholds and gate decisions)
- Task JSON structure requirements (minimum 4 tasks)
- Quality validation rules
**Follow your specification exactly** when generating test task JSONs.
## EXPECTED DELIVERABLES
1. Test Task JSON Files (.task/IMPL-*.json) - Minimum 4:
- IMPL-001.json: Test generation (L1-L3 layers per spec)
- IMPL-001.3-validation.json: Code validation gate (L0 + AI issues per spec)
- IMPL-001.5-review.json: Test quality gate (anti-patterns + coverage per spec)
- IMPL-002.json: Test execution & fix cycle
2. IMPL_PLAN.md: Test implementation plan with quality gates
3. TODO_LIST.md: Hierarchical task list with test phase indicators
## SUCCESS CRITERIA
- All test planning documents generated successfully
- Task count: minimum 4 (expandable for complex projects)
- Test framework: {detected from project}
- Coverage targets: L0 zero errors, L1 80%+, L2 70%+
- L0-L3 layers explicitly defined per spec
- AI issue detection configured per spec
- Quality gates with measurable thresholds
`
)
```
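The coverage targets listed in the prompt (L0 zero errors, L1 80%+, L2 70%+) translate to a simple gate check, sketched here with assumed metric field names:

```javascript
// Gate decision for the documented coverage targets; the metrics object
// shape (l0Errors, l1Coverage, l2Coverage) is an assumption for illustration.
function gatePasses(metrics) {
  return (
    metrics.l0Errors === 0 &&      // L0: zero static/validation errors
    metrics.l1Coverage >= 0.80 &&  // L1: unit coverage target
    metrics.l2Coverage >= 0.70     // L2: integration coverage target
  );
}

console.log(gatePasses({ l0Errors: 0, l1Coverage: 0.85, l2Coverage: 0.72 })); // true
```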
**Input**: `testSessionId` from Phase 1
**Note**: test-action-planning-agent generates test-specific IMPL_PLAN.md and task JSONs based on TEST_ANALYSIS_RESULTS.md.
**Expected Output** (minimum 4 tasks):
- `.workflow/active/[testSessionId]/IMPL_PLAN.md` exists
- `.workflow/active/[testSessionId]/TODO_LIST.md` exists
## Test-Specific Execution Modes
### Test Generation (IMPL-001)
- **Agent Mode** (default): @code-developer generates tests within agent context
- **CLI Mode**: Use CLI tools when `command` field present in implementation_approach
### Test Execution & Fix (IMPL-002+)
- **Agent Mode** (default): Gemini diagnosis -> agent applies fixes
- **CLI Mode**: Gemini diagnosis -> CLI applies fixes (when `command` field present)
**CLI Tool Selection**: Determined semantically from user's task description (e.g., "use Codex for fixes")
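The agent/CLI mode switch described above amounts to checking for a `command` field on the task; a minimal sketch (task shape assumed for illustration):

```javascript
// Selects execution mode per the rule above: CLI mode when a `command`
// field is present in implementation_approach, agent mode otherwise.
function executionMode(task) {
  return task.implementation_approach && task.implementation_approach.command
    ? "cli"
    : "agent";
}

console.log(
  executionMode({ implementation_approach: { command: "ccw cli -p ..." } }),
  executionMode({})
); // cli agent
```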
## Output Directory Structure
```
.workflow/active/WFS-test-[session]/
|-- workflow-session.json # Session metadata
|-- IMPL_PLAN.md # Test implementation plan
|-- TODO_LIST.md # Task checklist
|-- test-planning-notes.md # Consolidated planning notes with full Gemini analysis
|-- .task/
| |-- IMPL-001.json # Test generation (L1-L3)
| |-- IMPL-001.3-validation.json # Code validation gate (L0 + AI)
| |-- IMPL-001.5-review.json # Test quality gate
| +-- IMPL-002.json # Test execution & fix cycle
+-- .process/
    |-- test-context-package.json           # Test coverage and patterns
    |-- gemini-enriched-suggestions.md      # Gemini-generated test enhancements
    +-- TEST_ANALYSIS_RESULTS.md            # L0-L3 requirements (from test-concept-enhanced)
```
## Output
- **Files**: IMPL_PLAN.md, IMPL-*.json (4+), TODO_LIST.md