Add quality standards and team command design patterns documentation

- Introduced a new quality standards document outlining assessment criteria for team command .md files, including completeness, pattern compliance, integration, and consistency dimensions.
- Established quality gates and issue classification for errors, warnings, and informational notes.
- Created a comprehensive team command design patterns document detailing infrastructure and collaboration patterns, including message bus integration, YAML front matter requirements, task lifecycle, five-phase execution structure, and error handling.
- Included a pattern selection guide that maps collaboration scenarios to appropriate team interaction patterns.
Author: catlog22
Date: 2026-02-13 23:39:06 +08:00
Parent: 5ad7a954d4
Commit: cdb240d2c2
61 changed files with 5181 additions and 14525 deletions


@@ -1,328 +0,0 @@
---
name: workflow-brainstorm-auto-parallel
description: Parallel brainstorming automation with dynamic role selection and concurrent execution across multiple perspectives. Triggers on "workflow:brainstorm:auto-parallel".
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---
# Workflow Brainstorm Auto-Parallel
Parallel brainstorming automation orchestrating interactive framework generation, concurrent multi-role analysis, and synthesis integration to produce comprehensive guidance specifications.
## Architecture Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ Auto-Parallel Orchestrator (SKILL.md) │
│ → Pure coordinator: Execute phases, parse outputs, manage agents│
└───────────────┬─────────────────────────────────────────────────┘
     ┌───────────┼───────────┬───────────┐
     ↓           ↓           ↓           ↓
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Phase 0 │ │ Phase 1 │ │ Phase 2 │ │ Phase 3 │
│  Parse  │ │Framework│ │Parallel │ │Synthesis│
│ Params  │ │Generate │ │  Roles  │ │Integrate│
└─────────┘ └─────────┘ └─────────┘ └─────────┘
     ↓           ↓           ↓           ↓
  count,     guidance-     N role    synthesis-
style-skill specification analyses specification
```
## Key Design Principles
1. **Pure Orchestrator**: Execute phases in order (Phases 1 and 3 sequential; Phase 2 parallel)
2. **Auto-Continue**: All phases run autonomously without user intervention between phases
3. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait → close_agent
4. **Progressive Phase Loading**: Phase docs are read on-demand when phase executes
5. **Parallel Execution**: Phase 2 launches N role agents concurrently
6. **Role Path Loading**: Subagent roles loaded via path reference in MANDATORY FIRST STEPS
## Auto Mode
When `--yes` or `-y`: Auto-select recommended roles, skip all clarification questions, use default answers.
## Execution Flow
```
Parameter Parsing:
├─ Extract --count N (default: 3, max: 9)
├─ Extract --style-skill package-name (optional, for ui-designer)
└─ Validate style SKILL package exists
Phase 1: Interactive Framework Generation
└─ Ref: phases/01-interactive-framework.md
├─ Sub-phases: Phase 0-5 (Context → Topic → Roles → Questions → Conflicts → Spec)
├─ Output: guidance-specification.md + workflow-session.json
└─ Parse: selected_roles[], session_id
Phase 2: Parallel Role Analysis
└─ Ref: phases/02-parallel-role-analysis.md
├─ Spawn N conceptual-planning-agent (concurrent execution)
├─ For each role: Execute framework-based analysis
├─ Optional: ui-designer appends --style-skill if provided
├─ Lifecycle: spawn_agent → batch wait → close_agent
└─ Output: [role]/analysis*.md (one per role)
Phase 3: Synthesis Integration
└─ Ref: phases/03-synthesis-integration.md
├─ Spawn cross-role analysis agent
├─ User interaction: Enhancement selection + clarifications
├─ Spawn parallel update agents (one per role)
├─ Lifecycle: spawn_agent → wait → close_agent for each agent
└─ Output: synthesis-specification.md + updated analyses
Return:
└─ Summary with session info and next steps
```
**Phase Reference Documents** (read on-demand when phase executes):
| Phase | Document | Purpose |
|-------|----------|---------|
| 1 | [phases/01-interactive-framework.md](phases/01-interactive-framework.md) | Interactive clarification generating confirmed guidance specification through role-based analysis |
| 2 | [phases/02-parallel-role-analysis.md](phases/02-parallel-role-analysis.md) | Unified role-specific analysis generation with interactive context gathering and concurrent execution |
| 3 | [phases/03-synthesis-integration.md](phases/03-synthesis-integration.md) | Cross-role synthesis integration with intelligent Q&A and targeted updates |
## Core Rules
1. **Start Immediately**: First action is parameter parsing
2. **No Preliminary Analysis**: Do not analyze the topic before Phase 1; the artifacts phase handles all analysis
3. **Parse Every Output**: Extract selected_roles from workflow-session.json after Phase 1
4. **Auto-Continue**: Execute next pending phase automatically after previous completes
5. **Track Progress**: Phase execution state managed through workflow-session.json
6. **⚠️ CRITICAL: DO NOT STOP**: This is a continuous multi-phase workflow; after completing each phase, immediately proceed to the next
7. **Parallel Execution**: Phase 2 spawns multiple agent tasks simultaneously for concurrent execution
8. **Progressive Phase Loading**: Read phase docs ONLY when that phase is about to execute
9. **Explicit Lifecycle**: Always close_agent after wait completes to free resources
## Usage
```bash
workflow:brainstorm:auto-parallel "<topic>" [--count N] [--style-skill package-name]
```
**Recommended Structured Format**:
```bash
workflow:brainstorm:auto-parallel "GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N] [--style-skill package-name]
```
**Parameters** (a parsing sketch follows this list):
- `topic` (required): Topic or challenge description (structured format recommended)
- `--count N` (optional): Number of roles to select (default: 3, max: 9)
- `--style-skill package-name` (optional): Style SKILL package to load for ui-designer (located at `.codex/skills/style-{package-name}/`)
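A minimal parsing sketch for these parameters (the `parseBrainstormArgs` helper and its token handling are illustrative, not part of the command contract):
```javascript
// Illustrative parser: "<topic>" [--count N] [--style-skill pkg] [--yes|-y]
function parseBrainstormArgs(argv) {
  const args = { topic: null, count: 3, styleSkill: null, autoYes: false };
  for (let i = 0; i < argv.length; i++) {
    const token = argv[i];
    if (token === '--count') {
      // Clamp to the documented default (3) and maximum (9)
      args.count = Math.min(parseInt(argv[++i], 10) || 3, 9);
    } else if (token === '--style-skill') {
      args.styleSkill = argv[++i]; // resolved later under .codex/skills/style-{name}/
    } else if (token === '--yes' || token === '-y') {
      args.autoYes = true; // auto mode: skip clarifications, use defaults
    } else if (args.topic === null) {
      args.topic = token; // first positional argument is the topic
    }
  }
  if (!args.topic) throw new Error('topic is required');
  return args;
}
```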
## Data Flow
### Phase 0 → Phase 1
**Input**:
- `topic`: User-provided topic or challenge description
- `count`: Number of roles to select (parsed from --count parameter)
- `style_skill_package`: Style SKILL package name (parsed from --style-skill parameter)
**Output**: None (in-memory variables)
### Phase 1 → Phase 2
**Input**: `topic`, `count`, `style_skill_package`
**Output**:
- `session_id`: Workflow session identifier (WFS-{topic-slug})
- `selected_roles[]`: Array of selected role names
- `guidance-specification.md`: Framework content
- `workflow-session.json`: Session metadata
**Parsing**:
```javascript
// Read workflow-session.json after Phase 1
// `topicSlug` is the slug chosen during Phase 1 session creation (WFS-{topic-slug})
const session_data = JSON.parse(Read(`${projectRoot}/.workflow/active/WFS-${topicSlug}/workflow-session.json`));
const selected_roles = session_data.selected_roles;
const session_id = session_data.session_id;
const style_skill_package = session_data.style_skill_package || null;
```
### Phase 2 → Phase 3
**Input**: `session_id`, `selected_roles[]`, `style_skill_package`
**Output**:
- `[role]/analysis*.md`: One analysis per selected role
- `.superdesign/design_iterations/`: UI design artifacts (if --style-skill provided)
**Validation**:
```javascript
// Verify all role analyses were created
for (const role of selected_roles) {
  const analysis_path = `${brainstorm_dir}/${role}/analysis.md`;
  if (!exists(analysis_path)) {
    throw new Error(`Missing analysis for ${role}`);
  }
}
```
### Phase 3 → Completion
**Input**: `session_id`, all role analyses, guidance-specification.md
**Output**:
- `synthesis-specification.md`: Integrated cross-role analysis
- Updated `[role]/analysis*.md` with clarifications
**Validation**:
```javascript
const synthesis_path = `${brainstorm_dir}/synthesis-specification.md`;
if (!exists(synthesis_path)) {
  throw new Error("Synthesis generation failed");
}
```
## Subagent API Reference
### spawn_agent
Create a new subagent with task assignment.
```javascript
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
## TASK CONTEXT
${taskContext}
## DELIVERABLES
${deliverables}
`
})
```
### wait
Get results from subagent (only way to retrieve results).
```javascript
const result = wait({
ids: [agentId],
timeout_ms: 600000 // 10 minutes
})
if (result.timed_out) {
// Handle timeout - can continue waiting or send_input to prompt completion
}
// Check completion status
if (result.status[agentId].completed) {
const output = result.status[agentId].completed;
}
```
### send_input
Continue interaction with active subagent (for clarification or follow-up).
```javascript
send_input({
id: agentId,
message: `
## CLARIFICATION ANSWERS
${answers}
## NEXT STEP
Continue with analysis generation.
`
})
```
### close_agent
Clean up subagent resources (irreversible).
```javascript
close_agent({ id: agentId })
```
## Session Management
**⚡ FIRST ACTION**: Check `{projectRoot}/.workflow/active/` for existing sessions before Phase 1
**Multiple Sessions Support** (selection logic sketched below):
- Different Codex instances can have different brainstorming sessions
- If multiple sessions found, prompt user to select
- If single session found, use it
- If no session exists, create `WFS-[topic-slug]`
**Session Continuity**:
- MUST use selected session for all phases
- Each role's context stored in session directory
- Session isolation: Each session maintains independent state
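A sketch of the session-resolution rules above, using the `Glob` and ASK_USER pseudo-calls from this document (`slugify` is an illustrative helper):
```javascript
// Resolve the working session before Phase 1
function resolveSession(projectRoot, topic) {
  const sessions = Glob(`${projectRoot}/.workflow/active/WFS-*`);
  if (sessions.length === 1) return sessions[0];   // single session: use it
  if (sessions.length > 1) {
    // Multiple sessions: the user must choose
    return ASK_USER([{
      id: "会话选择", type: "select",
      prompt: "检测到多个活跃会话,请选择要使用的会话",
      options: sessions.map(s => ({ label: s, description: "复用此会话" }))
    }]); // BLOCKS (wait for user response)
  }
  // No session: create WFS-[topic-slug]
  return `${projectRoot}/.workflow/active/WFS-${slugify(topic)}`;
}
```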
## Output Structure
**Phase 1 Output**:
- `{projectRoot}/.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md` (framework content)
- `{projectRoot}/.workflow/active/WFS-{topic}/workflow-session.json` (metadata: selected_roles[], topic, timestamps, style_skill_package)
**Phase 2 Output**:
- `{projectRoot}/.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md` (one per role)
- `.superdesign/design_iterations/` (ui-designer artifacts, if --style-skill provided)
**Phase 3 Output**:
- `{projectRoot}/.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md` (integrated analysis)
- Updated `[role]/analysis*.md` with Enhancements + Clarifications sections
**⚠️ Storage Separation**: Guidance content in .md files, metadata in .json (no duplication)
**⚠️ Style References**: When --style-skill provided, workflow-session.json stores style_skill_package name, ui-designer loads from `.codex/skills/style-{package-name}/`
## Available Roles
- data-architect (数据架构师)
- product-manager (产品经理)
- product-owner (产品负责人)
- scrum-master (敏捷教练)
- subject-matter-expert (领域专家)
- system-architect (系统架构师)
- test-strategist (测试策略师)
- ui-designer (UI 设计师)
- ux-expert (UX 专家)
**Role Selection**: Handled by Phase 1 (artifacts) - intelligent recommendation + user selection
## Error Handling
- **Role selection failure**: Phase 1 defaults to product-manager with explanation
- **Agent execution failure**: Agent-specific retry with minimal dependencies
- **Template loading issues**: Agent handles graceful degradation
- **Synthesis conflicts**: Phase 3 highlights disagreements without resolution
- **Context overflow protection**: Per-role limits enforced by conceptual-planning-agent
- **Agent lifecycle errors**: Ensure close_agent runs on error paths to prevent resource leaks (see the wrapper sketch below)
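A minimal wrapper enforcing the lifecycle rule (the `runAgentTask` helper is illustrative; spawn_agent / wait / close_agent are the primitives documented above):
```javascript
// Guarantee close_agent runs even when wait throws or times out
function runAgentTask(message, timeout_ms = 600000) {
  const agentId = spawn_agent({ message });
  try {
    const result = wait({ ids: [agentId], timeout_ms });
    if (result.timed_out) {
      console.warn(`Agent ${agentId} timed out after ${timeout_ms}ms`);
    }
    return result.status[agentId]?.completed ?? null;
  } finally {
    close_agent({ id: agentId }); // always free resources, success or failure
  }
}
```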
## Reference Information
**File Structure**:
```
{projectRoot}/.workflow/active/WFS-[topic]/
├── workflow-session.json # Session metadata ONLY
└── .brainstorming/
├── guidance-specification.md # Framework (Phase 1)
├── {role}/
│ ├── analysis.md # Main document (with optional @references)
│ └── analysis-{slug}.md # Section documents (max 5)
└── synthesis-specification.md # Integration (Phase 3)
```
**Next Steps** (returned to user):
```
Brainstorming complete for session: {sessionId}
Roles analyzed: {count}
Synthesis: {projectRoot}/.workflow/active/WFS-{topic}/.brainstorming/synthesis-specification.md
✅ Next Steps:
1. workflow:plan --session {sessionId} # Generate implementation plan
```


@@ -1,428 +0,0 @@
## Overview
Seven-phase workflow: **Context collection** → **Topic analysis** → **Role selection** → **Role questions** → **Conflict resolution** → **Final check** → **Generate specification**
All user interactions use ASK_USER / CONFIRM pseudo-code, implemented via the AskUserQuestion tool (max 4 questions per call, multi-round).
**Input**: `"GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]`
**Output**: `{projectRoot}/.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`
**Core Principle**: Questions dynamically generated from project context + topic keywords, NOT generic templates
**Parameters**:
- `topic` (required): Topic or challenge description (structured format recommended)
- `--count N` (optional): Number of roles to select (system recommends N+2 options, default: 3)
---
## Quick Reference
### Phase Summary
| Phase | Goal | ASK_USER | Storage |
|-------|------|-----------------|---------|
| 0 | Context collection | - | context-package.json |
| 1 | Topic analysis | 2-4 questions | intent_context |
| 2 | Role selection | 1 multi-select | selected_roles |
| 3 | Role questions | 3-4 per role | role_decisions[role] |
| 4 | Conflict resolution | max 4 per round | cross_role_decisions |
| 4.5 | Final check | progressive rounds | additional_decisions |
| 5 | Generate spec | - | guidance-specification.md |
### ASK_USER Pattern
```javascript
// Single-select (Phase 1, 3, 4)
ASK_USER([
{
id: "{短标签}", // max 12 chars
type: "select",
prompt: "{问题文本}",
options: [
{ label: "{选项}", description: "{说明和影响}" },
{ label: "{选项}", description: "{说明和影响}" },
{ label: "{选项}", description: "{说明和影响}" }
]
}
// ... max 4 questions per call
]) // BLOCKS (wait for user response)
// Multi-select (Phase 2)
ASK_USER([{
id: "角色选择", type: "multi-select",
prompt: "请选择 {count} 个角色",
options: [/* max 4 options per call */]
}]) // BLOCKS (wait for user response)
```
### Multi-Round Execution
```javascript
const BATCH_SIZE = 4;
for (let i = 0; i < allQuestions.length; i += BATCH_SIZE) {
const batch = allQuestions.slice(i, i + BATCH_SIZE);
ASK_USER(batch); // BLOCKS (wait for user response)
// Store responses before next round
}
```
---
## Session Management
- Check `{projectRoot}/.workflow/active/` for existing sessions
- Multiple → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
- Parse `--count N` parameter (default: 3)
- Store decisions in `workflow-session.json`
---
## Execution Phases
### Phase 0: Context Collection
**Goal**: Gather project context BEFORE user interaction
**Steps**:
1. Check if `context-package.json` exists → Skip if valid
2. Invoke `context-search-agent` (BRAINSTORM MODE - lightweight)
3. Output: `{projectRoot}/.workflow/active/WFS-{session-id}/.process/context-package.json`
**Graceful Degradation**: If agent fails, continue to Phase 1 without context
```javascript
// Spawn context-search-agent
const contextAgentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/context-search-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
## Task Objective
Execute context-search-agent in BRAINSTORM MODE (Phase 1-2 only).
## Assigned Context
- **Session**: ${session_id}
- **Task**: ${task_description}
- **Output**: ${projectRoot}/.workflow/active/${session_id}/.process/context-package.json
## Required Output Fields
metadata, project_context, assets, dependencies, conflict_detection
`
});
// Wait for completion
const contextResult = wait({
ids: [contextAgentId],
timeout_ms: 300000 // 5 minutes
});
// Clean up agent
close_agent({ id: contextAgentId });
// Check result
if (contextResult.timed_out) {
console.warn("Context gathering timed out, continuing without context");
}
```
### Phase 1: Topic Analysis
**Goal**: Extract keywords/challenges enriched by Phase 0 context
**Steps**:
1. Load Phase 0 context (tech_stack, modules, conflict_risk)
2. Deep topic analysis (entities, challenges, constraints, metrics)
3. Generate 2-4 context-aware probing questions
4. ASK_USER → Store to `session.intent_context`
**Example**:
```javascript
ASK_USER([
{
id: "核心挑战", type: "select",
prompt: "实时协作平台的主要技术挑战?",
options: [
{ label: "实时数据同步", description: "100+用户同时在线,状态同步复杂度高" },
{ label: "可扩展性架构", description: "用户规模增长时的系统扩展能力" },
{ label: "冲突解决机制", description: "多用户同时编辑的冲突处理策略" }
]
},
{
id: "优先级", type: "select",
prompt: "MVP阶段最关注的指标",
options: [
{ label: "功能完整性", description: "实现所有核心功能" },
{ label: "用户体验", description: "流畅的交互体验和响应速度" },
{ label: "系统稳定性", description: "高可用性和数据一致性" }
]
}
]) // BLOCKS (wait for user response)
```
**⚠️ CRITICAL**: Questions MUST reference topic keywords. Generic "Project type?" violates dynamic generation.
### Phase 2: Role Selection
**Goal**: User selects roles from intelligent recommendations
**Available Roles**: data-architect, product-manager, product-owner, scrum-master, subject-matter-expert, system-architect, test-strategist, ui-designer, ux-expert
**Steps**:
1. Analyze Phase 1 keywords → Recommend count+2 roles with rationale
2. ASK_USER (type=multi-select) → Store to `session.selected_roles`
3. If count+2 > 4, split into multiple rounds
**Example**:
```javascript
ASK_USER([{
id: "角色选择", type: "multi-select",
prompt: "请选择 3 个角色参与头脑风暴分析",
options: [
{ label: "system-architect", description: "实时同步架构设计和技术选型" },
{ label: "ui-designer", description: "协作界面用户体验和状态展示" },
{ label: "product-manager", description: "功能优先级和MVP范围决策" },
{ label: "data-architect", description: "数据同步模型和存储方案设计" }
]
}]) // BLOCKS (wait for user response)
```
**⚠️ CRITICAL**: User MUST interact. NEVER auto-select without confirmation.
### Phase 3: Role-Specific Questions
**Goal**: Generate deep questions mapping role expertise to Phase 1 challenges
**Algorithm**:
1. FOR each selected role:
- Map Phase 1 challenges to role domain
- Generate 3-4 questions (implementation depth, trade-offs, edge cases)
- ASK_USER per role → Store to `session.role_decisions[role]`
2. Process roles sequentially (one at a time for clarity)
3. If role needs > 4 questions, split into multiple rounds
**Example** (system-architect):
```javascript
ASK_USER([
{
id: "状态同步", type: "select",
prompt: "100+ 用户实时状态同步方案?",
options: [
{ label: "Event Sourcing", description: "完整事件历史,支持回溯,存储成本高" },
{ label: "集中式状态管理", description: "实现简单,单点瓶颈风险" },
{ label: "CRDT", description: "去中心化,自动合并,学习曲线陡" }
]
},
{
id: "冲突解决", type: "select",
prompt: "两个用户同时编辑冲突如何解决?",
options: [
{ label: "自动合并", description: "用户无感知,可能产生意外结果" },
{ label: "手动解决", description: "用户控制,增加交互复杂度" },
{ label: "版本控制", description: "保留历史,需要分支管理" }
]
}
]) // BLOCKS (wait for user response)
```
### Phase 4: Conflict Resolution
**Goal**: Resolve ACTUAL conflicts from Phase 3 answers
**Algorithm**:
1. Analyze Phase 3 answers for conflicts:
- Contradictory choices (e.g., "fast iteration" vs "complex Event Sourcing")
- Missing integration (e.g., "Optimistic updates" but no conflict handling)
- Implicit dependencies (e.g., "Live cursors" but no auth defined)
2. Generate clarification questions referencing SPECIFIC Phase 3 choices
3. ASK_USER (max 4 per call, multi-round) → Store to `session.cross_role_decisions`
4. If NO conflicts: Skip Phase 4 (inform user: "未检测到跨角色冲突,跳过 Phase 4")
**Example**:
```javascript
ASK_USER([{
id: "架构冲突", type: "select",
prompt: "CRDT 与 UI 回滚期望冲突,如何解决?\n背景system-architect选择CRDTui-designer期望回滚UI",
options: [
{ label: "采用 CRDT", description: "保持去中心化调整UI期望" },
{ label: "显示合并界面", description: "增加用户交互,展示冲突详情" },
{ label: "切换到 OT", description: "支持回滚,增加服务器复杂度" }
]
}]) // BLOCKS (wait for user response)
```
### Phase 4.5: Final Clarification
**Purpose**: Ensure no important points missed before generating specification
**Steps**:
1. Ask initial check:
```javascript
CONFIRM("在生成最终规范之前,是否有前面未澄清的重点需要补充?") // BLOCKS (wait for user response)
// Returns: "无需补充" (前面的讨论已经足够完整) | "需要补充" (还有重要内容需要澄清)
```
2. If "需要补充":
- Analyze user's additional points
- Generate progressive questions (not role-bound, interconnected)
- ASK_USER (max 4 per round) → Store to `session.additional_decisions`
- Repeat until user confirms completion
3. If "无需补充": Proceed to Phase 5
**Progressive Pattern**: Questions interconnected, each round informs next, continue until resolved.
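A sketch of this loop, assuming the CONFIRM semantics shown in step 1 (`generateProgressiveQuestions` is an illustrative helper):
```javascript
// Phase 4.5: keep asking until the user confirms nothing is missing
const additional_decisions = [];
while (true) {
  const check = CONFIRM("在生成最终规范之前,是否有前面未澄清的重点需要补充?"); // BLOCKS
  if (check === "无需补充") break;                       // proceed to Phase 5
  const followUps = generateProgressiveQuestions(check); // interconnected questions
  for (let i = 0; i < followUps.length; i += 4) {        // max 4 per round
    const answers = ASK_USER(followUps.slice(i, i + 4)); // BLOCKS
    additional_decisions.push(...answers);               // store before next round
  }
}
session.additional_decisions = additional_decisions;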
### Phase 5: Generate Specification
**Steps**:
1. Load all decisions: `intent_context` + `selected_roles` + `role_decisions` + `cross_role_decisions` + `additional_decisions`
2. Transform Q&A to declarative: Questions → Headers, Answers → CONFIRMED/SELECTED statements
3. Generate `guidance-specification.md`
4. Update `workflow-session.json` (metadata only)
5. Validate: No interrogative sentences, all decisions traceable
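A sketch of step 2's transformation, assuming decision entries carry the `answer` and `category` fields stored in earlier phases:
```javascript
// Turn stored Q&A decisions into declarative specification sections
function toDeclarative(decisions) {
  return Object.entries(decisions).map(([question, d]) => [
    `### ${question}`,               // question becomes a header
    `**SELECTED**: ${d.answer}`,     // answer becomes a SELECTED statement
    `- **Rationale**: ${d.category}` // traceability back to the category
  ].join('\n')).join('\n\n');
}
// Step 5 validation: reject output that still contains interrogative sentences
const sections = Object.entries(session.role_decisions ?? {})
  .map(([role, decisions]) => `## ${role} Decisions\n\n${toDeclarative(decisions)}`)
  .join('\n\n');
if (/[??]/.test(sections)) throw new Error('Spec still contains questions');
```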
---
## Question Guidelines
### Core Principle
**Target**: Developers (technically fluent, but questions must start from user needs)
**Question Structure**: `[business scenario / requirement premise] + [technical focus]`
**Option Structure**: `label: [technical approach] + description: [business impact] + [technical trade-offs]`
### Quality Rules
**MUST Include**:
- ✅ All questions in Chinese (用中文提问)
- ✅ Business scenario as the premise of each question
- ✅ Business-impact notes for every technical option
- ✅ Quantified metrics and constraints
**MUST Avoid**:
- ❌ Pure technology choices with no business context
- ❌ Overly abstract user-experience questions
- ❌ Generic architecture questions detached from the topic
### Phase-Specific Requirements
| Phase | Focus | Key Requirements |
|-------|-------|------------------|
| 1 | Intent understanding | Reference topic keywords; cover user scenarios, business constraints, priorities |
| 2 | Role recommendation | Intelligent analysis (NOT keyword mapping), explain relevance |
| 3 | Role questions | Reference Phase 1 keywords, concrete options with trade-offs |
| 4 | Conflict resolution | Reference SPECIFIC Phase 3 choices, explain impact on both roles |
---
## Output & Governance
### Output Template
**File**: `{projectRoot}/.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`
```markdown
# [Project] - Confirmed Guidance Specification
**Metadata**: [timestamp, type, focus, roles]
## 1. Project Positioning & Goals
**CONFIRMED Objectives**: [from topic + Phase 1]
**CONFIRMED Success Criteria**: [from Phase 1 answers]
## 2-N. [Role] Decisions
### SELECTED Choices
**[Question topic]**: [User's answer]
- **Rationale**: [From option description]
- **Impact**: [Implications]
### Cross-Role Considerations
**[Conflict resolved]**: [Resolution from Phase 4]
- **Affected Roles**: [Roles involved]
## Cross-Role Integration
**CONFIRMED Integration Points**: [API/Data/Auth from multiple roles]
## Risks & Constraints
**Identified Risks**: [From answers] → Mitigation: [Approach]
## Next Steps
**⚠️ Automatic Continuation** (when called from auto-parallel):
- auto-parallel spawns agents for role-specific analysis
- Each selected role gets conceptual-planning-agent
- Agents read this guidance-specification.md for context
## Appendix: Decision Tracking
| Decision ID | Category | Question | Selected | Phase | Rationale |
|-------------|----------|----------|----------|-------|-----------|
| D-001 | Intent | [Q] | [A] | 1 | [Why] |
| D-002 | Roles | [Selected] | [Roles] | 2 | [Why] |
| D-003+ | [Role] | [Q] | [A] | 3 | [Why] |
```
### File Structure
```
{projectRoot}/.workflow/active/WFS-[topic]/
├── workflow-session.json # Metadata ONLY
├── .process/
│ └── context-package.json # Phase 0 output
└── .brainstorming/
└── guidance-specification.md # Full guidance content
```
### Session Metadata
```json
{
"session_id": "WFS-{topic-slug}",
"type": "brainstorming",
"topic": "{original user input}",
"selected_roles": ["system-architect", "ui-designer", "product-manager"],
"phase_completed": "artifacts",
"timestamp": "2025-10-24T10:30:00Z",
"count_parameter": 3
}
```
**⚠️ Rule**: Session JSON stores ONLY metadata. All guidance content goes to guidance-specification.md.
### Validation Checklist
- ✅ No interrogative sentences (use CONFIRMED/SELECTED)
- ✅ Every decision traceable to user answer
- ✅ Cross-role conflicts resolved or documented
- ✅ Next steps concrete and specific
- ✅ No content duplication between .json and .md
### Update Mechanism
```
IF guidance-specification.md EXISTS:
Prompt: "Regenerate completely / Update sections / Cancel"
ELSE:
Run full Phase 0-5 flow
```
### Governance Rules
- All decisions MUST use CONFIRMED/SELECTED (NO "?" in decision sections)
- Every decision MUST trace to user answer
- Conflicts MUST be resolved (not marked "TBD")
- Next steps MUST be actionable
- Topic preserved as authoritative reference
**CRITICAL**: Guidance is single source of truth for downstream phases. Ambiguity violates governance.
---
## Post-Phase Update
After Phase 1 completes:
- **Output Created**: `guidance-specification.md`, `workflow-session.json`
- **Data Parsed**: `selected_roles[]`, `session_id`
- **Next Action**: Auto-continue to Phase 2 (parallel role analysis)
- **State Update**: Update workflow-session.json with `phase_completed: "artifacts"`


@@ -1,767 +0,0 @@
## Overview
**Unified command for generating and updating role-specific analysis** with interactive context gathering, framework alignment, and incremental update support. Replaces 9 individual role commands with a single parameterized workflow.
### Core Function
- **Multi-Role Support**: Single command supports all 9 brainstorming roles
- **Interactive Context**: Dynamic question generation based on role and framework
- **Incremental Updates**: Merge new insights into existing analyses
- **Framework Alignment**: Address guidance-specification.md discussion points
- **Agent Delegation**: Use conceptual-planning-agent with role-specific templates
- **Explicit Lifecycle**: Manage subagent creation, waiting, and cleanup
### Supported Roles
| Role ID | Title | Focus Area | Context Questions |
|---------|-------|------------|-------------------|
| `ux-expert` | UX专家 | User research, information architecture, user journey | 4 |
| `ui-designer` | UI设计师 | Visual design, high-fidelity mockups, design systems | 4 |
| `system-architect` | 系统架构师 | Technical architecture, scalability, integration patterns | 5 |
| `product-manager` | 产品经理 | Product strategy, roadmap, prioritization | 4 |
| `product-owner` | 产品负责人 | Backlog management, user stories, acceptance criteria | 4 |
| `scrum-master` | 敏捷教练 | Process facilitation, impediment removal, team dynamics | 3 |
| `subject-matter-expert` | 领域专家 | Domain knowledge, business rules, compliance | 4 |
| `data-architect` | 数据架构师 | Data models, storage strategies, data flow | 5 |
| `api-designer` | API设计师 | API contracts, versioning, integration patterns | 4 |
---
## Usage
```bash
# Generate new analysis with interactive context
role-analysis ux-expert
# Generate with existing framework + context questions
role-analysis system-architect --session WFS-xxx --include-questions
# Update existing analysis (incremental merge)
role-analysis ui-designer --session WFS-xxx --update
# Quick generation (skip interactive context)
role-analysis product-manager --session WFS-xxx --skip-questions
```
---
## Execution Protocol
### Phase 1: Detection & Validation
**Step 1.1: Role Validation**
```bash
VALIDATE role_name IN [
ux-expert, ui-designer, system-architect, product-manager,
product-owner, scrum-master, subject-matter-expert,
data-architect, api-designer
]
IF invalid:
ERROR: "Unknown role: {role_name}. Use one of: ux-expert, ui-designer, ..."
EXIT
```
**Step 1.2: Session Detection**
```bash
IF --session PROVIDED:
session_id = --session
brainstorm_dir = ${projectRoot}/.workflow/active/{session_id}/.brainstorming/
ELSE:
FIND ${projectRoot}/.workflow/active/WFS-*/
IF multiple:
PROMPT user to select
ELSE IF single:
USE existing
ELSE:
ERROR: "No active session. Run Phase 1 (artifacts) first"
EXIT
VALIDATE brainstorm_dir EXISTS
```
**Step 1.3: Framework Detection**
```bash
framework_file = {brainstorm_dir}/guidance-specification.md
IF framework_file EXISTS:
framework_mode = true
LOAD framework_content
ELSE:
WARN: "No framework found - will create standalone analysis"
framework_mode = false
```
**Step 1.4: Update Mode Detection**
```bash
existing_analysis = {brainstorm_dir}/{role_name}/analysis*.md
IF --update FLAG OR existing_analysis EXISTS:
update_mode = true
IF --update NOT PROVIDED:
ASK: "Analysis exists. Update or regenerate?"
OPTIONS: ["Incremental update", "Full regenerate", "Cancel"]
ELSE:
update_mode = false
```
### Phase 2: Interactive Context Gathering
**Trigger Conditions**:
- Default: Always ask unless `--skip-questions` provided
- `--include-questions`: Force context gathering even if analysis exists
- `--skip-questions`: Skip all interactive questions
**Step 2.1: Load Role Configuration**
```javascript
const roleConfig = {
'ux-expert': {
title: 'UX专家',
focus_area: 'User research, information architecture, user journey',
question_categories: ['User Intent', 'Requirements', 'UX'],
question_count: 4,
template: '~/.ccw/workflows/cli-templates/planning-roles/ux-expert.md'
},
'ui-designer': {
title: 'UI设计师',
focus_area: 'Visual design, high-fidelity mockups, design systems',
question_categories: ['Requirements', 'UX', 'Feasibility'],
question_count: 4,
template: '~/.ccw/workflows/cli-templates/planning-roles/ui-designer.md'
},
'system-architect': {
title: '系统架构师',
focus_area: 'Technical architecture, scalability, integration patterns',
question_categories: ['Scale & Performance', 'Technical Constraints', 'Architecture Complexity', 'Non-Functional Requirements'],
question_count: 5,
template: '~/.ccw/workflows/cli-templates/planning-roles/system-architect.md'
},
'product-manager': {
title: '产品经理',
focus_area: 'Product strategy, roadmap, prioritization',
question_categories: ['User Intent', 'Requirements', 'Process'],
question_count: 4,
template: '~/.ccw/workflows/cli-templates/planning-roles/product-manager.md'
},
'product-owner': {
title: '产品负责人',
focus_area: 'Backlog management, user stories, acceptance criteria',
question_categories: ['Requirements', 'Decisions', 'Process'],
question_count: 4,
template: '~/.ccw/workflows/cli-templates/planning-roles/product-owner.md'
},
'scrum-master': {
title: '敏捷教练',
focus_area: 'Process facilitation, impediment removal, team dynamics',
question_categories: ['Process', 'Risk', 'Decisions'],
question_count: 3,
template: '~/.ccw/workflows/cli-templates/planning-roles/scrum-master.md'
},
'subject-matter-expert': {
title: '领域专家',
focus_area: 'Domain knowledge, business rules, compliance',
question_categories: ['Requirements', 'Feasibility', 'Terminology'],
question_count: 4,
template: '~/.ccw/workflows/cli-templates/planning-roles/subject-matter-expert.md'
},
'data-architect': {
title: '数据架构师',
focus_area: 'Data models, storage strategies, data flow',
question_categories: ['Architecture', 'Scale & Performance', 'Technical Constraints', 'Feasibility'],
question_count: 5,
template: '~/.ccw/workflows/cli-templates/planning-roles/data-architect.md'
},
'api-designer': {
title: 'API设计师',
focus_area: 'API contracts, versioning, integration patterns',
question_categories: ['Architecture', 'Requirements', 'Feasibility', 'Decisions'],
question_count: 4,
template: '~/.ccw/workflows/cli-templates/planning-roles/api-designer.md'
}
};
const config = roleConfig[role_name];
```
**Step 2.2: Generate Role-Specific Questions**
**Question Category Taxonomy** (the core 9 categories plus four architecture-oriented extensions used by the architect roles):
| Category | Focus | Example Question Pattern |
|----------|-------|--------------------------|
| User Intent | User goals | "该分析的核心目标是什么?" |
| Requirements | Requirement refinement | "需求的优先级如何排序?" |
| Architecture | Architecture decisions | "技术栈的选择考量?" |
| UX | User experience | "交互复杂度的取舍?" |
| Feasibility | Feasibility | "资源约束下的实现范围?" |
| Risk | Risk management | "风险容忍度是多少?" |
| Process | Process cadence | "开发迭代的节奏?" |
| Decisions | Decision confirmation | "冲突的解决方案?" |
| Terminology | Terminology alignment | "统一使用哪个术语?" |
| Scale & Performance | Performance & scaling | "预期的负载和性能要求?" |
| Technical Constraints | Technical constraints | "现有技术栈的限制?" |
| Architecture Complexity | Architecture complexity | "架构的复杂度权衡?" |
| Non-Functional Requirements | Non-functional requirements | "可用性和可维护性要求?" |
**Question Generation Algorithm**:
```javascript
async function generateQuestions(role_name, framework_content) {
const config = roleConfig[role_name];
const questions = [];
// Parse framework for keywords
const keywords = extractKeywords(framework_content);
// Generate category-specific questions
for (const category of config.question_categories) {
const question = generateCategoryQuestion(category, keywords, role_name);
questions.push(question);
}
return questions.slice(0, config.question_count);
}
```
**Step 2.3: Multi-Round Question Execution**
```javascript
const BATCH_SIZE = 4;
const user_context = {};
for (let i = 0; i < questions.length; i += BATCH_SIZE) {
const batch = questions.slice(i, i + BATCH_SIZE);
const currentRound = Math.floor(i / BATCH_SIZE) + 1;
const totalRounds = Math.ceil(questions.length / BATCH_SIZE);
console.log(`\n[Round ${currentRound}/${totalRounds}] ${config.title} 上下文询问\n`);
  const responses = ASK_USER(batch.map(q => ({
    id: q.category.substring(0, 12), type: "select",
    prompt: q.question,
    options: q.options.map(opt => ({
      label: opt.label,
      description: opt.description
    }))
  }))); // BLOCKS (wait for user response)
  // Store responses before next round
  for (const answer of responses) {
    user_context[answer.question] = {
      answer: answer.selected,
      category: answer.category,
      timestamp: new Date().toISOString()
    };
  }
}
// Save context to file
Write(
`${brainstorm_dir}/${role_name}/${role_name}-context.md`,
formatUserContext(user_context)
);
```
**Question Quality Rules**:
**MUST Include**:
- ✅ All questions in Chinese (用中文提问)
- ✅ Business scenario as the premise of each question
- ✅ Business-impact notes for every technical option
- ✅ Quantified metrics and constraints
**MUST Avoid**:
- ❌ Pure technology choices with no business context
- ❌ Overly abstract, generic questions
- ❌ Repeated questions detached from the framework
### Phase 3: Agent Execution
**Step 3.1: Load Session Metadata**
```bash
session_metadata = Read(${projectRoot}/.workflow/active/{session_id}/workflow-session.json)
original_topic = session_metadata.topic
selected_roles = session_metadata.selected_roles
```
**Step 3.2: Prepare Agent Context**
```javascript
const agentContext = {
role_name: role_name,
role_config: roleConfig[role_name],
output_location: `${brainstorm_dir}/${role_name}/`,
framework_mode: framework_mode,
framework_path: framework_mode ? `${brainstorm_dir}/guidance-specification.md` : null,
update_mode: update_mode,
user_context: user_context,
original_topic: original_topic,
session_id: session_id
};
```
**Step 3.3: Execute Conceptual Planning Agent**
**Framework-Based Analysis** (when guidance-specification.md exists):
```javascript
// Spawn conceptual-planning-agent
const roleAgentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/conceptual-planning-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
[FLOW_CONTROL]
Execute ${role_name} analysis for existing topic framework
## Context Loading
ASSIGNED_ROLE: ${role_name}
OUTPUT_LOCATION: ${agentContext.output_location}
ANALYSIS_MODE: ${framework_mode ? "framework_based" : "standalone"}
UPDATE_MODE: ${update_mode}
## Flow Control Steps
1. **load_topic_framework**
- Action: Load structured topic discussion framework
- Command: Read(${agentContext.framework_path})
- Output: topic_framework_content
2. **load_role_template**
- Action: Load ${role_name} planning template
- Command: Read(${roleConfig[role_name].template})
- Output: role_template_guidelines
3. **load_session_metadata**
- Action: Load session metadata and user intent
- Command: Read(${projectRoot}/.workflow/active/${session_id}/workflow-session.json)
- Output: session_context
4. **load_user_context** (if exists)
- Action: Load interactive context responses
- Command: Read(${brainstorm_dir}/${role_name}/${role_name}-context.md)
- Output: user_context_answers
5. **${update_mode ? 'load_existing_analysis' : 'skip'}**
${update_mode ? `
- Action: Load existing analysis for incremental update
- Command: Read(${brainstorm_dir}/${role_name}/analysis.md)
- Output: existing_analysis_content
` : ''}
## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
**Framework Source**: Address all discussion points in guidance-specification.md from ${role_name} perspective
**User Context Integration**: Incorporate interactive Q&A responses into analysis
**Role Focus**: ${roleConfig[role_name].focus_area}
**Template Integration**: Apply role template guidelines within framework structure
## Expected Deliverables
1. **analysis.md** (main document, optionally with analysis-{slug}.md sub-documents)
2. **Framework Reference**: @../guidance-specification.md (if framework_mode)
3. **User Context Reference**: @./${role_name}-context.md (if user context exists)
4. **User Intent Alignment**: Validate against session_context
## Update Requirements (if UPDATE_MODE)
- **Preserve Structure**: Maintain existing analysis structure
- **Add "Clarifications" Section**: Document new user context with timestamp
- **Merge Insights**: Integrate new perspectives without removing existing content
- **Resolve Conflicts**: If new context contradicts existing analysis, document both and recommend resolution
## Completion Criteria
- Address each discussion point from guidance-specification.md with ${role_name} expertise
- Provide actionable recommendations from ${role_name} perspective within analysis files
- All output files MUST start with "analysis" prefix (no recommendations.md or other naming)
- Reference framework document using @ notation for integration
- Update workflow-session.json with completion status
`
});
// Wait for agent completion
const roleResult = wait({
ids: [roleAgentId],
timeout_ms: 600000 // 10 minutes
});
// Check result and cleanup
if (roleResult.timed_out) {
console.warn(`${role_name} analysis timed out`);
}
// Always close agent
close_agent({ id: roleAgentId });
```
### Phase 3 (Parallel Mode): Execute Multiple Roles Concurrently
**When called from auto-parallel orchestrator**, execute all selected roles in parallel:
```javascript
// Step 1: Spawn all role agents in parallel
const roleAgents = [];
selected_roles.forEach((role_name, index) => {
const config = roleConfig[role_name];
// For ui-designer, append style-skill if provided
const styleContext = (role_name === 'ui-designer' && style_skill_package)
? `\n## Style Reference\n**Style SKILL Package**: .codex/skills/style-${style_skill_package}/\n**Load First**: Read SKILL.md from style package for design tokens`
: '';
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/conceptual-planning-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
## Task Objective
Execute **${role_name}** (${config.title}) analysis for brainstorming session.
## Assigned Context
- **Role**: ${role_name}
- **Role Focus**: ${config.focus_area}
- **Session ID**: ${session_id}
- **Framework Path**: ${brainstorm_dir}/guidance-specification.md
- **Output Location**: ${brainstorm_dir}/${role_name}/
- **Role Index**: ${index + 1} of ${selected_roles.length}
${styleContext}
## Analysis Requirements
**Primary Reference**: Original user prompt from workflow-session.json is authoritative
**Framework Source**: Address all discussion points in guidance-specification.md from ${role_name} perspective
**Role Focus**: ${config.focus_area}
## Expected Deliverables
1. **analysis.md** (main document)
2. **analysis-{slug}.md** (optional sub-documents, max 5)
3. **Framework Reference**: @../guidance-specification.md
## Completion Criteria
- Address each framework discussion point with ${role_name} expertise
- Provide actionable recommendations within analysis files
- All output files MUST start with "analysis" prefix
`
});
roleAgents.push({ agentId, role_name });
});
// Step 2: Batch wait for all agents
const agentIds = roleAgents.map(a => a.agentId);
const parallelResults = wait({
ids: agentIds,
timeout_ms: 900000 // 15 minutes for all agents
});
// Step 3: Process results and check completion
const completedRoles = [];
const failedRoles = [];
roleAgents.forEach(({ agentId, role_name }) => {
if (parallelResults.status[agentId]?.completed) {
completedRoles.push(role_name);
} else {
failedRoles.push(role_name);
}
});
if (parallelResults.timed_out) {
console.warn('Some role analyses timed out:', failedRoles);
}
// Step 4: Batch cleanup - IMPORTANT: always close all agents
roleAgents.forEach(({ agentId }) => {
close_agent({ id: agentId });
});
console.log(`Completed: ${completedRoles.length}/${selected_roles.length} roles`);
console.log(`Completed roles: ${completedRoles.join(', ')}`);
if (failedRoles.length > 0) {
console.warn(`Failed roles: ${failedRoles.join(', ')}`);
}
```
### Phase 4: Validation & Finalization
**Step 4.1: Validate Output**
```bash
VERIFY EXISTS: ${brainstorm_dir}/${role_name}/analysis.md
VERIFY CONTAINS: "@../guidance-specification.md" (if framework_mode)
IF user_context EXISTS:
VERIFY CONTAINS: "@./${role_name}-context.md" OR "## Clarifications" section
```
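Expressed in the pseudo-JavaScript used elsewhere in this document (`exists` and `Read` as above), Step 4.1 might look like:
```javascript
// Step 4.1 as executable checks (sketch)
const analysisPath = `${brainstorm_dir}/${role_name}/analysis.md`;
if (!exists(analysisPath)) throw new Error(`Missing ${analysisPath}`);
const content = Read(analysisPath);
if (framework_mode && !content.includes('@../guidance-specification.md')) {
  throw new Error('Analysis missing framework reference');
}
if (user_context && !content.includes(`@./${role_name}-context.md`)
    && !content.includes('## Clarifications')) {
  throw new Error('Analysis missing user-context reference');
}
```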
**Step 4.2: Update Session Metadata**
```json
{
"phases": {
"BRAINSTORM": {
"${role_name}": {
"status": "${update_mode ? 'updated' : 'completed'}",
"completed_at": "timestamp",
"framework_addressed": true,
"context_gathered": user_context ? true : false,
"output_location": "${brainstorm_dir}/${role_name}/analysis.md",
"update_history": [
{
"timestamp": "ISO8601",
"mode": "${update_mode ? 'incremental' : 'initial'}",
"context_questions": question_count
}
]
}
}
}
}
```
**Step 4.3: Completion Report**
```markdown
✅ ${roleConfig[role_name].title} Analysis Complete
**Output**: ${brainstorm_dir}/${role_name}/analysis.md
**Mode**: ${update_mode ? 'Incremental Update' : 'New Generation'}
**Framework**: ${framework_mode ? '✓ Aligned' : '✗ Standalone'}
**Context Questions**: ${question_count} answered
${update_mode ? '
**Changes**:
- Added "Clarifications" section with new user context
- Merged new insights into existing sections
- Resolved conflicts with framework alignment
' : ''}
**Next Steps**:
${selected_roles.length > 1 ? `
- Continue with other roles: ${selected_roles.filter(r => r !== role_name).join(', ')}
- Run synthesis: Phase 3 (synthesis integration)
` : `
- Clarify insights: Phase 3 (synthesis integration)
- Generate plan: workflow:plan --session ${session_id}
`}
```
---
## Output Structure
### Directory Layout
```
{projectRoot}/.workflow/active/WFS-{session}/.brainstorming/
├── guidance-specification.md # Framework (if exists)
└── {role-name}/
├── {role-name}-context.md # Interactive Q&A responses
├── analysis.md # Main analysis (REQUIRED)
└── analysis-{slug}.md # Section documents (optional, max 5)
```
### Analysis Document Structure (New Generation)
```markdown
# ${roleConfig[role_name].title} Analysis: [Topic from Framework]
## Framework Reference
**Topic Framework**: @../guidance-specification.md
**Role Focus**: ${roleConfig[role_name].focus_area}
**User Context**: @./${role_name}-context.md
## User Context Summary
**Context Gathered**: ${question_count} questions answered
**Categories**: ${question_categories.join(', ')}
${user_context ? formatContextSummary(user_context) : ''}
## Discussion Points Analysis
[Address each point from guidance-specification.md with ${role_name} expertise]
### Core Requirements (from framework)
[Role-specific perspective on requirements]
### Technical Considerations (from framework)
[Role-specific technical analysis]
### User Experience Factors (from framework)
[Role-specific UX considerations]
### Implementation Challenges (from framework)
[Role-specific challenges and solutions]
### Success Metrics (from framework)
[Role-specific metrics and KPIs]
## ${roleConfig[role_name].title} Specific Recommendations
[Role-specific actionable strategies]
---
*Generated by ${role_name} analysis addressing structured framework*
*Context gathered: ${new Date().toISOString()}*
```
### Analysis Document Structure (Incremental Update)
```markdown
# ${roleConfig[role_name].title} Analysis: [Topic]
## Framework Reference
[Existing content preserved]
## Clarifications
### Session ${new Date().toISOString().split('T')[0]}
${Object.entries(user_context).map(([q, a]) => `
- **Q**: ${q} (Category: ${a.category})
**A**: ${a.answer}
`).join('\n')}
## User Context Summary
[Updated with new context]
## Discussion Points Analysis
[Existing content enhanced with new insights]
[Rest of sections updated based on clarifications]
```
---
## Integration with Other Phases
### Called By
- Auto-parallel orchestrator (Phase 2 - parallel role execution)
- Manual invocation for single-role analysis
### Spawns
- `conceptual-planning-agent` (via spawn_agent → wait → close_agent)
### Coordinates With
- Phase 1 (artifacts) - creates framework for role analysis
- Phase 3 (synthesis) - reads role analyses for integration
---
## Quality Assurance
### Required Analysis Elements
- [ ] Framework discussion points addressed (if framework_mode)
- [ ] User context integrated (if context gathered)
- [ ] Role template guidelines applied
- [ ] Output files follow naming convention (analysis*.md only)
- [ ] Framework reference using @ notation
- [ ] Session metadata updated
### Context Quality
- [ ] Questions in Chinese with business context
- [ ] Options include technical trade-offs
- [ ] Categories aligned with role focus
- [ ] No generic questions unrelated to framework
### Update Quality (if update_mode)
- [ ] "Clarifications" section added with timestamp
- [ ] New insights merged without content loss
- [ ] Conflicts documented and resolved
- [ ] Framework alignment maintained
### Agent Lifecycle Quality
- [ ] All spawn_agent have corresponding close_agent
- [ ] All wait calls have reasonable timeout
- [ ] Parallel agents use batch wait
- [ ] Error paths include agent cleanup
---
## Command Parameters
### Required Parameters
- `[role-name]`: Role identifier (ux-expert, ui-designer, system-architect, etc.)
### Optional Parameters
- `--session [session-id]`: Specify brainstorming session (auto-detect if omitted)
- `--update`: Force incremental update mode (auto-detect if analysis exists)
- `--include-questions`: Force context gathering even if analysis exists
- `--skip-questions`: Skip all interactive context gathering
- `--style-skill [package]`: For ui-designer only, load style SKILL package
### Parameter Combinations
| Scenario | Command | Behavior |
|----------|---------|----------|
| New analysis | `role-analysis ux-expert` | Generate + ask context questions |
| Quick generation | `role-analysis ux-expert --skip-questions` | Generate without context |
| Update existing | `role-analysis ux-expert --update` | Ask clarifications + merge |
| Force questions | `role-analysis ux-expert --include-questions` | Ask even if exists |
| Specific session | `role-analysis ux-expert --session WFS-xxx` | Target specific session |
---
## Error Handling
### Invalid Role Name
```
ERROR: Unknown role: "ui-expert"
Valid roles: ux-expert, ui-designer, system-architect, product-manager,
product-owner, scrum-master, subject-matter-expert,
data-architect, api-designer
```
### No Active Session
```
ERROR: No active brainstorming session found
Run: Phase 1 (artifacts) to create session
```
### Missing Framework (with warning)
```
WARN: No guidance-specification.md found
Generating standalone analysis without framework alignment
Recommend: Run Phase 1 (artifacts) first for better results
```
### Agent Execution Failure
```
ERROR: Conceptual planning agent failed
Check: ${brainstorm_dir}/${role_name}/error.log
Action: Retry with --skip-questions or check framework validity
```
### Agent Lifecycle Errors
```javascript
// Always ensure cleanup in error paths: declare agentId outside try
// so the finally block can see it, and use finally so close_agent
// runs on success, error, and timeout paths alike
let agentId;
try {
  agentId = spawn_agent({ message: "..." });
  const result = wait({ ids: [agentId], timeout_ms: 600000 });
  // ... process result ...
} finally {
  if (agentId) close_agent({ id: agentId });
}
```
---
## Reference Information
### Role Template Locations
- Templates: `~/.ccw/workflows/cli-templates/planning-roles/`
- Format: `{role-name}.md` (e.g., `ux-expert.md`, `system-architect.md`)
### Context Package
- Location: `{projectRoot}/.workflow/active/WFS-{session}/.process/context-package.json`
- Used by: `context-search-agent` (Phase 0 of artifacts)
- Contains: Project context, tech stack, conflict risks
---
## Post-Phase Update
After Phase 2 completes:
- **Output Created**: `[role]/analysis*.md` for each selected role
- **Parallel Execution**: All N roles executed concurrently via spawn_agent + batch wait
- **Agent Cleanup**: All agents closed after wait completes
- **Next Action**: Auto-continue to Phase 3 (synthesis integration)
- **State Update**: Update workflow-session.json with role completion status


@@ -1,464 +0,0 @@
## Overview
Six-phase workflow to eliminate ambiguities and enhance conceptual depth in role analyses:
**Phase 1-2**: Session detection → File discovery → Path preparation
**Phase 3A**: Cross-role analysis agent → Generate recommendations
**Phase 4**: User selects enhancements → User answers clarifications (via ASK_USER)
**Phase 5**: Parallel update agents (one per role)
**Phase 6**: Context package update → Metadata update → Completion report
All user interactions use ASK_USER / CONFIRM pseudo-code, implemented via the AskUserQuestion tool (max 4 questions per call, multi-round).
**Document Flow**:
- Input: `[role]/analysis*.md`, `guidance-specification.md`, session metadata
- Output: Updated `[role]/analysis*.md` with Enhancements + Clarifications sections
---
## Quick Reference
### Phase Summary
| Phase | Goal | Executor | Output |
|-------|------|----------|--------|
| 1 | Session detection | Main flow | session_id, brainstorm_dir |
| 2 | File discovery | Main flow | role_analysis_paths |
| 3A | Cross-role analysis | spawn_agent | enhancement_recommendations |
| 4 | User interaction | Main flow + ASK_USER | update_plan |
| 5 | Document updates | spawn_agent[] (parallel) | Updated analysis*.md |
| 6 | Finalization | Main flow | context-package.json, report |
### ASK_USER Pattern
```javascript
// Enhancement selection (multi-select)
ASK_USER([{
id: "改进选择", type: "multi-select",
prompt: "请选择要应用的改进建议",
options: [
{ label: "EP-001: API Contract", description: "添加详细的请求/响应 schema 定义" },
{ label: "EP-002: User Intent", description: "明确用户需求优先级和验收标准" }
]
}]) // BLOCKS (wait for user response)
// Clarification questions (single-select, multi-round)
ASK_USER([
{
id: "用户意图", type: "select",
prompt: "MVP 阶段的核心目标是什么?",
options: [
{ label: "快速验证", description: "最小功能集,快速上线获取反馈" },
{ label: "技术壁垒", description: "完善架构,为长期发展打基础" },
{ label: "功能完整", description: "覆盖所有规划功能,延迟上线" }
]
}
]) // BLOCKS (wait for user response)
```
---
## Execution Phases
### Phase 1: Discovery & Validation
1. **Detect Session**: Use `--session` parameter or find `{projectRoot}/.workflow/active/WFS-*`
2. **Validate Files**:
- `guidance-specification.md` (optional, warn if missing)
- `*/analysis*.md` (required, error if empty)
3. **Load User Intent**: Extract from `workflow-session.json`
### Phase 2: Role Discovery & Path Preparation
**Main flow prepares file paths for the Agent** (sketched after this list):
1. **Discover Analysis Files**:
- Glob: `{projectRoot}/.workflow/active/WFS-{session}/.brainstorming/*/analysis*.md`
- Supports: analysis.md + analysis-{slug}.md (max 5)
2. **Extract Role Information**:
- `role_analysis_paths`: Relative paths
- `participating_roles`: Role names from directories
3. **Pass to Agent**: session_id, brainstorm_dir, role_analysis_paths, participating_roles
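A sketch of this discovery step, using the `Glob` pseudo-call and the path layout documented in the Output section:
```javascript
// Phase 2: discover analyses and derive participating roles
const pattern = `${projectRoot}/.workflow/active/${session_id}/.brainstorming/*/analysis*.md`;
const role_analysis_paths = Glob(pattern);
const participating_roles = [...new Set(
  role_analysis_paths.map(p => p.split('/').at(-2)) // parent directory = role name
)];
if (participating_roles.length === 0) {
  throw new Error('No role analyses found - run Phase 2 (role analysis) first');
}
```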
### Phase 3A: Analysis & Enhancement Agent
**Agent executes cross-role analysis**:
```javascript
// Spawn analysis agent
const analysisAgentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/conceptual-planning-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
## Agent Mission
Analyze role documents, identify conflicts/gaps, generate enhancement recommendations
## Input
- brainstorm_dir: ${brainstorm_dir}
- role_analysis_paths: ${role_analysis_paths}
- participating_roles: ${participating_roles}
## Flow Control Steps
1. load_session_metadata → Read workflow-session.json
2. load_role_analyses → Read all analysis files
3. cross_role_analysis → Identify consensus, conflicts, gaps, ambiguities
4. generate_recommendations → Format as EP-001, EP-002, ...
## Output Format
[
{
"id": "EP-001",
"title": "API Contract Specification",
"affected_roles": ["system-architect", "api-designer"],
"category": "Architecture",
"current_state": "High-level API descriptions",
"enhancement": "Add detailed contract definitions",
"rationale": "Enables precise implementation",
"priority": "High"
}
]
`
});
// Wait for analysis completion
const analysisResult = wait({
ids: [analysisAgentId],
timeout_ms: 600000 // 10 minutes
});
// Parse enhancement recommendations
const recommendations = JSON.parse(analysisResult.status[analysisAgentId]?.completed || '[]');
// Clean up agent
close_agent({ id: analysisAgentId });
```
### Phase 4: User Interaction
**All interactions via ASK_USER (Chinese questions)**
#### Step 1: Enhancement Selection
```javascript
// If enhancements > 4, split into multiple rounds
const enhancements = recommendations; // from Phase 3A
const BATCH_SIZE = 4;
const selectedEnhancements = [];
for (let i = 0; i < enhancements.length; i += BATCH_SIZE) {
  const batch = enhancements.slice(i, i + BATCH_SIZE);
  const selections = ASK_USER([{
    id: "改进选择", type: "multi-select",
    prompt: `请选择要应用的改进建议 (第${Math.floor(i / BATCH_SIZE) + 1}轮)`,
    options: batch.map(ep => ({
      label: `${ep.id}: ${ep.title}`,
      description: `影响: ${ep.affected_roles.join(', ')} | ${ep.enhancement}`
    }))
  }]); // BLOCKS (wait for user response)
  // Store selections before next round (map labels back to EP objects)
  selectedEnhancements.push(...batch.filter(ep =>
    selections.includes(`${ep.id}: ${ep.title}`)
  ));
}
// User can also skip: provide "跳过" option
```
#### Step 2: Clarification Questions
```javascript
// Generate questions based on 9-category taxonomy scan
// Categories: User Intent, Requirements, Architecture, UX, Feasibility, Risk, Process, Decisions, Terminology
const clarifications = [...]; // from analysis
const BATCH_SIZE = 4;
const userAnswers = {};
for (let i = 0; i < clarifications.length; i += BATCH_SIZE) {
const batch = clarifications.slice(i, i + BATCH_SIZE);
const currentRound = Math.floor(i / BATCH_SIZE) + 1;
const totalRounds = Math.ceil(clarifications.length / BATCH_SIZE);
  const responses = ASK_USER(batch.map(q => ({
    id: q.category.substring(0, 12), type: "select",
    prompt: q.question,
    options: q.options.map(opt => ({
      label: opt.label,
      description: opt.description
    }))
  }))); // BLOCKS (wait for user response)
  // Store answers before next round
  batch.forEach((q, idx) => {
    userAnswers[q.id] = {
      question: q.question,
      answer: responses[idx],
      category: q.category
    };
  });
}
```
### Question Guidelines
**Target**: Developers (technically fluent, but questions must start from user needs)
**Question Structure**: `[cross-role analysis finding] + [decision point needing clarification]`
**Option Structure**: `label: [concrete approach] + description: [business impact] + [technical trade-offs]`
**9-Category Taxonomy**:
| Category | Focus | Example Question Pattern |
|----------|-------|--------------------------|
| User Intent | User goals | "MVP阶段核心目标" + 验证/壁垒/完整性 |
| Requirements | Requirement refinement | "功能优先级如何排序?" + 核心/增强/可选 |
| Architecture | Architecture decisions | "技术栈选择考量?" + 熟悉度/先进性/成熟度 |
| UX | User experience | "交互复杂度取舍?" + 简洁/丰富/渐进 |
| Feasibility | Feasibility | "资源约束下的范围?" + 最小/标准/完整 |
| Risk | Risk tolerance | "风险容忍度?" + 保守/平衡/激进 |
| Process | Process cadence | "迭代节奏?" + 快速/稳定/灵活 |
| Decisions | Decision confirmation | "冲突解决方案?" + 方案A/方案B/折中 |
| Terminology | Terminology alignment | "统一使用哪个术语?" + 术语A/术语B |
**Quality Rules**:
**MUST Include**:
- ✅ All questions in Chinese (用中文提问)
- ✅ Grounded in concrete cross-role analysis findings
- ✅ Options carry business-impact notes
- ✅ Each question resolves an actual ambiguity or conflict
**MUST Avoid**:
- ❌ Generic questions unrelated to the role analyses
- ❌ Re-asking content already confirmed in the artifacts phase
- ❌ Overly fine-grained implementation-level questions
#### Step 3: Build Update Plan
```javascript
update_plan = {
"role1": {
"enhancements": ["EP-001", "EP-003"],
"clarifications": [
{"question": "...", "answer": "...", "category": "..."}
]
},
"role2": {
"enhancements": ["EP-002"],
"clarifications": [...]
}
}
```
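One way to assemble this structure from the Step 1 selections and Step 2 answers (a sketch; broadcasting every clarification to every role is an assumption, and a real mapping could be narrower):
```javascript
// Derive update_plan from Phase 4 selections and answers
const update_plan = {};
for (const role of participating_roles) {
  update_plan[role] = { enhancements: [], clarifications: [] };
}
for (const ep of selectedEnhancements) {         // EP objects chosen in Step 1
  for (const role of ep.affected_roles) {
    update_plan[role]?.enhancements.push(ep.id); // skip non-participating roles
  }
}
for (const ans of Object.values(userAnswers)) {  // answers gathered in Step 2
  for (const role of participating_roles) {
    update_plan[role].clarifications.push(ans);
  }
}
```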
### Phase 5: Parallel Document Update Agents
**Execute in parallel** (one agent per role):
```javascript
// Step 1: Spawn all update agents in parallel
const updateAgents = [];
participating_roles.forEach((role, index) => {
const role_plan = update_plan[role];
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/conceptual-planning-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
## Agent Mission
Apply enhancements and clarifications to ${role} analysis
## Input
- role: ${role}
- analysis_path: ${brainstorm_dir}/${role}/analysis.md
- enhancements: ${JSON.stringify(role_plan.enhancements)}
- clarifications: ${JSON.stringify(role_plan.clarifications)}
- original_user_intent: ${intent}
## Flow Control Steps
1. load_current_analysis → Read analysis file
2. add_clarifications_section → Insert Q&A section
3. apply_enhancements → Integrate into relevant sections
4. resolve_contradictions → Remove conflicts
5. enforce_terminology → Align terminology
6. validate_intent → Verify alignment with user intent
7. write_updated_file → Save changes
## Output
Updated ${role}/analysis.md
`
});
updateAgents.push({ agentId, role });
});
// Step 2: Batch wait for all update agents
const updateIds = updateAgents.map(a => a.agentId);
const updateResults = wait({
ids: updateIds,
timeout_ms: 900000 // 15 minutes for all updates
});
// Step 3: Check completion status
const updatedRoles = [];
const failedUpdates = [];
updateAgents.forEach(({ agentId, role }) => {
if (updateResults.status[agentId]?.completed) {
updatedRoles.push(role);
} else {
failedUpdates.push(role);
}
});
if (updateResults.timed_out) {
console.warn('Some role updates timed out:', failedUpdates);
}
// Step 4: Batch cleanup - IMPORTANT
updateAgents.forEach(({ agentId }) => {
close_agent({ id: agentId });
});
console.log(`Updated: ${updatedRoles.length}/${participating_roles.length} roles`);
```
**Agent Characteristics**:
- **Isolation**: Each agent updates exactly ONE role (parallel safe)
- **Dependencies**: Zero cross-agent dependencies
- **Validation**: All updates must align with original_user_intent
- **Lifecycle**: Explicit spawn → wait → close for each agent
### Phase 6: Finalization
#### Step 1: Update Context Package
```javascript
// Sync updated analyses into context-package.json
const context_pkg_path = `${projectRoot}/.workflow/active/WFS-{session}/.process/context-package.json`
const context_pkg = JSON.parse(Read(context_pkg_path))
// - Update guidance-specification entry if it exists
// - Update synthesis-specification entry if it exists
// - Re-read all role analysis files into the package
// - Refresh metadata timestamps
Write(context_pkg_path, JSON.stringify(context_pkg, null, 2))
```
#### Step 2: Update Session Metadata
```json
{
"phases": {
"BRAINSTORM": {
"status": "clarification_completed",
"clarification_completed": true,
"completed_at": "timestamp",
"participating_roles": [...],
"clarification_results": {
"enhancements_applied": ["EP-001", "EP-002"],
"questions_asked": 3,
"categories_clarified": ["Architecture", "UX"],
"roles_updated": ["role1", "role2"]
},
"quality_metrics": {
"user_intent_alignment": "validated",
"ambiguity_resolution": "complete",
"terminology_consistency": "enforced"
}
}
}
}
```
#### Step 3: Completion Report
```markdown
## ✅ Clarification Complete
**Enhancements Applied**: EP-001, EP-002, EP-003
**Questions Answered**: 3/5
**Roles Updated**: role1, role2, role3
### Next Steps
✅ PROCEED: `workflow:plan --session WFS-{session-id}`
```
---
## Output
**Location**: `{projectRoot}/.workflow/active/WFS-{session}/.brainstorming/[role]/analysis*.md`
**Updated Structure**:
```markdown
## Clarifications
### Session {date}
- **Q**: {question} (Category: {category})
**A**: {answer}
## {Existing Sections}
{Refined content based on clarifications}
```
**Changes**:
- User intent validated/corrected
- Requirements more specific/measurable
- Architecture with rationale
- Ambiguities resolved, placeholders removed
- Consistent terminology
---
## Quality Checklist
**Content**:
- ✅ All role analyses loaded/analyzed
- ✅ Cross-role analysis (consensus, conflicts, gaps)
- ✅ 9-category ambiguity scan
- ✅ Questions prioritized
**Analysis**:
- ✅ User intent validated
- ✅ Cross-role synthesis complete
- ✅ Ambiguities resolved
- ✅ Terminology consistent
**Documents**:
- ✅ Clarifications section formatted
- ✅ Sections reflect answers
- ✅ No placeholders (TODO/TBD)
- ✅ Valid Markdown
**Agent Lifecycle**:
- ✅ All spawn_agent have corresponding close_agent
- ✅ Analysis agent closed before update agents spawn
- ✅ All update agents closed after batch wait
- ✅ Error paths include agent cleanup
---
## Post-Phase Update
After Phase 3 completes:
- **Output Created**: `synthesis-specification.md`, updated role `analysis*.md` files
- **Cross-Role Integration**: All role insights synthesized and conflicts resolved
- **Agent Cleanup**: All agents (analysis + update) properly closed
- **Next Action**: Workflow complete, recommend `workflow:plan --session {session_id}`
- **State Update**: Update workflow-session.json with synthesis completion status
@@ -1,806 +0,0 @@
---
name: workflow-execute
description: Coordinate agent execution for workflow tasks with automatic session discovery, parallel task processing, and status tracking. Triggers on "workflow execute".
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---
# Workflow Execute (Codex Version)
Orchestrates autonomous workflow execution through systematic task discovery, agent coordination, and progress tracking. **Executes entire workflow without user interruption** (except initial session selection if multiple active sessions exist), providing complete context to agents and ensuring proper flow control execution with comprehensive TodoWrite tracking.
**Also available as**: `workflow:plan` Phase 4 — when running `workflow:plan` with `--yes` or selecting "Start Execution" after Phase 3, the core execution logic runs inline without needing a separate `workflow:execute` call. Use this standalone command for `--resume-session` scenarios or when invoking execution independently.
**Resume Mode**: When called with `--resume-session` flag, skips discovery phase and directly enters TodoWrite generation and agent execution for the specified session.
## Architecture Overview
```
┌──────────────────────────────────────────────────────────────┐
│ Workflow Execute Orchestrator (SKILL.md) │
│ → Parse args → Session discovery → Strategy → Execute tasks │
└──────────┬───────────────────────────────────────────────────┘
┌──────┴──────┐
│ Normal Mode │──→ Phase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5
│ Resume Mode │──→ Phase 3 → Phase 4 → Phase 5
└─────────────┘
┌──────┴──────────────────────────────────────────────┐
│ Phase 1: Discovery (session selection) │
│ Phase 2: Validation (planning doc checks) │
│ Phase 3: TodoWrite Gen (progress tracking init) │
│ Phase 4: Strategy+Execute (lazy load + agent loop) │
│ Phase 5: Completion (status sync + user choice)│
└─────────────────────────────────────────────────────┘
```
## Key Design Principles
1. **Autonomous Execution**: Complete entire workflow without user interruption
2. **Lazy Loading**: Task JSONs read on-demand during execution, not upfront
3. **ONE AGENT = ONE TASK JSON**: Each agent instance executes exactly one task JSON file
4. **IMPL_PLAN-Driven Strategy**: Execution model derived from planning document
5. **Continuous Progress Tracking**: TodoWrite updates throughout entire workflow
6. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait → close_agent
## Auto Mode
When `--yes` or `-y`:
- **Session Selection**: Automatically selects the first (most recent) active session
- **Completion Choice**: Automatically completes session (runs `workflow:session:complete --yes`)
When `--with-commit`:
- **Auto-Commit**: After each agent task completes, commit changes based on summary document
- **Commit Principle**: Minimal commits - only commit files modified by the completed task
- **Commit Message**: Generated from task summary with format: "feat/fix/refactor: {task-title} - {summary}"
## Usage
```
codex -p "@.codex/prompts/workflow-execute.md"
codex -p "@.codex/prompts/workflow-execute.md [FLAGS]"
# Flags
-y, --yes Skip all confirmations (auto mode)
--resume-session="<session-id>" Skip discovery, resume specified session
--with-commit Auto-commit after each task completion
# Examples
codex -p "@.codex/prompts/workflow-execute.md" # Interactive mode
codex -p "@.codex/prompts/workflow-execute.md --yes" # Auto mode
codex -p "@.codex/prompts/workflow-execute.md --resume-session=\"WFS-auth\"" # Resume specific session
codex -p "@.codex/prompts/workflow-execute.md -y --resume-session=\"WFS-auth\"" # Auto + resume
codex -p "@.codex/prompts/workflow-execute.md --with-commit" # With auto-commit
codex -p "@.codex/prompts/workflow-execute.md -y --with-commit" # Auto + commit
codex -p "@.codex/prompts/workflow-execute.md -y --with-commit --resume-session=\"WFS-auth\"" # All flags
```
## Execution Flow
```
Normal Mode:
Phase 1: Discovery
├─ Count active sessions
└─ Decision:
├─ count=0 → ERROR: No active sessions
├─ count=1 → Auto-select session → Phase 2
└─ count>1 → ASK_USER (max 4 options) → Phase 2
Phase 2: Planning Document Validation
├─ Check IMPL_PLAN.md exists
├─ Check TODO_LIST.md exists
└─ Validate .task/ contains IMPL-*.json files
Phase 3: TodoWrite Generation
├─ Update session status to "active" (Step 0)
├─ Parse TODO_LIST.md for task statuses
├─ Generate TodoWrite for entire workflow
└─ Prepare session context paths
Phase 4: Execution Strategy & Task Execution
├─ Step 4A: Parse execution strategy from IMPL_PLAN.md
└─ Step 4B: Execute tasks with lazy loading
└─ Loop:
├─ Get next in_progress task from TodoWrite
├─ Lazy load task JSON
├─ Launch agent (spawn_agent → wait → close_agent)
├─ Mark task completed (update IMPL-*.json status)
│ # Quick fix: Update task status for ccw dashboard
│ # TS=$(date -Iseconds) && jq --arg ts "$TS" '.status="completed" | .status_history=(.status_history // [])+[{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
├─ [with-commit] Commit changes based on summary (minimal principle)
│ # Read summary from .summaries/IMPL-X-summary.md
│ # Extract changed files from summary's "Files Modified" section
│ # Generate commit message: "feat/fix/refactor: {task-title} - {summary}"
│ # git add <changed-files> && git commit -m "<commit-message>"
└─ Advance to next task
Phase 5: Completion
├─ Update task statuses in JSON files
├─ Generate summaries
└─ ASK_USER: Choose next step
├─ "Enter Review" → workflow:review
└─ "Complete Session" → workflow:session:complete
Resume Mode (--resume-session):
├─ Skip Phase 1 & Phase 2
└─ Entry Point: Phase 3 (TodoWrite Generation)
├─ Update session status to "active" (if not already)
└─ Continue: Phase 4 → Phase 5
```
## Core Rules
**Complete entire workflow autonomously without user interruption, using TodoWrite for comprehensive progress tracking.**
**Execute all discovered pending tasks until workflow completion or blocking dependency.**
**User-choice completion: When all tasks finished, ask user to choose review or complete.**
**ONE AGENT = ONE TASK JSON: Each agent instance executes exactly one task JSON file - never batch multiple tasks into single agent execution.**
**Explicit Lifecycle: Always close_agent after wait completes to free resources.**
## Subagent API Reference
### spawn_agent
Create a new subagent with task assignment.
```javascript
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
## TASK CONTEXT
${taskContext}
## DELIVERABLES
${deliverables}
`
})
```
### wait
Get results from subagent (only way to retrieve results).
```javascript
const result = wait({
ids: [agentId],
timeout_ms: 600000 // 10 minutes
})
if (result.timed_out) {
// Handle timeout - can continue waiting or send_input to prompt completion
}
```
### send_input
Continue interaction with active subagent (for clarification or follow-up).
```javascript
send_input({
id: agentId,
message: `
## CLARIFICATION ANSWERS
${answers}
## NEXT STEP
Continue with implementation.
`
})
```
### close_agent
Clean up subagent resources (irreversible).
```javascript
close_agent({ id: agentId })
```
## Core Responsibilities
- **Session Discovery**: Identify and select active workflow sessions
- **Execution Strategy Parsing**: Extract execution model from IMPL_PLAN.md
- **TodoWrite Progress Tracking**: Maintain real-time execution status throughout entire workflow
- **Agent Orchestration**: Coordinate specialized agents with complete context via spawn_agent lifecycle
- **Status Synchronization**: Update task JSON files and workflow state
- **Autonomous Completion**: Continue execution until all tasks complete or reach blocking state
- **Session User-Choice Completion**: Ask user to choose review or complete when all tasks finished
## Execution Philosophy
- **Progress tracking**: Continuous TodoWrite updates throughout entire workflow execution
- **Autonomous completion**: Execute all tasks without user interruption until workflow complete
## Performance Optimization Strategy
**Lazy Loading**: Task JSONs read **on-demand** during execution, not upfront. TODO_LIST.md + IMPL_PLAN.md provide metadata for planning.
**Loading Strategy**:
- **TODO_LIST.md**: Read in Phase 3 (task metadata, status, dependencies for TodoWrite generation)
- **IMPL_PLAN.md**: Check existence in Phase 2 (normal mode), parse execution strategy in Phase 4A
- **Task JSONs**: Lazy loading - read only when task is about to execute (Phase 4B)
## Execution Lifecycle
### Phase 1: Discovery
**Applies to**: Normal mode only (skipped in resume mode)
**Purpose**: Find and select active workflow session with user confirmation when multiple sessions exist
**Process**:
#### Step 1.1: Count Active Sessions
```bash
bash(find ${projectRoot}/.workflow/active/ -name "WFS-*" -type d 2>/dev/null | wc -l)
```
#### Step 1.2: Handle Session Selection
**Case A: No Sessions** (count = 0)
```
ERROR: No active workflow sessions found
Run workflow:plan "task description" to create a session
```
**Case B: Single Session** (count = 1)
```bash
bash(find ${projectRoot}/.workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
```
Auto-select and continue to Phase 2.
**Case C: Multiple Sessions** (count > 1)
List sessions with metadata and prompt user selection:
```bash
bash(for dir in ${projectRoot}/.workflow/active/WFS-*/; do [ -d "$dir" ] || continue; session=$(basename "$dir"); project=$(jq -r '.project // "Unknown"' "${dir}workflow-session.json" 2>/dev/null || echo "Unknown"); total=$(grep -c '^\- \[' "${dir}TODO_LIST.md" 2>/dev/null || echo 0); completed=$(grep -c '^\- \[x\]' "${dir}TODO_LIST.md" 2>/dev/null || echo 0); if [ "$total" -gt 0 ]; then progress=$((completed * 100 / total)); else progress=0; fi; echo "$session | $project | $completed/$total tasks ($progress%)"; done)
```
**Parse --yes flag**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
```
**Conditional Selection**:
```javascript
if (autoYes) {
// Auto mode: Select first session (most recent)
const firstSession = sessions[0]
console.log(`[--yes] Auto-selecting session: ${firstSession.id}`)
selectedSessionId = firstSession.id
// Continue to Phase 2
} else {
// Interactive mode: Use ASK_USER to present formatted options (max 4 options shown)
// If more than 4 sessions, show most recent 4 with "Other" option for manual input
const sessions = getActiveSessions() // sorted by last modified
const displaySessions = sessions.slice(0, 4)
ASK_USER([{
id: "session", type: "select",
prompt: "Multiple active sessions detected. Select one:",
options: displaySessions.map(s => ({
label: s.id,
description: `${s.project} | ${s.progress}`
}))
// Note: User can select "Other" to manually enter session ID
}]) // BLOCKS (wait for user response)
}
```
**Input Validation**:
- If user selects from options: Use selected session ID
- If user selects "Other" and provides input: Validate session exists
- If validation fails: Show error and re-prompt or suggest available sessions
Parse user input (supports: number "1", full ID "WFS-auth-system", or partial "auth"), validate selection, and continue to Phase 2.
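A minimal sketch of that selection matcher, assuming `sessions` is the sorted list from Step 1.2 (the function name is illustrative):
```javascript
function resolveSessionSelection(input, sessions) {
  const byIndex = Number.parseInt(input, 10);                 // number: "1"
  if (!Number.isNaN(byIndex) && sessions[byIndex - 1]) return sessions[byIndex - 1].id;
  const exact = sessions.find(s => s.id === input);           // full ID: "WFS-auth-system"
  if (exact) return exact.id;
  const partial = sessions.filter(s => s.id.includes(input)); // partial: "auth"
  return partial.length === 1 ? partial[0].id : null;         // ambiguous or no match → re-prompt
}
```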
#### Step 1.3: Load Session Metadata
```bash
bash(cat ${projectRoot}/.workflow/active/${sessionId}/workflow-session.json)
```
**Output**: Store session metadata in memory
**DO NOT read task JSONs yet** - defer until execution phase (lazy loading)
**Resume Mode**: This entire phase is skipped when `--resume-session="session-id"` flag is provided.
### Phase 2: Planning Document Validation
**Applies to**: Normal mode only (skipped in resume mode)
**Purpose**: Validate planning artifacts exist before execution
**Process**:
1. **Check IMPL_PLAN.md**: Verify file exists (defer detailed parsing to Phase 4A)
2. **Check TODO_LIST.md**: Verify file exists (defer reading to Phase 3)
3. **Validate Task Directory**: Ensure `.task/` contains at least one IMPL-*.json file
**Key Optimization**: Only existence checks here. Actual file reading happens in later phases.
**Resume Mode**: This phase is skipped when `--resume-session` flag is provided. Resume mode entry point is Phase 3.
### Phase 3: TodoWrite Generation
**Applies to**: Both normal and resume modes (resume mode entry point)
**Step 0: Update Session Status to Active**
Before generating TodoWrite, update session status from "planning" to "active":
```bash
# Update session status (idempotent - safe to run if already active)
jq '.status = "active" | .execution_started_at = (.execution_started_at // (now | todate))' \
${projectRoot}/.workflow/active/${sessionId}/workflow-session.json > tmp.json && \
mv tmp.json ${projectRoot}/.workflow/active/${sessionId}/workflow-session.json
```
This ensures the dashboard shows the session as "ACTIVE" during execution.
**Process**:
1. **Create TodoWrite List**: Generate task list from TODO_LIST.md (not from task JSONs)
- Parse TODO_LIST.md to extract all tasks with current statuses
- Identify first pending task with met dependencies
- Generate comprehensive TodoWrite covering entire workflow
2. **Prepare Session Context**: Inject workflow paths for agent use (using provided session-id)
3. **Validate Prerequisites**: Ensure IMPL_PLAN.md and TODO_LIST.md exist and are valid
**Resume Mode Behavior**:
- Load existing TODO_LIST.md directly from `{projectRoot}/.workflow/active/{session-id}/`
- Extract current progress from TODO_LIST.md
- Generate TodoWrite from TODO_LIST.md state
- Proceed immediately to agent execution (Phase 4)
### Phase 4: Execution Strategy Selection & Task Execution
**Applies to**: Both normal and resume modes
**Step 4A: Parse Execution Strategy from IMPL_PLAN.md**
Read IMPL_PLAN.md Section 4 to extract:
- **Execution Model**: Sequential | Parallel | Phased | TDD Cycles
- **Parallelization Opportunities**: Which tasks can run in parallel
- **Serialization Requirements**: Which tasks must run sequentially
- **Critical Path**: Priority execution order
If IMPL_PLAN.md lacks execution strategy, use intelligent fallback (analyze task structure).
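A hedged sketch of the Section 4 extraction; the heading format and keyword list are assumptions about how IMPL_PLAN.md is written:
```javascript
function parseExecutionModel(implPlanText) {
  const section = implPlanText.split(/^##+\s*4\b/m)[1] ?? '';  // text after the "## 4 ..." heading
  const match = section.match(/Execution Model[^\n]*?(Sequential|Parallel|Phased|TDD)/i);
  return match ? match[1] : null;                              // null → intelligent fallback
}
```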
**Step 4B: Execute Tasks with Lazy Loading**
**Key Optimization**: Read task JSON **only when needed** for execution
**Execution Loop Pattern**:
```
while (TODO_LIST.md has pending tasks) {
next_task_id = getTodoWriteInProgressTask()
task_json = Read(${projectRoot}/.workflow/active/{session}/.task/{next_task_id}.json) // Lazy load
executeTaskWithAgent(task_json) // spawn_agent → wait → close_agent
updateTodoListMarkCompleted(next_task_id)
advanceTodoWriteToNextTask()
}
```
**Execution Process per Task**:
1. **Identify Next Task**: From TodoWrite, get the next `in_progress` task ID
2. **Load Task JSON on Demand**: Read `.task/{task-id}.json` for current task ONLY
3. **Validate Task Structure**: Ensure all 5 required fields exist (id, title, status, meta, context, flow_control)
4. **Launch Agent**: Invoke specialized agent via spawn_agent with complete context including flow control steps
5. **Wait for Completion**: wait for agent result, handle timeout
6. **Close Agent**: close_agent to free resources
7. **Collect Results**: Gather implementation results and outputs
8. **[with-commit] Auto-Commit**: If `--with-commit` flag enabled, commit changes based on summary
- Read summary from `.summaries/{task-id}-summary.md`
- Extract changed files from summary's "Files Modified" section
- Determine commit type from `meta.type` (feature→feat, bugfix→fix, refactor→refactor)
- Generate commit message: "{type}: {task-title} - {summary-first-line}"
- Commit only modified files (minimal principle): `git add <files> && git commit -m "<message>"`
9. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat
**Note**: TODO_LIST.md updates are handled by agents (e.g., code-developer.md), not by the orchestrator.
### Phase 5: Completion
**Applies to**: Both normal and resume modes
**Process**:
1. **Update Task Status**: Mark completed tasks in JSON files
2. **Generate Summary**: Create task summary in `.summaries/`
3. **Update TodoWrite**: Mark current task complete, advance to next
4. **Synchronize State**: Update session state and workflow status
5. **Check Workflow Complete**: Verify all tasks are completed
6. **User Choice**: When all tasks finished, ask user to choose next step:
```javascript
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Complete session automatically
console.log(`[--yes] Auto-selecting: Complete Session`)
// Execute: workflow:session:complete --yes
} else {
// Interactive mode: Ask user
ASK_USER([{
id: "next_step", type: "select",
prompt: "All tasks completed. What would you like to do next?",
options: [
{
label: "Enter Review",
description: "Run specialized review (security/architecture/quality/action-items)"
},
{
label: "Complete Session",
description: "Archive session and update manifest"
}
]
}]) // BLOCKS (wait for user response)
}
```
**Based on user selection**:
- **"Enter Review"**: Execute `workflow:review`
- **"Complete Session"**: Execute `workflow:session:complete`
### Post-Completion Expansion
After completion, ask the user whether to expand follow-up work into issues (test/enhance/refactor/doc); for each selected dimension, invoke `issue:new "{summary} - {dimension}"`.
## Execution Strategy (IMPL_PLAN-Driven)
### Strategy Priority
**IMPL_PLAN-Driven Execution (Recommended)**:
1. **Read IMPL_PLAN.md execution strategy** (Section 4: Implementation Strategy)
2. **Follow explicit guidance**:
- Execution Model (Sequential/Parallel/Phased/TDD)
- Parallelization Opportunities (which tasks can run in parallel)
- Serialization Requirements (which tasks must run sequentially)
- Critical Path (priority execution order)
3. **Use TODO_LIST.md for status tracking** only
4. **IMPL_PLAN decides "HOW"**, execute implements it
**Intelligent Fallback (When IMPL_PLAN lacks execution details)**:
1. **Analyze task structure**:
- Check `meta.execution_group` in task JSONs
- Analyze `depends_on` relationships
- Understand task complexity and risk
2. **Apply smart defaults**:
- No dependencies + same execution_group → Parallel
- Has dependencies → Sequential (wait for deps)
- Critical/high-risk tasks → Sequential
3. **Conservative approach**: When uncertain, prefer sequential execution
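The smart defaults above as a sketch; `depends_on` and `meta.execution_group` come from the task JSON schema, while the `meta.risk` field is an assumption:
```javascript
function defaultExecutionMode(task, groupPeers /* tasks sharing task.meta.execution_group */) {
  if ((task.depends_on ?? []).length > 0) return 'sequential'; // has dependencies → wait for deps
  if (task.meta?.risk === 'high') return 'sequential';         // critical/high-risk → sequential
  return groupPeers.length > 1 ? 'parallel' : 'sequential';    // independent peers → parallel
}
```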
### Execution Models
#### 1. Sequential Execution
**When**: IMPL_PLAN specifies "Sequential" OR no clear parallelization guidance
**Pattern**: Execute tasks one by one in TODO_LIST order via spawn_agent → wait → close_agent per task
**TodoWrite**: ONE task marked as `in_progress` at a time
#### 2. Parallel Execution
**When**: IMPL_PLAN specifies "Parallel" with clear parallelization opportunities
**Pattern**: Execute independent task groups concurrently by spawning multiple agents and batch waiting
**TodoWrite**: MULTIPLE tasks (in same batch) marked as `in_progress` simultaneously
**Agent Instantiation**: spawn one agent per task (respects ONE AGENT = ONE TASK JSON rule), then batch wait
**Parallel Execution Pattern**:
```javascript
// Step 1: Spawn agents for parallel batch
const batchAgents = [];
parallelTasks.forEach(task => {
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/${task.meta.agent}.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
Implement task ${task.id}: ${task.title}
[FLOW_CONTROL]
**Input**:
- Task JSON: ${session.task_json_path}
- Context Package: ${session.context_package_path}
**Output Location**:
- Workflow: ${session.workflow_dir}
- TODO List: ${session.todo_list_path}
- Summaries: ${session.summaries_dir}
**Execution**: Read task JSON → Execute pre_analysis → Check execution_config.method → (CLI: handoff to CLI tool | Agent: direct implementation) → Update TODO_LIST.md → Generate summary
`
});
batchAgents.push({ agentId, taskId: task.id });
});
// Step 2: Batch wait for all agents
const batchResult = wait({
ids: batchAgents.map(a => a.agentId),
timeout_ms: 600000
});
// Step 3: Check results and handle timeouts
if (batchResult.timed_out) {
console.log('Some parallel tasks timed out, continuing with completed results');
}
// Step 4: Close all agents
batchAgents.forEach(a => close_agent({ id: a.agentId }));
```
#### 3. Phased Execution
**When**: IMPL_PLAN specifies "Phased" with phase breakdown
**Pattern**: Execute tasks in phases, respect phase boundaries
**TodoWrite**: Within each phase, follow Sequential or Parallel rules
#### 4. Intelligent Fallback
**When**: IMPL_PLAN lacks execution strategy details
**Pattern**: Analyze task structure and apply smart defaults
**TodoWrite**: Follow Sequential or Parallel rules based on analysis
### Task Status Logic
```
pending + dependencies_met → executable
completed → skip
blocked → skip until dependencies clear
```
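The same logic as a predicate, assuming `completedIds` is a `Set` of finished task IDs:
```javascript
function isExecutable(task, completedIds) {
  if (task.status !== 'pending') return false;                      // completed/blocked → skip
  return (task.depends_on ?? []).every(id => completedIds.has(id)); // dependencies_met
}
```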
## TodoWrite Coordination
### TodoWrite Rules (Unified)
**Rule 1: Initial Creation**
- **Normal Mode**: Generate TodoWrite from discovered pending tasks for entire workflow
- **Resume Mode**: Generate from existing session state and current progress
**Rule 2: In-Progress Task Count (Execution-Model-Dependent)**
- **Sequential execution**: Mark ONLY ONE task as `in_progress` at a time
- **Parallel batch execution**: Mark ALL tasks in current batch as `in_progress` simultaneously
- **Execution group indicator**: Show `[execution_group: group-id]` for parallel tasks
**Rule 3: Status Updates**
- **Immediate Updates**: Update status after each task/batch completion without user interruption
- **Status Synchronization**: Sync with JSON task files after updates
- **Continuous Tracking**: Maintain TodoWrite throughout entire workflow execution until completion
**Rule 4: Workflow Completion Check**
- When all tasks marked `completed`, prompt user to choose review or complete session
### TodoWrite Tool Usage
**Example 1: Sequential Execution**
```javascript
TodoWrite({
todos: [
{
content: "Execute IMPL-1.1: Design auth schema [code-developer] [FLOW_CONTROL]",
status: "in_progress", // ONE task in progress
activeForm: "Executing IMPL-1.1: Design auth schema"
},
{
content: "Execute IMPL-1.2: Implement auth logic [code-developer] [FLOW_CONTROL]",
status: "pending",
activeForm: "Executing IMPL-1.2: Implement auth logic"
}
]
});
```
**Example 2: Parallel Batch Execution**
```javascript
TodoWrite({
todos: [
{
content: "Execute IMPL-1.1: Build Auth API [code-developer] [execution_group: parallel-auth-api]",
status: "in_progress", // Batch task 1
activeForm: "Executing IMPL-1.1: Build Auth API"
},
{
content: "Execute IMPL-1.2: Build User UI [code-developer] [execution_group: parallel-ui-comp]",
status: "in_progress", // Batch task 2 (running concurrently)
activeForm: "Executing IMPL-1.2: Build User UI"
},
{
content: "Execute IMPL-1.3: Setup Database [code-developer] [execution_group: parallel-db-schema]",
status: "in_progress", // Batch task 3 (running concurrently)
activeForm: "Executing IMPL-1.3: Setup Database"
},
{
content: "Execute IMPL-2.1: Integration Tests [test-fix-agent] [depends_on: IMPL-1.1, IMPL-1.2, IMPL-1.3]",
status: "pending", // Next batch (waits for current batch completion)
activeForm: "Executing IMPL-2.1: Integration Tests"
}
]
});
```
## Agent Execution Pattern
### Flow Control Execution
**[FLOW_CONTROL]** marker indicates task JSON contains `flow_control.pre_analysis` steps for context preparation.
**Note**: Orchestrator does NOT execute flow control steps - Agent interprets and executes them autonomously.
### Agent Prompt Template (Sequential)
**Path-Based Invocation**: Pass paths and trigger markers, let agent parse task JSON autonomously.
```javascript
// Step 1: Spawn agent
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/${meta.agent}.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
Implement task ${task.id}: ${task.title}
[FLOW_CONTROL]
**Input**:
- Task JSON: ${session.task_json_path}
- Context Package: ${session.context_package_path}
**Output Location**:
- Workflow: ${session.workflow_dir}
- TODO List: ${session.todo_list_path}
- Summaries: ${session.summaries_dir}
**Execution**: Read task JSON → Execute pre_analysis → Check execution_config.method → (CLI: handoff to CLI tool | Agent: direct implementation) → Update TODO_LIST.md → Generate summary
`
});
// Step 2: Wait for completion
const result = wait({
ids: [agentId],
timeout_ms: 600000 // 10 minutes per task
});
// Step 3: Close agent (IMPORTANT: always close)
close_agent({ id: agentId });
```
**Key Markers**:
- `Implement` keyword: Triggers tech stack detection and guidelines loading
- `[FLOW_CONTROL]`: Triggers flow_control.pre_analysis execution
**Why Path-Based**: Agent (code-developer.md) autonomously:
- Reads and parses task JSON (requirements, acceptance, flow_control, execution_config)
- Executes pre_analysis steps (Phase 1: context gathering)
- Checks execution_config.method (Phase 2: determine mode)
- CLI mode: Builds handoff prompt and executes via ccw cli with resume strategy
- Agent mode: Directly implements using modification_points and logic_flow
- Generates structured summary with integration points
Embedding task content in prompt creates duplication and conflicts with agent's parsing logic.
### Agent Assignment Rules
```
meta.agent specified → Use specified agent
meta.agent missing → Infer from meta.type:
- "feature" → @code-developer (role: ~/.codex/agents/code-developer.md)
- "test-gen" → @code-developer (role: ~/.codex/agents/code-developer.md)
- "test-fix" → @test-fix-agent (role: ~/.codex/agents/test-fix-agent.md)
- "review" → @universal-executor (role: ~/.codex/agents/universal-executor.md)
- "docs" → @doc-generator (role: ~/.codex/agents/doc-generator.md)
```
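A sketch of the inference rule; the mapping transcribes the table above, and the final fallback to `code-developer` is an assumption:
```javascript
const TYPE_TO_AGENT = {
  'feature': 'code-developer',
  'test-gen': 'code-developer',
  'test-fix': 'test-fix-agent',
  'review': 'universal-executor',
  'docs': 'doc-generator'
};
function resolveAgent(meta) {
  return meta.agent ?? TYPE_TO_AGENT[meta.type] ?? 'code-developer'; // fallback is an assumption
}
```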
## Data Flow
```
Phase 1 (Discovery) → selectedSessionId, sessionMetadata
Phase 2 (Validation) → validated paths (IMPL_PLAN.md, TODO_LIST.md, .task/)
Phase 3 (TodoWrite Gen) → todoWriteList, sessionContextPaths
Phase 4 (Execute) → per-task: taskJson (lazy), spawn_agent → wait → close_agent, summaryDoc
Phase 5 (Completion) → updatedStatuses, userChoice (review|complete)
```
## Workflow File Structure Reference
```
{projectRoot}/.workflow/active/WFS-[topic-slug]/
├── workflow-session.json # Session state and metadata
├── IMPL_PLAN.md # Planning document and requirements
├── TODO_LIST.md # Progress tracking (updated by agents)
├── .task/ # Task definitions (JSON only)
│ ├── IMPL-1.json # Main task definitions
│ └── IMPL-1.1.json # Subtask definitions
├── .summaries/ # Task completion summaries
│ ├── IMPL-1-summary.md # Task completion details
│ └── IMPL-1.1-summary.md # Subtask completion details
└── .process/ # Planning artifacts
├── context-package.json # Smart context package
└── ANALYSIS_RESULTS.md # Planning analysis results
```
## Auto-Commit Mode (--with-commit)
**Behavior**: After each agent task completes, automatically commit changes based on summary document.
**Minimal Principle**: Only commit files modified by the completed task.
**Commit Message Format**: `{type}: {task-title} - {summary}`
**Type Mapping** (from `meta.type`):
- `feature``feat` | `bugfix``fix` | `refactor``refactor`
- `test-gen``test` | `docs``docs` | `review``chore`
**Implementation**:
```bash
# 1. Read summary from .summaries/{task-id}-summary.md
# 2. Extract files from "Files Modified" section
# 3. Commit: git add <files> && git commit -m "{type}: {title} - {summary}"
```
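A hedged sketch of that helper; the summary layout (a markdown list under a "Files Modified" heading) and the heading-skip when extracting the summary line are assumptions:
```javascript
const TYPE_MAP = { feature: 'feat', bugfix: 'fix', refactor: 'refactor', 'test-gen': 'test', docs: 'docs', review: 'chore' };
function autoCommit(task, summaryText) {
  const section = (summaryText.split(/##+\s*Files Modified/i)[1] ?? '').split(/\n##/)[0];
  const files = [...section.matchAll(/^[-*]\s+`?([^\s`]+)`?/gm)].map(m => m[1]);
  if (files.length === 0) return; // no changes or missing summary → skip commit
  const summaryLine = (summaryText.split('\n').find(l => l.trim() && !l.startsWith('#')) ?? '').trim();
  const msg = `${TYPE_MAP[task.meta.type] ?? 'chore'}: ${task.title} - ${summaryLine}`;
  Bash(`git add ${files.join(' ')} && git commit -m "${msg.replace(/"/g, '\\"')}"`); // minimal commit
}
```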
**Error Handling**: Skip commit on no changes/missing summary, log errors, continue workflow.
## Error Handling & Recovery
### Common Errors & Recovery
| Error Type | Cause | Recovery Strategy | Max Attempts |
|-----------|-------|------------------|--------------|
| **Discovery Errors** | | | |
| No active session | No sessions in `{projectRoot}/.workflow/active/` | Create or resume session: `workflow:plan "project"` | N/A |
| Multiple sessions | Multiple sessions in `{projectRoot}/.workflow/active/` | Prompt user selection | N/A |
| Corrupted session | Invalid JSON files | Recreate session structure or validate files | N/A |
| **Execution Errors** | | | |
| Agent failure | Agent crash/timeout | Retry with simplified context (close_agent first, then spawn new) | 2 |
| Flow control error | Command failure | Skip optional, fail critical | 1 per step |
| Context loading error | Missing dependencies | Reload from JSON, use defaults | 3 |
| JSON file corruption | File system issues | Restore from backup/recreate | 1 |
| **Lifecycle Errors** | | | |
| Agent timeout | wait timed out | send_input to prompt completion, or close_agent and retry | 2 |
| Orphaned agent | Agent not closed after error | Ensure close_agent in error paths | N/A |
### Error Prevention
- **Pre-flight Checks**: Validate session integrity before execution
- **Backup Strategy**: Create task snapshots before major operations
- **Atomic Updates**: Update JSON files atomically to prevent corruption
- **Dependency Validation**: Check all depends_on references exist
- **Context Verification**: Ensure all required context is available
- **Lifecycle Cleanup**: Always close_agent in both success and error paths
### Error Recovery with Lifecycle Management
```javascript
// Safe agent execution pattern with error handling
let agentId = null;
try {
agentId = spawn_agent({ message: taskPrompt });
const result = wait({ ids: [agentId], timeout_ms: 600000 });
if (result.timed_out) {
// Option 1: Send prompt to complete
send_input({ id: agentId, message: "Please wrap up and generate summary." });
const retryResult = wait({ ids: [agentId], timeout_ms: 120000 });
}
// Process results...
close_agent({ id: agentId });
} catch (error) {
// Ensure cleanup on error
if (agentId) close_agent({ id: agentId });
// Handle error (retry or skip task)
}
```
## Flag Parsing
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const withCommit = $ARGUMENTS.includes('--with-commit')
```
## Related Commands
**Prerequisite Commands**:
- `workflow:plan` - Create workflow session with planning documents
**Follow-up Commands**:
- `workflow:review` - Run specialized post-implementation review
- `workflow:session:complete` - Archive session and update manifest
- `issue:new` - Create issue for post-completion expansion
@@ -1,187 +0,0 @@
---
name: workflow-lite-plan-execute
description: Lightweight planning + execution workflow. Serial CLI exploration → Search verification → Clarification → Planning → .task/*.json multi-file output → Execution via unified-execute.
allowed-tools: AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep, mcp__ace-tool__search_context
---
# Planning Workflow
Lite Plan produces `plan.json` (plan overview) + `.task/TASK-*.json` (one file per task) implementation plan via serial CLI exploration and direct planning, then hands off to unified-execute-with-file for task execution.
> **Schema**: `cat ~/.ccw/workflows/cli-templates/schemas/task-schema.json`
## Key Design Principles
1. **Serial Execution**: All phases execute serially inline, no agent delegation
2. **CLI Exploration**: Multi-angle codebase exploration via `ccw cli` calls (default gemini, fallback claude)
3. **Search Verification**: Verify CLI findings with ACE search / Grep / Glob before incorporating
4. **Multi-File Task Output**: Produces `.task/TASK-*.json` (one file per task) compatible with `collaborative-plan-with-file` and `unified-execute-with-file`
5. **Progressive Phase Loading**: Only load phase docs when about to execute
## Auto Mode
When `--yes` or `-y`:
- Auto-approve plan and use default execution settings
- Skip non-critical clarifications; still ask minimal clarifications if required for safety/correctness
## Usage (Pseudo)
This section describes the skill input shape; actual invocation depends on the host runtime.
```
$workflow-lite-plan-execute <task description>
$workflow-lite-plan-execute [FLAGS] "<task description or file path>"
# Flags
-y, --yes Skip confirmations (auto mode)
-e, --explore Force exploration phase
```
Examples:
```
$workflow-lite-plan-execute "Implement JWT authentication"
$workflow-lite-plan-execute -y "Add user profile page"
$workflow-lite-plan-execute -e "Refactor payment module"
$workflow-lite-plan-execute "docs/todo.md"
```
> **Implementation sketch**: The orchestrator invokes this internally via the `$workflow-lite-plan-execute "..."` interface; this is pseudocode notation, not command-line syntax.
## Phase Reference Documents (Read On Demand)
| Phase | Document | Purpose |
|-------|----------|---------|
| 1 | `phases/01-lite-plan.md` | Serial CLI exploration, clarification, plan generation → .task/TASK-*.json |
| 2 | `phases/02-lite-execute.md` | Handoff to unified-execute-with-file for task execution |
## Orchestrator Logic
```javascript
// Flag parsing
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const forceExplore = $ARGUMENTS.includes('--explore') || $ARGUMENTS.includes('-e')
// Task extraction rule:
// - Strip known flags: -y/--yes, -e/--explore
// - Remaining args are joined as the task description
// - Treat it as a file path ONLY if (a) exactly one arg remains AND (b) the path exists
function extractTaskDescription(args) {
const knownFlags = new Set(['--yes', '-y', '--explore', '-e'])
const rest = args.filter(a => !knownFlags.has(a))
if (rest.length === 1 && file_exists(rest[0])) return rest[0]
return rest.join(' ').trim()
}
const taskDescription = extractTaskDescription($ARGUMENTS)
// Phase 1: Lite Plan → .task/TASK-*.json
Read('phases/01-lite-plan.md')
// Execute planning phase...
// Gate: only continue when confirmed (or --yes)
if (planResult?.userSelection?.confirmation !== 'Allow' && !autoYes) {
// Stop: user cancelled or requested modifications
return
}
// Phase 2: Handoff to unified-execute-with-file
Read('phases/02-lite-execute.md')
// Invoke unified-execute-with-file with .task/ directory path
```
## Output Contract
Phase 1 produces `plan.json` (plan overview, following `plan-overview-base-schema.json`) + `.task/TASK-*.json` (one file per task) — compatible with `collaborative-plan-with-file` and consumable by `unified-execute-with-file`.
> **Schema**: `cat ~/.ccw/workflows/cli-templates/schemas/task-schema.json`
**Output Directory**: `{projectRoot}/.workflow/.lite-plan/{session-id}/`
```
{projectRoot}/.workflow/.lite-plan/{session-id}/
├── exploration-{angle1}.md # Per-angle CLI exploration results
├── exploration-{angle2}.md # (1-4 files based on complexity)
├── explorations-manifest.json # Exploration index
├── exploration-notes.md # Synthesized exploration notes
├── requirement-analysis.json # Complexity assessment
├── plan.json # ⭐ Plan overview (plan-overview-base-schema.json)
├── .task/ # Task JSON files (one per task)
│ ├── TASK-001.json # Individual task definition
│ ├── TASK-002.json
│ └── ...
└── plan.md # Human-readable summary
```
**Task JSON Format** (one file per task, following task-schema.json):
```javascript
// File: .task/TASK-001.json
{
"id": "TASK-001",
"title": "string",
"description": "string",
"type": "feature|fix|refactor|enhancement|testing|infrastructure",
"priority": "high|medium|low",
"effort": "small|medium|large",
"scope": "string",
"depends_on": ["TASK-xxx"],
"convergence": {
"criteria": ["string"], // Testable conditions
"verification": "string", // Executable command or manual steps
"definition_of_done": "string" // Business language
},
"files": [{
"path": "string",
"action": "modify|create|delete",
"changes": ["string"],
"conflict_risk": "low|medium|high"
}],
"source": {
"tool": "workflow-lite-plan-execute",
"session_id": "string",
"original_id": "string"
}
}
```
## TodoWrite Pattern
Initialization:
```json
[
{"content": "Lite Plan - Planning", "status": "in_progress", "activeForm": "Planning"},
{"content": "Execution (unified-execute)", "status": "pending", "activeForm": "Executing tasks"}
]
```
After planning completes:
```json
[
{"content": "Lite Plan - Planning", "status": "completed", "activeForm": "Planning"},
{"content": "Execution (unified-execute)", "status": "in_progress", "activeForm": "Executing tasks"}
]
```
## Core Rules
1. **Planning phase NEVER modifies project code** — it may write planning artifacts, but all implementation is delegated to unified-execute
2. **All phases serial, no agent delegation** — everything runs inline, no spawn_agent
3. **CLI exploration with search verification** — CLI calls produce findings, ACE/Grep/Glob verify them
4. **`.task/*.json` is the output contract** — individual task JSON files passed to unified-execute-with-file
5. **Progressive loading**: Read phase doc only when about to execute
6. **File-path detection**: Treat input as a file path only if the path exists; do not infer from file extensions
## Error Handling
| Error | Resolution |
|-------|------------|
| CLI exploration failure | Skip angle, continue with remaining; fallback gemini → claude |
| Planning phase failure | Display error, offer retry |
| .task/ directory empty | Error: planning phase did not produce output |
| Phase file not found | Error with file path for debugging |
## Related Skills
- Collaborative planning: `../collaborative-plan-with-file/SKILL.md`
- Unified execution: `../unified-execute-with-file/SKILL.md`
- Full planning workflow: `../workflow-plan-execute/SKILL.md`
@@ -1,595 +0,0 @@
# Phase 1: Lite Plan
## Overview
Serial lightweight planning with CLI-powered exploration and search verification. Produces `.task/TASK-*.json` (one file per task) compatible with `collaborative-plan-with-file` output format, consumable by `unified-execute-with-file`.
**Core capabilities:**
- Intelligent task analysis with automatic exploration detection
- Serial CLI exploration (ccw cli, default gemini / fallback claude) per angle
- Search verification after each CLI exploration (ACE search, Grep, Glob)
- Interactive clarification after exploration to gather missing information
- Direct planning by Claude (all complexity levels, no agent delegation)
- Unified multi-file task output (`.task/TASK-*.json`) with convergence criteria
## Parameters
| Parameter | Description |
|-----------|-------------|
| `-y`, `--yes` | Skip all confirmations (auto mode) |
| `-e`, `--explore` | Force code exploration phase (overrides auto-detection) |
| `<task-description>` | Task description or path to .md file (required) |
## Output Artifacts
| Artifact | Description |
|----------|-------------|
| `exploration-{angle}.md` | Per-angle CLI exploration results (verified) |
| `explorations-manifest.json` | Index of all exploration files |
| `exploration-notes.md` | Synthesized exploration notes (all angles combined) |
| `requirement-analysis.json` | Complexity assessment and session metadata |
| `plan.json` | Plan overview (plan-overview-base-schema) |
| `.task/TASK-*.json` | Multi-file task output (one JSON file per task) |
| `plan.md` | Human-readable summary with execution command |
**Output Directory**: `{projectRoot}/.workflow/.lite-plan/{session-id}/`
## Auto Mode Defaults
When `--yes` or `-y` flag is used:
- **Clarification Questions**: Skipped (no clarification phase)
- **Plan Confirmation**: Auto-selected "Allow"
**Flag Parsing**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const forceExplore = $ARGUMENTS.includes('--explore') || $ARGUMENTS.includes('-e')
```
## Execution Process
```
Phase 1: Task Analysis & Exploration
├─ Parse input (description or .md file)
├─ Intelligent complexity assessment (Low/Medium/High)
├─ Exploration decision (auto-detect or --explore flag)
└─ Decision:
├─ needsExploration=true → Serial CLI exploration (1-4 angles)
│ └─ For each angle: CLI call → Search verification → Save results
└─ needsExploration=false → Skip to Phase 2/3
Phase 2: Clarification (optional, multi-round)
├─ Extract clarification needs from exploration results
├─ Deduplicate similar questions
└─ ASK_USER (max 4 questions per round, multiple rounds)
Phase 3: Planning → .task/*.json (NO CODE EXECUTION)
├─ Load exploration notes + clarifications + project context
├─ Direct Claude planning (following unified task JSON schema)
├─ Generate .task/TASK-*.json (one file per task)
└─ Generate plan.md (human-readable summary)
Phase 4: Confirmation
├─ Display plan summary (tasks, complexity, dependencies)
└─ ASK_USER: Allow / Modify / Cancel
Phase 5: Handoff
└─ → unified-execute-with-file with .task/ directory
```
## Implementation
### Phase 1: Serial CLI Exploration with Search Verification
#### Session Setup (MANDATORY)
##### Step 0: Determine Project Root
Detect the project root so that `.workflow/` artifacts are written to the correct location:
```bash
PROJECT_ROOT=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
```
Prefer the git repository root; for non-git projects, fall back to `pwd` for the current absolute path.
Store the result as `{projectRoot}`; all subsequent `.workflow/` paths must be prefixed with it.
##### Step 1: Generate Session ID
Generate session ID and create session folder:
- **Session ID format**: `{task-slug}-{YYYY-MM-DD}`
- `task-slug`: lowercase task description, non-alphanumeric replaced with `-`, max 40 chars
- Date: UTC+8 (China Standard Time), format `2025-11-29`
- Example: `implement-jwt-refresh-2025-11-29`
- **Session Folder**: `{projectRoot}/.workflow/.lite-plan/{session-id}/`
- Create folder via `mkdir -p` and verify existence
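A sketch of the session-id rule described above; it reuses the `getUtc8ISOString()` helper referenced elsewhere in this document:
```javascript
function makeSessionId(taskDescription) {
  const slug = taskDescription
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')   // non-alphanumeric → "-"
    .replace(/^-+|-+$/g, '')
    .slice(0, 40);                 // max 40 chars
  const date = getUtc8ISOString().slice(0, 10); // YYYY-MM-DD in UTC+8
  return `${slug}-${date}`;        // e.g. "implement-jwt-refresh-2025-11-29"
}
```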
#### Exploration Decision
Exploration is needed when **ANY** of these conditions are met:
- `--explore` / `-e` flag is set
- Task mentions specific files
- Task requires codebase context understanding
- Task needs architecture understanding
- Task modifies existing code
If none apply → skip to Phase 2 (Clarification) or Phase 3 (Planning).
**⚠️ Context Protection**: If direct file reading would exceed 50k chars → force CLI exploration to delegate context gathering.
#### Complexity Assessment
Analyze task complexity based on four dimensions:
| Dimension | Low | Medium | High |
|-----------|-----|--------|------|
| **Scope** | Single file, isolated | Multiple files, some dependencies | Cross-module, architectural |
| **Depth** | Surface change | Moderate structural impact | Architectural impact |
| **Risk** | Minimal | Moderate | High risk of breaking |
| **Dependencies** | None | Some interconnection | Highly interconnected |
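One illustrative way to fold the four dimensions into a single level; the 0/1/2 scoring and the thresholds are assumptions, not part of the spec:
```javascript
function assessComplexity(d /* {scope, depth, risk, dependencies}, each 0=Low 1=Medium 2=High */) {
  const total = d.scope + d.depth + d.risk + d.dependencies;
  return total <= 2 ? 'Low' : total <= 5 ? 'Medium' : 'High';
}
```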
#### Exploration Angle Selection
Angles are assigned based on task type keyword matching, then sliced by complexity:
| Task Type | Keywords | Angle Presets (priority order) |
|-----------|----------|-------------------------------|
| Architecture | refactor, architect, restructure, modular | architecture, dependencies, modularity, integration-points |
| Security | security, auth, permission, access | security, auth-patterns, dataflow, validation |
| Performance | performance, slow, optimi, cache | performance, bottlenecks, caching, data-access |
| Bugfix | fix, bug, error, issue, broken | error-handling, dataflow, state-management, edge-cases |
| Feature (default) | — | patterns, integration-points, testing, dependencies |
**Angle count by complexity**: Low → 1, Medium → 3, High → 4
Display exploration plan summary (complexity, selected angles) before starting.
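A sketch of the selection logic; the keyword regexes and presets transcribe the table above:
```javascript
const ANGLE_PRESETS = [
  { match: /refactor|architect|restructure|modular/i, angles: ['architecture', 'dependencies', 'modularity', 'integration-points'] },
  { match: /security|auth|permission|access/i, angles: ['security', 'auth-patterns', 'dataflow', 'validation'] },
  { match: /performance|slow|optimi|cache/i, angles: ['performance', 'bottlenecks', 'caching', 'data-access'] },
  { match: /fix|bug|error|issue|broken/i, angles: ['error-handling', 'dataflow', 'state-management', 'edge-cases'] }
];
const DEFAULT_ANGLES = ['patterns', 'integration-points', 'testing', 'dependencies']; // Feature (default)
const ANGLES_BY_COMPLEXITY = { Low: 1, Medium: 3, High: 4 };
function selectAngles(taskDescription, complexity) {
  const preset = ANGLE_PRESETS.find(p => p.match.test(taskDescription))?.angles ?? DEFAULT_ANGLES;
  return preset.slice(0, ANGLES_BY_COMPLEXITY[complexity]);
}
```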
#### Serial CLI Exploration Loop
For each selected exploration angle, execute the following three steps **serially**:
##### Step A: CLI Exploration Call
Execute `ccw cli` to explore codebase from the specific angle:
```bash
ccw cli -p "PURPOSE: Explore codebase from {angle} perspective for task planning context; success = actionable findings with file:line references verified against actual code
TASK: • Analyze project structure relevant to {angle} • Identify files and modules related to {angle} • Discover existing patterns and conventions for {angle} • Find integration points and dependencies (with file:line locations) • Identify constraints and risks from {angle} viewpoint • List questions needing user clarification
MODE: analysis
CONTEXT: @**/* | Memory: Task: {task_description}
EXPECTED: Structured analysis with sections: 1) Project structure overview 2) Relevant files with relevance assessment (high/medium/low) 3) Existing patterns (with code examples) 4) Dependencies 5) Integration points (file:line) 6) Constraints 7) Clarification questions
CONSTRAINTS: Focus on {angle} perspective | Analysis only | Include file:line references" --tool gemini --mode analysis --rule analysis-analyze-code-patterns
```
**CLI Tool Selection**:
- Default: `--tool gemini` (gemini-2.5-flash)
- Fallback: `--tool claude` (if gemini fails or is unavailable)
**Execution Mode**: `Bash({ command: "ccw cli ...", run_in_background: true })` → Wait for completion
##### Step B: Search Verification
After CLI exploration returns, verify key findings inline using search tools:
```
For each key finding from CLI result:
├─ Files mentioned → Glob to verify existence, Read to verify content
├─ Patterns mentioned → Grep to verify pattern presence in codebase
├─ Integration points → mcp__ace-tool__search_context to verify context accuracy
└─ Dependencies → Grep to verify import/export relationships
Verification rules:
├─ File exists? → Mark as ✅ verified
├─ File not found? → Mark as ⚠️ unverified, note in output
├─ Pattern confirmed? → Include with code reference
└─ Pattern not found? → Exclude or mark as uncertain
```
**Verification Checklist**:
- [ ] All mentioned files verified to exist
- [ ] Patterns described match actual code
- [ ] Integration points confirmed at correct file:line locations
- [ ] Dependencies are accurate (imports/exports verified)
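A minimal sketch of the file-existence pass, assuming the host `Glob` tool takes a `{ pattern }` object and that findings carry a `path` field:
```javascript
function verifyFileFindings(fileFindings) {
  return fileFindings.map(f => {
    const matches = Glob({ pattern: f.path });  // host tool from front matter
    return { ...f, verified: matches.length > 0 ? '✅' : '⚠️ unverified' };
  });
}
```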
##### Step C: Save Verified Exploration Results
Save verified exploration results as Markdown:
```markdown
# Exploration: {angle}
**Task**: {task_description}
**Timestamp**: {ISO timestamp}
**CLI Tool**: gemini / claude
---
## Findings
### Project Structure
{verified structure relevant to this angle}
### Relevant Files
| File | Relevance | Rationale | Verified |
|------|-----------|-----------|----------|
| `src/auth/login.ts` | high | Core authentication logic | ✅ |
| `src/middleware/auth.ts` | medium | Auth middleware chain | ✅ |
### Patterns
{verified patterns with actual code examples from codebase}
### Integration Points
{verified integration points with file:line references}
### Dependencies
{verified dependency relationships}
### Constraints
{angle-specific constraints discovered}
### Clarification Needs
{questions that need user input, with suggested options}
```
**Write**: `{sessionFolder}/exploration-{angle}.md`
#### Manifest Generation
After all angle explorations complete:
1. **Build manifest** — Create `explorations-manifest.json`:
```javascript
{
session_id: sessionId,
task_description: taskDescription,
timestamp: getUtc8ISOString(),
complexity: complexity,
exploration_count: selectedAngles.length,
explorations: selectedAngles.map((angle, i) => ({
angle: angle,
file: `exploration-${angle}.md`,
path: `${sessionFolder}/exploration-${angle}.md`,
index: i
}))
}
```
2. **Write** — Save to `{sessionFolder}/explorations-manifest.json`
#### Generate Exploration Notes
Synthesize all exploration Markdown files into a unified notes document.
**Steps**:
1. **Load** all exploration Markdown files via manifest
2. **Extract core files** — Collect all "Relevant Files" tables, deduplicate by path, sort by relevance
3. **Build exploration notes** — 6-part Markdown document (structure below)
4. **Write** to `{sessionFolder}/exploration-notes.md`
**Exploration Notes Structure** (`exploration-notes.md`):
```markdown
# Exploration Notes: {task_description}
**Generated**: {timestamp} | **Complexity**: {complexity}
**Exploration Angles**: {angles}
---
## Part 1: Multi-Angle Exploration Summary
Per angle: Key Files (priority sorted), Code Patterns, Integration Points, Dependencies, Constraints
## Part 2: Core Files Index
Top 10 files across all angles, with cross-references and structural details
## Part 3: Architecture Reasoning
Synthesized architectural insights from all angles
## Part 4: Risks and Mitigations
Derived from explorations and core file analysis
## Part 5: Clarification Questions Summary
Aggregated from all exploration angles, deduplicated
## Part 6: Key Code Location Index
| Component | File Path | Key Lines | Purpose |
|-----------|-----------|-----------|---------|
```
---
### Phase 2: Clarification (Optional, Multi-Round)
**Skip Conditions**: No exploration performed OR clarification needs empty across all explorations
**⚠️ CRITICAL**: ASK_USER accepts at most 4 questions per call. **MUST execute multiple rounds** to exhaust all clarification needs; do NOT stop after round 1.
**Flow**:
```
1. Load all exploration Markdown files
2. Extract clarification needs
└─ For each exploration → collect "Clarification Needs" section items
3. Deduplicate
└─ Intelligent merge: identify similar intent across angles
→ combine options, consolidate context
→ produce unique-intent questions only
4. Route by mode:
├─ --yes mode → Skip all clarifications, log count, proceed to Phase 3
└─ Interactive mode → Multi-round clarification:
├─ Batch size: 4 questions per round
├─ Per round: display "Round N/M", present via ASK_USER
│ └─ Each question: [source_angle] question + context
├─ Store responses in clarificationContext after each round
└─ Repeat until all questions exhausted
```
**Output**: `clarificationContext` (in-memory, keyed by question)
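A naive sketch of the dedup step (3) above, merging by normalized question text; the real pass is semantic (similar intent across angles), so treat this as a lower bound:
```javascript
function dedupeClarifications(questions) {
  const byKey = new Map();
  for (const q of questions) {
    const key = q.question.toLowerCase().replace(/\s+/g, ' ').trim();
    const existing = byKey.get(key);
    if (!existing) { byKey.set(key, { ...q, options: [...q.options] }); continue; }
    // combine options from similar questions, consolidating duplicates by label
    for (const o of q.options) {
      if (!existing.options.some(e => e.label === o.label)) existing.options.push(o);
    }
  }
  return [...byKey.values()];
}
```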
---
### Phase 3: Planning → .task/*.json
**IMPORTANT**: Phase 3 is **planning only** — NO code execution. All implementation happens via unified-execute-with-file.
#### Step 3.1: Gather Planning Context
1. **Read** all exploration Markdown files and `exploration-notes.md`
2. **Read** `{projectRoot}/.workflow/project-tech.json` (if exists)
3. **Read** `{projectRoot}/.workflow/project-guidelines.json` (if exists)
4. **Collect** clarificationContext (if any)
#### Step 3.2: Generate requirement-analysis.json
```javascript
Write(`${sessionFolder}/requirement-analysis.json`, JSON.stringify({
session_id: sessionId,
original_requirement: taskDescription,
complexity: complexity,
exploration_angles: selectedAngles,
total_explorations: selectedAngles.length,
timestamp: getUtc8ISOString()
}, null, 2))
```
#### Step 3.3: Generate .task/*.json
Direct Claude planning — synthesize exploration findings and clarifications into individual task JSON files:
**Task Grouping Rules**:
1. **Group by feature**: All changes for one feature = one task (even if 3-5 files)
2. **Group by context**: Tasks with similar context or related functional changes can be grouped
3. **Minimize task count**: Simple, related tasks grouped together (target 2-7 tasks)
4. **Substantial tasks**: Each task should represent meaningful work
5. **True dependencies only**: Only use depends_on when Task B cannot start without Task A's output
6. **Prefer parallel**: Most tasks should be independent (no depends_on)
**Unified Task JSON Format** (one JSON file per task, stored in `.task/` directory):
```javascript
{
id: "TASK-001", // Padded 3-digit ID
title: "...",
description: "...", // Scope/goal + implementation approach
type: "feature", // feature|infrastructure|enhancement|fix|refactor|testing
priority: "medium", // high|medium|low
effort: "medium", // small|medium|large
scope: "...", // Brief scope description
depends_on: [], // TASK-xxx references (empty if independent)
convergence: {
criteria: [ // Testable conditions (2-5 items)
"File src/auth/login.ts exports authenticateUser function",
"Unit test covers both success and failure paths"
],
verification: "npm test -- --grep auth", // Executable command or manual steps
definition_of_done: "Users can log in with JWT tokens and receive refresh tokens"
},
files: [ // Files to modify (from exploration findings)
{
path: "src/auth/login.ts",
action: "modify", // modify|create|delete
changes: ["Add JWT token generation", "Add refresh token logic"],
conflict_risk: "low" // low|medium|high
}
],
source: {
tool: "workflow-lite-plan-execute",
session_id: sessionId,
original_id: "TASK-001"
}
}
```
**Write .task/*.json**:
```javascript
// Create .task/ directory
Bash(`mkdir -p ${sessionFolder}/.task`)
// Write each task as an individual JSON file
tasks.forEach(task => {
Write(`${sessionFolder}/.task/${task.id}.json`, JSON.stringify(task, null, 2))
})
```
#### Step 3.3.5: Generate plan.json (plan-overview-base-schema)
Generate plan overview as structured index for `.task/` directory:
```javascript
// Guard: skip plan.json if no tasks generated
if (tasks.length === 0) {
console.warn('No tasks generated; skipping plan.json')
} else {
// Build plan overview following plan-overview-base-schema.json
const planOverview = {
summary: requirementUnderstanding, // 2-3 sentence overview from exploration synthesis
approach: approach, // High-level implementation strategy
task_ids: tasks.map(t => t.id), // ["TASK-001", "TASK-002", ...]
task_count: tasks.length,
complexity: complexity, // "Low" | "Medium" | "High"
recommended_execution: "Agent",
_metadata: {
timestamp: getUtc8ISOString(),
source: "direct-planning",
planning_mode: "direct",
plan_type: "feature",
schema_version: "2.0",
exploration_angles: selectedAngles
}
}
Write(`${sessionFolder}/plan.json`, JSON.stringify(planOverview, null, 2))
} // end guard
```
#### Step 3.4: Generate plan.md
Create human-readable summary:
```javascript
const planMd = `# Lite Plan
**Session**: ${sessionId}
**Requirement**: ${taskDescription}
**Created**: ${getUtc8ISOString()}
**Complexity**: ${complexity}
**Exploration Angles**: ${selectedAngles.join(', ')}
## Requirement Understanding
${requirementUnderstanding}
## Task Overview
| # | ID | Title | Type | Priority | Effort | Dependencies |
|---|-----|-------|------|----------|--------|--------------|
${tasks.map((t, i) => `| ${i+1} | ${t.id} | ${t.title} | ${t.type} | ${t.priority} | ${t.effort} | ${t.depends_on.join(', ') || '-'} |`).join('\n')}
## Task Details
${tasks.map(t => `### ${t.id}: ${t.title}
- **Scope**: ${t.scope}
- **Modified Files**: ${t.files.map(f => `\`${f.path}\` (${f.action})`).join(', ')}
- **Convergence Criteria**:
${t.convergence.criteria.map(c => `  - ${c}`).join('\n')}
- **Verification**: ${t.convergence.verification}
- **Definition of Done**: ${t.convergence.definition_of_done}
`).join('\n')}
## Execution
\`\`\`bash
$unified-execute-with-file PLAN="${sessionFolder}/.task/"
\`\`\`
**Session artifacts**: \`${sessionFolder}/\`
`
Write(`${sessionFolder}/plan.md`, planMd)
```
---
### Phase 4: Task Confirmation
#### Step 4.1: Display Plan
Read `{sessionFolder}/.task/` directory and display summary:
- **Summary**: Overall approach (from requirement understanding)
- **Tasks**: Numbered list with ID, title, type, effort
- **Complexity**: Assessment result
- **Total tasks**: Count
- **Dependencies**: Graph overview
#### Step 4.2: Collect Confirmation
**Route by mode**:
```
├─ --yes mode → Auto-confirm:
│ └─ Confirmation: "Allow"
└─ Interactive mode → ASK_USER:
└─ Confirm plan? ({N} tasks, {complexity})
├─ Allow (proceed as-is)
├─ Modify (adjust before execution)
└─ Cancel (abort workflow)
```
**Output**: `userSelection` — `{ confirmation: "Allow" | "Modify" | "Cancel" }`
**Modify Loop**: If "Modify" selected, display current `.task/*.json` content, accept user edits (max 3 rounds), regenerate plan.md, re-confirm.
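A hedged sketch of this loop; `displayTasks`, `applyUserEdits`, `regeneratePlanMd`, and `askForConfirmation` are illustrative helpers standing in for the steps described above:
```javascript
// Modify loop sketch: up to 3 rounds of edits before re-confirmation
let rounds = 0
while (userSelection.confirmation === 'Modify' && rounds < 3) {
  rounds++
  const taskFiles = Glob(`${sessionFolder}/.task/*.json`)
  displayTasks(taskFiles.map(f => JSON.parse(Read(f))))  // show current task content
  applyUserEdits()                                       // user adjusts .task/*.json in place
  regeneratePlanMd()                                     // rebuild plan.md from updated tasks
  userSelection = askForConfirmation()                   // Allow | Modify | Cancel
}
if (rounds >= 3 && userSelection.confirmation === 'Modify') {
  console.warn('Modify limit reached: suggest the full planning workflow (workflow-plan-execute)')
}
```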
---
### Phase 5: Handoff to Execution
**CRITICAL**: lite-plan NEVER executes code directly. ALL execution goes through unified-execute-with-file.
→ Hand off to Phase 2: `phases/02-lite-execute.md` which invokes `unified-execute-with-file`
## Session Folder Structure
```
{projectRoot}/.workflow/.lite-plan/{session-id}/
├── exploration-{angle1}.md # CLI exploration angle 1
├── exploration-{angle2}.md # CLI exploration angle 2
├── exploration-{angle3}.md # (if applicable)
├── exploration-{angle4}.md # (if applicable)
├── explorations-manifest.json # Exploration index
├── exploration-notes.md # Synthesized exploration notes
├── requirement-analysis.json # Complexity assessment
├── plan.json # ⭐ Plan overview (plan-overview-base-schema.json)
├── .task/ # Task JSON files (one per task)
│ ├── TASK-001.json
│ ├── TASK-002.json
│ └── ...
└── plan.md # Human-readable summary
```
**Example**:
```
{projectRoot}/.workflow/.lite-plan/implement-jwt-refresh-2025-11-25/
├── exploration-patterns.md
├── exploration-integration-points.md
├── exploration-testing.md
├── explorations-manifest.json
├── exploration-notes.md
├── requirement-analysis.json
├── plan.json
├── .task/
│ ├── TASK-001.json
│ ├── TASK-002.json
│ └── ...
└── plan.md
```
## Error Handling
| Error | Resolution |
|-------|------------|
| CLI exploration failure | Skip angle, continue with remaining; fallback gemini → claude (see the sketch after this table) |
| CLI tool unavailable | Try fallback tool; if all fail, proceed without exploration |
| Search verification failure | Note unverified findings, continue with caution marker |
| Planning failure | Display error, offer retry |
| Clarification timeout | Use exploration findings as-is |
| Confirmation timeout | Save artifacts, display resume instructions |
| Modify loop > 3 times | Suggest using full planning workflow (workflow-plan-execute/SKILL.md) |
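A sketch of the fallback chain from the first two rows; `runCliTool` is a hypothetical wrapper, since the exact CLI invocation is not specified here:
```javascript
// Fallback chain sketch: try gemini first, then claude; skip the angle if all fail
function exploreWithFallback(angle, prompt) {
  for (const tool of ['gemini', 'claude']) {
    try {
      return runCliTool(tool, prompt)   // hypothetical wrapper around the actual CLI call
    } catch (e) {
      // tool unavailable or failed; try the next one
    }
  }
  return null  // all tools failed: proceed without this exploration angle
}
```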
---
## Post-Phase Update
After Phase 1 (Lite Plan) completes:
- **Output Created**: `.task/TASK-*.json` + `plan.md` + exploration artifacts in session folder
- **Session Artifacts**: All files in `{projectRoot}/.workflow/.lite-plan/{session-id}/`
- **Next Action**: Auto-continue to [Phase 2: Execution Handoff](02-lite-execute.md)
- **TodoWrite**: Mark "Lite Plan - Planning" as completed, start "Execution (unified-execute)"

---
# Phase 2: Execution
## Overview
Consumes the `.task/*.json` multi-file task definitions produced by Phase 1, executes tasks serially with convergence verification, and tracks progress via `execution.md` + `execution-events.md`.
**Core workflow**: Load .task/*.json → Validate → Pre-Execution Analysis → Execute → Verify Convergence → Track Progress
**Key features**:
- **Single format**: Consumes only `.task/*.json` (one JSON file per task)
- **Convergence-driven**: Verify convergence criteria after each task completes
- **Serial execution**: Execute in topological order with dependency tracking
- **Dual progress tracking**: `execution.md` (overview) + `execution-events.md` (event stream)
- **Auto-commit**: Optional conventional commit per task
- **Dry-run mode**: Simulate execution without making changes
## Invocation
```javascript
$unified-execute-with-file PLAN="${sessionFolder}/.task/"
// With options
$unified-execute-with-file PLAN="${sessionFolder}/.task/" --auto-commit
$unified-execute-with-file PLAN="${sessionFolder}/.task/" --dry-run
```
## Output Structure
```
${projectRoot}/.workflow/.execution/EXEC-{slug}-{date}-{random}/
├── execution.md # Plan overview + task table + summary statistics
└── execution-events.md # Unified event log (single source of truth)
```
Additionally, the source `.task/*.json` files are updated in-place with execution states (`status`, `executed_at`, `result`).
---
## Session Initialization
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim()
// Parse arguments
const autoCommit = $ARGUMENTS.includes('--auto-commit')
const dryRun = $ARGUMENTS.includes('--dry-run')
const planMatch = $ARGUMENTS.match(/PLAN="([^"]+)"/) || $ARGUMENTS.match(/PLAN=(\S+)/)
let planPath = planMatch ? planMatch[1] : null
// Auto-detect if no PLAN specified
if (!planPath) {
// Search in order (most recent first):
// .workflow/.lite-plan/*/.task/
// .workflow/.req-plan/*/.task/
// .workflow/.planning/*/.task/
// .workflow/.analysis/*/.task/
// .workflow/.brainstorm/*/.task/
}
// Resolve path
planPath = path.isAbsolute(planPath) ? planPath : `${projectRoot}/${planPath}`
// Generate session ID
const slug = path.basename(path.dirname(planPath)).toLowerCase().substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const random = Math.random().toString(36).substring(2, 9)
const sessionId = `EXEC-${slug}-${dateStr}-${random}`
const sessionFolder = `${projectRoot}/.workflow/.execution/${sessionId}`
Bash(`mkdir -p ${sessionFolder}`)
```
---
## Phase 1: Load & Validate
**Objective**: Parse `.task/*.json` files, validate schema and dependencies, build execution order.
### Step 1.1: Parse Task JSON Files
```javascript
// Read all JSON files from .task/ directory
const taskFiles = Glob(`${planPath}/*.json`).sort()
const tasks = taskFiles.map((file, i) => {
try { return JSON.parse(Read(file)) }
catch (e) { throw new Error(`File ${file}: Invalid JSON — ${e.message}`) }
})
if (tasks.length === 0) throw new Error('No task files found in .task/ directory')
```
### Step 1.2: Validate Schema
```javascript
const errors = []
tasks.forEach((task, i) => {
// Required fields
if (!task.id) errors.push(`Task ${i + 1}: missing 'id'`)
if (!task.title) errors.push(`Task ${i + 1}: missing 'title'`)
if (!task.description) errors.push(`Task ${i + 1}: missing 'description'`)
if (!Array.isArray(task.depends_on)) errors.push(`${task.id}: missing 'depends_on' array`)
// Convergence required
if (!task.convergence) {
errors.push(`${task.id}: missing 'convergence'`)
} else {
if (!task.convergence.criteria?.length) errors.push(`${task.id}: empty convergence.criteria`)
if (!task.convergence.verification) errors.push(`${task.id}: missing convergence.verification`)
if (!task.convergence.definition_of_done) errors.push(`${task.id}: missing convergence.definition_of_done`)
}
})
if (errors.length) {
// Report errors, stop execution
}
```
### Step 1.3: Build Execution Order
```javascript
// 1. Validate dependency references
const taskIds = new Set(tasks.map(t => t.id))
tasks.forEach(task => {
task.depends_on.forEach(dep => {
if (!taskIds.has(dep)) errors.push(`${task.id}: depends on unknown task '${dep}'`)
})
})
// 2. Detect cycles (DFS)
function detectCycles(tasks) {
const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
const visited = new Set(), inStack = new Set(), cycles = []
function dfs(node, path) {
if (inStack.has(node)) { cycles.push([...path, node].join(' → ')); return }
if (visited.has(node)) return
visited.add(node); inStack.add(node)
;(graph.get(node) || []).forEach(dep => dfs(dep, [...path, node]))
inStack.delete(node)
}
tasks.forEach(t => { if (!visited.has(t.id)) dfs(t.id, []) })
return cycles
}
const cycles = detectCycles(tasks)
if (cycles.length) errors.push(`Circular dependencies: ${cycles.join('; ')}`)
// 3. Topological sort
function topoSort(tasks) {
const inDegree = new Map(tasks.map(t => [t.id, 0]))
tasks.forEach(t => t.depends_on.forEach(dep => {
inDegree.set(t.id, (inDegree.get(t.id) || 0) + 1)
}))
const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id)
const order = []
while (queue.length) {
const id = queue.shift()
order.push(id)
tasks.forEach(t => {
if (t.depends_on.includes(id)) {
inDegree.set(t.id, inDegree.get(t.id) - 1)
if (inDegree.get(t.id) === 0) queue.push(t.id)
}
})
}
return order
}
const executionOrder = topoSort(tasks)
```
### Step 1.4: Initialize Execution Artifacts
```javascript
// execution.md
const executionMd = `# Execution Overview
## Session Info
- **Session ID**: ${sessionId}
- **Plan Source**: ${planPath}
- **Started**: ${getUtc8ISOString()}
- **Total Tasks**: ${tasks.length}
- **Mode**: ${dryRun ? 'Dry-run (no changes)' : 'Direct inline execution'}
- **Auto-Commit**: ${autoCommit ? 'Enabled' : 'Disabled'}
## Task Overview
| # | ID | Title | Type | Priority | Effort | Dependencies | Status |
|---|-----|-------|------|----------|--------|--------------|--------|
${tasks.map((t, i) => `| ${i+1} | ${t.id} | ${t.title} | ${t.type || '-'} | ${t.priority || '-'} | ${t.effort || '-'} | ${t.depends_on.join(', ') || '-'} | pending |`).join('\n')}
## Pre-Execution Analysis
> Populated in Phase 2
## Execution Timeline
> Updated as tasks complete
## Execution Summary
> Updated after all tasks complete
`
Write(`${sessionFolder}/execution.md`, executionMd)
// execution-events.md
Write(`${sessionFolder}/execution-events.md`, `# Execution Events
**Session**: ${sessionId}
**Started**: ${getUtc8ISOString()}
**Source**: ${planPath}
---
`)
```
---
## Phase 2: Pre-Execution Analysis
**Objective**: Validate feasibility and identify issues before execution.
### Step 2.1: Analyze File Conflicts
```javascript
const fileTaskMap = new Map() // file → [taskIds]
tasks.forEach(task => {
(task.files || []).forEach(f => {
const key = f.path
if (!fileTaskMap.has(key)) fileTaskMap.set(key, [])
fileTaskMap.get(key).push(task.id)
})
})
const conflicts = []
fileTaskMap.forEach((taskIds, file) => {
if (taskIds.length > 1) {
conflicts.push({ file, tasks: taskIds, resolution: 'Execute in dependency order' })
}
})
// Check file existence
const missingFiles = []
tasks.forEach(task => {
(task.files || []).forEach(f => {
if (f.action !== 'create' && !fs.existsSync(f.path)) {
missingFiles.push({ file: f.path, task: task.id })
}
})
})
```
### Step 2.2: Append to execution.md
```javascript
// Replace "Pre-Execution Analysis" section with:
// - File Conflicts (list or "No conflicts")
// - Missing Files (list or "All files exist")
// - Dependency Validation (errors or "No issues")
// - Execution Order (numbered list)
```
### Step 2.3: User Confirmation
```javascript
if (!dryRun) {
AskUserQuestion({
questions: [{
question: `Execute ${tasks.length} tasks?\n\n${conflicts.length ? `${conflicts.length} file conflicts\n` : ''}Execution order:\n${executionOrder.map((id, i) => ` ${i+1}. ${id}: ${tasks.find(t => t.id === id).title}`).join('\n')}`,
header: "Confirm",
multiSelect: false,
options: [
{ label: "Execute", description: "Start serial execution" },
{ label: "Dry Run", description: "Simulate without changes" },
{ label: "Cancel", description: "Abort execution" }
]
}]
})
}
```
---
## Phase 3: Serial Execution + Convergence Verification
**Objective**: Execute tasks sequentially, verify convergence after each task, track all state.
**Execution Model**: Direct inline execution — main process reads, edits, writes files directly. No CLI delegation.
### Step 3.1: Execution Loop
```javascript
const completedTasks = new Set()
const failedTasks = new Set()
const skippedTasks = new Set()
for (const taskId of executionOrder) {
const task = tasks.find(t => t.id === taskId)
const startTime = getUtc8ISOString()
// 1. Check dependencies
const unmetDeps = task.depends_on.filter(dep => !completedTasks.has(dep))
if (unmetDeps.length) {
appendToEvents(`## ${getUtc8ISOString()} — ${task.id}: ${task.title}\n**Status**: ⛔ BLOCKED — unmet dependencies: ${unmetDeps.join(', ')}\n\n---\n`)
skippedTasks.add(task.id)
task.status = 'skipped'
task.executed_at = startTime
task.result = { success: false, error: `Blocked by: ${unmetDeps.join(', ')}` }
continue
}
// 2. Record START event
appendToEvents(`## ${getUtc8ISOString()}${task.id}: ${task.title}
**Type**: ${task.type || '-'} | **Priority**: ${task.priority || '-'} | **Effort**: ${task.effort || '-'}
**Status**: ⏳ IN PROGRESS
**Files**: ${(task.files || []).map(f => f.path).join(', ') || 'To be determined'}
**Description**: ${task.description}
**Convergence Criteria**:
${task.convergence.criteria.map(c => `- [ ] ${c}`).join('\n')}
### Execution Log
`)
if (dryRun) {
// Simulate: mark as completed without changes
appendToEvents(`\n**Status**: ⏭ DRY RUN (no changes)\n\n---\n`)
task.status = 'completed'
task.executed_at = startTime
task.result = { success: true, summary: 'Dry run — no changes made' }
completedTasks.add(task.id)
continue
}
// 3. Execute task directly
// - Read each file in task.files (if specified)
// - Analyze what changes satisfy task.description + task.convergence.criteria
// - If task.files has detailed changes, use them as guidance
// - Apply changes using Edit (preferred) or Write (for new files)
// - Use Grep/Glob/mcp__ace-tool for discovery if needed
// - Use Bash for build/test commands
// 4. Verify convergence
const convergenceResults = verifyConvergence(task)
const endTime = getUtc8ISOString()
const filesModified = getModifiedFiles()
if (convergenceResults.allPassed) {
// 5a. Record SUCCESS
appendToEvents(`
**Status**: ✅ COMPLETED
**Duration**: ${calculateDuration(startTime, endTime)}
**Files Modified**: ${filesModified.join(', ')}
#### Changes Summary
${changeSummary}
#### Convergence Verification
${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ? 'x' : ' '}] ${c}`).join('\n')}
- **Verification**: ${convergenceResults.verificationOutput}
- **Definition of Done**: ${task.convergence.definition_of_done}
---
`)
task.status = 'completed'
task.executed_at = endTime
task.result = {
success: true,
files_modified: filesModified,
summary: changeSummary,
convergence_verified: convergenceResults.verified
}
completedTasks.add(task.id)
} else {
// 5b. Record FAILURE
handleTaskFailure(task, convergenceResults, startTime, endTime)
}
// 6. Auto-commit if enabled
if (autoCommit && task.status === 'completed') {
autoCommitTask(task, filesModified)
}
}
```
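Step 3 above is intentionally comment-only; a hedged sketch of what the inline execution might look like, where `buildNewFileContent` and `deriveEdits` are hypothetical placeholders for the analysis the main process performs:
```javascript
// Illustrative inline execution of a single task (step 3)
function executeTaskDirectly(task) {
  for (const f of task.files || []) {
    if (f.action === 'create') {
      Write(f.path, buildNewFileContent(task, f))           // hypothetical: synthesize new file content
    } else if (f.action === 'modify') {
      const current = Read(f.path)
      for (const edit of deriveEdits(current, f.changes)) { // hypothetical: map changes to {old, new} edits
        Edit(f.path, edit)
      }
    } else if (f.action === 'delete') {
      Bash(`rm ${f.path}`)
    }
  }
}
```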
### Step 3.2: Convergence Verification
```javascript
function verifyConvergence(task) {
const results = {
verified: [], // boolean[] per criterion
verificationOutput: '', // output of verification command
allPassed: true
}
// 1. Check each criterion
// For each criterion in task.convergence.criteria:
// - If it references a testable condition, check it
// - If it's manual, mark as verified based on changes made
// - Record true/false per criterion
task.convergence.criteria.forEach(criterion => {
const passed = evaluateCriterion(criterion, task)
results.verified.push(passed)
if (!passed) results.allPassed = false
})
// 2. Run verification command (if executable)
const verification = task.convergence.verification
if (isExecutableCommand(verification)) {
try {
const output = Bash(verification, { timeout: 120000 })
results.verificationOutput = `${verification} → PASS`
} catch (e) {
results.verificationOutput = `${verification} → FAIL: ${e.message}`
results.allPassed = false
}
} else {
results.verificationOutput = `Manual: ${verification}`
}
return results
}
function isExecutableCommand(verification) {
// Detect executable patterns: npm, npx, jest, tsc, curl, pytest, go test, etc.
return /^(npm|npx|jest|tsc|eslint|pytest|go\s+test|cargo\s+test|curl|make)/.test(verification.trim())
}
```
### Step 3.3: Failure Handling
```javascript
function handleTaskFailure(task, convergenceResults, startTime, endTime) {
appendToEvents(`
**Status**: ❌ FAILED
**Duration**: ${calculateDuration(startTime, endTime)}
**Error**: Convergence verification failed
#### Failed Criteria
${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ? 'x' : ' '}] ${c}`).join('\n')}
- **Verification**: ${convergenceResults.verificationOutput}
---
`)
task.status = 'failed'
task.executed_at = endTime
task.result = {
success: false,
error: 'Convergence verification failed',
convergence_verified: convergenceResults.verified
}
failedTasks.add(task.id)
// Ask user
AskUserQuestion({
questions: [{
question: `Task ${task.id} failed convergence verification. How to proceed?`,
header: "Failure",
multiSelect: false,
options: [
{ label: "Skip & Continue", description: "Skip this task, continue with next" },
{ label: "Retry", description: "Retry this task" },
{ label: "Accept", description: "Mark as completed despite failure" },
{ label: "Abort", description: "Stop execution, keep progress" }
]
}]
})
}
```
### Step 3.4: Auto-Commit
```javascript
function autoCommitTask(task, filesModified) {
Bash(`git add ${filesModified.join(' ')}`)
const commitType = {
fix: 'fix', refactor: 'refactor', feature: 'feat',
enhancement: 'feat', testing: 'test', infrastructure: 'chore'
}[task.type] || 'chore'
const scope = inferScope(filesModified)
Bash(`git commit -m "$(cat <<'EOF'
${commitType}(${scope}): ${task.title}
Task: ${task.id}
Source: ${path.basename(planPath)}
EOF
)"`)
appendToEvents(`**Commit**: \`${commitType}(${scope}): ${task.title}\`\n`)
}
```
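`inferScope` is referenced but not defined above; a minimal sketch, assuming the scope is the most common first-level directory under `src/`:
```javascript
// Illustrative inferScope: most common first-level directory under src/, else 'core'
function inferScope(filesModified) {
  const counts = {}
  filesModified.forEach(f => {
    const m = f.match(/^src\/([^/]+)\//)
    if (m) counts[m[1]] = (counts[m[1]] || 0) + 1
  })
  const top = Object.entries(counts).sort((a, b) => b[1] - a[1])[0]
  return top ? top[0] : 'core'
}
```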
---
## Phase 4: Completion
**Objective**: Finalize all artifacts, write back execution state, offer follow-up actions.
### Step 4.1: Finalize execution.md
Append summary statistics to execution.md:
```javascript
const summary = `
## Execution Summary
- **Completed**: ${getUtc8ISOString()}
- **Total Tasks**: ${tasks.length}
- **Succeeded**: ${completedTasks.size}
- **Failed**: ${failedTasks.size}
- **Skipped**: ${skippedTasks.size}
- **Success Rate**: ${Math.round(completedTasks.size / tasks.length * 100)}%
### Task Results
| ID | Title | Status | Convergence | Files Modified |
|----|-------|--------|-------------|----------------|
${tasks.map(t => {
  const convergenceStatus = t.result?.convergence_verified
    ? `${t.result.convergence_verified.filter(v => v).length}/${t.result.convergence_verified.length}`
    : '-'
  return `| ${t.id} | ${t.title} | ${t.status || 'pending'} | ${convergenceStatus} | ${(t.result?.files_modified || []).join(', ') || '-'} |`
}).join('\n')}
${failedTasks.size > 0 ? `### Failed Tasks
${[...failedTasks].map(id => {
const t = tasks.find(t => t.id === id)
return `- **${t.id}**: ${t.title}${t.result?.error || 'Unknown'}`
}).join('\n')}
` : ''}
### Artifacts
- **Plan Source**: ${planPath}
- **Execution Overview**: ${sessionFolder}/execution.md
- **Execution Events**: ${sessionFolder}/execution-events.md
`
// Append to execution.md
```
### Step 4.2: Finalize execution-events.md
```javascript
appendToEvents(`
---
# Session Summary
- **Session**: ${sessionId}
- **Completed**: ${getUtc8ISOString()}
- **Tasks**: ${completedTasks.size} completed, ${failedTasks.size} failed, ${skippedTasks.size} skipped
- **Total Events**: ${completedTasks.size + failedTasks.size + skippedTasks.size}
`)
```
### Step 4.3: Write Back .task/*.json with Execution State
Update each task JSON file in-place with execution state:
```javascript
tasks.forEach(task => {
const taskFile = `${planPath}/${task.id}.json`
Write(taskFile, JSON.stringify(task, null, 2))
})
// Each task now has status, executed_at, result fields
```
**Execution State** (added to each task JSON file):
```javascript
{
// ... original task fields ...
status: "completed" | "failed" | "skipped",
executed_at: "ISO timestamp",
result: {
success: boolean,
files_modified: string[], // list of modified file paths
summary: string, // change description
convergence_verified: boolean[], // per criterion
error: string // if failed
}
}
```
### Step 4.4: Post-Completion Options
```javascript
AskUserQuestion({
questions: [{
question: `Execution complete: ${completedTasks.size}/${tasks.length} succeeded (${Math.round(completedTasks.size / tasks.length * 100)}%).\nNext step:`,
header: "Post-Execute",
multiSelect: false,
options: [
{ label: "Retry Failed", description: `Re-execute ${failedTasks.size} failed tasks` },
{ label: "View Events", description: "Display execution-events.md" },
{ label: "Create Issue", description: "Create issue from failed tasks" },
{ label: "Done", description: "End workflow" }
]
}]
})
```
| Selection | Action |
|-----------|--------|
| Retry Failed | Filter tasks with `status === 'failed'`, re-execute, append `[RETRY]` events (see the sketch after this table) |
| View Events | Display execution-events.md content |
| Create Issue | `$issue:new` from failed task details |
| Done | Display artifact paths, end workflow |
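A sketch of the "Retry Failed" branch; `executeSingleTask` is a hypothetical stand-in for re-running the Phase 3 loop body on one task:
```javascript
// Retry sketch: re-execute failed tasks in their original topological order
function retryFailedTasks() {
  for (const id of executionOrder.filter(id => failedTasks.has(id))) {
    const task = tasks.find(t => t.id === id)
    appendToEvents(`## ${getUtc8ISOString()} — [RETRY] ${task.id}: ${task.title}\n`)
    failedTasks.delete(id)
    executeSingleTask(task)  // hypothetical: Phase 3 loop body for one task
  }
}
```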
---
## Error Handling & Recovery
| Situation | Action | Recovery |
|-----------|--------|----------|
| .task/ directory not found | Report error with path | Check path, verify planning phase output |
| Invalid JSON file | Report filename and error | Fix task JSON file manually |
| Missing convergence | Report validation error | Add convergence fields to tasks |
| Circular dependency | Stop, report cycle path | Fix dependencies in task files |
| Task execution fails | Record in events, ask user | Retry, skip, accept, or abort |
| Convergence verification fails | Mark task failed, ask user | Fix code and retry, or accept |
| Verification command timeout | Mark as unverified | Manual verification needed |
| File conflict during execution | Document in events | Resolve in dependency order |
| All tasks fail | Report, suggest plan review | Re-analyze or manual intervention |
---
## Post-Completion
After execution completes, the workflow is finished.
- **TodoWrite**: Mark "Execution (unified-execute)" as completed
- **Summary**: Display execution statistics (succeeded/failed/skipped counts)
- **Artifacts**: Point user to execution.md and execution-events.md paths

---
name: workflow-plan-execute
description: 4-phase planning+execution workflow with action-planning-agent task generation, outputs IMPL_PLAN.md and task JSONs, optional Phase 4 execution. Triggers on "workflow:plan".
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---
# Workflow Plan
4-phase workflow that orchestrates session discovery, context gathering (with inline conflict resolution), task generation, and conditional execution to produce and implement plans (IMPL_PLAN.md, task JSONs, TODO_LIST.md).
## Architecture Overview
```
┌──────────────────────────────────────────────────────────────────────┐
│ Workflow Plan Orchestrator (SKILL.md) │
│ → Pure coordinator: Execute phases, parse outputs, pass context │
└───────────────┬──────────────────────────────────────────────────────┘
            ┌───────────┼───────────┬───────────┐
            ↓           ↓           ↓           ↓
┌─────────┐ ┌──────────────────┐ ┌─────────┐ ┌─────────┐
│ Phase 1 │ │ Phase 2 │ │ Phase 3 │ │ Phase 4 │
│ Session │ │ Context Gather │ │ Task │ │Execute │
│Discovery│ │& Conflict Resolve│ │Generate │ │(optional)│
└─────────┘ └──────────────────┘ └─────────┘ └─────────┘
↓ ↓ ↓ ↓
sessionId contextPath IMPL_PLAN.md summaries
conflict_risk task JSONs completed
resolved TODO_LIST.md tasks
```
## Key Design Principles
1. **Pure Orchestrator**: Execute phases in sequence, parse outputs, pass context between them
2. **Auto-Continue**: All phases run autonomously without user intervention between phases
3. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait → close_agent
4. **Progressive Phase Loading**: Phase docs are read on-demand, not all at once
5. **Inline Conflict Resolution**: Conflicts detected and resolved within Phase 2 (not a separate phase)
6. **Role Path Loading**: Subagent roles loaded via path reference in MANDATORY FIRST STEPS
## Auto Mode
When `--yes` or `-y`: Auto-continue all phases (skip confirmations), use recommended conflict resolutions, auto-execute Phase 4.
When `--with-commit`: Auto-commit after each task completion in Phase 4.
## Prep Package Integration
When `plan-prep-package.json` exists at `{projectRoot}/.workflow/.prep/plan-prep-package.json`, the skill consumes it with 6-point validation:
1. **Phase 1**: Use `task.structured` (GOAL/SCOPE/CONTEXT) for session creation, enrich planning-notes.md with source_refs and quality dimensions
2. **Phase 2**: Feed verified source_refs as supplementary docs for exploration agents
3. **Phase 3**: Auto-populate Phase 0 User Configuration (execution_method, preferred_cli_tool, supplementary_materials) — skip interactive questions
4. **Phase 4**: Apply `execution.with_commit` flag
Prep packages are generated by the interactive prompt `/prompts:prep-plan`. See [phases/00-prep-checklist.md](phases/00-prep-checklist.md) for schema and validation rules.
## Execution Flow
```
Input Parsing:
└─ Convert user input to structured format (GOAL/SCOPE/CONTEXT)
Phase 1: Session Discovery
└─ Ref: phases/01-session-discovery.md
└─ Output: sessionId (WFS-xxx)
Phase 2: Context Gathering & Conflict Resolution
└─ Ref: phases/02-context-gathering.md
├─ Step 1: Context-Package Detection
├─ Step 2: Complexity Assessment & Parallel Explore (conflict-aware)
├─ Step 3: Inline Conflict Resolution (conditional, if significant conflicts)
├─ Step 4: Invoke Context-Search Agent (with exploration + conflict results)
├─ Step 5: Output Verification
└─ Output: contextPath + conflict_risk + optional conflict-resolution.json
Phase 3: Task Generation
└─ Ref: phases/03-task-generation.md
└─ Output: plan.json, IMPL_PLAN.md, task JSONs, TODO_LIST.md
└─ Schema: plan.json follows plan-overview-base-schema.json; .task/IMPL-*.json follows task-schema.json
User Decision (or --yes auto):
└─ "Start Execution" → Phase 4
└─ "Verify Plan Quality" → workflow:plan-verify
└─ "Review Status Only" → workflow:status
Phase 4: Execution (Conditional)
└─ Ref: phases/04-execution.md
└─ Output: completed tasks, summaries, session completion
```
**Phase Reference Documents** (read on-demand when phase executes):
| Phase | Document | Purpose |
|-------|----------|---------|
| 1 | [phases/01-session-discovery.md](phases/01-session-discovery.md) | Session creation/discovery with intelligent session management |
| 2 | [phases/02-context-gathering.md](phases/02-context-gathering.md) | Context collection + inline conflict resolution |
| 3 | [phases/03-task-generation.md](phases/03-task-generation.md) | Implementation plan and task JSON generation |
| 4 | [phases/04-execution.md](phases/04-execution.md) | Task execution (conditional, triggered by user or --yes) |
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **No Preliminary Analysis**: Do not read files, analyze structure, or gather context before Phase 1
3. **Parse Every Output**: Extract required data from each phase output for next phase
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **Progressive Phase Loading**: Read phase docs ONLY when that phase is about to execute
7. **DO NOT STOP**: Continuous multi-phase workflow. After completing each phase, immediately proceed to next
8. **Explicit Lifecycle**: Always close_agent after wait completes to free resources
## Subagent API Reference
### spawn_agent
Create a new subagent with task assignment.
```javascript
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
## TASK CONTEXT
${taskContext}
## DELIVERABLES
${deliverables}
`
})
```
### wait
Get results from subagent (only way to retrieve results).
```javascript
const result = wait({
ids: [agentId],
timeout_ms: 600000 // 10 minutes
})
if (result.timed_out) {
// Handle timeout - can continue waiting or send_input to prompt completion
}
```
### send_input
Continue interaction with active subagent (for clarification or follow-up).
```javascript
send_input({
id: agentId,
message: `
## CLARIFICATION ANSWERS
${answers}
## NEXT STEP
Continue with plan generation.
`
})
```
### close_agent
Clean up subagent resources (irreversible).
```javascript
close_agent({ id: agentId })
```
## Input Processing
**Convert User Input to Structured Format**:
1. **Simple Text** → Structure it:
```
User: "Build authentication system"
Structured:
GOAL: Build authentication system
SCOPE: Core authentication features
CONTEXT: New implementation
```
2. **Detailed Text** → Extract components:
```
User: "Add JWT authentication with email/password login and token refresh"
Structured:
GOAL: Implement JWT-based authentication
SCOPE: Email/password login, token generation, token refresh endpoints
CONTEXT: JWT token-based security, refresh token rotation
```
3. **File Reference** (e.g., `requirements.md`) → Read and structure:
- Read file content
- Extract goal, scope, requirements
- Format into structured description
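A minimal sketch of this conversion (see the three cases above); `extractGoalScopeContext` is an illustrative helper for the file-reference case:
```javascript
// Structure raw user input into GOAL/SCOPE/CONTEXT
function structureInput(input) {
  const trimmed = input.trim()
  if (/\.md$/.test(trimmed)) {
    const content = Read(trimmed)
    return extractGoalScopeContext(content)  // hypothetical: parse goal/scope/context from the file
  }
  return {
    goal: trimmed,                            // simple text becomes the goal
    scope: 'Core features implied by the goal',
    context: 'New implementation'
  }
}
```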
## Data Flow
```
User Input (task description)
[Convert to Structured Format]
↓ Structured Description:
↓ GOAL: [objective]
↓ SCOPE: [boundaries]
↓ CONTEXT: [background]
Phase 1: session:start --auto "structured-description"
↓ Output: sessionId
↓ Write: planning-notes.md (User Intent section)
Phase 2: context-gather --session sessionId "structured-description"
↓ Input: sessionId + structured description
↓ Step 2: Parallel exploration (with conflict detection)
↓ Step 3: Inline conflict resolution (if significant conflicts detected)
↓ Step 4: Context-search-agent packaging
↓ Output: contextPath (context-package.json with prioritized_context)
↓ + optional conflict-resolution.json
↓ Update: planning-notes.md (Context Findings + Conflict Decisions + Consolidated Constraints)
Phase 3: task-generate-agent --session sessionId
↓ Input: sessionId + planning-notes.md + context-package.json + brainstorm artifacts
↓ Output: plan.json, IMPL_PLAN.md, task JSONs, TODO_LIST.md
User Decision: "Start Execution" / --yes auto
Phase 4: Execute tasks (conditional)
↓ Input: sessionId + IMPL_PLAN.md + TODO_LIST.md + .task/*.json
↓ Loop: lazy load → spawn_agent → wait → close_agent → commit (optional)
↓ Output: completed tasks, summaries, session completion
```
**Session Memory Flow**: Each phase receives session ID, which provides access to:
- Previous task summaries
- Existing context and analysis
- Brainstorming artifacts (potentially modified by Phase 2 conflict resolution)
- Session-specific configuration
## TodoWrite Pattern
**Core Concept**: Dynamic task attachment and collapse for real-time visibility into workflow execution.
### Key Principles
1. **Task Attachment** (when phase executed):
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
- **Phase 2**: Multiple sub-tasks attached (e.g., explore, conflict resolution, context packaging)
- **Phase 3**: Single agent task attached
- First attached task marked as `in_progress`, others as `pending`
- Orchestrator **executes** these attached tasks sequentially
2. **Task Collapse** (after sub-tasks complete):
- **Applies to Phase 2**: Remove detailed sub-tasks from TodoWrite
- **Collapse** to high-level phase summary
- **Phase 3**: No collapse needed (single task, just mark completed)
- Maintains clean orchestrator-level view
3. **Continuous Execution**:
- After completion, automatically proceed to next pending phase
- No user intervention required between phases
- TodoWrite dynamically reflects current execution state
**Lifecycle**: Initial pending tasks → Phase executed (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED) → Next phase begins → Repeat until all phases complete.
## Phase-Specific TodoWrite Updates
### Phase 2 (Tasks Attached):
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed"},
{"content": "Phase 2: Context Gathering & Conflict Resolution", "status": "in_progress"},
{"content": " → Parallel exploration (conflict-aware)", "status": "in_progress"},
{"content": " → Inline conflict resolution (if needed)", "status": "pending"},
{"content": " → Context-search-agent packaging", "status": "pending"},
{"content": "Phase 3: Task Generation", "status": "pending"}
]
```
### Phase 2 (Collapsed):
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed"},
{"content": "Phase 2: Context Gathering & Conflict Resolution", "status": "completed"},
{"content": "Phase 3: Task Generation", "status": "pending"}
]
```
### Phase 4 (Tasks Attached, conditional):
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed"},
{"content": "Phase 2: Context Gathering & Conflict Resolution", "status": "completed"},
{"content": "Phase 3: Task Generation", "status": "completed"},
{"content": "Phase 4: Execution", "status": "in_progress"},
{"content": " → IMPL-1: [task title]", "status": "in_progress"},
{"content": " → IMPL-2: [task title]", "status": "pending"},
{"content": " → IMPL-3: [task title]", "status": "pending"}
]
```
## Planning Notes Template
After Phase 1, create `planning-notes.md` with this structure:
```markdown
# Planning Notes
**Session**: ${sessionId}
**Created**: ${timestamp}
## User Intent (Phase 1)
- **GOAL**: ${userGoal}
- **KEY_CONSTRAINTS**: ${userConstraints}
---
## Context Findings (Phase 2)
(To be filled by context-gather)
## Conflict Decisions (Phase 2)
(To be filled if conflicts detected)
## Consolidated Constraints (Phase 3 Input)
1. ${userConstraints}
---
## Task Generation (Phase 3)
(To be filled by action-planning-agent)
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
### Deferred
- [ ] (For N+1)
```
## Post-Phase Updates
### After Phase 2
Read context-package to extract key findings, update planning-notes.md:
- `Context Findings (Phase 2)`: CRITICAL_FILES, ARCHITECTURE, CONFLICT_RISK, CONSTRAINTS
- `Conflict Decisions (Phase 2)`: RESOLVED, CUSTOM_HANDLING, CONSTRAINTS (if conflicts were resolved inline)
- `Consolidated Constraints`: Append Phase 2 constraints (context + conflict)
### Memory State Check
After Phase 2, evaluate context window usage. If memory usage is high (>120K tokens):
```javascript
// Codex: Use compact command if available
codex compact
```
## Phase 3 User Decision
After Phase 3 completes, present user with action choices.
**Auto Mode** (`--yes`): Skip user decision, directly enter Phase 4 (Execution).
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Skip decision, proceed to Phase 4
console.log(`[--yes] Auto-continuing to Phase 4: Execution`)
// Read phases/04-execution.md and execute Phase 4
} else {
ASK_USER([{
id: "phase3-next-action",
type: "select",
prompt: "Planning complete. What would you like to do next?",
options: [
{
label: "Verify Plan Quality (Recommended)",
description: "Run quality verification to catch issues before execution."
},
{
label: "Start Execution",
description: "Begin implementing tasks immediately (Phase 4)."
},
{
label: "Review Status Only",
description: "View task breakdown and session status without taking further action."
}
]
}]) // BLOCKS (wait for user response)
}
// Execute based on user choice
// "Verify Plan Quality" → workflow:plan-verify --session sessionId
// "Start Execution" → Read phases/04-execution.md, execute Phase 4 inline
// "Review Status Only" → workflow:status --session sessionId
```
## Error Handling
- **Parsing Failure**: If output parsing fails, retry command once, then report error
- **Validation Failure**: If validation fails, report which file/data is missing
- **Command Failure**: Keep phase `in_progress`, report error to user, do not proceed to next phase
- **Subagent Timeout**: If wait times out, evaluate whether to continue waiting or send_input to prompt completion
## Coordinator Checklist
- **Pre-Phase**: Convert user input to structured format (GOAL/SCOPE/CONTEXT)
- Parse flags: `--yes`, `--with-commit`
- Initialize TodoWrite before any command
- Execute Phase 1 immediately with structured description
- Parse session ID from Phase 1 output, store in memory
- Pass session ID and structured description to Phase 2 command
- Parse context path from Phase 2 output, store in memory
- **Phase 2 handles conflict resolution inline** (no separate Phase 3 decision needed)
- **Build Phase 3 command**: workflow:tools:task-generate-agent --session [sessionId]
- Verify all Phase 3 outputs
- **Phase 3 User Decision**: Present choices or auto-continue if `--yes`
- **Phase 4 (conditional)**: If user selects "Start Execution" or `--yes`, read phases/04-execution.md and execute
- Pass `--with-commit` flag to Phase 4 if present
- Update TodoWrite after each phase
- After each phase, automatically continue to next phase based on TodoList status
- **Always close_agent after wait completes**
## Task JSON Schema Compatibility
Phase 3 generates `.task/IMPL-*.json` files using the **6-field schema** defined in `action-planning-agent.md`. These task JSONs are a **superset** of the unified `task-schema.json` (located at `.ccw/workflows/cli-templates/schemas/task-schema.json`).
**Key field mappings** (6-field → unified schema):
- `context.acceptance` → `convergence.criteria`
- `context.requirements` → `description` + `implementation`
- `context.depends_on` → `depends_on` (top-level)
- `context.focus_paths` → `focus_paths` (top-level)
- `meta.type` → `type` (top-level)
- `flow_control.target_files` → `files[].path`
All existing 6-field schema fields are preserved. The unified schema fields are accepted as optional aliases for cross-tool interoperability. See `action-planning-agent.md` Section 2.1 "Schema Compatibility" for the full mapping table.
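A hedged sketch of the mappings listed above; the canonical converter lives in `action-planning-agent.md`, so treat this as an illustration only:
```javascript
// Map a 6-field task JSON onto the unified task-schema.json aliases
function toUnifiedTask(t) {
  return {
    ...t,
    convergence: { ...(t.convergence || {}), criteria: t.context?.acceptance },
    description: [t.description, t.context?.requirements].filter(Boolean).join('\n'),
    depends_on: t.context?.depends_on ?? t.depends_on ?? [],
    focus_paths: t.context?.focus_paths,
    type: t.meta?.type ?? t.type,
    files: (t.flow_control?.target_files || []).map(path => ({ path }))
  }
}
```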
## Related Commands
**Prerequisite Commands**:
- `workflow:brainstorm:artifacts` - Optional: Generate role-based analyses before planning
- `workflow:brainstorm:synthesis` - Optional: Refine brainstorm analyses with clarifications
**Follow-up Commands**:
- `workflow:plan-verify` - Recommended: Verify plan quality before execution
- `workflow:status` - Review task breakdown and current progress
- `workflow:execute` - Begin implementation (also available via Phase 4 inline execution)

---
# Prep Package Schema & Integration Spec
Schema definition for `plan-prep-package.json` and integration points with the workflow-plan-execute skill.
## File Location
```
{projectRoot}/.workflow/.prep/plan-prep-package.json
```
Generated by: `/prompts:prep-plan` (interactive prompt)
Consumed by: Phase 1 (Session Discovery) → feeds into Phase 2, 3, 4
## JSON Schema
```json
{
"version": "1.0.0",
"generated_at": "ISO8601",
"prep_status": "ready | needs_refinement | blocked",
"target_skill": "workflow-plan-execute",
"environment": {
"project_root": "/path/to/project",
"prerequisites": {
"required_passed": true,
"recommended_passed": true,
"warnings": ["string"]
},
"tech_stack": "string",
"test_framework": "string",
"has_project_tech": true,
"has_project_guidelines": true
},
"task": {
"original": "raw user input",
"structured": {
"goal": "GOAL string (objective + success criteria)",
"scope": "SCOPE string (boundaries)",
"context": "CONTEXT string (constraints + tech context)"
},
"quality_score": 8,
"dimensions": {
"objective": { "score": 2, "value": "..." },
"success_criteria": { "score": 2, "value": "..." },
"scope": { "score": 2, "value": "..." },
"constraints": { "score": 1, "value": "..." },
"context": { "score": 1, "value": "..." }
},
"source_refs": [
{
"path": "docs/prd.md",
"type": "local_file | url | auto_detected",
"status": "verified | linked | not_found",
"preview": "first ~20 lines (local_file only)"
}
]
},
"execution": {
"auto_yes": true,
"with_commit": true,
"execution_method": "agent | cli | hybrid",
"preferred_cli_tool": "codex | gemini | qwen | auto",
"supplementary_materials": {
"type": "none | paths | inline",
"content": []
}
}
}
```
## Validation Rules (6 checks)
Phase 1 runs **6 validation checks** against plan-prep-package.json; the package is consumed only if all pass:
| # | Check | Condition | On Failure |
|---|-------|-----------|------------|
| 1 | prep_status | `=== "ready"` | Skip prep |
| 2 | target_skill | `=== "workflow-plan-execute"` | Skip prep (guards against wrong skill) |
| 3 | project_root | Matches current projectRoot | Skip prep (guards against wrong project) |
| 4 | quality_score | `>= 6` | Skip prep (task quality below threshold) |
| 5 | Freshness | generated_at within last 24h | Skip prep (possibly stale) |
| 6 | Required fields | task.structured.goal, task.structured.scope, execution.execution_method all present | Skip prep |
## Phase 1 Integration (Session Discovery)
After session creation, enrich planning-notes.md with prep data:
```javascript
const prepPath = `${projectRoot}/.workflow/.prep/plan-prep-package.json`
let prepPackage = null
if (fs.existsSync(prepPath)) {
const raw = JSON.parse(Read(prepPath))
const checks = validatePlanPrepPackage(raw, projectRoot)
if (checks.valid) {
prepPackage = raw
// Use structured task for session creation
structuredDescription = {
goal: prepPackage.task.structured.goal,
scope: prepPackage.task.structured.scope,
context: prepPackage.task.structured.context
}
console.log(`✓ Prep package loaded: score=${prepPackage.task.quality_score}/10`)
} else {
console.warn(`⚠ Prep package validation failed, using defaults`)
}
}
// After session created, enrich planning-notes.md:
if (prepPackage) {
// 1. Add source refs section
const sourceRefsSection = prepPackage.task.source_refs
?.filter(r => r.status === 'verified' || r.status === 'linked')
.map(r => `- **${r.type}**: ${r.path}`)
.join('\n') || 'None'
// 2. Add quality dimensions
const dimensionsSection = Object.entries(prepPackage.task.dimensions)
.map(([k, v]) => `- **${k}**: ${v.value} (score: ${v.score}/2)`)
.join('\n')
// Append to planning-notes.md under User Intent
Edit(planningNotesPath, {
old: `- **KEY_CONSTRAINTS**: ${userConstraints}`,
new: `- **KEY_CONSTRAINTS**: ${userConstraints}
### Requirement Sources (from prep)
${sourceRefsSection}
### Quality Dimensions (from prep)
${dimensionsSection}`
})
}
```
## Phase 3 Integration (Task Generation - Phase 0 User Config)
Prep package auto-populates Phase 0 user configuration:
```javascript
// In Phase 3, Phase 0 (User Configuration):
if (prepPackage) {
// Auto-answer all Phase 0 questions from prep
userConfig = {
supplementaryMaterials: prepPackage.execution.supplementary_materials,
executionMethod: prepPackage.execution.execution_method,
preferredCliTool: prepPackage.execution.preferred_cli_tool,
enableResume: true
}
console.log(`✓ Phase 0 auto-configured from prep: ${userConfig.executionMethod} (${userConfig.preferredCliTool})`)
// Skip interactive questions, proceed to Phase 1 (Context Prep)
}
```
## Phase 2 Integration (Context Gathering)
Source refs from prep feed into exploration context:
```javascript
// In Phase 2, Step 2 (spawn explore agents):
// Add source_refs as supplementary context for exploration
if (prepPackage?.task?.source_refs?.length > 0) {
const verifiedRefs = prepPackage.task.source_refs.filter(r => r.status === 'verified')
// Include verified local docs in exploration agent prompt
explorationAgentPrompt += `\n## SUPPLEMENTARY REQUIREMENT DOCUMENTS\n`
explorationAgentPrompt += verifiedRefs.map(r => `Read and analyze: ${r.path}`).join('\n')
}
```
## Phase 4 Integration (Execution)
Commit flag from prep:
```javascript
// In Phase 4:
const withCommit = prepPackage?.execution?.with_commit || $ARGUMENTS.includes('--with-commit')
```

---
# Phase 1: Session Discovery
Discover existing sessions or start new workflow session with intelligent session management and conflict detection.
## Objective
- Ensure project-level state exists (first-time initialization)
- Create or discover workflow session for the planning workflow
- Generate unique session ID (WFS-xxx format)
- Initialize session directory structure
## Step 0.0: Load Prep Package (if exists)
```javascript
// Load plan-prep-package.json (generated by /prompts:prep-plan)
let prepPackage = null
const prepPath = `${projectRoot}/.workflow/.prep/plan-prep-package.json`
if (fs.existsSync(prepPath)) {
const raw = JSON.parse(Read(prepPath))
const checks = validatePlanPrepPackage(raw, projectRoot)
if (checks.valid) {
prepPackage = raw
console.log(`✓ Prep package loaded: score=${prepPackage.task.quality_score}/10, exec=${prepPackage.execution.execution_method}`)
console.log(` Checks passed: ${checks.passed.join(', ')}`)
} else {
console.warn(`⚠ Prep package found but failed validation:`)
checks.failures.forEach(f => console.warn(`${f}`))
console.warn(` → Falling back to default behavior (prep-package ignored)`)
prepPackage = null
}
}
/**
* Validate plan-prep-package.json integrity before consumption.
*/
function validatePlanPrepPackage(prep, projectRoot) {
const passed = []
const failures = []
// Check 1: prep_status
if (prep.prep_status === 'ready') passed.push('status=ready')
else failures.push(`prep_status is "${prep.prep_status}", expected "ready"`)
// Check 2: target_skill
if (prep.target_skill === 'workflow-plan-execute') passed.push('target_skill match')
else failures.push(`target_skill is "${prep.target_skill}", expected "workflow-plan-execute"`)
// Check 3: project_root
if (prep.environment?.project_root === projectRoot) passed.push('project_root match')
else failures.push(`project_root mismatch: "${prep.environment?.project_root}" vs "${projectRoot}"`)
// Check 4: quality_score >= 6
if ((prep.task?.quality_score || 0) >= 6) passed.push(`quality=${prep.task.quality_score}/10`)
else failures.push(`quality_score ${prep.task?.quality_score || 0} < 6`)
// Check 5: generated_at within 24h
const hoursSince = (Date.now() - new Date(prep.generated_at).getTime()) / 3600000
if (hoursSince <= 24) passed.push(`age=${Math.round(hoursSince)}h`)
else failures.push(`prep-package is ${Math.round(hoursSince)}h old (max 24h)`)
// Check 6: required fields
const required = ['task.structured.goal', 'task.structured.scope', 'execution.execution_method']
const missing = required.filter(p => !p.split('.').reduce((o, k) => o?.[k], prep))
if (missing.length === 0) passed.push('fields complete')
else failures.push(`missing: ${missing.join(', ')}`)
return { valid: failures.length === 0, passed, failures }
}
// Build structured description from prep or raw input
let structuredDescription
if (prepPackage) {
structuredDescription = {
goal: prepPackage.task.structured.goal,
scope: prepPackage.task.structured.scope,
context: prepPackage.task.structured.context
}
} else {
structuredDescription = null // Will be parsed from user input later
}
```
## Step 0: Initialize Project State (First-time Only)
**Executed before all modes** - Ensures project-level state files exist by calling `workflow:init`.
### Check and Initialize
```bash
# Check if project state exists (both files required)
bash(test -f ${projectRoot}/.workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
bash(test -f ${projectRoot}/.workflow/project-guidelines.json && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
```
**If either NOT_FOUND**, delegate to `workflow:init`:
```javascript
// Codex: Execute workflow:init command for intelligent project analysis
codex workflow:init
// Wait for init completion
// project-tech.json and project-guidelines.json will be created
```
**Output**:
- If BOTH_EXIST: `PROJECT_STATE: initialized`
- If NOT_FOUND: Calls `workflow:init` → creates:
- `{projectRoot}/.workflow/project-tech.json` with full technical analysis
- `{projectRoot}/.workflow/project-guidelines.json` with empty scaffold
**Note**: `workflow:init` uses cli-explore-agent to build comprehensive project understanding (technology stack, architecture, key components). This step runs once per project. Subsequent executions skip initialization.
## Execution
### Step 1.1: Execute Session Start
```javascript
// Codex: Execute session start command
codex workflow:session:start --auto "[structured-task-description]"
```
**Task Description Structure**:
```
GOAL: [Clear, concise objective]
SCOPE: [What's included/excluded]
CONTEXT: [Relevant background or constraints]
```
**Example**:
```
GOAL: Build JWT-based authentication system
SCOPE: User registration, login, token validation
CONTEXT: Existing user database schema, REST API endpoints
```
### Step 1.2: Parse Output
- Extract: `SESSION_ID: WFS-[id]` (store as `sessionId`)
### Step 1.3: Validate
- Session ID successfully extracted
- Session directory `{projectRoot}/.workflow/active/[sessionId]/` exists
**Note**: Session directory contains `workflow-session.json` (metadata). Do NOT look for `manifest.json` here - it only exists in `{projectRoot}/.workflow/archives/` for archived sessions.
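A sketch of Steps 1.2 and 1.3 combined; `sessionStartOutput` stands in for the captured output of the session start command:
```javascript
// Extract and validate the session ID from the session start output
const match = sessionStartOutput.match(/SESSION_ID:\s*(WFS-[a-z0-9-]+)/)
if (!match) throw new Error('Failed to parse SESSION_ID from session:start output')
const sessionId = match[1]
// Session directory must exist (contains workflow-session.json, not manifest.json)
Bash(`test -d ${projectRoot}/.workflow/active/${sessionId}`)
```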
### Step 1.4: Initialize Planning Notes
Create `planning-notes.md` with N+1 context support, enriched with prep data:
```javascript
const planningNotesPath = `${projectRoot}/.workflow/active/${sessionId}/planning-notes.md`
const userGoal = structuredDescription?.goal || taskDescription
const userScope = structuredDescription?.scope || "Not specified"
const userConstraints = structuredDescription?.context || "None specified"
// Build source refs section from prep
const sourceRefsSection = (prepPackage?.task?.source_refs?.length > 0)
? prepPackage.task.source_refs
.filter(r => r.status === 'verified' || r.status === 'linked')
.map(r => `- **${r.type}**: ${r.path}`)
.join('\n')
: 'None'
// Build quality dimensions section from prep
const dimensionsSection = prepPackage?.task?.dimensions
? Object.entries(prepPackage.task.dimensions)
.map(([k, v]) => `- **${k}**: ${v.value} (${v.score}/2)`)
.join('\n')
: ''
Write(planningNotesPath, `# Planning Notes
**Session**: ${sessionId}
**Created**: ${new Date().toISOString()}
${prepPackage ? `**Prep Package**: plan-prep-package.json (score: ${prepPackage.task.quality_score}/10)` : ''}
## User Intent (Phase 1)
- **GOAL**: ${userGoal}
- **SCOPE**: ${userScope}
- **KEY_CONSTRAINTS**: ${userConstraints}
${sourceRefsSection !== 'None' ? `
### Requirement Sources (from prep)
${sourceRefsSection}
` : ''}${dimensionsSection ? `
### Quality Dimensions (from prep)
${dimensionsSection}
` : ''}
---
## Context Findings (Phase 2)
(To be filled by context-gather)
## Conflict Decisions (Phase 2)
(To be filled if conflicts detected)
## Consolidated Constraints (Phase 3 Input)
1. ${userConstraints}
---
## Task Generation (Phase 3)
(To be filled by action-planning-agent)
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
### Deferred
- [ ] (For N+1)
`)
```
## Session Types
The `--type` parameter classifies sessions for CCW dashboard organization:
| Type | Description | Default For |
|------|-------------|-------------|
| `workflow` | Standard implementation (default) | `workflow:plan` |
| `review` | Code review sessions | `workflow:review-module-cycle` |
| `tdd` | TDD-based development | `workflow:tdd-plan` |
| `test` | Test generation/fix sessions | `workflow:test-fix-gen` |
| `docs` | Documentation sessions | `memory:docs` |
**Validation**: If `--type` is provided with invalid value, return error:
```
ERROR: Invalid session type. Valid types: workflow, review, tdd, test, docs
```
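A minimal validation sketch for the `--type` parameter, assuming it arrives via `$ARGUMENTS` as elsewhere in this document:
```javascript
// Validate --type against the table above; default to "workflow"
const VALID_TYPES = ['workflow', 'review', 'tdd', 'test', 'docs']
const typeMatch = $ARGUMENTS.match(/--type[= ](\S+)/)
const sessionType = typeMatch ? typeMatch[1] : 'workflow'
if (!VALID_TYPES.includes(sessionType)) {
  throw new Error(`ERROR: Invalid session type. Valid types: ${VALID_TYPES.join(', ')}`)
}
```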
## Mode 1: Discovery Mode (Default)
### Usage
```bash
workflow:session:start
```
### Step 1: List Active Sessions
```bash
bash(ls -1 ${projectRoot}/.workflow/active/ 2>/dev/null | head -5)
```
### Step 2: Display Session Metadata
```bash
bash(cat ${projectRoot}/.workflow/active/WFS-promptmaster-platform/workflow-session.json)
```
### Step 3: User Decision
Present session information and wait for user to select or create session.
**Output**: `SESSION_ID: WFS-[user-selected-id]`
## Mode 2: Auto Mode (Intelligent)
### Usage
```bash
workflow:session:start --auto "task description"
```
### Step 1: Check Active Sessions Count
```bash
bash(find ${projectRoot}/.workflow/active/ -name "WFS-*" -type d 2>/dev/null | wc -l)
```
### Step 2a: No Active Sessions → Create New
```bash
# Generate session slug
bash(echo "implement OAuth2 auth" | sed 's/[^a-zA-Z0-9]/-/g' | tr '[:upper:]' '[:lower:]' | cut -c1-50)
# Create directory structure
bash(mkdir -p ${projectRoot}/.workflow/active/WFS-implement-oauth2-auth/.process)
bash(mkdir -p ${projectRoot}/.workflow/active/WFS-implement-oauth2-auth/.task)
bash(mkdir -p ${projectRoot}/.workflow/active/WFS-implement-oauth2-auth/.summaries)
# Create metadata (include type field, default to "workflow" if not specified)
bash(echo '{"session_id":"WFS-implement-oauth2-auth","project":"implement OAuth2 auth","status":"planning","type":"workflow","created_at":"2024-12-04T08:00:00Z"}' > ${projectRoot}/.workflow/active/WFS-implement-oauth2-auth/workflow-session.json)
```
**Output**: `SESSION_ID: WFS-implement-oauth2-auth`
### Step 2b: Single Active Session → Check Relevance
```bash
# Extract session ID
bash(find ${projectRoot}/.workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
# Read project name from metadata
bash(cat ${projectRoot}/.workflow/active/WFS-promptmaster-platform/workflow-session.json | grep -o '"project":"[^"]*"' | cut -d'"' -f4)
# Check keyword match (manual comparison)
# If task contains project keywords → Reuse session
# If task unrelated → Create new session (use Step 2a)
```
**Output (reuse)**: `SESSION_ID: WFS-promptmaster-platform`
**Output (new)**: `SESSION_ID: WFS-[new-slug]`
### Step 2c: Multiple Active Sessions → Use First
```bash
# Get first active session
bash(find ${projectRoot}/.workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
# Output warning and session ID
# WARNING: Multiple active sessions detected
# SESSION_ID: WFS-first-session
```
## Mode 3: Force New Mode
### Usage
```bash
workflow:session:start --new "task description"
```
### Step 1: Generate Unique Session Slug
```bash
# Convert to slug
bash(echo "fix login bug" | sed 's/[^a-zA-Z0-9]/-/g' | tr '[:upper:]' '[:lower:]' | cut -c1-50)
# Check if exists, add counter if needed
bash(ls ${projectRoot}/.workflow/active/WFS-fix-login-bug 2>/dev/null && echo "WFS-fix-login-bug-2" || echo "WFS-fix-login-bug")
```
### Step 2: Create Session Structure
```bash
bash(mkdir -p ${projectRoot}/.workflow/active/WFS-fix-login-bug/.process)
bash(mkdir -p ${projectRoot}/.workflow/active/WFS-fix-login-bug/.task)
bash(mkdir -p ${projectRoot}/.workflow/active/WFS-fix-login-bug/.summaries)
```
### Step 3: Create Metadata
```bash
# Include type field from --type parameter (default: "workflow")
bash(echo '{"session_id":"WFS-fix-login-bug","project":"fix login bug","status":"planning","type":"workflow","created_at":"2024-12-04T08:00:00Z"}' > ${projectRoot}/.workflow/active/WFS-fix-login-bug/workflow-session.json)
```
**Output**: `SESSION_ID: WFS-fix-login-bug`
## Execution Guideline
- **Non-interrupting**: When called from other commands, this command completes and returns control to the caller without interrupting subsequent tasks.
## Session ID Format
- Pattern: `WFS-[lowercase-slug]`
- Characters: `a-z`, `0-9`, `-` only
- Max length: 50 characters
- Uniqueness: Add numeric suffix if collision (`WFS-auth-2`, `WFS-auth-3`)
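A sketch implementing these rules in one place (the bash equivalents appear in Modes 2 and 3 above); the collision loop is illustrative:
```javascript
// Generate a unique WFS session ID following the rules above
function makeSessionId(description) {
  const slug = description.toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '')
    .slice(0, 50)
  let id = `WFS-${slug}`
  let n = 2
  while (fs.existsSync(`${projectRoot}/.workflow/active/${id}`)) {
    id = `WFS-${slug}-${n++}`
  }
  return id
}
```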
## Output Format Specification
### Success
```
SESSION_ID: WFS-session-slug
```
### Error
```
ERROR: --auto mode requires task description
ERROR: Failed to create session directory
```
### Analysis (Auto Mode)
```
ANALYSIS: Task relevance = high
DECISION: Reusing existing session
SESSION_ID: WFS-promptmaster-platform
```
## Output
- **Variable**: `sessionId` (e.g., `WFS-implement-oauth2-auth`)
- **File**: `{projectRoot}/.workflow/active/{sessionId}/planning-notes.md`
- **TodoWrite**: Mark Phase 1 completed, Phase 2 in_progress
## Next Phase
Return to orchestrator showing Phase 1 results, then auto-continue to [Phase 2: Context Gathering](02-context-gathering.md).

# Phase 2: Context Gathering & Conflict Resolution
Intelligently collect project context using context-search-agent based on the task description, and package it into standardized JSON. When conflicts are detected, resolve them inline before packaging.
## Objective
- Check for existing valid context-package before executing
- Assess task complexity and launch parallel exploration agents (with conflict detection)
- Detect and resolve conflicts inline (conditional, when conflict indicators found)
- Invoke context-search-agent to analyze codebase
- Generate standardized `context-package.json` with prioritized context
## Core Philosophy
- **Agent Delegation**: Delegate all discovery to `context-search-agent` for autonomous execution
- **Detection-First**: Check for existing context-package before executing
- **Conflict-Aware Exploration**: Explore agents detect conflict indicators during exploration
- **Inline Resolution**: Conflicts resolved as sub-step within this phase, not a separate phase
- **Conditional Trigger**: Conflict resolution only executes when exploration results contain conflict indicators
- **Plan Mode**: Full comprehensive analysis (vs lightweight brainstorm mode)
- **Standardized Output**: Generate `{projectRoot}/.workflow/active/{session}/.process/context-package.json`
- **Explicit Lifecycle**: Manage subagent creation, waiting, and cleanup
## Execution Process
```
Input Parsing:
├─ Parse flags: --session
└─ Parse: task_description (required)
Step 1: Context-Package Detection
└─ Decision (existing package):
├─ Valid package exists → Return existing (skip execution)
└─ No valid package → Continue to Step 2
Step 2: Complexity Assessment & Parallel Explore (conflict-aware)
├─ Analyze task_description → classify Low/Medium/High
├─ Select exploration angles (1-4 based on complexity)
├─ Launch N cli-explore-agents in parallel (spawn_agent)
│ └─ Each outputs: exploration-{angle}.json (includes conflict_indicators)
├─ Wait for all agents (batch wait)
├─ Close all agents
└─ Generate explorations-manifest.json
Step 3: Inline Conflict Resolution (conditional)
├─ 3.1 Aggregate conflict_indicators from all explorations
├─ 3.2 Decision: No significant conflicts → Skip to Step 4
├─ 3.3 Spawn conflict-analysis agent (cli-execution-agent)
│ └─ Gemini/Qwen CLI analysis → conflict strategies
├─ 3.4 Iterative user clarification (send_input loop, max 10 rounds)
│ ├─ Display conflict + strategy ONE BY ONE
│ ├─ ASK_USER for user selection
│ └─ send_input → agent re-analysis → confirm uniqueness
├─ 3.5 Generate conflict-resolution.json
└─ 3.6 Close conflict agent
Step 4: Invoke Context-Search Agent (enhanced)
├─ Receives exploration results + resolved conflicts (if any)
└─ Generates context-package.json with exploration_results + conflict status
Step 5: Output Verification (enhanced)
└─ Verify context-package.json contains exploration_results + conflict resolution
```
## Execution Flow
### Step 1: Context-Package Detection
**Execute First** - Check if valid package already exists:
```javascript
const contextPackagePath = `${projectRoot}/.workflow/active/${session_id}/.process/context-package.json`;
if (file_exists(contextPackagePath)) {
const existing = Read(contextPackagePath);
// Validate package belongs to current session
if (existing?.metadata?.session_id === session_id) {
console.log("Valid context-package found for session:", session_id);
console.log("Stats:", existing.statistics);
console.log("Conflict Risk:", existing.conflict_detection.risk_level);
return existing; // Skip execution, return existing
} else {
console.warn("Invalid session_id in existing package, re-generating...");
}
}
```
### Step 2: Complexity Assessment & Parallel Explore
**Only execute if Step 1 finds no valid package**
```javascript
// 2.1 Complexity Assessment
function analyzeTaskComplexity(taskDescription) {
const text = taskDescription.toLowerCase();
if (/architect|refactor|restructure|modular|cross-module/.test(text)) return 'High';
if (/multiple|several|integrate|migrate|extend/.test(text)) return 'Medium';
return 'Low';
}
const ANGLE_PRESETS = {
architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
security: ['security', 'auth-patterns', 'dataflow', 'validation'],
performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
feature: ['patterns', 'integration-points', 'testing', 'dependencies'],
refactor: ['architecture', 'patterns', 'dependencies', 'testing']
};
function selectAngles(taskDescription, complexity) {
const text = taskDescription.toLowerCase();
let preset = 'feature';
if (/refactor|architect|restructure/.test(text)) preset = 'architecture';
else if (/security|auth|permission/.test(text)) preset = 'security';
else if (/performance|slow|optimi/.test(text)) preset = 'performance';
else if (/fix|bug|error|issue/.test(text)) preset = 'bugfix';
const count = complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1);
return ANGLE_PRESETS[preset].slice(0, count);
}
const complexity = analyzeTaskComplexity(task_description);
const selectedAngles = selectAngles(task_description, complexity);
const sessionFolder = `${projectRoot}/.workflow/active/${session_id}/.process`;
// 2.2 Launch Parallel Explore Agents (with conflict detection)
const explorationAgents = [];
// Load source_refs from prep-package for supplementary context
const prepPath = `${projectRoot}/.workflow/.prep/plan-prep-package.json`
const prepSourceRefs = fs.existsSync(prepPath)
? (JSON.parse(Read(prepPath))?.task?.source_refs || []).filter(r => r.status === 'verified')
: []
const sourceRefsDirective = prepSourceRefs.length > 0
? `\n## SUPPLEMENTARY REQUIREMENT DOCUMENTS (from prep)\nRead these before exploration:\n${prepSourceRefs.map((r, i) => `${i + 1}. Read: ${r.path} (${r.type})`).join('\n')}\nCross-reference findings against these source documents.\n`
: ''
// Spawn all agents in parallel
selectedAngles.forEach((angle, index) => {
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
## Task Objective
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
**CONFLICT DETECTION**: Additionally detect conflict indicators including module overlaps, breaking changes, incompatible patterns, and scenario boundary ambiguities.
${sourceRefsDirective}
## Assigned Context
- **Exploration Angle**: ${angle}
- **Task Description**: ${task_description}
- **Session ID**: ${session_id}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
## Exploration Strategy (${angle} focus)
**Step 1: Structural Scan** (Bash)
- get_modules_by_depth.sh → identify modules related to ${angle}
- find/rg → locate files relevant to ${angle} aspect
- Analyze imports/dependencies from ${angle} perspective
**Step 2: Semantic Analysis** (Gemini CLI)
- How does existing code handle ${angle} concerns?
- What patterns are used for ${angle}?
- Where would new code integrate from ${angle} viewpoint?
- **Detect conflict indicators**: module overlaps, breaking changes, incompatible patterns
**Step 3: Write Output**
- Consolidate ${angle} findings into JSON
- Identify ${angle}-specific clarification needs
- **Include conflict_indicators array** with detected conflicts
## Expected Output
**File**: ${sessionFolder}/exploration-${angle}.json
**Schema Reference**: Schema obtained in MANDATORY FIRST STEPS step 3, follow schema exactly
**Required Fields** (all ${angle} focused):
- project_structure: Modules/architecture relevant to ${angle}
- relevant_files: Files affected from ${angle} perspective
**MANDATORY**: Every file MUST use structured object format with ALL required fields:
\`[{path: "src/file.ts", relevance: 0.85, rationale: "Contains AuthService.login() - entry point for JWT token generation", role: "modify_target", discovery_source: "bash-scan", key_symbols: ["AuthService", "login"]}]\`
- **rationale** (required): Specific selection basis tied to ${angle} topic (>10 chars, not generic)
- **role** (required): modify_target|dependency|pattern_reference|test_target|type_definition|integration_point|config|context_only
- **discovery_source** (recommended): bash-scan|cli-analysis|ace-search|dependency-trace|manual
- **key_symbols** (recommended): Key functions/classes/types in the file relevant to the task
- Scores: 0.7+ high priority, 0.5-0.7 medium, <0.5 low
- patterns: ${angle}-related patterns to follow
- dependencies: Dependencies relevant to ${angle}
- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
- constraints: ${angle}-specific limitations/conventions
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
- **conflict_indicators**: Array of detected conflicts from ${angle} perspective
\`[{type: "ModuleOverlap|BreakingChange|PatternConflict", severity: "high|medium|low", description: "...", affected_files: [...]}]\`
- _metadata.exploration_angle: "${angle}"
## Success Criteria
- [ ] Schema obtained via cat explore-json-schema.json
- [ ] get_modules_by_depth.sh executed
- [ ] At least 3 relevant files identified with specific rationale + role
- [ ] Every file has rationale >10 chars (not generic like "Related to ${angle}")
- [ ] Every file has role classification (modify_target/dependency/etc.)
- [ ] Patterns are actionable (code examples, not generic advice)
- [ ] Integration points include file:line locations
- [ ] Constraints are project-specific to ${angle}
- [ ] conflict_indicators populated (empty array if none detected)
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended
## Output
Write: ${sessionFolder}/exploration-${angle}.json
Return: 2-3 sentence summary of ${angle} findings + conflict indicators count
`
});
explorationAgents.push(agentId);
});
// 2.3 Batch wait for all exploration agents
const explorationResults = wait({
ids: explorationAgents,
timeout_ms: 600000 // 10 minutes
});
// Check for timeouts
if (explorationResults.timed_out) {
console.log('Some exploration agents timed out - continuing with completed results');
}
// 2.4 Close all exploration agents
explorationAgents.forEach(agentId => {
close_agent({ id: agentId });
});
// 2.5 Generate Manifest after all complete
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`).split('\n').filter(f => f.trim());
const explorationManifest = {
session_id,
task_description,
timestamp: new Date().toISOString(),
complexity,
exploration_count: selectedAngles.length,
angles_explored: selectedAngles,
explorations: explorationFiles.map(file => {
const data = JSON.parse(Read(file));
return { angle: data._metadata.exploration_angle, file: file.split('/').pop(), path: file, index: data._metadata.exploration_index };
})
};
Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2));
```
### Step 3: Inline Conflict Resolution
**Conditional execution** - Only runs when exploration results contain significant conflict indicators.
#### 3.1 Aggregate Conflict Indicators
```javascript
// Aggregate conflict_indicators from all explorations
const allConflictIndicators = [];
explorationFiles.forEach(file => {
const data = JSON.parse(Read(file));
if (data.conflict_indicators?.length > 0) {
allConflictIndicators.push(...data.conflict_indicators.map(ci => ({
...ci,
source_angle: data._metadata.exploration_angle
})));
}
});
const hasSignificantConflicts = allConflictIndicators.some(ci => ci.severity === 'high') ||
allConflictIndicators.filter(ci => ci.severity === 'medium').length >= 2;
```
#### 3.2 Decision Gate
```javascript
if (!hasSignificantConflicts) {
console.log(`No significant conflicts detected (${allConflictIndicators.length} low indicators). Skipping conflict resolution.`);
// Skip to Step 4
} else {
console.log(`Significant conflicts detected: ${allConflictIndicators.length} indicators. Launching conflict analysis...`);
// Continue to 3.3
}
```
#### 3.3 Spawn Conflict-Analysis Agent
```javascript
const conflictAgentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-execution-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
## Context
- Session: ${session_id}
- Conflict Indicators: ${JSON.stringify(allConflictIndicators)}
- Files: ${existing_files_list}
## Exploration Context (from exploration results)
- Exploration Count: ${explorationManifest.exploration_count}
- Angles Analyzed: ${JSON.stringify(explorationManifest.angles_explored)}
- Pre-identified Conflict Indicators: ${JSON.stringify(allConflictIndicators)}
- Critical Files: ${JSON.stringify(explorationFiles.flatMap(f => JSON.parse(Read(f)).relevant_files?.filter(rf => rf.relevance >= 0.7).map(rf => rf.path) || []))}
## Analysis Steps
### 0. Load Output Schema (MANDATORY)
Execute: cat ~/.ccw/workflows/cli-templates/schemas/conflict-resolution-schema.json
### 1. Load Context
- Read existing files from conflict indicators
- Load exploration results and use aggregated insights for enhanced analysis
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
Primary (Gemini):
ccw cli -p "
PURPOSE: Detect conflicts between plan and codebase, using exploration insights
TASK:
• **Review pre-identified conflict_indicators from exploration results**
• Compare architectures (use exploration key_patterns)
• Identify breaking API changes
• Detect data model incompatibilities
• Assess dependency conflicts
• **Analyze module scenario uniqueness**
- Cross-validate with exploration critical_files
- Generate clarification questions for boundary definition
MODE: analysis
CONTEXT: @**/*.ts @**/*.js @**/*.tsx @**/*.jsx @${projectRoot}/.workflow/active/${session_id}/**/*
EXPECTED: Conflict list with severity ratings, including:
- Validation of exploration conflict_indicators
- ModuleOverlap conflicts with overlap_analysis
- Targeted clarification questions
CONSTRAINTS: Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
" --tool gemini --mode analysis --rule analysis-code-patterns --cd ${project_root}
Fallback: Qwen (same prompt) → Claude (manual analysis)
### 3. Generate Strategies (2-4 per conflict)
Template per conflict:
- Severity: Critical/High/Medium
- Category: Architecture/API/Data/Dependency/ModuleOverlap
- Affected files + impact
- **For ModuleOverlap**: Include overlap_analysis with existing modules and scenarios
- Options with pros/cons, effort, risk
- **For ModuleOverlap strategies**: Add clarification_needed questions for boundary definition
- Recommended strategy + rationale
### 4. Return Structured Conflict Data
**Schema Reference**: Execute \`cat ~/.ccw/workflows/cli-templates/schemas/conflict-resolution-schema.json\` to get full schema
Return JSON following the schema above. Key requirements:
- Minimum 2 strategies per conflict, max 4
- All text in Chinese for user-facing fields (brief, name, pros, cons, modification_suggestions)
- modifications.old_content: 20-100 chars for unique Edit tool matching
- modifications.new_content: preserves markdown formatting
- modification_suggestions: 2-5 actionable suggestions for custom handling
### 5. Planning Notes Record (REQUIRED)
After analysis complete, append a brief execution record to planning-notes.md:
**File**: ${projectRoot}/.workflow/active/${session_id}/planning-notes.md
**Location**: Under "## Conflict Decisions (Phase 2)" section
**Format**:
\`\`\`
### [Conflict-Resolution Agent] YYYY-MM-DD
- **Note**: [brief summary of conflict types, resolution strategies, key decisions]
\`\`\`
`
});
// Wait for initial analysis
const analysisResult = wait({
ids: [conflictAgentId],
timeout_ms: 600000 // 10 minutes
});
// Parse conflicts from result
const conflicts = parseConflictsFromResult(analysisResult);
```
#### Conflict Categories
| Category | Description |
|----------|-------------|
| **Architecture** | Incompatible design patterns, module structure changes, pattern migration |
| **API** | Breaking contract changes, signature modifications, public interface impacts |
| **Data Model** | Schema modifications, type breaking changes, data migration needs |
| **Dependency** | Version incompatibilities, setup conflicts, breaking updates |
| **ModuleOverlap** | Functional overlap, scenario boundary ambiguity, duplicate responsibility |
#### 3.4 Iterative User Clarification
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const resolvedConflicts = [];
const customConflicts = [];
FOR each conflict:
round = 0, clarified = false, userClarifications = []
WHILE (!clarified && round++ < 10):
// 1. Display conflict info (text output for context)
displayConflictSummary(conflict) // id, brief, severity, overlap_analysis if ModuleOverlap
// 2. Strategy selection
if (autoYes) {
console.log(`[--yes] Auto-selecting recommended strategy`)
selectedStrategy = conflict.strategies[conflict.recommended || 0]
clarified = true // Skip clarification loop
} else {
ASK_USER([{
id: `conflict-${conflict.id}-strategy`,
type: "select",
prompt: formatStrategiesForDisplay(conflict.strategies),
options: [
...conflict.strategies.map((s, i) => ({
label: `${s.name}${i === conflict.recommended ? ' (推荐)' : ''}`,
description: `${s.complexity}复杂度 | ${s.risk}风险${s.clarification_needed?.length ? ' | 需澄清' : ''}`
})),
{ label: "自定义修改", description: `建议: ${conflict.modification_suggestions?.slice(0,2).join('; ')}` }
]
}]) // BLOCKS (wait for user response)
// 3. Handle selection
if (userChoice === "自定义修改") {
customConflicts.push({ id, brief, category, suggestions, overlap_analysis })
break
}
selectedStrategy = findStrategyByName(userChoice)
}
// 4. Clarification (if needed) - using send_input for agent re-analysis
if (!autoYes && selectedStrategy.clarification_needed?.length > 0) {
for (batch of chunk(selectedStrategy.clarification_needed, 4)) {
ASK_USER(batch.map((q, i) => ({
id: `clarify-${conflict.id}-${i+1}`,
type: "select",
prompt: q,
options: [{ label: "详细说明", description: "提供答案" }]
}))) // BLOCKS (wait for user response)
userClarifications.push(...collectAnswers(batch))
}
// 5. Agent re-analysis via send_input (key: agent stays active)
send_input({
id: conflictAgentId,
message: `
## CLARIFICATION ANSWERS
Conflict: ${conflict.id}
Strategy: ${selectedStrategy.name}
User Clarifications: ${JSON.stringify(userClarifications)}
## REQUEST
Based on the clarifications above, update the strategy assessment.
Output: { uniqueness_confirmed: boolean, rationale: string, updated_strategy: {...}, remaining_questions: [...] }
`
});
// Wait for re-analysis result
const reanalysisResult = wait({
ids: [conflictAgentId],
timeout_ms: 300000 // 5 minutes
});
const parsedResult = parseReanalysisResult(reanalysisResult);
if (parsedResult.uniqueness_confirmed) {
selectedStrategy = { ...parsedResult.updated_strategy, clarifications: userClarifications }
clarified = true
} else {
selectedStrategy.clarification_needed = parsedResult.remaining_questions
}
} else {
clarified = true
}
if (clarified) resolvedConflicts.push({ conflict, strategy: selectedStrategy })
END WHILE
END FOR
selectedStrategies = resolvedConflicts.map(r => ({
conflict_id: r.conflict.id, strategy: r.strategy, clarifications: r.strategy.clarifications || []
}))
```
**Key Points**:
- ASK_USER: max 4 questions/call, batch if more
- Strategy options: 2-4 strategies + "自定义修改"
- Clarification loop via send_input: max 10 rounds, agent determines uniqueness_confirmed
- Agent stays active throughout interaction (no close_agent until Step 3.6)
- Custom conflicts: record overlap_analysis for subsequent manual handling
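The loop above batches questions through a `chunk` helper that is not defined in this document; a minimal sketch enforcing the 4-questions-per-call limit:
```javascript
// Sketch: split clarification questions into batches of at most `size` items,
// matching the "max 4 questions/call" rule above.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
// chunk(['q1', 'q2', 'q3', 'q4', 'q5'], 4) → [['q1', 'q2', 'q3', 'q4'], ['q5']]
```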
#### 3.5 Generate conflict-resolution.json
```javascript
// Apply modifications from resolved strategies
const modifications = [];
selectedStrategies.forEach(item => {
if (item.strategy && item.strategy.modifications) {
modifications.push(...item.strategy.modifications.map(mod => ({
...mod,
conflict_id: item.conflict_id,
clarifications: item.clarifications
})));
}
});
console.log(`\nApplying ${modifications.length} modifications...`);
const appliedModifications = [];
const failedModifications = [];
const fallbackConstraints = [];
modifications.forEach((mod, idx) => {
try {
console.log(`[${idx + 1}/${modifications.length}] Modifying ${mod.file}...`);
if (!file_exists(mod.file)) {
console.log(` File not found, recording as constraint`);
fallbackConstraints.push({
source: "conflict-resolution",
conflict_id: mod.conflict_id,
target_file: mod.file,
section: mod.section,
change_type: mod.change_type,
content: mod.new_content,
rationale: mod.rationale
});
return;
}
if (mod.change_type === "update") {
Edit({ file_path: mod.file, old_string: mod.old_content, new_string: mod.new_content });
} else if (mod.change_type === "add") {
const fileContent = Read(mod.file);
const updated = insertContentAfterSection(fileContent, mod.section, mod.new_content);
Write(mod.file, updated);
} else if (mod.change_type === "remove") {
Edit({ file_path: mod.file, old_string: mod.old_content, new_string: "" });
}
appliedModifications.push(mod);
} catch (error) {
failedModifications.push({ ...mod, error: error.message });
}
});
// Generate conflict-resolution.json
const resolutionOutput = {
session_id: session_id,
resolved_at: new Date().toISOString(),
summary: {
total_conflicts: conflicts.length,
resolved_with_strategy: selectedStrategies.length,
custom_handling: customConflicts.length,
fallback_constraints: fallbackConstraints.length
},
resolved_conflicts: selectedStrategies.map(s => ({
conflict_id: s.conflict_id,
strategy_name: s.strategy.name,
strategy_approach: s.strategy.approach,
clarifications: s.clarifications || [],
modifications_applied: s.strategy.modifications?.filter(m =>
appliedModifications.some(am => am.conflict_id === s.conflict_id)
) || []
})),
custom_conflicts: customConflicts.map(c => ({
id: c.id, brief: c.brief, category: c.category,
suggestions: c.suggestions, overlap_analysis: c.overlap_analysis || null
})),
planning_constraints: fallbackConstraints,
failed_modifications: failedModifications
};
const resolutionPath = `${projectRoot}/.workflow/active/${session_id}/.process/conflict-resolution.json`;
Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
// Output custom conflict summary (if any)
if (customConflicts.length > 0) {
customConflicts.forEach(conflict => {
console.log(`[${conflict.category}] ${conflict.id}: ${conflict.brief}`);
if (conflict.category === 'ModuleOverlap' && conflict.overlap_analysis) {
console.log(` New module: ${conflict.overlap_analysis.new_module.name}`);
conflict.overlap_analysis.existing_modules.forEach(mod => {
console.log(` Overlaps: ${mod.name} (${mod.file})`);
});
}
conflict.suggestions.forEach(s => console.log(` - ${s}`));
});
}
```
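The `add` branch above calls `insertContentAfterSection`, which is not defined in this document. One plausible sketch, assuming `section` is a literal heading line in the target file:
```javascript
// Sketch (assumption): insert new content immediately after the line that
// matches the target section heading; appends at end if the heading is missing.
function insertContentAfterSection(fileContent, section, newContent) {
  const lines = fileContent.split('\n');
  const idx = lines.findIndex(line => line.trim() === section.trim());
  if (idx === -1) {
    // Section not found: append rather than fail silently
    return `${fileContent}\n${newContent}\n`;
  }
  lines.splice(idx + 1, 0, newContent);
  return lines.join('\n');
}
```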
#### 3.6 Close Conflict Agent
```javascript
close_agent({ id: conflictAgentId });
```
### Step 4: Invoke Context-Search Agent
**Execute after Step 2 (and Step 3 if triggered)**
```javascript
// Load user intent from planning-notes.md (from Phase 1)
const planningNotesPath = `${projectRoot}/.workflow/active/${session_id}/planning-notes.md`;
let userIntent = { goal: task_description, key_constraints: "None specified" };
if (file_exists(planningNotesPath)) {
const notesContent = Read(planningNotesPath);
const goalMatch = notesContent.match(/\*\*GOAL\*\*:\s*(.+)/);
const constraintsMatch = notesContent.match(/\*\*KEY_CONSTRAINTS\*\*:\s*(.+)/);
if (goalMatch) userIntent.goal = goalMatch[1].trim();
if (constraintsMatch) userIntent.key_constraints = constraintsMatch[1].trim();
}
// Prepare conflict resolution context for agent
const conflictContext = hasSignificantConflicts
? `Conflict Resolution: ${resolutionPath} (${selectedStrategies.length} resolved, ${customConflicts.length} custom)`
: `Conflict Resolution: None needed (no significant conflicts detected)`;
// Spawn context-search-agent
const contextAgentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/context-search-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution with priority sorting
## Session Information
- **Session ID**: ${session_id}
- **Task Description**: ${task_description}
- **Output Path**: ${projectRoot}/.workflow/active/${session_id}/.process/context-package.json
## User Intent (from Phase 1 - Planning Notes)
**GOAL**: ${userIntent.goal}
**KEY_CONSTRAINTS**: ${userIntent.key_constraints}
This is the PRIMARY context source - all subsequent analysis must align with user intent.
## Exploration Input (from Step 2)
- **Manifest**: ${sessionFolder}/explorations-manifest.json
- **Exploration Count**: ${explorationManifest.exploration_count}
- **Angles**: ${explorationManifest.angles_explored.join(', ')}
- **Complexity**: ${complexity}
## Conflict Resolution Input (from Step 3)
- **${conflictContext}**
${hasSignificantConflicts ? `- **Resolution File**: ${resolutionPath}
- **Resolved Conflicts**: ${selectedStrategies.length}
- **Custom Conflicts**: ${customConflicts.length}
- **Planning Constraints**: ${fallbackConstraints.length}` : ''}
## Mission
Execute complete context-search-agent workflow for implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**:
- Read and parse \`${projectRoot}/.workflow/project-tech.json\`. Use its \`overview\` section as the foundational \`project_context\`. This is your primary source for architecture, tech stack, and key components.
- Read and parse \`${projectRoot}/.workflow/project-guidelines.json\`. Load \`conventions\`, \`constraints\`, and \`learnings\` into a \`project_guidelines\` section.
- If files don't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
3. **Foundation**: Initialize CodexLens, get project structure, load docs
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
### Phase 2: Multi-Source Context Discovery
Execute all discovery tracks (WITH USER INTENT INTEGRATION):
- **Track -1**: User Intent & Priority Foundation (EXECUTE FIRST)
- Load user intent (GOAL, KEY_CONSTRAINTS) from session input
- Map user requirements to codebase entities (files, modules, patterns)
- Establish baseline priority scores based on user goal alignment
- Output: user_intent_mapping.json with preliminary priority scores
- **Track 0**: Exploration Synthesis (load ${sessionFolder}/explorations-manifest.json, prioritize critical_files, deduplicate patterns/integration_points)
- **Track 1**: Historical archive analysis (query manifest.json for lessons learned)
- **Track 2**: Reference documentation (CLAUDE.md, architecture docs)
- **Track 3**: Web examples (use Exa MCP for unfamiliar tech/APIs)
- **Track 4**: Codebase analysis (5-layer discovery: files, content, patterns, deps, config/tests)
### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build dependency graph
2. **Synthesize 5-source data** (including Track -1): Merge findings from all sources
- Priority order: User Intent > Archive > Docs > Exploration > Code > Web
- **Prioritize the context from \`project-tech.json\`** for architecture and tech stack unless code analysis reveals it's outdated
3. **Context Priority Sorting**:
a. Combine scores from Track -1 (user intent alignment) + relevance scores + exploration critical_files
b. Classify files into priority tiers:
- **Critical** (score >= 0.85): Directly mentioned in user goal OR exploration critical_files
- **High** (0.70-0.84): Key dependencies, patterns required for goal
- **Medium** (0.50-0.69): Supporting files, indirect dependencies
- **Low** (< 0.50): Contextual awareness only
c. Generate dependency_order: Based on dependency graph + user goal sequence
d. Document sorting_rationale: Explain prioritization logic
4. **Populate \`project_context\`**: Directly use the \`overview\` from \`project-tech.json\` to fill the \`project_context\` section. Include description, technology_stack, architecture, and key_components.
5. **Populate \`project_guidelines\`**: Load conventions, constraints, and learnings from \`project-guidelines.json\` into a dedicated section.
6. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
7. Perform conflict detection with risk assessment
8. **Inject conflict resolution results** (if conflict-resolution.json exists) into conflict_detection
9. **Generate prioritized_context section**:
\`\`\`json
{
"prioritized_context": {
"user_intent": {
"goal": "...",
"scope": "...",
"key_constraints": ["..."]
},
"priority_tiers": {
"critical": [{ "path": "...", "relevance": 0.95, "rationale": "..." }],
"high": [...],
"medium": [...],
"low": [...]
},
"dependency_order": ["module1", "module2", "module3"],
"sorting_rationale": "Based on user goal alignment (Track -1), exploration critical files, and dependency graph analysis"
}
}
\`\`\`
10. Generate and validate context-package.json with prioritized_context field
## Output Requirements
Complete context-package.json with:
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
- **project_context**: description, technology_stack, architecture, key_components (sourced from \`project-tech.json\`)
- **project_guidelines**: {conventions, constraints, quality_rules, learnings} (sourced from \`project-guidelines.json\`)
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
- **dependencies**: {internal[], external[]} with dependency graph
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy, historical_conflicts[], resolution_file (if exists)}
- **exploration_results**: {manifest_path, exploration_count, angles, explorations[], aggregated_insights} (from Track 0)
- **prioritized_context**: {user_intent, priority_tiers{critical, high, medium, low}, dependency_order[], sorting_rationale}
## Quality Validation
Before completion verify:
- [ ] Valid JSON format with all required fields
- [ ] File relevance accuracy >80%
- [ ] Dependency graph complete (max 2 transitive levels)
- [ ] Conflict risk level calculated correctly
- [ ] No sensitive data exposed
- [ ] Total files <=50 (prioritize high-relevance)
## Planning Notes Record (REQUIRED)
After completing context-package.json, append a brief execution record to planning-notes.md:
**File**: ${projectRoot}/.workflow/active/${session_id}/planning-notes.md
**Location**: Under "## Context Findings (Phase 2)" section
**Format**:
\`\`\`
### [Context-Search Agent] YYYY-MM-DD
- **Note**: [brief summary of key findings]
\`\`\`
Execute autonomously following agent documentation.
Report completion with statistics.
`
});
// Wait for context agent to complete
const contextResult = wait({
ids: [contextAgentId],
timeout_ms: 900000 // 15 minutes
});
// Close context agent
close_agent({ id: contextAgentId });
```
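For reference, the tier thresholds the agent is asked to apply (critical >= 0.85, high 0.70-0.84, medium 0.50-0.69, low < 0.50) reduce to a small classifier; a sketch:
```javascript
// Sketch: classify scored files into the priority tiers defined in the prompt.
function classifyPriorityTiers(scoredFiles) {
  const tiers = { critical: [], high: [], medium: [], low: [] };
  for (const file of scoredFiles) {
    if (file.relevance >= 0.85) tiers.critical.push(file);
    else if (file.relevance >= 0.70) tiers.high.push(file);
    else if (file.relevance >= 0.50) tiers.medium.push(file);
    else tiers.low.push(file);
  }
  return tiers;
}
```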
### Step 5: Output Verification
After agent completes, verify output:
```javascript
// Verify file was created
const outputPath = `${projectRoot}/.workflow/active/${session_id}/.process/context-package.json`;
if (!file_exists(outputPath)) {
throw new Error("Agent failed to generate context-package.json");
}
// Verify exploration_results included
const pkg = JSON.parse(Read(outputPath));
if (pkg.exploration_results?.exploration_count > 0) {
console.log(`Exploration results aggregated: ${pkg.exploration_results.exploration_count} angles`);
}
// Verify conflict resolution status
if (hasSignificantConflicts) {
const resolutionFileRef = pkg.conflict_detection?.resolution_file;
if (resolutionFileRef) {
console.log(`Conflict resolution integrated: ${resolutionFileRef}`);
}
}
```
## Parameter Reference
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `--session` | string | Yes | Workflow session ID (e.g., WFS-user-auth) |
| `task_description` | string | Yes | Detailed task description for context extraction |
## Auto Mode
When `--yes` or `-y`: Auto-select recommended strategy for each conflict in Step 3, skip clarification questions.
## Post-Phase Update
After context-gather completes, update planning-notes.md:
```javascript
const contextPackage = JSON.parse(Read(contextPath))
const conflictRisk = contextPackage.conflict_detection?.risk_level || 'low'
const criticalFiles = (contextPackage.exploration_results?.aggregated_insights?.critical_files || [])
.slice(0, 5).map(f => f.path)
const archPatterns = contextPackage.project_context?.architecture_patterns || []
const constraints = contextPackage.exploration_results?.aggregated_insights?.constraints || []
// Update Phase 2 section
Edit(planningNotesPath, {
old: '## Context Findings (Phase 2)\n(To be filled by context-gather)',
new: `## Context Findings (Phase 2)
- **CRITICAL_FILES**: ${criticalFiles.join(', ') || 'None identified'}
- **ARCHITECTURE**: ${archPatterns.join(', ') || 'Not detected'}
- **CONFLICT_RISK**: ${conflictRisk}
- **CONSTRAINTS**: ${constraints.length > 0 ? constraints.join('; ') : 'None'}`
})
// If conflicts were resolved inline, update conflict decisions section
if (hasSignificantConflicts && file_exists(resolutionPath)) {
const conflictRes = JSON.parse(Read(resolutionPath))
const resolved = conflictRes.resolved_conflicts || []
const planningConstraints = conflictRes.planning_constraints || []
Edit(planningNotesPath, {
old: '## Conflict Decisions (Phase 2)\n(To be filled if conflicts detected)',
new: `## Conflict Decisions (Phase 2)
- **RESOLVED**: ${resolved.map(r => `${r.conflict_id} → ${r.strategy_name}`).join('; ') || 'None'}
- **CUSTOM_HANDLING**: ${conflictRes.custom_conflicts?.map(c => c.id).join(', ') || 'None'}
- **CONSTRAINTS**: ${planningConstraints.map(c => c.content).join('; ') || 'None'}`
})
  // Append conflict constraints to consolidated list
  if (planningConstraints.length > 0) {
    // Assumed offset: entry 1 (user intent) + Phase 2 context constraints already listed
    const constraintCount = 1 + constraints.length
    Edit(planningNotesPath, {
      old: '## Consolidated Constraints (Phase 3 Input)',
      new: `## Consolidated Constraints (Phase 3 Input)
${planningConstraints.map((c, i) => `${constraintCount + i + 1}. [Conflict] ${c.content}`).join('\n')}`
    })
  }
}
// Append Phase 2 constraints to consolidated list
// Numbering starts at 2: entry 1 is assumed to be the user-intent constraint from Phase 1
Edit(planningNotesPath, {
old: '## Consolidated Constraints (Phase 3 Input)',
new: `## Consolidated Constraints (Phase 3 Input)
${constraints.map((c, i) => `${i + 2}. [Context] ${c}`).join('\n')}`
})
```
## Error Handling
### Recovery Strategy
```
1. Pre-check: Verify exploration results before conflict analysis
2. Monitor: Track agents via wait with timeout
3. Validate: Parse agent JSON output
4. Recover:
- Agent failure → check logs + report error
- Invalid JSON → retry once with Claude fallback
- CLI failure → fallback to Claude analysis
- Edit tool failure → report affected files + rollback option
- User cancels → mark as "unresolved", continue to Step 4
5. Degrade: If conflict analysis fails, skip and continue with context packaging
6. Cleanup: Always close_agent even on error path
```
### Rollback Handling
```
If Edit tool fails mid-application:
1. Log all successfully applied modifications
2. Output rollback option via text interaction
3. If rollback selected: restore files from git or backups
4. If continue: mark partial resolution in context-package.json
```
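Recovery point 6 (always `close_agent`, even on errors) can be enforced with a wrapper around the lifecycle primitives used throughout this document; a minimal sketch:
```javascript
// Sketch: guarantee close_agent runs on success, error, and timeout paths.
function withAgent(spawnMessage, timeoutMs, handleResult) {
  const agentId = spawn_agent({ message: spawnMessage });
  try {
    const result = wait({ ids: [agentId], timeout_ms: timeoutMs });
    return handleResult(result);
  } finally {
    close_agent({ id: agentId }); // cleanup regardless of outcome
  }
}
```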
## Notes
- **Detection-first**: Always check for existing package before invoking agent
- **User intent integration**: Load user intent from planning-notes.md (Phase 1 output)
- **Conflict-aware exploration**: Explore agents detect conflict indicators during their work
- **Inline conflict resolution**: Conflicts resolved within this phase when significant indicators found
- **Output**: Generates `context-package.json` with `prioritized_context` field + optional `conflict-resolution.json`
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call
- **Explicit Lifecycle**: Always close_agent after wait to free resources
- **Batch Wait**: Use single wait call for multiple parallel agents for efficiency
## Output
- **Variable**: `contextPath` (e.g., `{projectRoot}/.workflow/active/WFS-xxx/.process/context-package.json`)
- **Variable**: `conflictRisk` (none/low/medium/high/resolved)
- **File**: Updated `planning-notes.md` with context findings + conflict decisions (if applicable)
- **File**: Optional `conflict-resolution.json` (when conflicts resolved inline)
## Next Phase
Return to orchestrator, then auto-continue to [Phase 3: Task Generation](03-task-generation.md).

# Phase 3: Task Generation
Generate implementation plan documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) using action-planning-agent - this phase produces planning artifacts and does NOT execute code implementation.
## Auto Mode
When `--yes` or `-y`: Skip user questions, use defaults (no materials, Agent executor, Codex CLI tool).
## Core Philosophy
- **Planning Only**: Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - does NOT implement code
- **Agent-Driven Document Generation**: Delegate plan generation to action-planning-agent
- **NO Redundant Context Sorting**: Context priority sorting is ALREADY completed in context-gather Phase 2
- Use `context-package.json.prioritized_context` directly
- DO NOT re-sort files or re-compute priorities
- `priority_tiers` and `dependency_order` are pre-computed and ready-to-use
- **N+1 Parallel Planning**: Auto-detect multi-module projects, enable parallel planning (2+1 or 3+1 mode)
- **Progressive Loading**: Load context incrementally (Core → Selective → On-Demand) due to analysis.md file size
- **Memory-First**: Reuse loaded documents from conversation memory
- **Smart Selection**: Load synthesis_output OR guidance + relevant role analyses, NOT all role analyses
- **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
- **Path Clarity**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root (e.g., `./src/module`)
- **Explicit Lifecycle**: Manage subagent lifecycle with spawn_agent → wait → close_agent
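The Progressive Loading and Smart Selection bullets above can be pictured as a three-tier plan; a rough sketch where the paths and variables are illustrative assumptions, not a fixed contract:
```javascript
// Sketch (assumption): Core is always read; Selective loads synthesis OR
// guidance + relevant role analyses; On-Demand defers the remaining large
// analysis.md files until a task actually needs them.
const loadingPlan = {
  core: ['planning-notes.md', '.process/context-package.json'],
  selective: hasSynthesis
    ? [synthesisOutputPath]
    : [guidanceSpecPath, ...relevantRoleAnalyses],
  onDemand: allRoleAnalyses.filter(p => !relevantRoleAnalyses.includes(p))
};
```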
## Execution Process
```
Input Parsing:
├─ Parse flags: --session
└─ Validation: session_id REQUIRED
Phase 0: User Configuration (Interactive)
├─ Question 1: Supplementary materials/guidelines?
├─ Question 2: Execution method preference (Agent/CLI/Hybrid)
├─ Question 3: CLI tool preference (if CLI selected)
└─ Store: userConfig for agent prompt
Phase 1: Context Preparation & Module Detection (Command)
├─ Assemble session paths (metadata, context package, output dirs)
├─ Provide metadata (session_id, execution_mode, mcp_capabilities)
├─ Auto-detect modules from context-package + directory structure
└─ Decision:
├─ modules.length == 1 → Single Agent Mode (Phase 2A)
└─ modules.length >= 2 → Parallel Mode (Phase 2B + Phase 3)
Phase 2A: Single Agent Planning (Original Flow)
├─ Spawn action-planning-agent
├─ Wait for completion
├─ Close agent
├─ Outputs: Task JSONs, IMPL_PLAN.md, TODO_LIST.md
└─ Lifecycle: spawn_agent → wait → close_agent
Phase 2B: N Parallel Planning (Multi-Module)
├─ Spawn N action-planning-agents simultaneously (one per module)
├─ Wait for all agents (batch wait)
├─ Close all agents
├─ Each generates module-scoped tasks (IMPL-{prefix}{seq}.json)
├─ Task ID format: IMPL-A1, IMPL-A2... / IMPL-B1, IMPL-B2...
└─ Each module limited to ≤9 tasks
Phase 3: Integration (+1 Coordinator, Multi-Module Only)
├─ Spawn coordinator agent
├─ Wait for completion
├─ Close agent
├─ Collect all module task JSONs
├─ Resolve cross-module dependencies (CROSS::{module}::{pattern} → actual ID)
├─ Generate unified IMPL_PLAN.md (grouped by module)
└─ Generate TODO_LIST.md (hierarchical: module → tasks)
```
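The `CROSS::{module}::{pattern}` placeholders shown in Phase 3 must be resolved once all module task JSONs exist. A sketch of that resolution step; the pattern-to-title matching heuristic here is an assumption, not the coordinator's actual logic:
```javascript
// Sketch (assumption): resolve CROSS::{module}::{pattern} placeholders by
// searching the target module's tasks (by ID prefix) for a matching title.
function resolveCrossModuleDeps(allTasks) {
  for (const task of allTasks) {
    task.depends_on = (task.depends_on || []).map(dep => {
      const match = /^CROSS::(\w+)::(.+)$/.exec(dep);
      if (!match) return dep; // already a concrete task ID
      const [, modulePrefix, pattern] = match;
      const target = allTasks.find(t =>
        t.id.startsWith(`IMPL-${modulePrefix}`) &&
        t.title.toLowerCase().includes(pattern.toLowerCase())
      );
      return target ? target.id : dep; // leave unresolved placeholders for review
    });
  }
  return allTasks;
}
```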
## Document Generation Lifecycle
### Phase 0: User Configuration (Interactive)
**Purpose**: Collect user preferences before task generation to ensure generated tasks match execution expectations.
**Auto Mode Check**:
```javascript
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
// Check for prep-package auto-configuration (from /prompts:prep-plan)
const prepPath = `${projectRoot}/.workflow/.prep/plan-prep-package.json`
const prepExec = fs.existsSync(prepPath) ? JSON.parse(Read(prepPath))?.execution : null
if (autoYes || prepExec) {
const source = prepExec ? 'prep-package' : '--yes flag'
console.log(`[${source}] Using defaults: ${prepExec?.execution_method || 'agent'} executor, ${prepExec?.preferred_cli_tool || 'codex'} CLI`)
userConfig = {
supplementaryMaterials: prepExec?.supplementary_materials || { type: "none", content: [] },
executionMethod: prepExec?.execution_method || "agent",
preferredCliTool: prepExec?.preferred_cli_tool || "codex",
enableResume: true
}
// Skip to Phase 1
}
```
**User Questions** (skipped if autoYes):
```javascript
if (!autoYes) ASK_USER([
{
id: "materials",
type: "select",
prompt: "Do you have supplementary materials or guidelines to include?",
options: [
{ label: "No additional materials", description: "Use existing context only" },
{ label: "Provide file paths", description: "I'll specify paths to include" },
{ label: "Provide inline content", description: "I'll paste content directly" }
]
},
{
id: "execution-method",
type: "select",
prompt: "Select execution method for generated tasks:",
options: [
{ label: "Agent (Recommended)", description: "Claude agent executes tasks directly" },
{ label: "Hybrid", description: "Agent orchestrates, calls CLI for complex steps" },
{ label: "CLI Only", description: "All execution via CLI tools (codex/gemini/qwen)" }
]
},
{
id: "cli-tool",
type: "select",
prompt: "If using CLI, which tool do you prefer?",
options: [
{ label: "Codex (Recommended)", description: "Best for implementation tasks" },
{ label: "Gemini", description: "Best for analysis and large context" },
{ label: "Qwen", description: "Alternative analysis tool" },
{ label: "Auto", description: "Let agent decide per-task" }
]
}
]) // BLOCKS (wait for user response)
```
**Handle Materials Response** (skipped if autoYes):
```javascript
if (!autoYes && userConfig.materials === "Provide file paths") {
// Follow-up question for file paths
const pathsResponse = ASK_USER([{
id: "material-paths",
type: "input",
prompt: "Enter file paths to include (comma-separated or one per line):",
options: [
{ label: "Enter paths", description: "Provide paths in text input" }
]
}]) // BLOCKS (wait for user response)
userConfig.supplementaryPaths = parseUserPaths(pathsResponse)
}
```
**Build userConfig**:
```javascript
const userConfig = {
supplementaryMaterials: {
type: "none|paths|inline",
content: [...], // Parsed paths or inline content
},
executionMethod: "agent|hybrid|cli",
preferredCliTool: "codex|gemini|qwen|auto",
enableResume: true // Always enable resume for CLI executions
}
```
**Pass to Agent**: Include `userConfig` in agent prompt for Phase 2A/2B.
### Phase 1: Context Preparation & Module Detection (Command Responsibility)
**Command prepares session paths, metadata, detects module structure. Context priority sorting is NOT performed here - it's already completed in context-gather Phase 2.**
**Session Path Structure**:
```
{projectRoot}/.workflow/active/WFS-{session-id}/
├── workflow-session.json # Session metadata
├── planning-notes.md # Consolidated planning notes
├── .process/
│ └── context-package.json # Context package with artifact catalog
├── .task/ # Output: Task JSON files
│ ├── IMPL-A1.json # Multi-module: prefixed by module
│ ├── IMPL-A2.json
│ ├── IMPL-B1.json
│ └── ...
├── plan.json # Output: Plan overview (plan-overview-base-schema.json)
├── IMPL_PLAN.md # Output: Implementation plan (grouped by module)
└── TODO_LIST.md # Output: TODO list (hierarchical)
```
**Command Preparation**:
1. **Assemble Session Paths** for agent prompt:
- `session_metadata_path`
- `context_package_path`
- Output directory paths
2. **Provide Metadata** (simple values):
- `session_id`
- `mcp_capabilities` (available MCP tools)
3. **Auto Module Detection** (determines single vs parallel mode):
```javascript
function autoDetectModules(contextPackage, projectRoot) {
// === Complexity Gate: Only parallelize for High complexity ===
const complexity = contextPackage.metadata?.complexity || 'Medium';
if (complexity !== 'High') {
// Force single agent mode for Low/Medium complexity
// This maximizes agent context reuse for related tasks
return [{ name: 'main', prefix: '', paths: ['.'] }];
}
// Priority 1: Explicit frontend/backend separation
if (exists('src/frontend') && exists('src/backend')) {
return [
{ name: 'frontend', prefix: 'A', paths: ['src/frontend'] },
{ name: 'backend', prefix: 'B', paths: ['src/backend'] }
];
}
// Priority 2: Monorepo structure
if (exists('packages/*') || exists('apps/*')) {
return detectMonorepoModules(); // Returns 2-3 main packages
}
// Priority 3: Context-package dependency clustering
const modules = clusterByDependencies(contextPackage.dependencies?.internal);
if (modules.length >= 2) return modules.slice(0, 3);
// Default: Single module (original flow)
return [{ name: 'main', prefix: '', paths: ['.'] }];
}
```
**Decision Logic**:
- `complexity !== 'High'` → Force Phase 2A (Single Agent, maximize context reuse)
- `modules.length == 1` → Phase 2A (Single Agent, original flow)
- `modules.length >= 2 && complexity == 'High'` → Phase 2B + Phase 3 (N+1 Parallel)
**Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description, not by flags.
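Priority 3 above relies on a `clusterByDependencies` helper that is not defined in this document; one possible sketch, grouping internal dependency edges by top-level source directory (an assumed heuristic):
```javascript
// Sketch (assumption): cluster internal dependencies by top-level source
// directory and surface the largest groups as candidate modules.
// `dep.source` as the file path field is an assumption about the dependency shape.
function clusterByDependencies(internalDeps = []) {
  const groups = {};
  for (const dep of internalDeps) {
    const topDir = (dep.source || '').split('/').slice(0, 2).join('/'); // e.g. "src/auth"
    if (!topDir) continue;
    (groups[topDir] ||= []).push(dep);
  }
  return Object.entries(groups)
    .filter(([, deps]) => deps.length >= 3)       // ignore tiny clusters
    .sort((a, b) => b[1].length - a[1].length)    // largest first
    .map(([dir], i) => ({
      name: dir.split('/').pop(),
      prefix: String.fromCharCode(65 + i),        // A, B, C...
      paths: [dir]
    }));
}
```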
### Phase 2A: Single Agent Planning (Original Flow)
**Condition**: `modules.length == 1` (no multi-module detected)
**Purpose**: Generate IMPL_PLAN.md, task JSONs, and TODO_LIST.md - planning documents only, NOT code implementation.
**Agent Invocation**:
```javascript
// Spawn action-planning-agent
const planningAgentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/action-planning-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
## TASK OBJECTIVE
Generate implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for workflow session
IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT implementing code.
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
## PLANNING NOTES (PHASE 1-2 CONTEXT)
Load: ${projectRoot}/.workflow/active/${session_id}/planning-notes.md
This document contains:
- User Intent: Original GOAL and KEY_CONSTRAINTS from Phase 1
- Context Findings: Critical files, architecture, and constraints from Phase 2
- Conflict Decisions: Resolved conflicts and planning constraints from Phase 2
- Consolidated Constraints: All constraints from all phases
**USAGE**: Read planning-notes.md FIRST. Use Consolidated Constraints list to guide task sequencing and dependencies.
## SESSION PATHS
Input:
- Session Metadata: ${projectRoot}/.workflow/active/${session_id}/workflow-session.json
- Planning Notes: ${projectRoot}/.workflow/active/${session_id}/planning-notes.md
- Context Package: ${projectRoot}/.workflow/active/${session_id}/.process/context-package.json
Output:
- Task Dir: ${projectRoot}/.workflow/active/${session_id}/.task/
- IMPL_PLAN: ${projectRoot}/.workflow/active/${session_id}/IMPL_PLAN.md
- TODO_LIST: ${projectRoot}/.workflow/active/${session_id}/TODO_LIST.md
## CONTEXT METADATA
Session ID: ${session_id}
MCP Capabilities: {exa_code, exa_web, code_index}
## USER CONFIGURATION (from Phase 0)
Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
Agent executes implementation_approach steps directly
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" →
Per-task decision: Analyze task complexity, set method to "agent" OR "cli" per task
- Simple tasks (≤3 files, straightforward logic) → method: "agent"
- Complex tasks (>3 files, complex logic, refactoring) → method: "cli"
CLI tool: userConfig.preferredCliTool, enable_resume: true
IMPORTANT: Do NOT add command field to implementation_approach steps. Execution routing is controlled by task-level meta.execution_config.method only.
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2. DO NOT re-sort.
Direct usage:
- **user_intent**: Use goal/scope/key_constraints for task alignment
- **priority_tiers.critical**: These files are PRIMARY focus for task generation
- **priority_tiers.high**: These files are SECONDARY focus
- **dependency_order**: Use this for task sequencing - already computed
- **sorting_rationale**: Reference for understanding priority decisions
## EXPLORATION CONTEXT (from context-package.exploration_results) - SUPPLEMENT ONLY
If prioritized_context is incomplete, fall back to exploration_results:
- Load exploration_results from context-package.json
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
- Reference aggregated_insights.all_patterns for implementation approach
- Use aggregated_insights.all_integration_points for precise modification locations
- Use conflict_indicators for risk-aware task sequencing
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints as task constraints (for brainstorm-less workflows)
- Reference resolved_conflicts for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## EXPECTED DELIVERABLES
1. Task JSON Files (.task/IMPL-*.json)
- 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
- Quantified requirements with explicit counts
- Artifacts integration from context package
- **focus_paths generated directly from prioritized_context.priority_tiers (critical + high)**
- NO re-sorting or re-prioritization - use pre-computed tiers as-is
- Critical files are PRIMARY focus, High files are SECONDARY
- Flow control with pre_analysis steps (use prioritized_context.dependency_order for task sequencing)
- **CLI Execution IDs and strategies (MANDATORY)**
2. Implementation Plan (IMPL_PLAN.md)
- Context analysis and artifact references
- Task breakdown and execution strategy
- Complete structure per agent definition
3. TODO List (TODO_LIST.md)
- Hierarchical structure (containers, pending, completed markers)
- Links to task JSONs and summaries
- Matches task JSON hierarchy
## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **cli_execution_id**: Unique ID for CLI execution (format: \`{session_id}-{task_id}\`)
- **cli_execution**: Strategy object based on depends_on:
- No deps → \`{ "strategy": "new" }\`
- 1 dep (single child) → \`{ "strategy": "resume", "resume_from": "parent-cli-id" }\`
- 1 dep (multiple children) → \`{ "strategy": "fork", "resume_from": "parent-cli-id" }\`
- N deps → \`{ "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }\`
**CLI Execution Strategy Rules**:
1. **new**: Task has no dependencies - starts fresh CLI conversation
2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation
**Execution Command Patterns**:
- new: \`ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]\`
- resume: \`ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write\`
- fork: \`ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write\`
- merge_fork: \`ccw cli -p "[prompt]" --resume [merge_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write\`
## QUALITY STANDARDS
Hard Constraints:
- Task count <= 18 (hard limit - request re-scope if exceeded)
- All requirements quantified (explicit counts and enumerated lists)
- Acceptance criteria measurable (include verification commands)
- Artifact references mapped from context package
- All documents follow agent-defined structure
## SUCCESS CRITERIA
- All planning documents generated successfully:
- Task JSONs valid and saved to .task/ directory
- IMPL_PLAN.md created with complete structure
- TODO_LIST.md generated matching task JSONs
- Return completion status with document count and task breakdown summary
## PLANNING NOTES RECORD (REQUIRED)
After completing, update planning-notes.md:
**File**: ${projectRoot}/.workflow/active/${session_id}/planning-notes.md
1. **Task Generation (Phase 3)**: Task count and key tasks
2. **N+1 Context**: Key decisions (with rationale) + deferred items
\`\`\`markdown
## Task Generation (Phase 3)
### [Action-Planning Agent] YYYY-MM-DD
- **Tasks**: [count] ([IDs])
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| [choice] | [why] | [Yes/No] |
### Deferred
- [ ] [item] - [reason]
\`\`\`
`
});
// Wait for planning agent to complete
const planningResult = wait({
ids: [planningAgentId],
timeout_ms: 900000 // 15 minutes
});
// Close planning agent
close_agent({ id: planningAgentId });
```
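The new/resume/fork/merge_fork rules embedded in the prompt above map mechanically from each task's `depends_on` shape; a sketch of that derivation, assuming a `tasksById` lookup and a precomputed `childCounts` index (task ID → number of dependent tasks):
```javascript
// Sketch: derive cli_execution strategy per the rules in the prompt above.
function deriveCliStrategy(task, tasksById, childCounts) {
  const deps = task.depends_on || [];
  if (deps.length === 0) return { strategy: 'new' };
  if (deps.length > 1) {
    // Multiple parents: merge all parent conversations into a new one
    return { strategy: 'merge_fork', merge_from: deps.map(d => tasksById[d].cli_execution_id) };
  }
  const parent = tasksById[deps[0]];
  // Single parent: resume its conversation if this is its only child, else fork
  const strategy = childCounts[parent.id] > 1 ? 'fork' : 'resume';
  return { strategy, resume_from: parent.cli_execution_id };
}
```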
### Phase 2B: N Parallel Planning (Multi-Module)
**Condition**: `modules.length >= 2` (multi-module detected)
**Purpose**: Launch N action-planning-agents simultaneously, one per module, for parallel task JSON generation.
**Note**: Phase 2B agents generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md are generated by Phase 3 Coordinator.
**Parallel Agent Invocation**:
```javascript
// Spawn N agents in parallel (one per module)
const moduleAgents = [];
modules.forEach(module => {
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/action-planning-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
## TASK OBJECTIVE
Generate task JSON files for ${module.name} module within workflow session
IMPORTANT: This is PLANNING ONLY - generate task JSONs, do NOT implement code.
IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md are generated by the Phase 3 Coordinator.
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
## PLANNING NOTES (PHASE 1-2 CONTEXT)
Load: ${projectRoot}/.workflow/active/${session_id}/planning-notes.md
This document contains consolidated constraints and user intent to guide module-scoped task generation.
## MODULE SCOPE
- Module: ${module.name} (${module.type})
- Focus Paths: ${module.paths.join(', ')}
- Task ID Prefix: IMPL-${module.prefix}
- Task Limit: ≤6 tasks (hard limit for this module)
- Other Modules: ${otherModules.join(', ')} (reference only, do NOT generate tasks for them)
## SESSION PATHS
Input:
- Session Metadata: ${projectRoot}/.workflow/active/${session_id}/workflow-session.json
- Planning Notes: ${projectRoot}/.workflow/active/${session_id}/planning-notes.md
- Context Package: ${projectRoot}/.workflow/active/${session_id}/.process/context-package.json
Output:
- Task Dir: ${projectRoot}/.workflow/active/${session_id}/.task/
## CONTEXT METADATA
Session ID: ${session_id}
MCP Capabilities: {exa_code, exa_web, code_index}
## USER CONFIGURATION (from Phase 0)
Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
Agent executes implementation_approach steps directly
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" →
Per-task decision: Analyze task complexity, set method to "agent" OR "cli" per task
- Simple tasks (≤3 files, straightforward logic) → method: "agent"
- Complex tasks (>3 files, complex logic, refactoring) → method: "cli"
CLI tool: userConfig.preferredCliTool, enable_resume: true
IMPORTANT: Do NOT add command field to implementation_approach steps. Execution routing is controlled by task-level meta.execution_config.method only.
## PRIORITIZED CONTEXT (from context-package.prioritized_context) - ALREADY SORTED
Context sorting is ALREADY COMPLETED in context-gather Phase 2. DO NOT re-sort.
Filter by module scope (${module.paths.join(', ')}):
- **user_intent**: Use for task alignment within module
- **priority_tiers.critical**: Filter for files in ${module.paths.join(', ')} → PRIMARY focus
- **priority_tiers.high**: Filter for files in ${module.paths.join(', ')} → SECONDARY focus
- **dependency_order**: Use module-relevant entries for task sequencing
## EXPLORATION CONTEXT (from context-package.exploration_results) - SUPPLEMENT ONLY
If prioritized_context is incomplete for this module, fall back to exploration_results:
- Load exploration_results from context-package.json
- Filter for ${module.name} module: Use aggregated_insights.critical_files matching ${module.paths.join(', ')}
- Apply module-relevant constraints from aggregated_insights.constraints
- Reference aggregated_insights.all_patterns applicable to ${module.name}
- Use aggregated_insights.all_integration_points for precise modification locations within module scope
- Use conflict_indicators for risk-aware task sequencing
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints relevant to ${module.name} as task constraints
- Reference resolved_conflicts affecting ${module.name} for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## CROSS-MODULE DEPENDENCIES
- For dependencies ON other modules: Use placeholder depends_on: ["CROSS::{module}::{pattern}"]
- Example: depends_on: ["CROSS::B::api-endpoint"] (this module depends on B's api-endpoint task)
- Phase 3 Coordinator resolves to actual task IDs
- For dependencies FROM other modules: Document in task context as "provides_for" annotation
## EXPECTED DELIVERABLES
Task JSON Files (.task/IMPL-${module.prefix}*.json):
- 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
- Task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
- Quantified requirements with explicit counts
- Artifacts integration from context package (filtered for ${module.name})
- **focus_paths generated directly from prioritized_context.priority_tiers filtered by ${module.paths.join(', ')}**
- NO re-sorting - use pre-computed tiers filtered for this module
- Critical files are PRIMARY focus, High files are SECONDARY
- Flow control with pre_analysis steps (use prioritized_context.dependency_order for module task sequencing)
- **CLI Execution IDs and strategies (MANDATORY)**
- Focus ONLY on ${module.name} module scope
## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **cli_execution_id**: Unique ID for CLI execution (format: \`{session_id}-IMPL-${module.prefix}{seq}\`)
- **cli_execution**: Strategy object based on depends_on:
- No deps → \`{ "strategy": "new" }\`
- 1 dep (single child) → \`{ "strategy": "resume", "resume_from": "parent-cli-id" }\`
- 1 dep (multiple children) → \`{ "strategy": "fork", "resume_from": "parent-cli-id" }\`
- N deps → \`{ "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }\`
- Cross-module dep → \`{ "strategy": "cross_module_fork", "resume_from": "CROSS::{module}::{pattern}" }\`
**CLI Execution Strategy Rules**:
1. **new**: Task has no dependencies - starts fresh CLI conversation
2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation
5. **cross_module_fork**: Task depends on task from another module - Phase 3 resolves placeholder
**Execution Command Patterns**:
- new: \`ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]\`
- resume: \`ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write\`
- fork: \`ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write\`
- merge_fork: \`ccw cli -p "[prompt]" --resume [merge_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write\`
- cross_module_fork: (Phase 3 resolves placeholder, then uses fork pattern)
## QUALITY STANDARDS
Hard Constraints:
- Task count <= 6 for this module (hard limit per MODULE SCOPE above - coordinate with Phase 3 Coordinator if exceeded)
- All requirements quantified (explicit counts and enumerated lists)
- Acceptance criteria measurable (include verification commands)
- Artifact references mapped from context package (module-scoped filter)
- Focus paths use absolute paths or clear relative paths from project root
- Cross-module dependencies use CROSS:: placeholder format
## SUCCESS CRITERIA
- Task JSONs saved to .task/ with IMPL-${module.prefix}* naming
- All task JSONs include cli_execution_id and cli_execution strategy
- Cross-module dependencies use CROSS:: placeholder format consistently
- Focus paths scoped to ${module.paths.join(', ')} only
- Return: task count, task IDs, dependency summary (internal + cross-module)
## PLANNING NOTES RECORD (REQUIRED)
After completing, append to planning-notes.md:
\`\`\`markdown
### [${module.name}] YYYY-MM-DD
- **Tasks**: [count] ([IDs])
- **CROSS deps**: [placeholders used]
\`\`\`
`
});
moduleAgents.push(agentId);
});
// Batch wait for all module agents
const moduleResults = wait({
ids: moduleAgents,
timeout_ms: 900000 // 15 minutes
});
// Close all module agents
moduleAgents.forEach(agentId => {
close_agent({ id: agentId });
});
```
**Output Structure** (direct to .task/):
```
.task/
├── IMPL-A1.json # Module A (e.g., frontend)
├── IMPL-A2.json
├── IMPL-B1.json # Module B (e.g., backend)
├── IMPL-B2.json
└── IMPL-C1.json # Module C (e.g., shared)
```
**Task ID Naming**:
- Format: `IMPL-{prefix}{seq}.json`
- Prefix: A, B, C... (assigned by detection order)
- Sequence: 1, 2, 3... (per-module increment)
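For illustration, the detection-order rule amounts to one line (a hypothetical sketch; the actual assignment happens during module detection earlier in this skill):

```javascript
// Prefixes A, B, C... follow module detection order (65 is the char code of 'A')
modules.forEach((m, i) => { m.prefix = String.fromCharCode(65 + i); });
```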
### Phase 3: Integration (+1 Coordinator Agent, Multi-Module Only)
**Condition**: Only executed when `modules.length >= 2`
**Purpose**: Collect all module tasks, resolve cross-module dependencies, generate unified IMPL_PLAN.md and TODO_LIST.md documents.
**Coordinator Agent Invocation**:
```javascript
// Wait for all Phase 2B agents to complete (already done above)
// Spawn +1 Coordinator Agent
const coordinatorAgentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/action-planning-agent.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
## TASK OBJECTIVE
Integrate all module task JSONs, resolve cross-module dependencies, and generate unified IMPL_PLAN.md and TODO_LIST.md
IMPORTANT: This is INTEGRATION ONLY - consolidate existing task JSONs, do NOT create new tasks.
## SESSION PATHS
Input:
- Session Metadata: ${projectRoot}/.workflow/active/${session_id}/workflow-session.json
- Context Package: ${projectRoot}/.workflow/active/${session_id}/.process/context-package.json
- Task JSONs: ${projectRoot}/.workflow/active/${session_id}/.task/IMPL-*.json (from Phase 2B)
Output:
- Updated Task JSONs: ${projectRoot}/.workflow/active/${session_id}/.task/IMPL-*.json (resolved dependencies)
- IMPL_PLAN: ${projectRoot}/.workflow/active/${session_id}/IMPL_PLAN.md
- TODO_LIST: ${projectRoot}/.workflow/active/${session_id}/TODO_LIST.md
## CONTEXT METADATA
Session ID: ${session_id}
Modules: ${modules.map(m => m.name + '(' + m.prefix + ')').join(', ')}
Module Count: ${modules.length}
## INTEGRATION STEPS
1. Collect all .task/IMPL-*.json, group by module prefix
2. Resolve CROSS:: dependencies → actual task IDs, update task JSONs
3. Generate IMPL_PLAN.md (multi-module format per agent specification)
4. Generate TODO_LIST.md (hierarchical format per agent specification)
## CROSS-MODULE DEPENDENCY RESOLUTION
- Pattern: CROSS::{module}::{pattern} → IMPL-{module}* matching title/context
- Example: CROSS::B::api-endpoint → IMPL-B1 (if B1 title contains "api-endpoint")
- Log unresolved as warnings
## EXPECTED DELIVERABLES
1. Updated Task JSONs with resolved dependency IDs
2. IMPL_PLAN.md - multi-module format with cross-dependency section
3. TODO_LIST.md - hierarchical by module with cross-dependency section
## SUCCESS CRITERIA
- No CROSS:: placeholders remaining in task JSONs
- IMPL_PLAN.md and TODO_LIST.md generated with multi-module structure
- Return: task count, per-module breakdown, resolved dependency count
## PLANNING NOTES RECORD (REQUIRED)
After integration, update planning-notes.md:
\`\`\`markdown
### [Coordinator] YYYY-MM-DD
- **Total**: [count] tasks
- **Resolved**: [CROSS:: resolutions]
## N+1 Context
### Decisions
| Decision | Rationale | Revisit? |
|----------|-----------|----------|
| CROSS::X → IMPL-Y | [why this resolution] | [Yes/No] |
### Deferred
- [ ] [unresolved CROSS or conflict] - [reason]
\`\`\`
`
});
// Wait for coordinator agent to complete
const coordinatorResult = wait({
ids: [coordinatorAgentId],
timeout_ms: 600000 // 10 minutes
});
// Close coordinator agent
close_agent({ id: coordinatorAgentId });
```
**Dependency Resolution Algorithm**:
```javascript
function resolveCrossModuleDependency(placeholder, allTasks) {
const match = placeholder.match(/^CROSS::(\w+)::(.+)$/);
if (!match) return placeholder; // Not a CROSS:: placeholder - leave unchanged
const [, targetModule, pattern] = match;
const candidates = allTasks.filter(t =>
t.id.startsWith(`IMPL-${targetModule}`) &&
(t.title.toLowerCase().includes(pattern.toLowerCase()) ||
t.context?.description?.toLowerCase().includes(pattern.toLowerCase()))
);
return candidates.length > 0
? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id // Deterministic: lowest task ID wins
: placeholder; // Keep for manual resolution
}
```
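A short usage sketch for the resolver, assuming `allTasks` holds every parsed `.task/IMPL-*.json`:

```javascript
// Resolve placeholders in-place, then surface anything left for manual review
allTasks.forEach(task => {
  task.depends_on = (task.depends_on || []).map(dep =>
    dep.startsWith('CROSS::') ? resolveCrossModuleDependency(dep, allTasks) : dep
  );
});
allTasks
  .flatMap(t => (t.depends_on || []).filter(d => d.startsWith('CROSS::')))
  .forEach(p => console.warn(`Unresolved cross-module dependency: ${p}`));
```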
## Generate plan.json (plan-overview-base-schema)
After all task JSONs and IMPL_PLAN.md are generated (both single-agent and multi-module paths), generate `plan.json` as structured index:
```javascript
// Scan generated task files
const taskFiles = Glob(`${sessionFolder}/.task/IMPL-*.json`)
const taskIds = taskFiles.map(f => JSON.parse(Read(f)).id).sort()
// Guard: skip plan.json if no tasks generated
if (taskIds.length === 0) {
console.warn('No tasks generated; skipping plan.json')
} else {
const planOverview = {
summary: implPlanSummary, // From IMPL_PLAN.md overview section
approach: implPlanApproach, // From IMPL_PLAN.md approach/strategy
task_ids: taskIds, // ["IMPL-001", "IMPL-002", ...] or ["IMPL-A1", "IMPL-B1", ...]
task_count: taskIds.length,
complexity: complexity, // From Phase 1 assessment
recommended_execution: userConfig.executionMethod === "cli" ? "Codex" : "Agent",
_metadata: {
timestamp: getUtc8ISOString(),
source: "action-planning-agent",
planning_mode: "agent-based",
plan_type: "feature",
schema_version: "2.0"
}
}
Write(`${sessionFolder}/plan.json`, JSON.stringify(planOverview, null, 2))
} // end guard
```
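The snippet above reads `implPlanSummary` and `implPlanApproach` without defining them. One way to derive them, assuming IMPL_PLAN.md carries `## Overview` and `## Approach` headings (the heading names are an assumption, not fixed by the schema):

```javascript
// Extract a named section's body from IMPL_PLAN.md (text up to the next ## heading)
const implPlan = Read(`${sessionFolder}/IMPL_PLAN.md`);
const section = (heading) =>
  (implPlan.split(new RegExp(`^## ${heading}\\s*$`, 'm'))[1] || '')
    .split(/^## /m)[0]
    .trim();
const implPlanSummary = section('Overview');
const implPlanApproach = section('Approach');
```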
## Output
- **Files**:
- `{projectRoot}/.workflow/active/{sessionId}/plan.json`
- `{projectRoot}/.workflow/active/{sessionId}/IMPL_PLAN.md`
- `{projectRoot}/.workflow/active/{sessionId}/.task/IMPL-*.json`
- `{projectRoot}/.workflow/active/{sessionId}/TODO_LIST.md`
- **Updated**: `planning-notes.md` with task generation record and N+1 context
## Next Step
Return to orchestrator. Present user with action choices (or auto-continue if `--yes`):
1. Verify Plan Quality (Recommended) → `workflow:plan-verify`
2. Start Execution → Phase 4 (phases/04-execution.md)
3. Review Status Only → `workflow:status`

View File

@@ -1,452 +0,0 @@
# Phase 4: Execution (Conditional)
Execute implementation tasks using agent orchestration with lazy loading, progress tracking, and optional auto-commit. This phase is triggered conditionally after Phase 3 completes.
## Trigger Conditions
- **User Selection**: User chooses "Start Execution" in Phase 3 User Decision
- **Auto Mode** (`--yes`): Automatically enters Phase 4 after Phase 3 completes
## Auto Mode
When `--yes` or `-y`:
- **Completion Choice**: Automatically completes session (runs `workflow:session:complete --yes`)
When `--with-commit`:
- **Auto-Commit**: After each agent task completes, commit changes based on summary document
- **Commit Principle**: Minimal commits - only commit files modified by the completed task
- **Commit Message**: Generated from task summary with format: "feat/fix/refactor: {task-title} - {summary}"
## Key Design Principles
1. **No Redundant Discovery**: Session ID and planning docs already available from Phase 1-3
2. **Autonomous Execution**: Complete entire workflow without user interruption
3. **Lazy Loading**: Task JSONs read on-demand during execution, not upfront
4. **ONE AGENT = ONE TASK JSON**: Each agent instance executes exactly one task JSON file
5. **IMPL_PLAN-Driven Strategy**: Execution model derived from planning document
6. **Continuous Progress Tracking**: TodoWrite updates throughout entire workflow
7. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait → close_agent
## Execution Flow
```
Step 1: TodoWrite Generation
├─ Update session status to "active"
├─ Parse TODO_LIST.md for task statuses
├─ Generate TodoWrite for entire workflow
└─ Prepare session context paths
Step 2: Execution Strategy Parsing
├─ Parse IMPL_PLAN.md Section 4 (Execution Model)
└─ Fallback: Analyze task structure for smart defaults
Step 3: Task Execution Loop
├─ Get next in_progress task from TodoWrite
├─ Lazy load task JSON
├─ Launch agent (spawn_agent → wait → close_agent)
├─ Mark task completed (update IMPL-*.json status)
├─ [with-commit] Auto-commit changes
└─ Advance to next task
Step 4: Completion
├─ Synchronize all statuses
├─ Generate summaries
└─ ASK_USER: Review or Complete Session
```
## Step 1: TodoWrite Generation
### Step 1.0: Update Session Status to Active
Before generating TodoWrite, update session status from "planning" to "active":
```bash
# Update session status (idempotent - safe to run if already active)
jq '.status = "active" | .execution_started_at = (.execution_started_at // (now | todate))' \
${projectRoot}/.workflow/active/${sessionId}/workflow-session.json > tmp.json && \
mv tmp.json ${projectRoot}/.workflow/active/${sessionId}/workflow-session.json
```
This ensures the dashboard shows the session as "ACTIVE" during execution.
### Step 1.1: Parse TODO_LIST.md
```
1. Create TodoWrite List: Generate task list from TODO_LIST.md (not from task JSONs)
- Parse TODO_LIST.md to extract all tasks with current statuses
- Identify first pending task with met dependencies
- Generate comprehensive TodoWrite covering entire workflow
2. Prepare Session Context: Inject workflow paths for agent use
3. Validate Prerequisites: Ensure IMPL_PLAN.md and TODO_LIST.md exist and are valid
```
**Performance Optimization**: TODO_LIST.md provides task metadata and status. Task JSONs are NOT loaded here - deferred to Step 3 (lazy loading).
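As a concrete sketch of this parse step, assuming TODO_LIST.md marks tasks with Markdown checkboxes such as `- [x] IMPL-1.1: Design auth schema` (the exact line format is an assumption):

```javascript
// Parse checkbox lines into {id, title, status} without loading any task JSON
const todoText = Read(`${projectRoot}/.workflow/active/${sessionId}/TODO_LIST.md`);
const todoTasks = [...todoText.matchAll(/^\s*- \[( |x)\] (IMPL-[\w.-]+):?\s*(.*)$/gm)]
  .map(([, mark, id, title]) => ({
    id,
    title: title.trim(),
    status: mark === 'x' ? 'completed' : 'pending'
  }));
const firstPending = todoTasks.find(t => t.status === 'pending'); // dependency check omitted
```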
## Step 2: Execution Strategy Parsing
### Step 2A: Parse Execution Strategy from IMPL_PLAN.md
Read IMPL_PLAN.md Section 4 to extract:
- **Execution Model**: Sequential | Parallel | Phased | TDD Cycles
- **Parallelization Opportunities**: Which tasks can run in parallel
- **Serialization Requirements**: Which tasks must run sequentially
- **Critical Path**: Priority execution order
### Step 2B: Intelligent Fallback
If IMPL_PLAN.md lacks execution strategy, use intelligent fallback:
1. **Analyze task structure**:
- Check `meta.execution_group` in task JSONs
- Analyze `depends_on` relationships
- Understand task complexity and risk
2. **Apply smart defaults**:
- No dependencies + same execution_group → Parallel
- Has dependencies → Sequential (wait for deps)
- Critical/high-risk tasks → Sequential
3. **Conservative approach**: When uncertain, prefer sequential execution
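A minimal sketch of these smart defaults, assuming task JSONs expose `depends_on` and `meta.execution_group` as described above:

```javascript
// Dependency-free tasks sharing an execution_group form parallel batches;
// everything else (dependencies, no group) stays sequential - the conservative default.
function planBatches(tasks) {
  const parallel = new Map();
  const sequential = [];
  for (const t of tasks) {
    const group = t.meta?.execution_group;
    if ((t.depends_on?.length ?? 0) === 0 && group) {
      if (!parallel.has(group)) parallel.set(group, []);
      parallel.get(group).push(t);
    } else {
      sequential.push(t);
    }
  }
  return { parallelBatches: [...parallel.values()], sequential };
}
```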
### Execution Models
#### 1. Sequential Execution
**When**: IMPL_PLAN specifies "Sequential" OR no clear parallelization guidance
**Pattern**: Execute tasks one by one in TODO_LIST order via spawn_agent → wait → close_agent per task
**TodoWrite**: ONE task marked as `in_progress` at a time
#### 2. Parallel Execution
**When**: IMPL_PLAN specifies "Parallel" with clear parallelization opportunities
**Pattern**: Execute independent task groups concurrently by spawning multiple agents and batch waiting
**TodoWrite**: MULTIPLE tasks (in same batch) marked as `in_progress` simultaneously
**Agent Instantiation**: spawn one agent per task (respects ONE AGENT = ONE TASK JSON rule), then batch wait
#### 3. Phased Execution
**When**: IMPL_PLAN specifies "Phased" with phase breakdown
**Pattern**: Execute tasks in phases, respect phase boundaries
**TodoWrite**: Within each phase, follow Sequential or Parallel rules
## Step 3: Task Execution Loop
### Execution Loop Pattern
```
while (TODO_LIST.md has pending tasks) {
next_task_id = getTodoWriteInProgressTask()
task_json = Read(${projectRoot}/.workflow/active/{session}/.task/{next_task_id}.json) // Lazy load
executeTaskWithAgent(task_json) // spawn_agent → wait → close_agent
updateTodoListMarkCompleted(next_task_id)
advanceTodoWriteToNextTask()
}
```
### Execution Process per Task
1. **Identify Next Task**: From TodoWrite, get the next `in_progress` task ID
2. **Load Task JSON on Demand**: Read `.task/{task-id}.json` for current task ONLY
3. **Validate Task Structure**: Ensure all required fields exist (id, title, status, meta, context, flow_control)
4. **Launch Agent**: Invoke specialized agent via spawn_agent with complete context including flow control steps
5. **Wait for Completion**: wait for agent result, handle timeout
6. **Close Agent**: close_agent to free resources
7. **Collect Results**: Gather implementation results and outputs
8. **[with-commit] Auto-Commit**: If `--with-commit` flag enabled, commit changes based on summary
9. **Continue Workflow**: Identify next pending task from TODO_LIST.md and repeat
**Note**: TODO_LIST.md updates are handled by agents (e.g., code-developer.md), not by the orchestrator.
### Agent Prompt Template (Sequential)
**Path-Based Invocation**: Pass paths and trigger markers, let agent parse task JSON autonomously.
```javascript
// Step 1: Spawn agent
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/${meta.agent}.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
Implement task ${task.id}: ${task.title}
[FLOW_CONTROL]
**Input**:
- Task JSON: ${session.task_json_path}
- Context Package: ${session.context_package_path}
**Output Location**:
- Workflow: ${session.workflow_dir}
- TODO List: ${session.todo_list_path}
- Summaries: ${session.summaries_dir}
**Execution**: Read task JSON → Execute pre_analysis → Check execution_config.method → (CLI: handoff to CLI tool | Agent: direct implementation) → Update TODO_LIST.md → Generate summary
`
});
// Step 2: Wait for completion
const result = wait({
ids: [agentId],
timeout_ms: 600000 // 10 minutes per task
});
// Step 3: Close agent (IMPORTANT: always close)
close_agent({ id: agentId });
```
**Key Markers**:
- `Implement` keyword: Triggers tech stack detection and guidelines loading
- `[FLOW_CONTROL]`: Triggers flow_control.pre_analysis execution
### Parallel Execution Pattern
```javascript
// Step 1: Spawn agents for parallel batch
const batchAgents = [];
parallelTasks.forEach(task => {
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/${task.meta.agent}.md (MUST read first)
2. Read: ${projectRoot}/.workflow/project-tech.json
3. Read: ${projectRoot}/.workflow/project-guidelines.json
---
Implement task ${task.id}: ${task.title}
[FLOW_CONTROL]
**Input**:
- Task JSON: ${session.task_json_path}
- Context Package: ${session.context_package_path}
**Output Location**:
- Workflow: ${session.workflow_dir}
- TODO List: ${session.todo_list_path}
- Summaries: ${session.summaries_dir}
**Execution**: Read task JSON → Execute pre_analysis → Check execution_config.method → (CLI: handoff to CLI tool | Agent: direct implementation) → Update TODO_LIST.md → Generate summary
`
});
batchAgents.push({ agentId, taskId: task.id });
});
// Step 2: Batch wait for all agents
const batchResult = wait({
ids: batchAgents.map(a => a.agentId),
timeout_ms: 600000
});
// Step 3: Check results and handle timeouts
if (batchResult.timed_out) {
console.log('Some parallel tasks timed out, continuing with completed results');
}
// Step 4: Close all agents
batchAgents.forEach(a => close_agent({ id: a.agentId }));
```
### Agent Assignment Rules
```
meta.agent specified → Use specified agent
meta.agent missing → Infer from meta.type:
- "feature" → @code-developer (role: ~/.codex/agents/code-developer.md)
- "test-gen" → @code-developer (role: ~/.codex/agents/code-developer.md)
- "test-fix" → @test-fix-agent (role: ~/.codex/agents/test-fix-agent.md)
- "review" → @universal-executor (role: ~/.codex/agents/universal-executor.md)
- "docs" → @doc-generator (role: ~/.codex/agents/doc-generator.md)
```
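The same rules as a lookup table (a sketch; a missing or unknown `meta.type` is surfaced rather than silently defaulted, since no fallback is specified above):

```javascript
// meta.agent wins; otherwise infer the role from meta.type
const AGENT_BY_TYPE = {
  'feature': 'code-developer',
  'test-gen': 'code-developer',
  'test-fix': 'test-fix-agent',
  'review': 'universal-executor',
  'docs': 'doc-generator'
};
const agentName = task.meta.agent || AGENT_BY_TYPE[task.meta.type];
if (!agentName) throw new Error(`No agent mapping for task type: ${task.meta.type}`);
const rolePath = `~/.codex/agents/${agentName}.md`; // injected into MANDATORY FIRST STEPS
```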
### Task Status Logic
```
pending + dependencies_met → executable
completed → skip
blocked → skip until dependencies clear
```
## Step 4: Completion
### Process
1. **Update Task Status**: Mark completed tasks in JSON files
2. **Generate Summary**: Create task summary in `.summaries/`
3. **Update TodoWrite**: Mark current task complete, advance to next
4. **Synchronize State**: Update session state and workflow status
5. **Check Workflow Complete**: Verify all tasks are completed
6. **User Choice**: When all tasks finished, ask user to choose next step:
```javascript
// Parse --yes flag
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
if (autoYes) {
// Auto mode: Complete session automatically
console.log(`[--yes] Auto-selecting: Complete Session`)
// Execute: workflow:session:complete --yes
} else {
// Interactive mode: Ask user
ASK_USER([{
id: "completion-next-step",
type: "select",
prompt: "All tasks completed. What would you like to do next?",
options: [
{
label: "Enter Review",
description: "Run specialized review (security/architecture/quality/action-items)"
},
{
label: "Complete Session",
description: "Archive session and update manifest"
}
]
}]) // BLOCKS (wait for user response)
}
```
**Based on user selection**:
- **"Enter Review"**: Execute `workflow:review`
- **"Complete Session"**: Execute `workflow:session:complete`
### Post-Completion Expansion
After completion, ask user whether to expand to issues (test/enhance/refactor/doc). Selected items invoke `issue:new "{summary} - {dimension}"`.
## Auto-Commit Mode (--with-commit)
**Behavior**: After each agent task completes, automatically commit changes based on summary document.
**Minimal Principle**: Only commit files modified by the completed task.
**Commit Message Format**: `{type}: {task-title} - {summary}`
**Type Mapping** (from `meta.type`):
- `feature``feat` | `bugfix``fix` | `refactor``refactor`
- `test-gen``test` | `docs``docs` | `review``chore`
**Implementation**:
```bash
# 1. Read summary from .summaries/{task-id}-summary.md
# 2. Extract files from "Files Modified" section
# 3. Commit: git add <files> && git commit -m "{type}: {title} - {summary}"
```
**Error Handling**: Skip commit on no changes/missing summary, log errors, continue workflow.
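A hedged sketch of that three-step flow, assuming the summary lists changed files as `- path` bullets under a `## Files Modified` heading (shell quoting omitted for brevity):

```javascript
// Commit only the files the completed task touched, per the minimal principle
const TYPE_MAP = { feature: 'feat', bugfix: 'fix', refactor: 'refactor',
                   'test-gen': 'test', docs: 'docs', review: 'chore' };
function autoCommit(task, summariesDir) {
  const summary = Read(`${summariesDir}/${task.id}-summary.md`); // skip task if missing
  const filesSection = (summary.split('## Files Modified')[1] || '').split(/\n## /)[0];
  const files = [...filesSection.matchAll(/^- (\S+)/gm)].map(m => m[1]);
  if (files.length === 0) return; // no changes - skip commit, continue workflow
  const type = TYPE_MAP[task.meta.type] || 'chore';
  const firstLine = summary.split('\n').find(l => l.trim() && !l.startsWith('#')) || task.id;
  Bash(`git add ${files.join(' ')} && git commit -m "${type}: ${task.title} - ${firstLine.trim()}"`);
}
```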
## TodoWrite Coordination
### TodoWrite Rules
**Rule 1: Initial Creation**
- Generate TodoWrite from TODO_LIST.md pending tasks
**Rule 2: In-Progress Task Count (Execution-Model-Dependent)**
- **Sequential execution**: Mark ONLY ONE task as `in_progress` at a time
- **Parallel batch execution**: Mark ALL tasks in current batch as `in_progress` simultaneously
- **Execution group indicator**: Show `[execution_group: group-id]` for parallel tasks
**Rule 3: Status Updates**
- **Immediate Updates**: Update status after each task/batch completion without user interruption
- **Status Synchronization**: Sync with JSON task files after updates
- **Continuous Tracking**: Maintain TodoWrite throughout entire workflow execution until completion
**Rule 4: Workflow Completion Check**
- When all tasks marked `completed`, prompt user to choose review or complete session
### TodoWrite Examples
**Sequential Execution**:
```javascript
TodoWrite({
todos: [
{
content: "Execute IMPL-1.1: Design auth schema [code-developer] [FLOW_CONTROL]",
status: "in_progress", // ONE task in progress
activeForm: "Executing IMPL-1.1: Design auth schema"
},
{
content: "Execute IMPL-1.2: Implement auth logic [code-developer] [FLOW_CONTROL]",
status: "pending",
activeForm: "Executing IMPL-1.2: Implement auth logic"
}
]
});
```
**Parallel Batch Execution**:
```javascript
TodoWrite({
todos: [
{
content: "Execute IMPL-1.1: Build Auth API [code-developer] [execution_group: parallel-auth-api]",
status: "in_progress",
activeForm: "Executing IMPL-1.1: Build Auth API"
},
{
content: "Execute IMPL-1.2: Build User UI [code-developer] [execution_group: parallel-ui-comp]",
status: "in_progress",
activeForm: "Executing IMPL-1.2: Build User UI"
},
{
content: "Execute IMPL-2.1: Integration Tests [test-fix-agent] [depends_on: IMPL-1.1, IMPL-1.2]",
status: "pending",
activeForm: "Executing IMPL-2.1: Integration Tests"
}
]
});
```
## Error Handling & Recovery
### Common Errors & Recovery
| Error Type | Cause | Recovery Strategy | Max Attempts |
|-----------|-------|------------------|--------------|
| **Execution Errors** | | | |
| Agent failure | Agent crash/timeout | Retry with simplified context (close_agent first, then spawn new) | 2 |
| Flow control error | Command failure | Skip optional, fail critical | 1 per step |
| Context loading error | Missing dependencies | Reload from JSON, use defaults | 3 |
| JSON file corruption | File system issues | Restore from backup/recreate | 1 |
| **Lifecycle Errors** | | | |
| Agent timeout | wait timed out | send_input to prompt completion, or close_agent and retry | 2 |
| Orphaned agent | Agent not closed after error | Ensure close_agent in error paths | N/A |
### Error Recovery with Lifecycle Management
```javascript
// Safe agent execution pattern with error handling
let agentId = null;
try {
agentId = spawn_agent({ message: taskPrompt });
const result = wait({ ids: [agentId], timeout_ms: 600000 });
if (result.timed_out) {
// Timeout recovery: prompt the agent to wrap up, then wait briefly once more
send_input({ id: agentId, message: "Please wrap up and generate summary." });
const retryResult = wait({ ids: [agentId], timeout_ms: 120000 });
// If retryResult also times out, fall through to close_agent and retry/skip below
}
// Process results...
close_agent({ id: agentId });
} catch (error) {
// Ensure cleanup on error
if (agentId) close_agent({ id: agentId });
// Handle error (retry or skip task)
}
```
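The retry rows in the table above can be wrapped once instead of repeated at every call site. A minimal sketch (attempt limits per the table; context simplification is left to the caller):

```javascript
// spawn -> wait -> close, retrying once with a (possibly simplified) prompt on timeout
function executeWithRetry(buildPrompt, maxAttempts = 2) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const id = spawn_agent({ message: buildPrompt(attempt) }); // attempt 2 may simplify context
    const result = wait({ ids: [id], timeout_ms: 600000 });
    close_agent({ id }); // always close, success or timeout
    if (!result.timed_out) return result;
  }
  return null; // caller decides: skip the task or surface the failure
}
```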
### Error Prevention
- **Lazy Loading**: Reduces upfront memory usage and validation errors
- **Atomic Updates**: Update JSON files atomically to prevent corruption
- **Dependency Validation**: Check all depends_on references exist
- **Context Verification**: Ensure all required context is available
- **Lifecycle Cleanup**: Always close_agent in both success and error paths
## Output
- **Updated**: Task JSON status fields (completed)
- **Created**: `.summaries/IMPL-*-summary.md` per task
- **Updated**: `TODO_LIST.md` (by agents)
- **Updated**: `workflow-session.json` status
- **Created**: Git commits (if `--with-commit`)
## Next Step
Return to orchestrator. Orchestrator handles completion summary output.

View File

@@ -1,868 +0,0 @@
---
name: workflow-tdd-plan
description: TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, cycle tracking, and post-execution compliance verification. Triggers on "workflow:tdd-plan", "workflow:tdd-verify".
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---
# Workflow TDD Plan
6-phase TDD planning workflow that orchestrates session discovery, context gathering, test coverage analysis, conflict resolution, and TDD task generation to produce implementation plans with Red-Green-Refactor cycles. Includes post-execution TDD compliance verification.
## Architecture Overview
```
┌──────────────────────────────────────────────────────────────────┐
│ Workflow TDD Plan Orchestrator (SKILL.md) │
│ → Pure coordinator: Execute phases, parse outputs, pass context │
└───────────────┬──────────────────────────────────────────────────┘
┌────────────┼────────────┬────────────┬────────────┐
↓ ↓ ↓ ↓ ↓
┌────────┐ ┌────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ Phase 1│ │ Phase 2│ │ Phase 3 │ │ Phase 4 │ │ Phase 5 │
│Session │ │Context │ │Test Covg │ │Conflict │ │TDD Task │
│Discover│ │Gather │ │Analysis │ │Resolve │ │Generate │
│ (ext) │ │ (ext) │ │ (local) │ │(ext,cond)│ │ (local) │
└────────┘ └────────┘ └──────────┘ └──────────┘ └──────────┘
↓ ↓ ↓ ↓ ↓
sessionId contextPath testContext resolved IMPL_PLAN.md
conflict_risk artifacts task JSONs
Phase 6: TDD Structure Validation (inline in SKILL.md)
Post-execution verification:
┌──────────────┐ ┌───────────────────┐
│ TDD Verify │────→│ Coverage Analysis │
│ (local) │ │ (local) │
└──────────────┘ └───────────────────┘
phases/03-tdd- phases/04-tdd-
verify.md coverage-analysis.md
```
## Key Design Principles
1. **Pure Orchestrator**: Execute phases in sequence, parse outputs, pass context between them
2. **Auto-Continue**: All phases run autonomously without user intervention between phases
3. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait → close_agent
4. **Progressive Phase Loading**: Phase docs are read on-demand, not all at once
5. **Conditional Execution**: Phase 4 only executes when conflict_risk >= medium
6. **TDD-First**: Every feature starts with a failing test (Red phase)
7. **Role Path Loading**: Subagent roles loaded via path reference in MANDATORY FIRST STEPS
**CLI Tool Selection**: CLI tool usage is determined semantically from user's task description. Include "use Codex/Gemini/Qwen" in your request for CLI execution.
**Task Attachment Model**:
- Skill execute **expands workflow** by attaching sub-tasks to current TodoWrite
- When executing a sub-command, its internal tasks are attached to the orchestrator's TodoWrite
- Orchestrator **executes these attached tasks** sequentially
- After completion, attached tasks are **collapsed** back to high-level phase summary
- This is **task expansion**, not external delegation
**Auto-Continue Mechanism**:
- TodoList tracks current phase status and dynamically manages task attachment/collapse
- When each phase finishes executing, automatically execute next pending phase
- All phases run autonomously without user interaction
- **CONTINUOUS EXECUTION** - Do not stop until all phases complete
## Auto Mode
When `--yes` or `-y`: Auto-continue all phases (skip confirmations), use recommended conflict resolutions, skip TDD clarifications.
## Subagent API Reference
### spawn_agent
Create a new subagent with task assignment.
```javascript
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
2. Read: {projectRoot}/.workflow/project-tech.json
3. Read: {projectRoot}/.workflow/project-guidelines.json
## TASK CONTEXT
${taskContext}
## DELIVERABLES
${deliverables}
`
})
```
### wait
Get results from subagent (only way to retrieve results).
```javascript
const result = wait({
ids: [agentId],
timeout_ms: 600000 // 10 minutes
})
if (result.timed_out) {
// Handle timeout - can continue waiting or send_input to prompt completion
}
```
### send_input
Continue interaction with active subagent (for clarification or follow-up).
```javascript
send_input({
id: agentId,
message: `
## CLARIFICATION ANSWERS
${answers}
## NEXT STEP
Continue with plan generation.
`
})
```
### close_agent
Clean up subagent resources (irreversible).
```javascript
close_agent({ id: agentId })
```
## Usage
```
workflow-tdd-plan <task description>
workflow-tdd-plan [-y|--yes] "<task description>"
# Flags
-y, --yes Skip all confirmations (auto mode)
# Arguments
<task description> Task description text, TDD-structured format, or path to .md file
# Examples
workflow-tdd-plan "Build user authentication with tests" # Simple TDD task
workflow-tdd-plan "Add JWT auth with email/password and token refresh" # Detailed task
workflow-tdd-plan -y "Implement payment processing" # Auto mode
workflow-tdd-plan "tdd-requirements.md" # From file
```
## TDD Compliance Requirements
### The Iron Law
```
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
```
**Enforcement Method**:
- Phase 5: `implementation_approach` includes test-first steps (Red → Green → Refactor)
- Green phase: Includes test-fix-cycle configuration (max 3 iterations)
- Auto-revert: Triggered when max iterations reached without passing tests
**Verification**: Phase 6 validates Red-Green-Refactor structure in all generated tasks
### TDD Compliance Checkpoint
| Checkpoint | Validation Phase | Evidence Required |
|------------|------------------|-------------------|
| Test-first structure | Phase 5 | `implementation_approach` has 3 steps |
| Red phase exists | Phase 6 | Step 1: `tdd_phase: "red"` |
| Green phase with test-fix | Phase 6 | Step 2: `tdd_phase: "green"` + test-fix-cycle |
| Refactor phase exists | Phase 6 | Step 3: `tdd_phase: "refactor"` |
### Core TDD Principles
**Red Flags - STOP and Reassess**:
- Code written before test
- Test passes immediately (no Red phase witnessed)
- Cannot explain why test should fail
- "Just this once" rationalization
- "Tests after achieve same goals" thinking
**Why Order Matters**:
- Tests written after code pass immediately → proves nothing
- Test-first forces edge case discovery before implementation
- Tests-after verify what was built, not what's required
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is execute Phase 1
2. **No Preliminary Analysis**: Do not read files before Phase 1
3. **Parse Every Output**: Extract required data for next phase
4. **Auto-Continue via TodoList**: Check TodoList status to execute next pending phase automatically
5. **Track Progress**: Update TodoWrite dynamically with task attachment/collapse pattern
6. **TDD Context**: All descriptions include "TDD:" prefix
7. **Task Attachment Model**: Skill execute **attaches** sub-tasks to current workflow. Orchestrator **executes** these attached tasks itself, then **collapses** them after completion
8. **CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute next phase
9. **Explicit Lifecycle**: Always close_agent after wait completes to free resources
## Execution Flow
```
Input Parsing:
└─ Convert user input to TDD-structured format (TDD:/GOAL/SCOPE/CONTEXT/TEST_FOCUS)
Phase 1: Session Discovery
└─ Ref: workflow-plan-execute/phases/01-session-discovery.md (external)
└─ Output: sessionId (WFS-xxx)
Phase 2: Context Gathering
└─ Ref: workflow-plan-execute/phases/02-context-gathering.md (external)
├─ Tasks attached: Analyze structure → Identify integration → Generate package
└─ Output: contextPath + conflict_risk
Phase 3: Test Coverage Analysis ← ATTACHED (3 tasks)
└─ Ref: phases/01-test-context-gather.md
├─ Phase 3.1: Detect test framework
├─ Phase 3.2: Analyze existing test coverage
└─ Phase 3.3: Identify coverage gaps
└─ Output: test-context-package.json ← COLLAPSED
Phase 4: Conflict Resolution (conditional)
└─ Decision (conflict_risk check):
├─ conflict_risk ≥ medium → Inline conflict resolution (within Phase 2)
│ ├─ Tasks attached: Detect conflicts → Present to user → Apply strategies
│ └─ Output: Modified brainstorm artifacts ← COLLAPSED
└─ conflict_risk < medium → Skip to Phase 5
Phase 5: TDD Task Generation ← ATTACHED (3 tasks)
└─ Ref: phases/02-task-generate-tdd.md
├─ Phase 5.1: Discovery - analyze TDD requirements
├─ Phase 5.2: Planning - design Red-Green-Refactor cycles
└─ Phase 5.3: Output - generate IMPL tasks with internal TDD phases
└─ Output: IMPL-*.json, IMPL_PLAN.md, plan.json ← COLLAPSED
└─ Schema: .task/IMPL-*.json follows 6-field superset of task-schema.json
Phase 6: TDD Structure Validation (inline)
└─ Internal validation + summary returned
└─ Recommend: plan-verify (external)
Return:
└─ Summary with recommended next steps
```
### Phase Reference Documents
**Local phases** (read on-demand when phase executes):
| Phase | Document | Purpose |
|-------|----------|---------|
| Phase 3 | [phases/01-test-context-gather.md](phases/01-test-context-gather.md) | Test coverage context gathering via test-context-search-agent |
| Phase 5 | [phases/02-task-generate-tdd.md](phases/02-task-generate-tdd.md) | TDD task JSON generation via action-planning-agent |
**External phases** (from workflow-plan-execute skill):
| Phase | Document | Purpose |
|-------|----------|---------|
| Phase 1 | workflow-plan-execute/phases/01-session-discovery.md | Session creation/discovery |
| Phase 2 | workflow-plan-execute/phases/02-context-gathering.md | Project context collection + inline conflict resolution |
**Post-execution verification**:
| Phase | Document | Purpose |
|-------|----------|---------|
| TDD Verify | [phases/03-tdd-verify.md](phases/03-tdd-verify.md) | TDD compliance verification with quality gate |
| Coverage Analysis | [phases/04-tdd-coverage-analysis.md](phases/04-tdd-coverage-analysis.md) | Test coverage and cycle analysis (called by TDD Verify) |
## 6-Phase Execution
### Phase 1: Session Discovery
**Step 1.1: Execute** - Session discovery and initialization
Read and execute: `workflow-plan-execute/phases/01-session-discovery.md` with `--type tdd --auto "TDD: [structured-description]"`
**TDD Structured Format**:
```
TDD: [Feature Name]
GOAL: [Objective]
SCOPE: [Included/excluded]
CONTEXT: [Background]
TEST_FOCUS: [Test scenarios]
```
**Parse**: Extract sessionId
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress
**After Phase 1**: Return to user showing Phase 1 results, then auto-continue to Phase 2
---
### Phase 2: Context Gathering
**Step 2.1: Execute** - Context gathering and analysis
Read and execute: `workflow-plan-execute/phases/02-context-gathering.md` with `--session [sessionId] "TDD: [structured-description]"`
**Use Same Structured Description**: Pass the same structured format from Phase 1
**Input**: `sessionId` from Phase 1
**Parse Output**:
- Extract: context-package.json path (store as `contextPath`)
- Typical pattern: `{projectRoot}/.workflow/active/[sessionId]/.process/context-package.json`
**Validation**:
- Context package path extracted
- File exists and is valid JSON
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
**After Phase 2**: Return to user showing Phase 2 results, then auto-continue to Phase 3
---
### Phase 3: Test Coverage Analysis
**Step 3.1: Execute** - Test coverage analysis and framework detection
Read and execute: `phases/01-test-context-gather.md` with `--session [sessionId]`
**Purpose**: Analyze existing codebase for:
- Existing test patterns and conventions
- Current test coverage
- Related components and integration points
- Test framework detection
**Parse**: Extract testContextPath (`{projectRoot}/.workflow/active/[sessionId]/.process/test-context-package.json`)
**TodoWrite Update (Phase 3 - tasks attached)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3: Test Coverage Analysis", "status": "in_progress", "activeForm": "Executing test coverage analysis"},
{"content": " → Detect test framework and conventions", "status": "in_progress", "activeForm": "Detecting test framework"},
{"content": " → Analyze existing test coverage", "status": "pending", "activeForm": "Analyzing test coverage"},
{"content": " → Identify coverage gaps", "status": "pending", "activeForm": "Identifying coverage gaps"},
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**Note**: Skill execute **attaches** test-context-gather's 3 tasks. Orchestrator **executes** these tasks.
**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially
**TodoWrite Update (Phase 3 completed - tasks collapsed)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**After Phase 3**: Return to user showing test coverage results, then auto-continue to Phase 4/5
---
### Phase 4: Conflict Resolution (Optional)
**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
**Step 4.1: Execute** - Conflict detection and resolution
Conflict resolution is handled inline within Phase 2 (context-gathering). When conflict_risk >= medium, Phase 2 automatically performs detection and resolution.
**Input**:
- sessionId from Phase 1
- contextPath from Phase 2
- conflict_risk from context-package.json
**Parse Output**:
- Extract: Execution status (success/skipped/failed)
- Verify: conflict-resolution.json file path (if executed)
**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 5
- Display: "No significant conflicts detected, proceeding to TDD task generation"
**TodoWrite Update (Phase 4 - tasks attached, if conflict_risk >= medium)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 4: Conflict Resolution", "status": "in_progress", "activeForm": "Executing conflict resolution"},
{"content": " → Detect conflicts with CLI analysis", "status": "in_progress", "activeForm": "Detecting conflicts"},
{"content": " → Log and analyze detected conflicts", "status": "pending", "activeForm": "Analyzing conflicts"},
{"content": " → Apply resolution strategies", "status": "pending", "activeForm": "Applying resolution strategies"},
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**TodoWrite Update (Phase 4 completed - tasks collapsed)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 4: Conflict Resolution", "status": "completed", "activeForm": "Executing conflict resolution"},
{"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**After Phase 4**: Return to user showing conflict resolution results, then auto-continue to Phase 5
**Memory State Check**:
- Evaluate current context window usage and memory state
- If memory usage is high (>110K tokens or approaching context limits):
**Step 4.5: Execute** - Memory compaction (external skill: compact)
- This optimizes memory before proceeding to Phase 5
- Memory compaction is particularly important after analysis phase which may generate extensive documentation
---
### Phase 5: TDD Task Generation
**Step 5.1: Execute** - TDD task generation via action-planning-agent with Phase 0 user configuration
Read and execute: `phases/02-task-generate-tdd.md` with `--session [sessionId]`
**Note**: Phase 0 now includes:
- Supplementary materials collection (file paths or inline content)
- Execution method preference (Agent/Hybrid/CLI)
- CLI tool preference (Codex/Gemini/Qwen/Auto)
- These preferences are passed to agent for task generation
**Parse**: Extract feature count, task count, CLI execution IDs assigned
**Validate**:
- IMPL_PLAN.md exists (unified plan with TDD Implementation Tasks section)
- IMPL-*.json files exist (one per feature, or container + subtasks for complex features)
- TODO_LIST.md exists with internal TDD phase indicators
- Each IMPL task includes:
- `meta.tdd_workflow: true`
- `meta.cli_execution_id: {session_id}-{task_id}`
- `meta.cli_execution: { "strategy": "new|resume|fork|merge_fork", ... }`
- `flow_control.implementation_approach` with exactly 3 steps (red/green/refactor)
- Green phase includes test-fix-cycle configuration
- `context.focus_paths`: absolute or clear relative paths
- `flow_control.pre_analysis`: includes exploration integration_points analysis
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
- Task count <=18 (compliance with hard limit)
**Red Flag Detection** (Non-Blocking Warnings):
- Task count >18: `Warning: Task count exceeds hard limit - request re-scope`
- Missing cli_execution_id: `Warning: Task lacks CLI execution ID for resume support`
- Missing test-fix-cycle: `Warning: Green phase lacks auto-revert configuration`
- Generic task names: `Warning: Vague task names suggest unclear TDD cycles`
- Missing focus_paths: `Warning: Task lacks clear file scope for implementation`
**Action**: Log warnings to `{projectRoot}/.workflow/active/[sessionId]/.process/tdd-warnings.log` (non-blocking)
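A minimal sketch of the non-blocking append (log path from above; the entry format is an assumption):

```javascript
// Append one warning line; failures here must never block planning
function logTddWarning(sessionDir, taskId, message) {
  const entry = `[${taskId}] Warning: ${message}`;
  Bash(`mkdir -p ${sessionDir}/.process && echo '${entry}' >> ${sessionDir}/.process/tdd-warnings.log`);
}
```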
**TodoWrite Update (Phase 5 - tasks attached)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 5: TDD Task Generation", "status": "in_progress", "activeForm": "Executing TDD task generation"},
{"content": " → Discovery - analyze TDD requirements", "status": "in_progress", "activeForm": "Analyzing TDD requirements"},
{"content": " → Planning - design Red-Green-Refactor cycles", "status": "pending", "activeForm": "Designing TDD cycles"},
{"content": " → Output - generate IMPL tasks with internal TDD phases", "status": "pending", "activeForm": "Generating TDD tasks"},
{"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```
**TodoWrite Update (Phase 5 completed - tasks collapsed)**:
```json
[
{"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
{"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
{"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
{"content": "Phase 5: TDD Task Generation", "status": "completed", "activeForm": "Executing TDD task generation"},
{"content": "Phase 6: TDD Structure Validation", "status": "in_progress", "activeForm": "Validating TDD structure"}
]
```
**Step 5.4: Generate plan.json** (plan-overview-base-schema)
After task generation completes, generate `plan.json` as a machine-readable plan overview:
```javascript
// Generate plan.json after task generation
const sessionFolder = `${projectRoot}/.workflow/active/${sessionId}`
const taskFiles = Glob(`${sessionFolder}/.task/IMPL-*.json`)
const taskIds = taskFiles.map(f => JSON.parse(Read(f)).id).sort()
// Guard: skip plan.json if no tasks generated
if (taskIds.length === 0) {
console.warn('No tasks generated; skipping plan.json')
} else {
const planOverview = {
summary: `TDD plan for: ${taskDescription}`,
approach: `Red-Green-Refactor cycles across ${taskIds.length} implementation tasks`,
task_ids: taskIds,
task_count: taskIds.length,
complexity: complexity,
recommended_execution: "Agent",
_metadata: {
timestamp: getUtc8ISOString(),
source: "tdd-planning-agent",
planning_mode: "agent-based",
plan_type: "tdd",
schema_version: "2.0"
}
}
Write(`${sessionFolder}/plan.json`, JSON.stringify(planOverview, null, 2))
} // end guard
```
**Validate**: `plan.json` exists and contains valid JSON with `task_ids` matching generated IMPL-*.json files.
---
### Phase 6: TDD Structure Validation & Action Plan Verification (RECOMMENDED)
**Internal validation first, then recommend external verification**
**Internal Validation**:
1. Each task contains complete TDD workflow (Red-Green-Refactor internally)
2. Task structure validation:
- `meta.tdd_workflow: true` in all IMPL tasks
- `meta.cli_execution_id` present (format: {session_id}-{task_id})
- `meta.cli_execution` strategy assigned (new/resume/fork/merge_fork)
- `flow_control.implementation_approach` has exactly 3 steps
- Each step has correct `tdd_phase`: "red", "green", "refactor"
- `context.focus_paths` are absolute or clear relative paths
- `flow_control.pre_analysis` includes exploration integration analysis
3. Dependency validation:
- Sequential features: IMPL-N depends_on ["IMPL-(N-1)"] if needed
- Complex features: IMPL-N.M depends_on ["IMPL-N.(M-1)"] for subtasks
- CLI execution strategies correctly assigned based on dependency graph
4. Agent assignment: All IMPL tasks use @code-developer
5. Test-fix cycle: Green phase step includes test-fix-cycle logic with max_iterations
6. Task count: Total tasks <=18 (simple + subtasks hard limit)
7. User configuration:
- Execution method choice reflected in task structure
- CLI tool preference documented in implementation guidance (if CLI selected)
**Red Flag Checklist** (from TDD best practices):
- [ ] No tasks skip Red phase (`tdd_phase: "red"` exists in step 1)
- [ ] Test files referenced in Red phase (explicit paths, not placeholders)
- [ ] Green phase has test-fix-cycle with `max_iterations` configured
- [ ] Refactor phase has clear completion criteria
**Non-Compliance Warning Format**:
```
Warning TDD Red Flag: [issue description]
Task: [IMPL-N]
Recommendation: [action to fix]
```
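A compact sketch of the per-task structure checks above (field names follow the validation list; the `test-fix-cycle` key shape is an assumption):

```javascript
// Return red-flag strings for one IMPL task; an empty list means compliant
function validateTddTask(task) {
  const issues = [];
  if (task.meta?.tdd_workflow !== true) issues.push('meta.tdd_workflow not set');
  if (!task.meta?.cli_execution_id) issues.push('meta.cli_execution_id missing');
  const steps = task.flow_control?.implementation_approach || [];
  const phases = steps.map(s => s.tdd_phase).join(',');
  if (phases !== 'red,green,refactor') issues.push(`expected red,green,refactor - got: ${phases || 'none'}`);
  const green = steps.find(s => s.tdd_phase === 'green');
  if (green && !green['test-fix-cycle']?.max_iterations) issues.push('green phase lacks test-fix-cycle with max_iterations');
  return issues;
}
```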
**Evidence Gathering** (Before Completion Claims):
```bash
# Verify session artifacts exist
ls -la ${projectRoot}/.workflow/active/[sessionId]/{IMPL_PLAN.md,TODO_LIST.md,plan.json}
ls -la ${projectRoot}/.workflow/active/[sessionId]/.task/IMPL-*.json
# Count generated artifacts
echo "IMPL tasks: $(ls ${projectRoot}/.workflow/active/[sessionId]/.task/IMPL-*.json 2>/dev/null | wc -l)"
# Sample task structure verification (first task)
jq '{id, tdd: .meta.tdd_workflow, cli_id: .meta.cli_execution_id, phases: [.flow_control.implementation_approach[].tdd_phase]}' \
"$(ls ${projectRoot}/.workflow/active/[sessionId]/.task/IMPL-*.json | head -1)"
```
**Evidence Required Before Summary**:
| Evidence Type | Verification Method | Pass Criteria |
|---------------|---------------------|---------------|
| File existence | `ls -la` artifacts | All files present |
| Plan overview | `jq .task_count plan.json` | plan.json exists with valid task_ids |
| Task count | Count IMPL-*.json | Count matches claims (<=18) |
| TDD structure | jq sample extraction | Shows red/green/refactor + cli_execution_id |
| CLI execution IDs | jq extraction | All tasks have cli_execution_id assigned |
| Warning log | Check tdd-warnings.log | Logged (may be empty) |
**Return Summary**:
```
TDD Planning complete for session: [sessionId]
Features analyzed: [N]
Total tasks: [M] (1 task per simple feature + subtasks for complex features)
Task breakdown:
- Simple features: [K] tasks (IMPL-1 to IMPL-K)
- Complex features: [L] features with [P] subtasks
- Total task count: [M] (within 18-task hard limit)
Structure:
- IMPL-1: {Feature 1 Name} (Internal: Red → Green → Refactor)
- IMPL-2: {Feature 2 Name} (Internal: Red → Green → Refactor)
- IMPL-3: {Complex Feature} (Container)
- IMPL-3.1: {Sub-feature A} (Internal: Red → Green → Refactor)
- IMPL-3.2: {Sub-feature B} (Internal: Red → Green → Refactor)
[...]
Plans generated:
- Unified Implementation Plan: {projectRoot}/.workflow/active/[sessionId]/IMPL_PLAN.md
(includes TDD Implementation Tasks section with workflow_type: "tdd")
- Plan Overview: {projectRoot}/.workflow/active/[sessionId]/plan.json
(plan-overview-base-schema with task IDs, complexity, and execution metadata)
- Task List: {projectRoot}/.workflow/active/[sessionId]/TODO_LIST.md
(with internal TDD phase indicators and CLI execution strategies)
- Task JSONs: {projectRoot}/.workflow/active/[sessionId]/.task/IMPL-*.json
(with cli_execution_id and execution strategies for resume support)
TDD Configuration:
- Each task contains complete Red-Green-Refactor cycle
- Green phase includes test-fix cycle (max 3 iterations)
- Auto-revert on max iterations reached
- CLI execution strategies: new/resume/fork/merge_fork based on dependency graph
User Configuration Applied:
- Execution Method: [agent|hybrid|cli]
- CLI Tool Preference: [codex|gemini|qwen|auto]
- Supplementary Materials: [included|none]
- Task generation follows cli-tools-usage.md guidelines
ACTION REQUIRED: Before execution, ensure you understand WHY each Red phase test is expected to fail.
This is crucial for valid TDD - if you don't know why the test fails, you can't verify it tests the right thing.
Recommended Next Steps:
1. plan-verify (external) --session [sessionId] # Verify TDD plan quality and dependencies
2. workflow:execute (external) --session [sessionId] # Start TDD execution with CLI strategies
3. phases/03-tdd-verify.md [sessionId] # Post-execution TDD compliance check
Quality Gate: Consider running plan-verify to validate TDD task structure, dependencies, and CLI execution strategies
```
## Input Processing
Convert user input to TDD-structured format:
**Simple text** → Add TDD context
**Detailed text** → Extract components with TEST_FOCUS
**File/Issue** → Read and structure with TDD
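As an illustration of this conversion (a hypothetical sketch; the bracketed values are placeholders filled by later phases, not generated here):

```javascript
// Wrap a plain-text request in the Phase 1 TDD structured format
function toTddFormat(input) {
  const name = input.split(/[.:\n]/)[0].trim(); // first clause as feature name
  return [
    `TDD: ${name}`,
    `GOAL: ${input.trim()}`,
    'SCOPE: [refined during Phase 2 context gathering]',
    'CONTEXT: [derived from project-tech.json]',
    'TEST_FOCUS: [derived from Phase 3 coverage gaps]'
  ].join('\n');
}
```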
## Data Flow
```
User Input (task description)
[Convert to TDD Structured Format]
↓ TDD Structured Description:
↓ TDD: [Feature Name]
↓ GOAL: [objective]
↓ SCOPE: [boundaries]
↓ CONTEXT: [background]
↓ TEST_FOCUS: [test scenarios]
Phase 1: session:start --type tdd --auto "TDD: structured-description"
↓ Output: sessionId
Phase 2: context-gather --session sessionId "TDD: structured-description"
↓ Output: contextPath + conflict_risk
Phase 3: test-context-gather --session sessionId
↓ Output: testContextPath (test-context-package.json)
Phase 4: conflict-resolution [AUTO-TRIGGERED if conflict_risk >= medium]
↓ Output: Modified brainstorm artifacts
↓ Skip if conflict_risk is none/low → proceed directly to Phase 5
Phase 5: task-generate-tdd --session sessionId
↓ Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md, plan.json
Phase 6: Internal validation + summary
Return summary to user
```
## TodoWrite Pattern
**Core Concept**: Dynamic task attachment and collapse for TDD workflow with test coverage analysis and Red-Green-Refactor cycle generation.
### Key Principles
1. **Task Attachment** (when Skill executed):
- Sub-command's internal tasks are **attached** to orchestrator's TodoWrite
- First attached task marked as `in_progress`, others as `pending`
- Orchestrator **executes** these attached tasks sequentially
2. **Task Collapse** (after sub-tasks complete):
- Remove detailed sub-tasks from TodoWrite
- **Collapse** to high-level phase summary
- Maintains clean orchestrator-level view
3. **Continuous Execution**:
- After collapse, automatically proceed to next pending phase
- No user intervention required between phases
- TodoWrite dynamically reflects current execution state
**Lifecycle Summary**: Initial pending tasks → Phase executed (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins (conditional Phase 4 if conflict_risk >= medium) → Repeat until all phases complete.
### TDD-Specific Features
- **Phase 3**: Test coverage analysis detects existing patterns and gaps
- **Phase 5**: Generated IMPL tasks contain internal Red-Green-Refactor cycles
- **Conditional Phase 4**: Conflict resolution only if conflict_risk >= medium
**Note**: See individual Phase descriptions (Phase 3, 4, 5) for detailed TodoWrite Update examples with full JSON structures.
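An illustrative attach/collapse sequence for Phase 3 (the exact TodoWrite payload shape is an assumption; sub-task titles match the diagram below):
```javascript
// Attach: sub-tasks appear in the orchestrator's TodoWrite, first in_progress
TodoWrite([
  { content: "Phase 3.1: Detect test framework", status: "in_progress" },
  { content: "Phase 3.2: Analyze existing test coverage", status: "pending" },
  { content: "Phase 3.3: Identify coverage gaps", status: "pending" }
]);
// ...sub-tasks executed sequentially...
// Collapse: detail replaced by a single phase summary
TodoWrite([
  { content: "Phase 3: Test Coverage Analysis: completed", status: "completed" }
]);
```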
## Execution Flow Diagram
```
TDD Workflow Orchestrator
├─ Phase 1: Session Discovery
│ └─ workflow-plan-execute/phases/01-session-discovery.md --auto
│ └─ Returns: sessionId
├─ Phase 2: Context Gathering
│ └─ workflow-plan-execute/phases/02-context-gathering.md
│ └─ Returns: context-package.json path
├─ Phase 3: Test Coverage Analysis ← ATTACHED (3 tasks)
│ └─ phases/01-test-context-gather.md
│ ├─ Phase 3.1: Detect test framework
│ ├─ Phase 3.2: Analyze existing test coverage
│ └─ Phase 3.3: Identify coverage gaps
│ └─ Returns: test-context-package.json ← COLLAPSED
├─ Phase 4: Conflict Resolution (conditional)
│ IF conflict_risk >= medium:
│ └─ Inline within Phase 2 context-gathering ← ATTACHED (3 tasks)
│ ├─ Phase 4.1: Detect conflicts with CLI
│ ├─ Phase 4.2: Log and analyze detected conflicts
│ └─ Phase 4.3: Apply resolution strategies
│ └─ Returns: conflict-resolution.json ← COLLAPSED
│ ELSE:
│ └─ Skip to Phase 5
├─ Phase 5: TDD Task Generation ← ATTACHED (3 tasks)
│ └─ phases/02-task-generate-tdd.md
│ ├─ Phase 5.1: Discovery - analyze TDD requirements
│ ├─ Phase 5.2: Planning - design Red-Green-Refactor cycles
│ └─ Phase 5.3: Output - generate IMPL tasks with internal TDD phases
│ └─ Returns: IMPL-*.json, IMPL_PLAN.md, plan.json ← COLLAPSED
│ (Each IMPL task contains internal Red-Green-Refactor cycle)
└─ Phase 6: TDD Structure Validation
└─ Internal validation + summary returned
└─ Recommend: plan-verify (external)
Key Points:
• ← ATTACHED: Sub-tasks attached to orchestrator TodoWrite
• ← COLLAPSED: Sub-tasks executed and collapsed to phase summary
• TDD-specific: Each generated IMPL task contains complete Red-Green-Refactor cycle
```
## Error Handling
- **Parsing failure**: Retry once, then report
- **Validation failure**: Report missing/invalid data
- **Command failure**: Keep phase in_progress, report error
- **TDD validation failure**: Report incomplete chains or wrong dependencies
- **Subagent timeout**: Retry wait or send_input to prompt completion, then close_agent
### TDD Warning Patterns
| Pattern | Warning Message | Recommended Action |
|---------|----------------|-------------------|
| Task count >10 | High task count detected | Consider splitting into multiple sessions |
| Missing test-fix-cycle | Green phase lacks auto-revert | Add `max_iterations: 3` to task config |
| Red phase missing test path | Test file path not specified | Add explicit test file paths |
| Generic task names | Vague names like "Add feature" | Use specific behavior descriptions |
| No refactor criteria | Refactor phase lacks completion criteria | Define clear refactor scope |
### Non-Blocking Warning Policy
**All warnings are advisory** - they do not halt execution:
1. Warnings logged to `.process/tdd-warnings.log`
2. Summary displayed in Phase 6 output
3. User decides whether to address before execution
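A minimal logging sketch (assumes the orchestrator's `bash` tool; `logWarning` is a hypothetical helper, not part of the agent API):
```javascript
// Append a warning without halting execution; Phase 6 re-surfaces the log
function logWarning(sessionDir, pattern, message) {
  const line = `[${new Date().toISOString()}] ${pattern}: ${message}`;
  bash(`echo '${line}' >> ${sessionDir}/.process/tdd-warnings.log`);
  console.warn(line);
}
```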
### Error Handling Quick Reference
| Error Type | Detection | Recovery Action |
|------------|-----------|-----------------|
| Parsing failure | Empty/malformed output | Retry once, then report |
| Missing context-package | File read error | Re-run context-gather (workflow-plan-execute/phases/02-context-gathering.md) |
| Invalid task JSON | jq parse error | Report malformed file path |
| Task count exceeds 18 | Count validation >=19 | Request re-scope, split into multiple sessions |
| Missing cli_execution_id | All tasks lack ID | Regenerate tasks with phase 0 user config |
| Test-context missing | File not found | Re-run phases/01-test-context-gather.md |
| Phase timeout | No response | Retry phase, check CLI connectivity |
| CLI tool not available | Tool not in cli-tools.json | Fall back to alternative preferred tool |
| Subagent unresponsive | wait timed_out | send_input to prompt, or close_agent and spawn new |
## Post-Execution: TDD Verification
After TDD tasks have been executed (via workflow:execute), run TDD compliance verification:
Read and execute: `phases/03-tdd-verify.md` with `--session [sessionId]`
This generates a comprehensive TDD_COMPLIANCE_REPORT.md with quality gate recommendation.
## Task JSON Schema Compatibility
Phase 5 generates `.task/IMPL-*.json` files using the **6-field schema** defined in `action-planning-agent.md`. These task JSONs are a **superset** of the unified `task-schema.json` (located at `.ccw/workflows/cli-templates/schemas/task-schema.json`).
**Key field mappings** (6-field → unified schema):
- `context.acceptance``convergence.criteria`
- `context.requirements``description` + `implementation`
- `context.depends_on``depends_on` (top-level)
- `context.focus_paths``focus_paths` (top-level)
- `meta.type``type` (top-level)
- `flow_control.target_files``files[].path`
All existing 6-field schema fields are preserved. TDD-specific extensions (`meta.tdd_workflow`, `tdd_phase` in implementation_approach steps) are additive and do not conflict with the unified schema. See `action-planning-agent.md` Section 2.1 "Schema Compatibility" for the full mapping table.
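A minimal illustration of the mapping, with hypothetical field values:
```javascript
// 6-field fragment (as generated into .task/IMPL-*.json)
const sixField = {
  id: "IMPL-1",
  meta: { type: "feature", tdd_workflow: true },
  context: {
    acceptance: ["All 5 tests pass with >=85% coverage"],
    depends_on: [],
    focus_paths: ["./src/auth"]
  },
  flow_control: { target_files: ["src/auth/reset.ts"] }
};
// Projection onto the unified task-schema.json shape
const unified = {
  type: sixField.meta.type,                                // meta.type -> type
  convergence: { criteria: sixField.context.acceptance },  // acceptance -> convergence.criteria
  depends_on: sixField.context.depends_on,                 // lifted to top level
  focus_paths: sixField.context.focus_paths,               // lifted to top level
  files: sixField.flow_control.target_files.map(path => ({ path }))
};
```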
## Related Skills
**Prerequisite**:
- None - TDD planning is self-contained (can optionally run brainstorm before)
**Called by This Skill** (6 phases):
- workflow-plan-execute/phases/01-session-discovery.md - Phase 1: Create or discover TDD workflow session
- workflow-plan-execute/phases/02-context-gathering.md - Phase 2: Gather project context and analyze codebase
- phases/01-test-context-gather.md - Phase 3: Analyze existing test patterns and coverage
- Inline conflict resolution within Phase 2 - Phase 4: Detect and resolve conflicts (conditional)
- compact (external skill) - Phase 4.5: Memory optimization (if context approaching limits)
- phases/02-task-generate-tdd.md - Phase 5: Generate TDD tasks
**Follow-up**:
- plan-verify (external) - Recommended: Verify TDD plan quality and structure before execution
- workflow:status (external) - Review TDD task breakdown
- workflow:execute (external) - Begin TDD implementation
- phases/03-tdd-verify.md - Post-execution: Verify TDD compliance and generate quality report
## Next Steps Decision Table
| Situation | Recommended Action | Purpose |
|-----------|-------------------|---------|
| First time planning | Run plan-verify (external) | Validate task structure before execution |
| Warnings in tdd-warnings.log | Review log, refine tasks | Address Red Flags before proceeding |
| High task count warning | Consider new session | Split into focused sub-sessions |
| Ready to implement | Run workflow:execute (external) | Begin TDD Red-Green-Refactor cycles |
| After implementation | Run phases/03-tdd-verify.md | Generate TDD compliance report |
| Need to review tasks | Run workflow:status (external) | Inspect current task breakdown |
### TDD Workflow State Transitions
```
workflow-tdd-plan (this skill)
[Planning Complete] ──→ plan-verify (external, recommended)
[Verified/Ready] ─────→ workflow:execute (external)
[Implementation] ─────→ phases/03-tdd-verify.md (post-execution)
[Quality Report] ─────→ Done or iterate
```

# Phase 1: Test Context Gather
## Overview
Orchestrator command that invokes `test-context-search-agent` to gather comprehensive test coverage context for test generation workflows. Generates standardized `test-context-package.json` with coverage analysis, framework detection, and source implementation context.
## Core Philosophy
- **Agent Delegation**: Delegate all test coverage analysis to `test-context-search-agent` for autonomous execution
- **Detection-First**: Check for existing test-context-package before executing
- **Coverage-First**: Analyze existing test coverage before planning new tests
- **Source Context Loading**: Import implementation summaries from source session
- **Standardized Output**: Generate `{projectRoot}/.workflow/active/{test_session_id}/.process/test-context-package.json`
- **Explicit Lifecycle**: Always close_agent after wait completes to free resources
## Execution Process
```
Input Parsing:
├─ Parse flags: --session
└─ Validation: test_session_id REQUIRED
Step 1: Test-Context-Package Detection
└─ Decision (existing package):
├─ Valid package exists → Return existing (skip execution)
└─ No valid package → Continue to Step 2
Step 2: Invoke Test-Context-Search Agent
├─ Phase 1: Session Validation & Source Context Loading
│ ├─ Detection: Check for existing test-context-package
│ ├─ Test session validation
│ └─ Source context loading (summaries, changed files)
├─ Phase 2: Test Coverage Analysis
│ ├─ Track 1: Existing test discovery
│ ├─ Track 2: Coverage gap analysis
│ └─ Track 3: Coverage statistics
└─ Phase 3: Framework Detection & Packaging
├─ Framework identification
├─ Convention analysis
└─ Generate test-context-package.json
Step 3: Output Verification
└─ Verify test-context-package.json created
```
## Execution Flow
### Step 1: Test-Context-Package Detection
**Execute First** - Check if valid package already exists:
```javascript
const testContextPath = `${projectRoot}/.workflow/active/${test_session_id}/.process/test-context-package.json`;
if (file_exists(testContextPath)) {
const existing = Read(testContextPath);
// Validate package belongs to current test session
if (existing?.metadata?.test_session_id === test_session_id) {
console.log("Valid test-context-package found for session:", test_session_id);
console.log("Coverage Stats:", existing.test_coverage.coverage_stats);
console.log("Framework:", existing.test_framework.framework);
console.log("Missing Tests:", existing.test_coverage.missing_tests.length);
return existing; // Skip execution, return existing
} else {
console.warn("Invalid test_session_id in existing package, re-generating...");
}
}
```
### Step 2: Invoke Test-Context-Search Agent
**Only execute if Step 1 finds no valid package**
```javascript
// Spawn test-context-search-agent
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/test-context-search-agent.md (MUST read first)
2. Read: {projectRoot}/.workflow/project-tech.json
3. Read: {projectRoot}/.workflow/project-guidelines.json
---
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
## Session Information
- **Test Session ID**: ${test_session_id}
- **Output Path**: ${projectRoot}/.workflow/active/${test_session_id}/.process/test-context-package.json
## Mission
Execute complete test-context-search-agent workflow for test generation planning:
### Phase 1: Session Validation & Source Context Loading
1. **Detection**: Check for existing test-context-package (early exit if valid)
2. **Test Session Validation**: Load test session metadata, extract source_session reference
3. **Source Context Loading**: Load source session implementation summaries, changed files, tech stack
### Phase 2: Test Coverage Analysis
Execute coverage discovery:
- **Track 1**: Existing test discovery (find *.test.*, *.spec.* files)
- **Track 2**: Coverage gap analysis (match implementation files to test files)
- **Track 3**: Coverage statistics (calculate percentages, identify gaps by module)
### Phase 3: Framework Detection & Packaging
1. Framework identification from package.json/requirements.txt
2. Convention analysis from existing test patterns
3. Generate and validate test-context-package.json
## Output Requirements
Complete test-context-package.json with:
- **metadata**: test_session_id, source_session_id, task_type, complexity
- **source_context**: implementation_summaries, tech_stack, project_patterns
- **test_coverage**: existing_tests[], missing_tests[], coverage_stats
- **test_framework**: framework, version, test_pattern, conventions
- **assets**: implementation_summary[], existing_test[], source_code[] with priorities
- **focus_areas**: Test generation guidance based on coverage gaps
## Quality Validation
Before completion verify:
- [ ] Valid JSON format with all required fields
- [ ] Source session context loaded successfully
- [ ] Test coverage gaps identified
- [ ] Test framework detected (or marked as 'unknown')
- [ ] Coverage percentage calculated correctly
- [ ] Missing tests catalogued with priority
- [ ] Execution time < 30 seconds (< 60s for large codebases)
Execute autonomously following agent documentation.
Report completion with coverage statistics.
`
});
// Wait for agent completion
const result = wait({
ids: [agentId],
timeout_ms: 300000 // 5 minutes
});
// Handle timeout
if (result.timed_out) {
console.warn("Test context gathering timed out, sending prompt...");
send_input({
id: agentId,
message: "Please complete test-context-package.json generation and report results."
});
const retryResult = wait({ ids: [agentId], timeout_ms: 120000 });
}
// Clean up agent resources
close_agent({ id: agentId });
```
### Step 3: Output Verification
After agent completes, verify output:
```javascript
// Verify file was created
const outputPath = `${projectRoot}/.workflow/active/${test_session_id}/.process/test-context-package.json`;
if (!file_exists(outputPath)) {
throw new Error("Agent failed to generate test-context-package.json");
}
// Load and display summary
const testContext = Read(outputPath);
console.log("Test context package generated successfully");
console.log("Coverage:", testContext.test_coverage.coverage_stats.coverage_percentage + "%");
console.log("Tests to generate:", testContext.test_coverage.missing_tests.length);
```
## Parameter Reference
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `--session` | string | Yes | Test workflow session ID (e.g., WFS-test-auth) |
## Output Schema
Refer to `test-context-search-agent.md` Phase 3.2 for complete `test-context-package.json` schema.
**Key Sections**:
- **metadata**: Test session info, source session reference, complexity
- **source_context**: Implementation summaries with changed files and tech stack
- **test_coverage**: Existing tests, missing tests with priorities, coverage statistics
- **test_framework**: Framework name, version, patterns, conventions
- **assets**: Categorized files with relevance (implementation_summary, existing_test, source_code)
- **focus_areas**: Test generation guidance based on analysis
## Success Criteria
- Valid test-context-package.json generated in `{projectRoot}/.workflow/active/{test_session_id}/.process/`
- Source session context loaded successfully
- Test coverage gaps identified (>90% accuracy)
- Test framework detected and documented
- Execution completes within 30 seconds (60s for large codebases)
- All required schema fields present and valid
- Coverage statistics calculated correctly
- Agent reports completion with statistics
## Error Handling
| Error | Cause | Resolution |
|-------|-------|------------|
| Package validation failed | Invalid test_session_id in existing package | Re-run agent to regenerate |
| Source session not found | Invalid source_session reference | Verify test session metadata |
| No implementation summaries | Source session incomplete | Complete source session first |
| Agent execution timeout | Large codebase or slow analysis | Increase timeout, check file access |
| Missing required fields | Agent incomplete execution | Check agent logs, verify schema compliance |
| No test framework detected | Missing test dependencies | Agent marks as 'unknown', manual specification needed |
## Integration
### Called By
- SKILL.md (Phase 3: Test Coverage Analysis)
### Calls
- `test-context-search-agent` via spawn_agent - Autonomous test coverage analysis
## Notes
- **Detection-first**: Always check for existing test-context-package before invoking agent
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, Go testing, etc.
- **Coverage focus**: Primary goal is identifying implementation files without tests
- **Explicit lifecycle**: Always close_agent after wait completes
---
## Post-Phase Update
After Phase 1 (Test Context Gather) completes:
- **Output Created**: `test-context-package.json` in `{projectRoot}/.workflow/active/{session}/.process/`
- **Data Available**: Test coverage stats, framework info, missing tests list
- **Next Action**: Continue to Phase 4 (Conflict Resolution, if conflict_risk >= medium) or Phase 5 (TDD Task Generation)
- **TodoWrite**: Collapse Phase 3 sub-tasks to "Phase 3: Test Coverage Analysis: completed"

# Phase 2: TDD Task Generation
## Auto Mode
When `--yes` or `-y`: Skip user questions, use defaults (no materials, Agent executor, Auto CLI tool).
## Overview
Autonomous TDD task JSON and IMPL_PLAN.md generation using action-planning-agent with two-phase execution: discovery and document generation. Generates complete Red-Green-Refactor cycles contained within each task.
## Core Philosophy
- **Agent-Driven**: Delegate execution to action-planning-agent for autonomous operation
- **Two-Phase Flow**: Discovery (context gathering) → Output (document generation)
- **Memory-First**: Reuse loaded documents from conversation memory
- **MCP-Enhanced**: Use MCP tools for advanced code analysis and research
- **Semantic CLI Selection**: CLI tool usage determined from user's task description, not flags
- **Agent Simplicity**: Agent generates content with semantic CLI detection
- **Path Clarity**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root (e.g., `./src/module`)
- **TDD-First**: Every feature starts with a failing test (Red phase)
- **Feature-Complete Tasks**: Each task contains complete Red-Green-Refactor cycle
- **Quantification-Enforced**: All test cases, coverage requirements, and implementation scope MUST include explicit counts and enumerations
- **Explicit Lifecycle**: Always close_agent after wait completes to free resources
## Task Strategy & Philosophy
### Optimized Task Structure (Current)
- **1 feature = 1 task** containing complete TDD cycle internally
- Each task executes Red-Green-Refactor phases sequentially
- Task count = Feature count (typically 5 features = 5 tasks)
**Previous Approach** (Deprecated):
- 1 feature = 3 separate tasks (TEST-N.M, IMPL-N.M, REFACTOR-N.M)
- 5 features = 15 tasks with complex dependency chains
- High context switching cost between phases
### When to Use Subtasks
- Feature complexity >2500 lines or >6 files per TDD cycle
- Multiple independent sub-features needing parallel execution
- Strong technical dependency blocking (e.g., API before UI)
- Different tech stacks or domains within feature
### Task Limits
- **Maximum 18 tasks** (hard limit for TDD workflows)
- **Feature-based**: Complete functional units with internal TDD cycles
- **Hierarchy**: Flat (<=5 simple features) | Two-level (6-18 for complex features with sub-features)
- **Re-scope**: If >18 tasks needed, break project into multiple TDD workflow sessions
### TDD Cycle Mapping
- **Old approach**: 1 feature = 3 tasks (TEST-N.M, IMPL-N.M, REFACTOR-N.M)
- **Current approach**: 1 feature = 1 task (IMPL-N with internal Red-Green-Refactor phases)
- **Complex features**: 1 container (IMPL-N) + subtasks (IMPL-N.M) when necessary
## Execution Process
```
Input Parsing:
├─ Parse flags: --session
└─ Validation: session_id REQUIRED
Phase 1: Discovery & Context Loading (Memory-First)
├─ Load session context (if not in memory)
├─ Load context package (if not in memory)
├─ Load test context package (if not in memory)
├─ Extract & load role analyses from context package
├─ Load conflict resolution (if exists)
└─ Optional: MCP external research
Phase 2: Agent Execution (Document Generation)
├─ Pre-agent template selection (semantic CLI detection)
├─ Invoke action-planning-agent via spawn_agent
├─ Generate TDD Task JSON Files (.task/IMPL-*.json)
│ └─ Each task: complete Red-Green-Refactor cycle internally
├─ Create IMPL_PLAN.md (TDD variant)
└─ Generate TODO_LIST.md with TDD phase indicators
```
## Execution Lifecycle
### Phase 0: User Configuration (Interactive)
**Purpose**: Collect user preferences before TDD task generation to ensure generated tasks match execution expectations and provide necessary supplementary context.
**User Questions**:
```javascript
if (AUTO_YES) {
// --yes/-y: skip user questions, use defaults
userConfig = { materials: "No additional materials", execution: "Agent (Recommended)", cliTool: "Auto" }
} else {
ASK_USER([
{
id: "Materials", type: "select",
prompt: "Do you have supplementary materials or guidelines to include?",
options: [
{ label: "No additional materials", description: "Use existing context only" },
{ label: "Provide file paths", description: "I'll specify paths to include" },
{ label: "Provide inline content", description: "I'll paste content directly" }
]
},
{
id: "Execution", type: "select",
prompt: "Select execution method for generated TDD tasks:",
options: [
{ label: "Agent (Recommended)", description: "Agent executes Red-Green-Refactor cycles directly" },
{ label: "Hybrid", description: "Agent orchestrates, calls CLI for complex steps (Red/Green phases)" },
{ label: "CLI Only", description: "All TDD cycles via CLI tools (codex/gemini/qwen)" }
]
},
{
id: "CLI Tool", type: "select",
prompt: "If using CLI, which tool do you prefer?",
options: [
{ label: "Codex (Recommended)", description: "Best for TDD Red-Green-Refactor cycles" },
{ label: "Gemini", description: "Best for analysis and large context" },
{ label: "Qwen", description: "Alternative analysis tool" },
{ label: "Auto", description: "Let agent decide per-task" }
]
}
]) // BLOCKS (wait for user response)
}
```
**Handle Materials Response**:
```javascript
if (userConfig.materials === "Provide file paths") {
// Follow-up question for file paths
const pathsResponse = ASK_USER([{
id: "Paths", type: "input",
prompt: "Enter file paths to include (comma-separated or one per line):",
options: [
{ label: "Enter paths", description: "Provide paths in text input" }
]
}]) // BLOCKS (wait for user response)
userConfig.supplementaryPaths = parseUserPaths(pathsResponse)
}
```
**Build userConfig**:
```javascript
const userConfig = {
supplementaryMaterials: {
type: "none|paths|inline",
content: [...], // Parsed paths or inline content
},
executionMethod: "agent|hybrid|cli",
preferredCliTool: "codex|gemini|qwen|auto",
enableResume: true // Always enable resume for CLI executions
}
```
**Pass to Agent**: Include `userConfig` in agent prompt for Phase 2.
---
### Phase 1: Context Preparation & Discovery
**Command Responsibility**: Command prepares session paths and metadata, provides to agent for autonomous context loading.
**Memory-First Rule**: Skip file loading if documents already in conversation memory
**Progressive Loading Strategy**: Load context incrementally due to large analysis.md file sizes:
- **Core**: session metadata + context-package.json (always load)
- **Selective**: synthesis_output OR (guidance + relevant role analyses) - NOT all role analyses
- **On-Demand**: conflict resolution (if conflict_risk >= medium), test context
**Path Clarity Requirement**: All `focus_paths` prefer absolute paths (e.g., `D:\\project\\src\\module`), or clear relative paths from project root (e.g., `./src/module`)
**Session Path Structure** (Provided by Command to Agent):
```
${projectRoot}/.workflow/active/WFS-{session-id}/
├── workflow-session.json # Session metadata
├── .process/
│ ├── context-package.json # Context package with artifact catalog
│ ├── test-context-package.json # Test coverage analysis
│ └── conflict-resolution.json # Conflict resolution (if exists)
├── .task/ # Output: Task JSON files
│ ├── IMPL-1.json
│ ├── IMPL-2.json
│ └── ...
├── IMPL_PLAN.md # Output: TDD implementation plan
└── TODO_LIST.md # Output: TODO list with TDD phases
```
**Command Preparation**:
1. **Assemble Session Paths** for agent prompt:
- `session_metadata_path`: `${projectRoot}/.workflow/active/{session-id}/workflow-session.json`
- `context_package_path`: `${projectRoot}/.workflow/active/{session-id}/.process/context-package.json`
- `test_context_package_path`: `${projectRoot}/.workflow/active/{session-id}/.process/test-context-package.json`
- Output directory paths
2. **Provide Metadata** (simple values):
- `session_id`: WFS-{session-id}
- `workflow_type`: "tdd"
- `mcp_capabilities`: {exa_code, exa_web, code_index}
3. **Pass userConfig** from Phase 0
**Agent Context Package** (Agent loads autonomously):
```javascript
{
"session_id": "WFS-[session-id]",
"workflow_type": "tdd",
// Core (ALWAYS load)
"session_metadata": {
// If in memory: use cached content
// Else: Load from workflow-session.json
},
"context_package": {
// If in memory: use cached content
// Else: Load from context-package.json
},
// Selective (load based on progressive strategy)
"brainstorm_artifacts": {
// Loaded from context-package.json → brainstorm_artifacts section
"synthesis_output": {"path": "...", "exists": true}, // Load if exists (highest priority)
"guidance_specification": {"path": "...", "exists": true}, // Load if no synthesis
"role_analyses": [ // Load SELECTIVELY based on task relevance
{
"role": "system-architect",
"files": [{"path": "...", "type": "primary|supplementary"}]
}
]
},
// On-Demand (load if exists)
"test_context_package": {
// Load from test-context-package.json
// Contains existing test patterns and coverage analysis
},
"conflict_resolution": {
// Load from conflict-resolution.json if conflict_risk >= medium
// Check context-package.conflict_detection.resolution_file
},
// Capabilities
"mcp_capabilities": {
"exa_code": true,
"exa_web": true,
"code_index": true
},
// User configuration from Phase 0
"user_config": {
// From Phase 0 ASK_USER
}
}
```
**Discovery Actions**:
1. **Load Session Context** (if not in memory)
```javascript
if (!memory.has("workflow-session.json")) {
Read(${projectRoot}/.workflow/active/{session-id}/workflow-session.json)
}
```
2. **Load Context Package** (if not in memory)
```javascript
if (!memory.has("context-package.json")) {
Read(${projectRoot}/.workflow/active/{session-id}/.process/context-package.json)
}
```
3. **Load Test Context Package** (if not in memory)
```javascript
if (!memory.has("test-context-package.json")) {
Read(${projectRoot}/.workflow/active/{session-id}/.process/test-context-package.json)
}
```
4. **Extract & Load Role Analyses** (from context-package.json)
```javascript
// Extract role analysis paths from context package
const roleAnalysisPaths = contextPackage.brainstorm_artifacts.role_analyses
.flatMap(role => role.files.map(f => f.path));
// Load each role analysis file
roleAnalysisPaths.forEach(path => Read(path));
```
5. **Load Conflict Resolution** (from conflict-resolution.json, if exists)
```javascript
// Check for new conflict-resolution.json format
if (contextPackage.conflict_detection?.resolution_file) {
Read(contextPackage.conflict_detection.resolution_file) // .process/conflict-resolution.json
}
// Fallback: legacy brainstorm_artifacts path
else if (contextPackage.brainstorm_artifacts?.conflict_resolution?.exists) {
Read(contextPackage.brainstorm_artifacts.conflict_resolution.path)
}
```
6. **Code Analysis with Native Tools** (optional - enhance understanding)
```bash
# Find relevant test files and patterns
find . -name "*test*" -type f
rg "describe|it\(|test\(" -g "*.ts"
```
7. **MCP External Research** (optional - gather TDD best practices)
```javascript
// Get external TDD examples and patterns
mcp__exa__get_code_context_exa(
query="TypeScript TDD best practices Red-Green-Refactor",
tokensNum="dynamic"
)
```
### Phase 2: Agent Execution (TDD Document Generation)
**Purpose**: Generate TDD planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) - planning only, NOT code implementation.
**Agent Invocation**:
```javascript
// Spawn action-planning-agent
const agentId = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/action-planning-agent.md (MUST read first)
2. Read: {projectRoot}/.workflow/project-tech.json
3. Read: {projectRoot}/.workflow/project-guidelines.json
---
## TASK OBJECTIVE
Generate TDD implementation planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md) for workflow session
IMPORTANT: This is PLANNING ONLY - you are generating planning documents, NOT implementing code.
CRITICAL: Follow the progressive loading strategy (load analysis.md files incrementally due to file size):
- **Core**: session metadata + context-package.json (always)
- **Selective**: synthesis_output OR (guidance + relevant role analyses) - NOT all
- **On-Demand**: conflict resolution (if conflict_risk >= medium), test context
## SESSION PATHS
Input:
- Session Metadata: ${projectRoot}/.workflow/active/{session-id}/workflow-session.json
- Context Package: ${projectRoot}/.workflow/active/{session-id}/.process/context-package.json
- Test Context: ${projectRoot}/.workflow/active/{session-id}/.process/test-context-package.json
Output:
- Task Dir: ${projectRoot}/.workflow/active/{session-id}/.task/
- IMPL_PLAN: ${projectRoot}/.workflow/active/{session-id}/IMPL_PLAN.md
- TODO_LIST: ${projectRoot}/.workflow/active/{session-id}/TODO_LIST.md
## CONTEXT METADATA
Session ID: {session-id}
Workflow Type: TDD
MCP Capabilities: {exa_code, exa_web, code_index}
## USER CONFIGURATION (from Phase 0)
Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## EXECUTION METHOD MAPPING
Based on userConfig.executionMethod, set task-level meta.execution_config:
"agent" →
meta.execution_config = { method: "agent", cli_tool: null, enable_resume: false }
Agent executes Red-Green-Refactor phases directly
"cli" →
meta.execution_config = { method: "cli", cli_tool: userConfig.preferredCliTool, enable_resume: true }
Agent executes pre_analysis, then hands off full context to CLI via buildCliHandoffPrompt()
"hybrid" →
Per-task decision: Analyze TDD cycle complexity, set method to "agent" OR "cli" per task
- Simple cycles (<=5 test cases, <=3 files) → method: "agent"
- Complex cycles (>5 test cases, >3 files, integration tests) → method: "cli"
CLI tool: userConfig.preferredCliTool, enable_resume: true
IMPORTANT: Do NOT add command field to implementation_approach steps. Execution routing is controlled by task-level meta.execution_config.method only.
## EXPLORATION CONTEXT (from context-package.exploration_results)
- Load exploration_results from context-package.json
- Use aggregated_insights.critical_files for focus_paths generation
- Apply aggregated_insights.constraints to acceptance criteria
- Reference aggregated_insights.all_patterns for implementation approach
- Use aggregated_insights.all_integration_points for precise modification locations
- Use conflict_indicators for risk-aware task sequencing
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints as task constraints (for brainstorm-less workflows)
- Reference resolved_conflicts for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## TEST CONTEXT INTEGRATION
- Load test-context-package.json for existing test patterns and coverage analysis
- Extract test framework configuration (Jest/Pytest/etc.)
- Identify existing test conventions and patterns
- Map coverage gaps to TDD Red phase test targets
## TDD DOCUMENT GENERATION TASK
**Agent Configuration Reference**: All TDD task generation rules, quantification requirements, Red-Green-Refactor cycle structure, quality standards, and execution details are defined in action-planning-agent.
### TDD-Specific Requirements Summary
#### Task Structure Philosophy
- **1 feature = 1 task** containing complete TDD cycle internally
- Each task executes Red-Green-Refactor phases sequentially
- Task count = Feature count (typically 5 features = 5 tasks)
- Subtasks only when complexity >2500 lines or >6 files per cycle
- **Maximum 18 tasks** (hard limit for TDD workflows)
#### TDD Cycle Mapping
- **Simple features**: IMPL-N with internal Red-Green-Refactor phases
- **Complex features**: IMPL-N (container) + IMPL-N.M (subtasks)
- Each cycle includes: test_count, test_cases array, implementation_scope, expected_coverage
#### Required Outputs Summary
##### 1. TDD Task JSON Files (.task/IMPL-*.json)
- **Location**: \`${projectRoot}/.workflow/active/{session-id}/.task/\`
- **Schema**: 6-field structure with TDD-specific metadata
- \`id, title, status, context_package_path, meta, context, flow_control\`
- \`meta.tdd_workflow\`: true (REQUIRED)
- \`meta.max_iterations\`: 3 (Green phase test-fix cycle limit)
- \`meta.cli_execution_id\`: Unique CLI execution ID (format: \`{session_id}-{task_id}\`)
- \`meta.cli_execution\`: Strategy object (new|resume|fork|merge_fork)
- \`context.tdd_cycles\`: Array with quantified test cases and coverage
- \`context.focus_paths\`: Absolute or clear relative paths (enhanced with exploration critical_files)
- \`flow_control.implementation_approach\`: Exactly 3 steps with \`tdd_phase\` field
1. Red Phase (\`tdd_phase: "red"\`): Write failing tests
2. Green Phase (\`tdd_phase: "green"\`): Implement to pass tests
3. Refactor Phase (\`tdd_phase: "refactor"\`): Improve code quality
- \`flow_control.pre_analysis\`: Include exploration integration_points analysis
- **meta.execution_config**: Set per userConfig.executionMethod (agent/cli/hybrid)
- **Details**: See action-planning-agent.md § TDD Task JSON Generation
##### 2. IMPL_PLAN.md (TDD Variant)
- **Location**: \`${projectRoot}/.workflow/active/{session-id}/IMPL_PLAN.md\`
- **Template**: \`~/.ccw/workflows/cli-templates/prompts/workflow/impl-plan-template.txt\`
- **TDD-Specific Frontmatter**: workflow_type="tdd", tdd_workflow=true, feature_count, task_breakdown
- **TDD Implementation Tasks Section**: Feature-by-feature with internal Red-Green-Refactor cycles
- **Context Analysis**: Artifact references and exploration insights
- **Details**: See action-planning-agent.md § TDD Implementation Plan Creation
##### 3. TODO_LIST.md
- **Location**: \`${projectRoot}/.workflow/active/{session-id}/TODO_LIST.md\`
- **Format**: Hierarchical task list with internal TDD phase indicators (Red → Green → Refactor)
- **Status**: ▸ (container), [ ] (pending), [x] (completed)
- **Links**: Task JSON references and summaries
- **Details**: See action-planning-agent.md § TODO List Generation
### CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **meta.cli_execution_id**: Unique ID for CLI execution (format: \`{session_id}-{task_id}\`)
- **meta.cli_execution**: Strategy object based on depends_on:
- No deps → \`{ "strategy": "new" }\`
- 1 dep (single child) → \`{ "strategy": "resume", "resume_from": "parent-cli-id" }\`
- 1 dep (multiple children) → \`{ "strategy": "fork", "resume_from": "parent-cli-id" }\`
- N deps → \`{ "strategy": "merge_fork", "resume_from": ["id1", "id2", ...] }\`
- **Type**: \`resume_from: string | string[]\` (string for resume/fork, array for merge_fork)
**CLI Execution Strategy Rules**:
1. **new**: Task has no dependencies - starts fresh CLI conversation
2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation
**Execution Command Patterns**:
- new: \`ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]\`
- resume: \`ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write\`
- fork: \`ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write\`
- merge_fork: \`ccw cli -p "[prompt]" --resume [resume_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write\` (resume_from is array)
### Quantification Requirements (MANDATORY)
**Core Rules**:
1. **Explicit Test Case Counts**: Red phase specifies exact number with enumerated list
2. **Quantified Coverage**: Acceptance includes measurable percentage (e.g., ">=85%")
3. **Detailed Implementation Scope**: Green phase enumerates files, functions, line counts
4. **Enumerated Refactoring Targets**: Refactor phase lists specific improvements with counts
**TDD Phase Formats**:
- **Red Phase**: "Write N test cases: [test1, test2, ...]"
- **Green Phase**: "Implement N functions in file lines X-Y: [func1() X1-Y1, func2() X2-Y2, ...]"
- **Refactor Phase**: "Apply N refactorings: [improvement1 (details), improvement2 (details), ...]"
- **Acceptance**: "All N tests pass with >=X% coverage: verify by [test command]"
**Validation Checklist**:
- [ ] Every Red phase specifies exact test case count with enumerated list
- [ ] Every Green phase enumerates files, functions, and estimated line counts
- [ ] Every Refactor phase lists specific improvements with counts
- [ ] Every acceptance criterion includes measurable coverage percentage
- [ ] tdd_cycles array contains test_count and test_cases for each cycle
- [ ] No vague language ("comprehensive", "complete", "thorough")
- [ ] cli_execution_id and cli_execution strategy assigned to each task
### Agent Execution Summary
**Key Steps** (Detailed instructions in action-planning-agent.md):
1. Load task JSON template from provided path
2. Extract and decompose features with TDD cycles
3. Generate TDD task JSON files enforcing quantification requirements
4. Create IMPL_PLAN.md using TDD template variant
5. Generate TODO_LIST.md with TDD phase indicators
6. Update session state with TDD metadata
**Quality Gates** (Full checklist in action-planning-agent.md):
- Task count <=18 (hard limit)
- Each task has meta.tdd_workflow: true
- Each task has exactly 3 implementation steps with tdd_phase field ("red", "green", "refactor")
- Each task has meta.cli_execution_id and meta.cli_execution strategy
- Green phase includes test-fix cycle logic with max_iterations
- focus_paths are absolute or clear relative paths (from exploration critical_files)
- Artifact references mapped correctly from context package
- Exploration context integrated (critical_files, constraints, patterns, integration_points)
- Conflict resolution context applied (if conflict_risk >= medium)
- Test context integrated (existing test patterns and coverage analysis)
- Documents follow TDD template structure
- CLI tool selection based on userConfig.executionMethod
- Quantification requirements enforced (explicit counts, measurable acceptance, exact targets)
## SUCCESS CRITERIA
- All planning documents generated successfully:
- Task JSONs valid and saved to .task/ directory with cli_execution_id
- IMPL_PLAN.md created with complete TDD structure
- TODO_LIST.md generated matching task JSONs
- CLI execution strategies assigned based on task dependencies
- Return completion status with document count and task breakdown summary
## OUTPUT SUMMARY
Generate all three documents and report:
- TDD task JSON files created: N files (IMPL-*.json) with cli_execution_id assigned
- TDD cycles configured: N cycles with quantified test cases
- CLI execution strategies: new/resume/fork/merge_fork assigned per dependency graph
- Artifacts integrated: synthesis-spec/guidance-specification, relevant role analyses
- Exploration context: critical_files, constraints, patterns, integration_points
- Test context integrated: existing patterns and coverage
- Conflict resolution: applied (if conflict_risk >= medium)
- Session ready for TDD execution
`
});
// Wait for agent completion
const result = wait({
ids: [agentId],
timeout_ms: 600000 // 10 minutes
});
// Handle timeout
if (result.timed_out) {
console.warn("TDD task generation timed out, prompting completion...");
send_input({
id: agentId,
message: "Please finalize document generation and report completion status."
});
const retryResult = wait({ ids: [agentId], timeout_ms: 300000 });
}
// Clean up agent resources (IMPORTANT: must always call)
close_agent({ id: agentId });
```
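The strategy rules above reduce to a small dependency-graph function. A sketch, assuming tasks loaded from `.task/IMPL-*.json` with dependencies in `context.depends_on` (`deriveCliExecution` is a hypothetical helper, not part of the agent API):
```javascript
function deriveCliExecution(task, allTasks, sessionId) {
  const deps = task.context.depends_on || [];
  const cliId = (id) => `${sessionId}-${id}`;          // cli_execution_id format
  if (deps.length === 0) return { strategy: "new" };
  if (deps.length > 1)
    return { strategy: "merge_fork", resume_from: deps.map(cliId) };
  const parent = deps[0];
  const children = allTasks.filter(
    (t) => (t.context.depends_on || []).includes(parent)
  );
  return children.length === 1
    ? { strategy: "resume", resume_from: cliId(parent) }  // only child: same conversation
    : { strategy: "fork", resume_from: cliId(parent) };   // shared parent: new branch
}
```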
### Agent Context Passing
**Context Delegation Model**: Command provides paths and metadata, agent loads context autonomously using progressive loading strategy.
**Command Provides** (in agent prompt):
```javascript
// Command assembles these simple values and paths for agent
const commandProvides = {
// Session paths
session_metadata_path: "${projectRoot}/.workflow/active/WFS-{id}/workflow-session.json",
context_package_path: "${projectRoot}/.workflow/active/WFS-{id}/.process/context-package.json",
test_context_package_path: "${projectRoot}/.workflow/active/WFS-{id}/.process/test-context-package.json",
output_task_dir: "${projectRoot}/.workflow/active/WFS-{id}/.task/",
output_impl_plan: "${projectRoot}/.workflow/active/WFS-{id}/IMPL_PLAN.md",
output_todo_list: "${projectRoot}/.workflow/active/WFS-{id}/TODO_LIST.md",
// Simple metadata
session_id: "WFS-{id}",
workflow_type: "tdd",
mcp_capabilities: { exa_code: true, exa_web: true, code_index: true },
// User configuration from Phase 0
user_config: {
supplementaryMaterials: { type: "...", content: [...] },
executionMethod: "agent|hybrid|cli",
preferredCliTool: "codex|gemini|qwen|auto",
enableResume: true
}
}
```
**Agent Loads Autonomously** (progressive loading):
```javascript
// Agent executes progressive loading based on memory state
const agentLoads = {
// Core (ALWAYS load if not in memory)
session_metadata: loadIfNotInMemory(session_metadata_path),
context_package: loadIfNotInMemory(context_package_path),
// Selective (based on progressive strategy)
// Priority: synthesis_output > guidance + relevant_role_analyses
brainstorm_content: loadSelectiveBrainstormArtifacts(context_package),
// On-Demand (load if exists and relevant)
test_context: loadIfExists(test_context_package_path),
conflict_resolution: loadConflictResolution(context_package),
// Optional (if MCP available)
exploration_results: extractExplorationResults(context_package),
external_research: executeMcpResearch() // If needed
}
```
**Progressive Loading Implementation** (agent responsibility):
1. **Check memory first** - skip if already loaded
2. **Load core files** - session metadata + context-package.json
3. **Smart selective loading** - synthesis_output OR (guidance + task-relevant role analyses)
4. **On-demand loading** - test context, conflict resolution (if conflict_risk >= medium)
5. **Extract references** - exploration results, artifact paths from context package
## TDD Task Structure Reference
This section provides quick reference for TDD task JSON structure. For complete implementation details, see the agent invocation prompt in Phase 2 above.
**Quick Reference**:
- Each TDD task contains complete Red-Green-Refactor cycle
- Task ID format: `IMPL-N` (simple) or `IMPL-N.M` (complex subtasks)
- Required metadata:
- `meta.tdd_workflow: true`
- `meta.max_iterations: 3`
- `meta.cli_execution_id: "{session_id}-{task_id}"`
- `meta.cli_execution: { "strategy": "new|resume|fork|merge_fork", ... }`
- Context: `tdd_cycles` array with quantified test cases and coverage:
```javascript
tdd_cycles: [
{
test_count: 5, // Number of test cases to write
test_cases: ["case1", "case2"], // Enumerated test scenarios
implementation_scope: "...", // Files and functions to implement
expected_coverage: ">=85%" // Coverage target
}
]
```
- Context: `focus_paths` use absolute or clear relative paths
- Flow control: Exactly 3 steps with `tdd_phase` field ("red", "green", "refactor")
- Flow control: `pre_analysis` includes exploration integration_points analysis
- **meta.execution_config**: Set per `userConfig.executionMethod` (agent/cli/hybrid)
- See Phase 2 agent prompt for full schema and requirements
## Output Files Structure
```
${projectRoot}/.workflow/active/{session-id}/
├── IMPL_PLAN.md # Unified plan with TDD Implementation Tasks section
├── TODO_LIST.md # Progress tracking with internal TDD phase indicators
├── .task/
│ ├── IMPL-1.json # Complete TDD task (Red-Green-Refactor internally)
│ ├── IMPL-2.json # Complete TDD task
│ ├── IMPL-3.json # Complex feature container (if needed)
│ ├── IMPL-3.1.json # Complex feature subtask (if needed)
│ ├── IMPL-3.2.json # Complex feature subtask (if needed)
│ └── ...
└── .process/
├── conflict-resolution.json # Conflict resolution results (if conflict_risk >= medium)
├── test-context-package.json # Test coverage analysis
├── context-package.json # Input from context-gather
├── context_package_path # Path to smart context package
└── green-fix-iteration-*.md # Fix logs from Green phase test-fix cycles
```
**File Count**:
- **Old approach**: 5 features = 15 task JSON files (TEST/IMPL/REFACTOR x 5)
- **New approach**: 5 features = 5 task JSON files (IMPL-N x 5)
- **Complex feature**: 1 feature = 1 container + M subtasks (IMPL-N + IMPL-N.M)
## Validation Rules
### Task Completeness
- Every IMPL-N must contain complete TDD workflow in `flow_control.implementation_approach`
- Each task must have 3 steps with `tdd_phase`: "red", "green", "refactor"
- Every task must have `meta.tdd_workflow: true`
### Dependency Enforcement
- Sequential features: IMPL-N depends_on ["IMPL-(N-1)"] if needed
- Complex feature subtasks: IMPL-N.M depends_on ["IMPL-N.(M-1)"] or parent dependencies
- No circular dependencies allowed
### Task Limits
- Maximum 18 total tasks (simple + subtasks) - hard limit for TDD workflows
- Flat hierarchy (<=5 tasks) or two-level (6-18 tasks with containers)
- Re-scope requirements if >18 tasks needed
### TDD Workflow Validation
- `meta.tdd_workflow` must be true
- `flow_control.implementation_approach` must have exactly 3 steps
- Each step must have `tdd_phase` field ("red", "green", or "refactor")
- Green phase step must include test-fix cycle logic
- `meta.max_iterations` must be present (default: 3)
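A structural validation sketch covering the rules above (assumes the 6-field schema described in this document):
```javascript
function validateTddTask(task) {
  const errors = [];
  if (task.meta?.tdd_workflow !== true) errors.push("meta.tdd_workflow must be true");
  if (task.meta?.max_iterations == null) errors.push("meta.max_iterations missing (default: 3)");
  const steps = task.flow_control?.implementation_approach || [];
  if (steps.length !== 3) errors.push(`expected 3 steps, found ${steps.length}`);
  ["red", "green", "refactor"].forEach((phase, i) => {
    if (steps[i]?.tdd_phase !== phase)
      errors.push(`step ${i + 1}: expected tdd_phase "${phase}", got "${steps[i]?.tdd_phase}"`);
  });
  return errors; // empty array = task passes structural validation
}
```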
## Error Handling
### Input Validation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Session not found | Invalid session ID | Verify session exists |
| Context missing | Incomplete planning | Run context-gather first |
### TDD Generation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Task count exceeds 18 | Too many features or subtasks | Re-scope requirements or merge features into multiple TDD sessions |
| Missing test framework | No test config | Configure testing first |
| Invalid TDD workflow | Missing tdd_phase or incomplete flow_control | Fix TDD structure in ANALYSIS_RESULTS.md |
| Missing tdd_workflow flag | Task doesn't have meta.tdd_workflow: true | Add TDD workflow metadata |
| Agent timeout | Large context or complex planning | Retry with send_input, or spawn new agent |
## Integration
**Called By**: SKILL.md (Phase 5: TDD Task Generation)
**Invokes**: `action-planning-agent` via spawn_agent for autonomous task generation
**Followed By**: Phase 6 (TDD Structure Validation in SKILL.md), then workflow:execute (external)
**CLI Tool Selection**: Determined semantically from user's task description. Include "use Codex/Gemini/Qwen" in your request for CLI execution.
**Output**:
- TDD task JSON files in `.task/` directory (IMPL-N.json format)
- IMPL_PLAN.md with TDD Implementation Tasks section
- TODO_LIST.md with internal TDD phase indicators
- Session state updated with task count and TDD metadata
- MCP enhancements integrated (if available)
## Test Coverage Analysis Integration
The TDD workflow includes test coverage analysis (via phases/01-test-context-gather.md) to:
- Detect existing test patterns and conventions
- Identify current test coverage gaps
- Discover test framework and configuration
- Enable integration with existing tests
This makes TDD workflow context-aware instead of assuming greenfield scenarios.
## Iterative Green Phase with Test-Fix Cycle
IMPL (Green phase) tasks include automatic test-fix cycle:
**Process Flow**:
1. **Initial Implementation**: Write minimal code to pass tests
2. **Test Execution**: Run test suite
3. **Success Path**: Tests pass → Complete task
4. **Failure Path**: Tests fail → Enter iterative fix cycle:
- **Gemini Diagnosis**: Analyze failures with bug-fix template
- **Fix Application**: Agent executes fixes directly
- **Retest**: Verify fix resolves failures
- **Repeat**: Up to max_iterations (default: 3)
5. **Safety Net**: Auto-revert all changes if max iterations reached
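A sketch of this loop (hypothetical helpers `runTests`, `diagnoseFailures`, `applyFixes`, and `revertChanges` stand in for the steps above):
```javascript
function greenPhaseTestFixCycle(task) {
  const maxIterations = task.meta.max_iterations ?? 3;
  let result = runTests(task);                  // after initial minimal implementation
  for (let i = 1; !result.passed && i <= maxIterations; i++) {
    const diagnosis = diagnoseFailures(result); // Gemini diagnosis (bug-fix template)
    applyFixes(diagnosis);                      // agent executes fixes directly
    result = runTests(task);                    // retest
  }
  if (!result.passed) {
    revertChanges(task);                        // safety net at max iterations
    return { status: "reverted" };
  }
  return { status: "green" };
}
```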
## Configuration Options
- **meta.max_iterations**: Number of fix attempts in Green phase (default: 3)
- **meta.execution_config.method**: Execution routing (agent/cli) determined from userConfig.executionMethod
---
## Post-Phase Update
After Phase 2 (TDD Task Generation) completes:
- **Output Created**: IMPL_PLAN.md, TODO_LIST.md, IMPL-*.json task files in `.task/` directory
- **TDD Structure**: Each task contains complete Red-Green-Refactor cycle internally
- **CLI Execution IDs**: All tasks assigned unique cli_execution_id for resume support
- **Next Action**: Phase 6 (TDD Structure Validation) in SKILL.md
- **TodoWrite**: Collapse Phase 5 sub-tasks to "Phase 5: TDD Task Generation: completed"

# Phase 3: TDD Verify
## Goal
Verify TDD workflow execution quality by validating Red-Green-Refactor cycle compliance, test coverage completeness, and task chain structure integrity. This phase orchestrates multiple analysis steps and generates a comprehensive compliance report with quality gate recommendation.
**Output**: A structured Markdown report saved to `{projectRoot}/.workflow/active/WFS-{session}/TDD_COMPLIANCE_REPORT.md` containing:
- Executive summary with compliance score and quality gate recommendation
- Task chain validation (TEST → IMPL → REFACTOR structure)
- Test coverage metrics (line, branch, function)
- Red-Green-Refactor cycle verification
- Best practices adherence assessment
- Actionable improvement recommendations
## Operating Constraints
**ORCHESTRATOR MODE**:
- This phase coordinates coverage analysis (`phases/04-tdd-coverage-analysis.md`) and internal validation
- MAY write output files: TDD_COMPLIANCE_REPORT.md (primary report), .process/*.json (intermediate artifacts)
- MUST NOT modify source task files or implementation code
- MUST NOT create or delete tasks in the workflow
**Quality Gate Authority**: The compliance report provides a binding recommendation (BLOCK_MERGE / REQUIRE_FIXES / PROCEED_WITH_CAVEATS / APPROVED) based on objective compliance criteria.
## Core Responsibilities
- Verify TDD task chain structure (TEST → IMPL → REFACTOR)
- Analyze test coverage metrics
- Validate TDD cycle execution quality
- Generate compliance report with quality gate recommendation
## Execution Process
```
Input Parsing:
└─ Decision (session argument):
├─ --session provided → Use provided session
└─ No session → Auto-detect active session
Phase 1: Session Discovery & Validation
├─ Detect or validate session directory
├─ Check required artifacts exist (.task/*.json, .summaries/*)
└─ ERROR if invalid or incomplete
Phase 2: Task Chain Structure Validation
├─ Load all task JSONs from .task/
├─ Validate TDD structure: TEST-N.M → IMPL-N.M → REFACTOR-N.M
├─ Verify dependencies (depends_on)
├─ Validate meta fields (tdd_phase, agent)
└─ Extract chain validation data
Phase 3: Coverage & Cycle Analysis
├─ Read and execute: phases/04-tdd-coverage-analysis.md
├─ Parse: test-results.json, coverage-report.json, tdd-cycle-report.md
└─ Extract coverage metrics and TDD cycle verification
Phase 4: Compliance Report Generation
├─ Aggregate findings from Phases 1-3
├─ Calculate compliance score (0-100)
├─ Determine quality gate recommendation
├─ Generate TDD_COMPLIANCE_REPORT.md
└─ Display summary to user
```
## 4-Phase Execution
### Phase 1: Session Discovery & Validation
**Step 1.1: Detect Session**
```bash
IF --session parameter provided:
session_id = provided session
ELSE:
# Auto-detect active session
active_sessions = bash(find ${projectRoot}/.workflow/active/ -name "WFS-*" -type d 2>/dev/null)
IF active_sessions is empty:
ERROR: "No active workflow session found. Use --session <session-id>"
EXIT
ELSE IF active_sessions has multiple entries:
# Use most recently modified session
session_id = bash(ls -td ${projectRoot}/.workflow/active/WFS-*/ 2>/dev/null | head -1 | xargs basename)
ELSE:
session_id = basename(active_sessions[0])
# Derive paths
session_dir = ${projectRoot}/.workflow/active/WFS-{session_id}
task_dir = session_dir/.task
summaries_dir = session_dir/.summaries
process_dir = session_dir/.process
```
**Step 1.2: Validate Required Artifacts**
```bash
# Check task files exist
task_files = Glob(task_dir/*.json)
IF task_files.count == 0:
ERROR: "No task JSON files found. Run TDD planning (SKILL.md) first"
EXIT
# Check summaries exist (optional but recommended for full analysis)
summaries_exist = EXISTS(summaries_dir)
IF NOT summaries_exist:
WARNING: "No .summaries/ directory found. Some analysis may be limited."
```
**Output**: session_id, session_dir, task_files list
---
### Phase 2: Task Chain Structure Validation
**Step 2.1: Load and Parse Task JSONs**
```bash
# Single-pass JSON extraction using jq
validation_data = bash("""
# Load all tasks and extract structured data
cd '{session_dir}/.task'
# Extract all task IDs
task_ids=$(jq -r '.id' *.json 2>/dev/null | sort)
# Extract dependencies for IMPL tasks
impl_deps=$(jq -r 'select(.id | startswith("IMPL")) | .id + ":" + (.context.depends_on[]? // "none")' *.json 2>/dev/null)
# Extract dependencies for REFACTOR tasks
refactor_deps=$(jq -r 'select(.id | startswith("REFACTOR")) | .id + ":" + (.context.depends_on[]? // "none")' *.json 2>/dev/null)
# Extract meta fields
meta_tdd=$(jq -r '.id + ":" + (.meta.tdd_phase // "missing")' *.json 2>/dev/null)
meta_agent=$(jq -r '.id + ":" + (.meta.agent // "missing")' *.json 2>/dev/null)
# Output as JSON
jq -n --arg ids "$task_ids" \
--arg impl "$impl_deps" \
--arg refactor "$refactor_deps" \
--arg tdd "$meta_tdd" \
--arg agent "$meta_agent" \
'{ids: $ids, impl_deps: $impl, refactor_deps: $refactor, tdd: $tdd, agent: $agent}'
""")
```
**Step 2.2: Validate TDD Chain Structure**
```
Parse validation_data JSON and validate:
For each feature N (extracted from task IDs):
1. TEST-N.M exists?
2. IMPL-N.M exists?
3. REFACTOR-N.M exists? (optional but recommended)
4. IMPL-N.M.context.depends_on contains TEST-N.M?
5. REFACTOR-N.M.context.depends_on contains IMPL-N.M?
6. TEST-N.M.meta.tdd_phase == "red"?
7. TEST-N.M.meta.agent == "@code-review-test-agent"?
8. IMPL-N.M.meta.tdd_phase == "green"?
9. IMPL-N.M.meta.agent == "@code-developer"?
10. REFACTOR-N.M.meta.tdd_phase == "refactor"?
Calculate:
- chain_completeness_score = (complete_chains / total_chains) * 100
- dependency_accuracy = (correct_deps / total_deps) * 100
- meta_field_accuracy = (correct_meta / total_meta) * 100
```
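A scoring sketch, assuming validation_data has been parsed into per-feature records `{ test, impl, refactor, depsOk, metaOk }` (a hypothetical intermediate shape):
```javascript
function scoreChainValidation(features) {
  const total = Math.max(features.length, 1);
  const complete = features.filter(f => f.test && f.impl).length; // REFACTOR optional
  return {
    chain_completeness_score: (complete / total) * 100,
    dependency_accuracy: (features.filter(f => f.depsOk).length / total) * 100,
    meta_field_accuracy: (features.filter(f => f.metaOk).length / total) * 100
  };
}
```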
**Output**: chain_validation_report (JSON structure with validation results)
---
### Phase 3: Coverage & Cycle Analysis
**Step 3.1: Call Coverage Analysis Phase**
Read and execute the coverage analysis phase:
- **Phase file**: `phases/04-tdd-coverage-analysis.md`
- **Args**: `--session {session_id}`
**Step 3.2: Parse Output Files**
```bash
# Check required outputs exist
IF NOT EXISTS(process_dir/test-results.json):
WARNING: "test-results.json not found. Coverage analysis incomplete."
coverage_data = null
ELSE:
coverage_data = Read(process_dir/test-results.json)
IF NOT EXISTS(process_dir/coverage-report.json):
WARNING: "coverage-report.json not found. Coverage metrics incomplete."
metrics = null
ELSE:
metrics = Read(process_dir/coverage-report.json)
IF NOT EXISTS(process_dir/tdd-cycle-report.md):
WARNING: "tdd-cycle-report.md not found. Cycle validation incomplete."
cycle_data = null
ELSE:
cycle_data = Read(process_dir/tdd-cycle-report.md)
```
**Step 3.3: Extract Coverage Metrics**
```
If coverage_data exists:
- line_coverage_percent
- branch_coverage_percent
- function_coverage_percent
- uncovered_files (list)
- uncovered_lines (map: file -> line ranges)
If cycle_data exists:
- red_phase_compliance (tests failed initially?)
- green_phase_compliance (tests pass after impl?)
- refactor_phase_compliance (tests stay green during refactor?)
- minimal_implementation_score (was impl minimal?)
```
**Output**: coverage_analysis, cycle_analysis
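For instance, the headline metrics could be pulled with jq along these lines (field names assume the `coverage-report.json` shape produced in Phase 3 of the coverage analysis; adjust to the actual schema of your coverage tool):
```bash
# Sketch: extract headline coverage metrics
cov="${process_dir}/coverage-report.json"
line_coverage=$(jq -r '.lineCoverage // "n/a"' "$cov")
branch_coverage=$(jq -r '.branchCoverage // "n/a"' "$cov")
function_coverage=$(jq -r '.functionCoverage // "n/a"' "$cov")
echo "line=${line_coverage} branch=${branch_coverage} function=${function_coverage}"
```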
---
### Phase 4: Compliance Report Generation
**Step 4.1: Calculate Compliance Score**
```
Base Score: 100 points
Deductions:
Chain Structure:
- Missing TEST task: -30 points per feature
- Missing IMPL task: -30 points per feature
- Missing REFACTOR task: -10 points per feature
- Wrong dependency: -15 points per error
- Wrong agent: -5 points per error
- Wrong tdd_phase: -5 points per error
TDD Cycle Compliance:
- Test didn't fail initially: -10 points per feature
- Tests didn't pass after IMPL: -20 points per feature
- Tests broke during REFACTOR: -15 points per feature
- Over-engineered IMPL: -10 points per feature
Coverage Quality:
- Line coverage < 80%: -5 points
- Branch coverage < 70%: -5 points
- Function coverage < 80%: -5 points
- Critical paths uncovered: -10 points
Final Score: Max(0, Base Score - Total Deductions)
```
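A minimal arithmetic sketch of the chain-structure portion (counter variables such as `missing_test` are assumed to come out of Phase 2; cycle and coverage deductions subtract the same way):
```bash
# Sketch: accumulate deductions and clamp the score at zero
score=100
score=$(( score - missing_test*30 - missing_impl*30 - missing_refactor*10 ))
score=$(( score - wrong_deps*15 - wrong_agent*5 - wrong_phase*5 ))
(( score < 0 )) && score=0
echo "Compliance score: ${score}/100"
```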
**Step 4.2: Determine Quality Gate**
```
IF score >= 90 AND no_critical_violations:
recommendation = "APPROVED"
ELSE IF score >= 70 AND critical_violations == 0:
recommendation = "PROCEED_WITH_CAVEATS"
ELSE IF score >= 50:
recommendation = "REQUIRE_FIXES"
ELSE:
recommendation = "BLOCK_MERGE"
```
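The same decision as a bash sketch (`score` from Step 4.1, `critical_violations` counted during Phases 2-3):
```bash
# Sketch: quality gate decision
if [ "$score" -ge 90 ] && [ "$critical_violations" -eq 0 ]; then
  recommendation="APPROVED"
elif [ "$score" -ge 70 ] && [ "$critical_violations" -eq 0 ]; then
  recommendation="PROCEED_WITH_CAVEATS"
elif [ "$score" -ge 50 ]; then
  recommendation="REQUIRE_FIXES"
else
  recommendation="BLOCK_MERGE"
fi
```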
**Step 4.3: Generate Report**
```bash
report_content = Generate markdown report (see structure below)
report_path = "{session_dir}/TDD_COMPLIANCE_REPORT.md"
Write(report_path, report_content)
```
**Step 4.4: Display Summary to User**
```bash
echo "=== TDD Verification Complete ==="
echo "Session: {session_id}"
echo "Report: {report_path}"
echo ""
echo "Quality Gate: {recommendation}"
echo "Compliance Score: {score}/100"
echo ""
echo "Chain Validation: {chain_completeness_score}%"
echo "Line Coverage: {line_coverage}%"
echo "Branch Coverage: {branch_coverage}%"
echo ""
echo "Next: Review full report for detailed findings"
```
## TodoWrite Pattern (Optional)
**Note**: As an orchestrator phase, TodoWrite tracking is optional and primarily useful for long-running verification processes. For most cases, the 4-phase execution is fast enough that progress tracking adds noise without value.
```javascript
// Only use TodoWrite for complex multi-session verification
// Skip for single-session verification
```
## Validation Logic
### Chain Validation Algorithm
```
1. Load all task JSONs from ${projectRoot}/.workflow/active/WFS-{session_id}/.task/
2. Extract task IDs and group by feature number
3. For each feature:
- Check TEST-N.M exists
- Check IMPL-N.M exists
- Check REFACTOR-N.M exists (optional but recommended)
- Verify IMPL-N.M depends_on TEST-N.M
- Verify REFACTOR-N.M depends_on IMPL-N.M
- Verify meta.tdd_phase values
- Verify meta.agent assignments
4. Calculate chain completeness score
5. Report incomplete or invalid chains
```
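Step 2 of this algorithm (grouping IDs by feature number) can be sketched with jq and awk, assuming the `TYPE-N.M` naming convention:
```bash
# Sketch: group task ids by feature number (the N in TEST-N.M / IMPL-N.M)
cd "${session_dir}/.task"
jq -r '.id' *.json \
  | sed -E 's/^(TEST|IMPL|REFACTOR)-([0-9]+)\..*/\2 &/' \
  | sort -n \
  | awk '$1 != prev { if (prev != "") print ""; printf "Feature %s:", $1; prev = $1 }
         { printf " %s", $2 } END { print "" }'
# Output: one line per feature, e.g. "Feature 1: IMPL-1.1 REFACTOR-1.1 TEST-1.1"
```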
### Quality Gate Criteria
| Recommendation | Score Range | Critical Violations | Action |
|----------------|-------------|---------------------|--------|
| **APPROVED** | ≥90 | 0 | Safe to merge |
| **PROCEED_WITH_CAVEATS** | ≥70 | 0 | Can proceed, address minor issues |
| **REQUIRE_FIXES** | ≥50 | Any | Must fix before merge |
| **BLOCK_MERGE** | <50 | Any | Block merge until resolved |
**Critical Violations**:
- Missing TEST or IMPL task for any feature
- Tests didn't fail initially (Red phase violation)
- Tests didn't pass after IMPL (Green phase violation)
- Tests broke during REFACTOR (Refactor phase violation)
## Output Files
```
${projectRoot}/.workflow/active/WFS-{session-id}/
├── TDD_COMPLIANCE_REPORT.md # Comprehensive compliance report
└── .process/
├── test-results.json # From phases/04-tdd-coverage-analysis.md
├── coverage-report.json # From phases/04-tdd-coverage-analysis.md
└── tdd-cycle-report.md # From phases/04-tdd-coverage-analysis.md
```
## Error Handling
### Session Discovery Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| No active session | No WFS-* directories | Provide --session explicitly |
| Multiple active sessions | Multiple WFS-* directories | Most recent is auto-selected; pass --session to override |
| Session not found | Invalid session-id | Check available sessions |
### Validation Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Task files missing | Incomplete planning | Run TDD planning (SKILL.md) first |
| Invalid JSON | Corrupted task files | Regenerate tasks |
| Missing summaries | Tasks not executed | Execute tasks before verify |
### Analysis Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Coverage tool missing | No test framework | Configure testing first |
| Tests fail to run | Code errors | Fix errors before verify |
| Coverage analysis fails | phases/04-tdd-coverage-analysis.md error | Check analysis output |
## Integration
### Phase Chain
- **Called After**: Task execution completes (all TDD tasks done)
- **Calls**: `phases/04-tdd-coverage-analysis.md`
- **Related Skills**: SKILL.md (orchestrator), `workflow-plan-execute/` (session management)
### When to Use
- After completing all TDD tasks in a workflow
- Before merging TDD workflow branch
- For TDD process quality assessment
- To identify missing TDD steps
## TDD Compliance Report Structure
```markdown
# TDD Compliance Report - {Session ID}
**Generated**: {timestamp}
**Session**: WFS-{session_id}
**Workflow Type**: TDD
---
## Executive Summary
### Quality Gate Decision
| Metric | Value | Status |
|--------|-------|--------|
| Compliance Score | {score}/100 | {status_emoji} |
| Chain Completeness | {percentage}% | {status} |
| Line Coverage | {percentage}% | {status} |
| Branch Coverage | {percentage}% | {status} |
| Function Coverage | {percentage}% | {status} |
### Recommendation
**{RECOMMENDATION}**
**Decision Rationale**:
{brief explanation based on score and violations}
**Quality Gate Criteria**:
- **APPROVED**: Score ≥90, no critical violations
- **PROCEED_WITH_CAVEATS**: Score ≥70, no critical violations
- **REQUIRE_FIXES**: Score ≥50 or critical violations exist
- **BLOCK_MERGE**: Score <50
---
## Chain Analysis
### Feature 1: {Feature Name}
**Status**: Complete
**Chain**: TEST-1.1 → IMPL-1.1 → REFACTOR-1.1
| Phase | Task | Status | Details |
|-------|------|--------|---------|
| Red | TEST-1.1 | Pass | Test created and failed with clear message |
| Green | IMPL-1.1 | Pass | Minimal implementation made test pass |
| Refactor | REFACTOR-1.1 | Pass | Code improved, tests remained green |
### Feature 2: {Feature Name}
**Status**: Incomplete
**Chain**: TEST-2.1 → IMPL-2.1 (Missing REFACTOR-2.1)
| Phase | Task | Status | Details |
|-------|------|--------|---------|
| Red | TEST-2.1 | Pass | Test created and failed |
| Green | IMPL-2.1 | Warning | Implementation seems over-engineered |
| Refactor | REFACTOR-2.1 | Missing | Task not completed |
**Issues**:
- REFACTOR-2.1 task not completed (-10 points)
- IMPL-2.1 implementation exceeded minimal scope (-10 points)
### Chain Validation Summary
| Metric | Value |
|--------|-------|
| Total Features | {count} |
| Complete Chains | {count} ({percent}%) |
| Incomplete Chains | {count} |
| Missing TEST | {count} |
| Missing IMPL | {count} |
| Missing REFACTOR | {count} |
| Dependency Errors | {count} |
| Meta Field Errors | {count} |
---
## Test Coverage Analysis
### Coverage Metrics
| Metric | Coverage | Target | Status |
|--------|----------|--------|--------|
| Line Coverage | {percentage}% | ≥80% | {status} |
| Branch Coverage | {percentage}% | ≥70% | {status} |
| Function Coverage | {percentage}% | ≥80% | {status} |
### Coverage Gaps
| File | Lines | Issue | Priority |
|------|-------|-------|----------|
| src/auth/service.ts | 45-52 | Uncovered error handling | HIGH |
| src/utils/parser.ts | 78-85 | Uncovered edge case | MEDIUM |
---
## TDD Cycle Validation
### Red Phase (Write Failing Test)
- {N}/{total} features had failing tests initially ({percent}%)
- Compliant features: {list}
- Non-compliant features: {list}
**Violations**:
- Feature 3: No evidence of initial test failure (-10 points)
### Green Phase (Make Test Pass)
- {N}/{total} implementations made tests pass ({percent}%)
- Compliant features: {list}
- Non-compliant features: {list}
**Violations**:
- Feature 2: Implementation over-engineered (-10 points)
### Refactor Phase (Improve Quality)
- {N}/{total} features completed refactoring ({percent}%)
- Compliant features: {list}
- Non-compliant features: {list}
**Violations**:
- Feature 2, 4: Refactoring step skipped (-20 points total)
---
## Best Practices Assessment
### Strengths
- Clear test descriptions
- Good test coverage
- Consistent naming conventions
- Well-structured code
### Areas for Improvement
- Some implementations over-engineered in Green phase
- Missing refactoring steps
- Test failure messages could be more descriptive
---
## Detailed Findings by Severity
### Critical Issues ({count})
{List of critical issues with impact and remediation}
### High Priority Issues ({count})
{List of high priority issues with impact and remediation}
### Medium Priority Issues ({count})
{List of medium priority issues with impact and remediation}
### Low Priority Issues ({count})
{List of low priority issues with impact and remediation}
---
## Recommendations
### Required Fixes (Before Merge)
1. Complete missing REFACTOR tasks (Features 2, 4)
2. Verify initial test failures for Feature 3
3. Fix tests that broke during refactoring
### Recommended Improvements
1. Simplify over-engineered implementations
2. Add edge case tests for Features 1, 3
3. Improve test failure message clarity
4. Increase branch coverage to >85%
### Optional Enhancements
1. Add more descriptive test names
2. Consider parameterized tests for similar scenarios
3. Document TDD process learnings
---
## Metrics Summary
| Metric | Value |
|--------|-------|
| Total Features | {count} |
| Complete Chains | {count} ({percent}%) |
| Compliance Score | {score}/100 |
| Critical Issues | {count} |
| High Issues | {count} |
| Medium Issues | {count} |
| Low Issues | {count} |
| Line Coverage | {percent}% |
| Branch Coverage | {percent}% |
| Function Coverage | {percent}% |
---
**Report End**
```
---
## Post-Phase Update
After TDD Verify completes:
- **Output Created**: `TDD_COMPLIANCE_REPORT.md` in session directory
- **Data Produced**: Compliance score, quality gate recommendation, chain validation, coverage metrics
- **Next Action**: Based on quality gate - APPROVED (merge), REQUIRE_FIXES (iterate), BLOCK_MERGE (rework)
- **TodoWrite**: Mark "TDD Verify: completed" with quality gate result


@@ -1,287 +0,0 @@
# Phase 4: TDD Coverage Analysis
## Overview
Analyze test coverage and verify Red-Green-Refactor cycle execution for TDD workflow validation.
## Core Responsibilities
- Extract test files from TEST tasks
- Run test suite with coverage
- Parse coverage metrics
- Verify TDD cycle execution (Red → Green → Refactor)
- Generate coverage and cycle reports
## Execution Process
```
Input Parsing:
├─ Parse flags: --session
└─ Validation: session_id REQUIRED
Phase 1: Extract Test Tasks
└─ Find TEST-*.json files and extract focus_paths
Phase 2: Run Test Suite
└─ Decision (test framework):
├─ Node.js → npm test --coverage --json
├─ Python → pytest --cov --json-report
└─ Other → [test_command] --coverage --json
Phase 3: Parse Coverage Data
├─ Extract line coverage percentage
├─ Extract branch coverage percentage
├─ Extract function coverage percentage
└─ Identify uncovered lines/branches
Phase 4: Verify TDD Cycle
└─ FOR each TDD chain (TEST-N.M → IMPL-N.M → REFACTOR-N.M):
├─ Red Phase: Verify tests created and failed initially
├─ Green Phase: Verify tests now pass
└─ Refactor Phase: Verify code quality improved
Phase 5: Generate Analysis Report
└─ Create tdd-cycle-report.md with coverage metrics and cycle verification
```
## Execution Lifecycle
### Phase 1: Extract Test Tasks
```bash
find ${projectRoot}/.workflow/active/{session_id}/.task/ -name 'TEST-*.json' -exec jq -r '(.focus_paths // .context.focus_paths // [])[]' {} \;
```
**Output**: List of test directories/files from all TEST tasks
### Phase 2: Run Test Suite
```bash
# Node.js/JavaScript
npm test -- --coverage --json > ${projectRoot}/.workflow/active/{session_id}/.process/test-results.json
# Python (pytest-json-report writes to a file rather than stdout)
pytest --cov --json-report --json-report-file=${projectRoot}/.workflow/active/{session_id}/.process/test-results.json
# Other frameworks (detect from project)
[test_command] --coverage --json-output ${projectRoot}/.workflow/active/{session_id}/.process/test-results.json
```
**Output**: test-results.json with coverage data
### Phase 3: Parse Coverage Data
```bash
jq '.coverage' ${projectRoot}/.workflow/active/{session_id}/.process/test-results.json > ${projectRoot}/.workflow/active/{session_id}/.process/coverage-report.json
```
**Extract**:
- Line coverage percentage
- Branch coverage percentage
- Function coverage percentage
- Uncovered lines/branches
### Phase 4: Verify TDD Cycle
For each TDD chain (TEST-N.M → IMPL-N.M → REFACTOR-N.M):
**1. Red Phase Verification**
```bash
# Check TEST task summary
cat ${projectRoot}/.workflow/active/{session_id}/.summaries/TEST-N.M-summary.md
```
Verify:
- Tests were created
- Tests failed initially
- Failure messages were clear
**2. Green Phase Verification**
```bash
# Check IMPL task summary
cat ${projectRoot}/.workflow/active/{session_id}/.summaries/IMPL-N.M-summary.md
```
Verify:
- Implementation was completed
- Tests now pass
- Implementation was minimal
**3. Refactor Phase Verification**
```bash
# Check REFACTOR task summary
cat ${projectRoot}/.workflow/active/{session_id}/.summaries/REFACTOR-N.M-summary.md
```
Verify:
- Refactoring was completed
- Tests still pass
- Code quality improved
### Phase 5: Generate Analysis Report
Create `${projectRoot}/.workflow/active/{session_id}/.process/tdd-cycle-report.md`:
```markdown
# TDD Cycle Analysis - {Session ID}
## Coverage Metrics
- **Line Coverage**: {percentage}%
- **Branch Coverage**: {percentage}%
- **Function Coverage**: {percentage}%
## Coverage Details
### Covered
- {covered_lines} lines
- {covered_branches} branches
- {covered_functions} functions
### Uncovered
- Lines: {uncovered_line_numbers}
- Branches: {uncovered_branch_locations}
## TDD Cycle Verification
### Feature 1: {Feature Name}
**Chain**: TEST-1.1 → IMPL-1.1 → REFACTOR-1.1
- [PASS] **Red Phase**: Tests created and failed initially
- [PASS] **Green Phase**: Implementation made tests pass
- [PASS] **Refactor Phase**: Refactoring maintained green tests
### Feature 2: {Feature Name}
**Chain**: TEST-2.1 → IMPL-2.1 → REFACTOR-2.1
- [PASS] **Red Phase**: Tests created and failed initially
- [WARN] **Green Phase**: Tests pass but implementation seems over-engineered
- [PASS] **Refactor Phase**: Refactoring maintained green tests
[Repeat for all features]
## TDD Compliance Summary
- **Total Chains**: {N}
- **Complete Cycles**: {N}
- **Incomplete Cycles**: {N}
- **Compliance Score**: {score}/100
## Gaps Identified
- Feature 3: Missing initial test failure verification
- Feature 5: No refactoring step completed
## Recommendations
- Complete missing refactoring steps
- Add edge case tests for Feature 2
- Verify test failure messages are descriptive
```
## Output Files
```
${projectRoot}/.workflow/active/{session-id}/
└── .process/
├── test-results.json # Raw test execution results
├── coverage-report.json # Parsed coverage data
└── tdd-cycle-report.md # TDD cycle analysis
```
## Test Framework Detection
Auto-detect test framework from project:
```bash
# Check for test frameworks
if [ -f "package.json" ] && grep -q "jest\|mocha\|vitest" package.json; then
TEST_CMD="npm test -- --coverage --json"
elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
TEST_CMD="pytest --cov --json-report"
elif [ -f "Cargo.toml" ]; then
TEST_CMD="cargo test -- --test-threads=1 --nocapture"
elif [ -f "go.mod" ]; then
TEST_CMD="go test -coverprofile=coverage.out -json ./..."
else
TEST_CMD="echo 'No supported test framework found'"
fi
```
## TDD Cycle Verification Algorithm
```
For each feature N:
1. Load TEST-N.M-summary.md
IF summary missing:
Mark: "Red phase incomplete"
SKIP to next feature
CHECK: Contains "test" AND "fail"
IF NOT found:
Mark: "Red phase verification failed"
ELSE:
Mark: "Red phase [PASS]"
2. Load IMPL-N.M-summary.md
IF summary missing:
Mark: "Green phase incomplete"
SKIP to next feature
CHECK: Contains "pass" OR "green"
IF NOT found:
Mark: "Green phase verification failed"
ELSE:
Mark: "Green phase [PASS]"
3. Load REFACTOR-N.M-summary.md
IF summary missing:
Mark: "Refactor phase incomplete"
CONTINUE (refactor is optional)
CHECK: Contains "refactor" AND "pass"
IF NOT found:
Mark: "Refactor phase verification failed"
ELSE:
Mark: "Refactor phase [PASS]"
4. Calculate chain score:
- Red + Green + Refactor all [PASS] = 100%
- Red + Green [PASS], Refactor missing = 80%
- Red [PASS], Green missing = 40%
- All missing = 0%
```
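A hedged grep-based sketch of these keyword checks for one feature (real verification may need richer parsing than keyword matching):
```bash
# Sketch: heuristic phase checks against task summaries
s="${projectRoot}/.workflow/active/${session_id}/.summaries"
phase_check() {   # $1 = summary file; remaining args = keywords that must all appear
  local f="$1"; shift
  [ -f "$f" ] || { echo "incomplete (summary missing)"; return; }
  for kw in "$@"; do
    grep -qi "$kw" "$f" || { echo "verification failed"; return; }
  done
  echo "PASS"
}
echo "Red phase:      $(phase_check "$s/TEST-1.1-summary.md" test fail)"
echo "Green phase:    $(phase_check "$s/IMPL-1.1-summary.md" pass)"
echo "Refactor phase: $(phase_check "$s/REFACTOR-1.1-summary.md" refactor pass)"
```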
## Coverage Metrics Calculation
```bash
# Parse coverage from test-results.json
line_coverage=$(jq '.coverage.lineCoverage' test-results.json)
branch_coverage=$(jq '.coverage.branchCoverage' test-results.json)
function_coverage=$(jq '.coverage.functionCoverage' test-results.json)
# Calculate overall score (scale keeps decimals; bare bc would truncate to an integer)
overall_score=$(echo "scale=1; ($line_coverage + $branch_coverage + $function_coverage) / 3" | bc)
```
## Error Handling
### Test Execution Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Test framework not found | No test config | Configure test framework first |
| Tests fail to run | Syntax errors | Fix code before analysis |
| Coverage not available | Missing coverage tool | Install coverage plugin |
### Cycle Verification Errors
| Error | Cause | Resolution |
|-------|-------|------------|
| Summary missing | Task not executed | Execute tasks before analysis |
| Invalid summary format | Corrupted file | Re-run task to regenerate |
| No test evidence | Tests not committed | Ensure tests are committed |
## Integration
### Phase Chain
- **Called By**: `phases/03-tdd-verify.md` (Coverage & Cycle Analysis step)
- **Calls**: Test framework commands (npm test, pytest, etc.)
- **Followed By**: Compliance report generation in `phases/03-tdd-verify.md`
---
## Post-Phase Update
After TDD Coverage Analysis completes:
- **Output Created**: `test-results.json`, `coverage-report.json`, `tdd-cycle-report.md` in `.process/`
- **Data Produced**: Coverage metrics (line/branch/function), TDD cycle verification results per feature
- **Next Action**: Return data to `phases/03-tdd-verify.md` for compliance report aggregation
- **TodoWrite**: Mark "Coverage & Cycle Analysis: completed"