mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-27 09:13:07 +08:00
feat: Enhance workflow execution and documentation processes
- Added compact protection directives to execution phases to ensure critical instructions are preserved during context compression.
- Introduced checkpoints in execution steps to verify active memory of execution protocols.
- Created new command files for team lifecycle roles:
  - `dispatch.md`: Manage task chains based on execution modes.
  - `monitor.md`: Event-driven pipeline coordination with worker callbacks.
  - `critique.md`: Multi-perspective CLI critique for structured analysis.
  - `implement.md`: Multi-backend code implementation with retry and fallback mechanisms.
  - `explore.md`: Complexity-driven codebase exploration for task planning.
  - `generate-doc.md`: Multi-CLI document generation for various document types.
- Updated SKILL.md to include compact protection patterns and phase reference documentation.
@@ -0,0 +1,142 @@

# Command: dispatch

## Purpose

Create task chains based on execution mode. Each mode maps to a predefined pipeline from SKILL.md Task Metadata Registry. Tasks are created with proper dependency chains, owner assignments, and session references.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Mode | Phase 1 requirements (spec-only, impl-only, etc.) | Yes |
| Session folder | `<session-folder>` from Phase 2 | Yes |
| Scope | User requirements description | Yes |
| Spec file | User-provided path (impl-only mode only) | Conditional |

## Phase 3: Task Chain Creation

### Mode-to-Pipeline Routing

| Mode | Tasks | Pipeline | First Task |
|------|-------|----------|------------|
| spec-only | 12 | Spec pipeline | RESEARCH-001 |
| impl-only | 4 | Impl pipeline | PLAN-001 |
| fe-only | 3 | FE pipeline | PLAN-001 |
| fullstack | 6 | Fullstack pipeline | PLAN-001 |
| full-lifecycle | 16 | Spec + Impl | RESEARCH-001 |
| full-lifecycle-fe | 18 | Spec + Fullstack | RESEARCH-001 |

---

### Spec Pipeline (12 tasks)

Used by: spec-only, full-lifecycle, full-lifecycle-fe

| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | RESEARCH-001 | analyst | (none) | Seed analysis and context gathering |
| 2 | DISCUSS-001 | discussant | RESEARCH-001 | Critique research findings |
| 3 | DRAFT-001 | writer | DISCUSS-001 | Generate Product Brief |
| 4 | DISCUSS-002 | discussant | DRAFT-001 | Critique Product Brief |
| 5 | DRAFT-002 | writer | DISCUSS-002 | Generate Requirements/PRD |
| 6 | DISCUSS-003 | discussant | DRAFT-002 | Critique Requirements/PRD |
| 7 | DRAFT-003 | writer | DISCUSS-003 | Generate Architecture Document |
| 8 | DISCUSS-004 | discussant | DRAFT-003 | Critique Architecture Document |
| 9 | DRAFT-004 | writer | DISCUSS-004 | Generate Epics |
| 10 | DISCUSS-005 | discussant | DRAFT-004 | Critique Epics |
| 11 | QUALITY-001 | reviewer | DISCUSS-005 | 5-dimension spec quality validation |
| 12 | DISCUSS-006 | discussant | QUALITY-001 | Final review discussion and sign-off |

### Impl Pipeline (4 tasks)

Used by: impl-only, full-lifecycle (PLAN-001 blockedBy DISCUSS-006)

| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | PLAN-001 | planner | (none) | Multi-angle exploration and planning |
| 2 | IMPL-001 | executor | PLAN-001 | Code implementation |
| 3 | TEST-001 | tester | IMPL-001 | Test-fix cycles |
| 4 | REVIEW-001 | reviewer | IMPL-001 | 4-dimension code review |

### FE Pipeline (3 tasks)

Used by: fe-only

| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | PLAN-001 | planner | (none) | Planning (frontend focus) |
| 2 | DEV-FE-001 | fe-developer | PLAN-001 | Frontend implementation |
| 3 | QA-FE-001 | fe-qa | DEV-FE-001 | 5-dimension frontend QA |

GC loop (max 2 rounds): QA-FE verdict=NEEDS_FIX → create DEV-FE-002 + QA-FE-002 dynamically.

### Fullstack Pipeline (6 tasks)

Used by: fullstack, full-lifecycle-fe (PLAN-001 blockedBy DISCUSS-006)

| # | Subject | Owner | BlockedBy | Description |
|---|---------|-------|-----------|-------------|
| 1 | PLAN-001 | planner | (none) | Fullstack planning |
| 2 | IMPL-001 | executor | PLAN-001 | Backend implementation |
| 3 | DEV-FE-001 | fe-developer | PLAN-001 | Frontend implementation |
| 4 | TEST-001 | tester | IMPL-001 | Backend test-fix cycles |
| 5 | QA-FE-001 | fe-qa | DEV-FE-001 | Frontend QA |
| 6 | REVIEW-001 | reviewer | TEST-001, QA-FE-001 | Full code review |

### Composite Modes

| Mode | Construction | PLAN-001 BlockedBy |
|------|-------------|-------------------|
| full-lifecycle | Spec (12) + Impl (4) | DISCUSS-006 |
| full-lifecycle-fe | Spec (12) + Fullstack (6) | DISCUSS-006 |
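The composite construction above can be sketched as a small merge step: copy the Spec pipeline, then append the Impl pipeline with PLAN-001 re-blocked on DISCUSS-006. This is a minimal illustration under stated assumptions — the `TaskSpec` shape and pipeline constants are hypothetical (the Spec pipeline is elided to three rows here), not the actual registry format.

```typescript
// Hypothetical task shape; the real registry lives in SKILL.md.
interface TaskSpec {
  subject: string;
  owner: string;
  blockedBy: string[];
}

const SPEC_PIPELINE: TaskSpec[] = [
  { subject: "RESEARCH-001", owner: "analyst", blockedBy: [] },
  // ...DRAFT/DISCUSS rounds elided for brevity...
  { subject: "DISCUSS-006", owner: "discussant", blockedBy: ["QUALITY-001"] },
];

const IMPL_PIPELINE: TaskSpec[] = [
  { subject: "PLAN-001", owner: "planner", blockedBy: [] },
  { subject: "IMPL-001", owner: "executor", blockedBy: ["PLAN-001"] },
  { subject: "TEST-001", owner: "tester", blockedBy: ["IMPL-001"] },
  { subject: "REVIEW-001", owner: "reviewer", blockedBy: ["IMPL-001"] },
];

// full-lifecycle = Spec (12) + Impl (4), with PLAN-001 re-blocked on DISCUSS-006.
function composeFullLifecycle(): TaskSpec[] {
  const impl = IMPL_PIPELINE.map(t =>
    t.subject === "PLAN-001" ? { ...t, blockedBy: ["DISCUSS-006"] } : t
  );
  return [...SPEC_PIPELINE, ...impl];
}
```

Only the first task of the appended pipeline changes; every other blockedBy edge carries over untouched.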
---

### Impl-Only Pre-check

Before creating impl-only tasks, verify specification exists:

```
Spec exists?
├─ YES → read spec path → proceed with task creation
└─ NO → error: "impl-only requires existing spec, use spec-only or full-lifecycle"
```

### Task Description Template

Every task description includes session and scope context:

```
TaskCreate({
  subject: "<TASK-ID>",
  owner: "<role>",
  description: "<task description from pipeline table>\nSession: <session-folder>\nScope: <scope>",
  blockedBy: [<dependency-list>],
  status: "pending"
})
```

### Execution Method

| Method | Behavior |
|--------|----------|
| sequential | One task active at a time; next activated after predecessor completes |
| parallel | Tasks with all deps met run concurrently (e.g., TEST-001 + REVIEW-001) |

## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Task count | Matches mode total from routing table |
| Dependencies | Every blockedBy references an existing task subject |
| Owner assignment | Each task owner matches SKILL.md Role Registry prefix |
| Session reference | Every task description contains `Session: <session-folder>` |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Unknown mode | Reject with supported mode list |
| Missing spec for impl-only | Error, suggest spec-only or full-lifecycle |
| TaskCreate fails | Log error, report to user |
| Duplicate task subject | Skip creation, log warning |

@@ -0,0 +1,180 @@

# Command: monitor

## Purpose

Event-driven pipeline coordination with Spawn-and-Stop pattern. Three wake-up sources drive pipeline advancement: worker callbacks (auto-advance), user `check` (status report), user `resume` (manual advance).

## Constants

| Constant | Value | Description |
|----------|-------|-------------|
| SPAWN_MODE | background | All workers spawned via `Task(run_in_background: true)` |
| ONE_STEP_PER_INVOCATION | true | Coordinator does one operation then STOPS |

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session file | `<session-folder>/team-session.json` | Yes |
| Task list | `TaskList()` | Yes |
| Active workers | session.active_workers[] | Yes |
| Pipeline mode | session.mode | Yes |

## Phase 3: Handler Routing

### Wake-up Source Detection

Parse `$ARGUMENTS` to determine handler:

| Priority | Condition | Handler |
|----------|-----------|---------|
| 1 | Message contains `[<role-name>]` from known worker role | handleCallback |
| 2 | Contains "check" or "status" | handleCheck |
| 3 | Contains "resume", "continue", or "next" | handleResume |
| 4 | None of the above (initial spawn after dispatch) | handleSpawnNext |

Known worker roles: analyst, writer, discussant, planner, executor, tester, reviewer, explorer, architect, fe-developer, fe-qa.
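The priority routing above can be sketched as a single ordered function. The handler names come from the table; the exact parsing (bracket match first, then word-boundary keyword checks) is an assumption about how `$ARGUMENTS` is interpreted.

```typescript
const WORKER_ROLES = [
  "analyst", "writer", "discussant", "planner", "executor", "tester",
  "reviewer", "explorer", "architect", "fe-developer", "fe-qa",
];

type Handler = "handleCallback" | "handleCheck" | "handleResume" | "handleSpawnNext";

function routeWakeup(args: string): Handler {
  // Priority 1: callback tagged with a known worker role, e.g. "[tester] done"
  if (WORKER_ROLES.some(r => args.includes(`[${r}]`))) return "handleCallback";
  // Priority 2: status request
  if (/\b(check|status)\b/i.test(args)) return "handleCheck";
  // Priority 3: manual advance
  if (/\b(resume|continue|next)\b/i.test(args)) return "handleResume";
  // Priority 4: initial spawn after dispatch
  return "handleSpawnNext";
}
```

Because the checks run in priority order, a callback like `[tester] check my status` still routes to handleCallback rather than handleCheck.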
---

### Handler: handleCallback

Worker completed a task. Verify completion, update state, auto-advance.

```
Receive callback from [<role>]
├─ Find matching active worker by role
├─ Task status = completed?
│  ├─ YES → remove from active_workers → update session
│  │  ├─ Handle checkpoints (see below)
│  │  └─ → handleSpawnNext
│  └─ NO → progress message, do not advance → STOP
└─ No matching worker found
   ├─ Scan all active workers for completed tasks
   ├─ Found completed → process each → handleSpawnNext
   └─ None completed → STOP
```

---

### Handler: handleCheck

Read-only status report. No pipeline advancement.

**Output format**:

```
[coordinator] Pipeline Status
[coordinator] Mode: <mode> | Progress: <completed>/<total> (<percent>%)

[coordinator] Execution Graph:
  Spec Phase: (if applicable)
    [<icon> RESEARCH-001] → [<icon> DISCUSS-001] → ...
  Impl Phase: (if applicable)
    [<icon> PLAN-001]
    ├─ BE: [<icon> IMPL-001] → [<icon> TEST-001] → [<icon> REVIEW-001]
    └─ FE: [<icon> DEV-FE-001] → [<icon> QA-FE-001]

  done=completed  >>>=running  o=pending  .=not created

[coordinator] Active Workers:
  > <subject> (<role>) - running <elapsed>

[coordinator] Ready to spawn: <subjects>
[coordinator] Commands: 'resume' to advance | 'check' to refresh
```

**Icon mapping**: completed=done, in_progress=>>>, pending=o, not created=.

Then STOP.

---

### Handler: handleResume

Check active worker completion, process results, advance pipeline.

```
Load active_workers from session
├─ No active workers → handleSpawnNext
└─ Has active workers → check each:
   ├─ status = completed → mark done, log
   ├─ status = in_progress → still running, log
   └─ other status → worker failure → reset to pending
After processing:
├─ Some completed → handleSpawnNext
├─ All still running → report status → STOP
└─ All failed → handleSpawnNext (retry)
```

---

### Handler: handleSpawnNext

Find all ready tasks, spawn workers in background, update session, STOP.

```
Collect task states from TaskList()
├─ completedSubjects: status = completed
├─ inProgressSubjects: status = in_progress
└─ readySubjects: pending + all blockedBy in completedSubjects

Ready tasks found?
├─ NONE + work in progress → report waiting → STOP
├─ NONE + nothing in progress → PIPELINE_COMPLETE → Phase 5
└─ HAS ready tasks → for each:
   ├─ TaskUpdate → in_progress
   ├─ team_msg log → task_unblocked
   ├─ Spawn worker (see tool call below)
   └─ Add to session.active_workers
Update session file → output summary → STOP
```
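The readySubjects computation is the core of the handler and can be written as a pure filter. This is a minimal sketch; the `Task` shape here is an assumption, not the actual `TaskList()` payload.

```typescript
interface Task {
  subject: string;
  status: "pending" | "in_progress" | "completed";
  blockedBy: string[];
}

function readySubjects(tasks: Task[]): string[] {
  const completed = new Set(
    tasks.filter(t => t.status === "completed").map(t => t.subject)
  );
  // A task is ready when it is pending and every blocker has completed.
  return tasks
    .filter(t => t.status === "pending")
    .filter(t => t.blockedBy.every(dep => completed.has(dep)))
    .map(t => t.subject);
}
```

Note that an in_progress blocker keeps its dependents out of the ready set, which is what makes "NONE + work in progress" a wait state rather than a stall.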
**Spawn worker tool call** (one per ready task):

```bash
Task({
  subagent_type: "general-purpose",
  description: "Spawn <role> worker for <subject>",
  team_name: <team-name>,
  name: "<role>",
  run_in_background: true,
  prompt: "<worker prompt from SKILL.md Coordinator Spawn Template>"
})
```

---

### Checkpoints

| Completed Task | Mode Condition | Action |
|---------------|----------------|--------|
| DISCUSS-006 | full-lifecycle or full-lifecycle-fe | Output "SPEC PHASE COMPLETE" checkpoint, pause for user review before impl |

---

### Worker Failure Handling

When a worker has unexpected status (not completed, not in_progress):

1. Reset task → pending via TaskUpdate
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume

## Phase 4: Validation

| Check | Criteria |
|-------|----------|
| Session state consistent | active_workers matches TaskList in_progress tasks |
| No orphaned tasks | Every in_progress task has an active_worker entry |
| Pipeline completeness | All expected tasks exist per mode |
| Completion detection | readySubjects=0 + inProgressSubjects=0 → PIPELINE_COMPLETE |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Session file not found | Error, suggest re-initialization |
| Worker callback from unknown role | Log info, scan for other completions |
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |

@@ -0,0 +1,136 @@

# Command: critique

## Purpose

Multi-perspective CLI critique: launch parallel analyses from assigned perspectives, collect structured ratings, detect divergences, and synthesize consensus.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Round config | DISCUSS-NNN → look up round in role.md table | Yes |
| Artifact | `<session-folder>/<artifact-path>` from round config | Yes |
| Perspectives | Round config perspectives column | Yes |
| Discovery context | `<session-folder>/spec/discovery-context.json` | For coverage perspective |
| Prior discussions | `<session-folder>/discussions/` | No |

## Phase 3: Multi-Perspective Critique

### Perspective Routing

| Perspective | CLI Tool | Role | Focus Areas |
|-------------|----------|------|-------------|
| Product | gemini | Product Manager | Market fit, user value, business viability, competitive differentiation |
| Technical | codex | Tech Lead | Feasibility, tech debt, performance, security, maintainability |
| Quality | claude | QA Lead | Completeness, testability, consistency, standards compliance |
| Risk | gemini | Risk Analyst | Risk identification, dependencies, failure modes, mitigation |
| Coverage | gemini | Requirements Analyst | Requirement completeness vs discovery-context, gap detection, scope creep |

### Execution Flow

```
For each perspective in round config:
├─ Build prompt with perspective focus + artifact content
├─ Launch CLI analysis (background)
│  Bash(command="ccw cli -p '<prompt>' --tool <cli-tool> --mode analysis", run_in_background=true)
└─ Collect result via hook callback
```

### CLI Call Template

```bash
Bash(command="ccw cli -p 'PURPOSE: Analyze from <role> perspective for <round-id>
TASK: <focus-areas-from-table>
MODE: analysis
CONTEXT: Artifact content below
EXPECTED: JSON with strengths[], weaknesses[], suggestions[], rating (1-5)
CONSTRAINTS: Output valid JSON only

Artifact:
<artifact-content>' --tool <cli-tool> --mode analysis", run_in_background=true)
```

### Extra Fields by Perspective

| Perspective | Additional Output Fields |
|-------------|------------------------|
| Risk | `risk_level`: low / medium / high / critical |
| Coverage | `covered_requirements[]`, `partial_requirements[]`, `missing_requirements[]`, `scope_creep[]` |

---

### Divergence Detection

After all perspectives return, scan results for critical signals:

| Signal | Condition | Severity |
|--------|-----------|----------|
| Coverage gap | `missing_requirements` non-empty | High |
| High risk | `risk_level` is high or critical | High |
| Low rating | Any perspective rating <= 2 | Medium |
| Rating spread | Max rating - min rating >= 3 | Medium |

### Consensus Determination

| Condition | Verdict |
|-----------|---------|
| No high-severity divergences AND average rating >= 3.0 | consensus_reached |
| Any high-severity divergence OR average rating < 3.0 | consensus_blocked |
```
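The two tables above compose into one verdict function: scan for high-severity signals, then combine with the average rating. A minimal sketch, assuming a simplified per-perspective result shape (the real CLI output carries more fields):

```typescript
interface PerspectiveResult {
  rating: number; // 1-5
  risk_level?: "low" | "medium" | "high" | "critical";
  missing_requirements?: string[];
}

// High-severity divergence: any coverage gap, or any high/critical risk.
function hasHighSeverityDivergence(results: PerspectiveResult[]): boolean {
  const coverageGap = results.some(r => (r.missing_requirements ?? []).length > 0);
  const highRisk = results.some(
    r => r.risk_level === "high" || r.risk_level === "critical"
  );
  return coverageGap || highRisk;
}

function consensusVerdict(results: PerspectiveResult[]): string {
  const avg = results.reduce((s, r) => s + r.rating, 0) / results.length;
  return !hasHighSeverityDivergence(results) && avg >= 3.0
    ? "consensus_reached"
    : "consensus_blocked";
}
```

A single high-risk perspective blocks consensus even when the average rating is well above 3.0, which matches the OR condition in the table.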
### Synthesis Process

```
Collect all perspective results
├─ Extract convergent themes (agreed by 2+ perspectives)
├─ Extract divergent views (conflicting assessments)
├─ Check coverage gaps from coverage result
├─ Compile action items from all suggestions
└─ Determine consensus per table above
```

## Phase 4: Validation

### Discussion Record

Write to `<session-folder>/discussions/<round-id>-discussion.md`:

```
# Discussion Record: <round-id>

**Artifact**: <artifact-path>
**Perspectives**: <list>
**Consensus**: reached / blocked

## Convergent Themes
- <theme>

## Divergent Views
- **<topic>** (<severity>): <description>

## Action Items
1. <item>

## Ratings
| Perspective | Rating |
|-------------|--------|
| <name> | <n>/5 |

**Average**: <avg>/5
```

### Result Routing

| Outcome | Message Type | Content |
|---------|-------------|---------|
| Consensus reached | discussion_ready | Action items, record path, average rating |
| Consensus blocked | discussion_blocked | Divergence points, severity, record path |
| Artifact not found | error | Missing artifact path |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Artifact not found | Report error to coordinator |
| Single CLI perspective fails | Fallback to direct Claude analysis for that perspective |
| All CLI analyses fail | Generate basic discussion from direct artifact reading |
| All perspectives diverge | Report as discussion_blocked with all divergence points |

@@ -0,0 +1,166 @@

# Command: implement

## Purpose

Multi-backend code implementation: route tasks to appropriate execution backend (direct edit, subagent, or CLI), build focused prompts, execute with retry and fallback.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Plan | `<session-folder>/plan/plan.json` | Yes |
| Task files | `<session-folder>/plan/.task/TASK-*.json` | Yes |
| Backend | Task metadata / plan default / auto-select | Yes |
| Working directory | task.metadata.working_dir or project root | No |
| Wisdom | `<session-folder>/wisdom/` | No |

## Phase 3: Implementation

### Backend Selection

Priority order (first match wins):

| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | `task.metadata.executor` field |
| 2 | Plan default | "Execution Backend:" line in plan.json |
| 3 | Auto-select | See auto-select table below |

**Auto-select routing**:

| Condition | Backend |
|-----------|---------|
| Description < 200 chars AND no refactor/architecture keywords AND single target file | agent (direct edit) |
| Description < 200 chars AND simple scope | agent (subagent) |
| Complex scope OR architecture keywords | codex |
| Analysis-heavy OR multi-module integration | gemini |
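The auto-select rows can be approximated as an ordered decision function. A hedged sketch: the keyword regexes and the "simple scope" test are assumptions standing in for whatever heuristics the command actually applies.

```typescript
interface ImplTask {
  description: string;
  targetFiles: string[];
}

function autoSelectBackend(task: ImplTask): string {
  const d = task.description.toLowerCase();
  const arch = /\b(refactor|architecture|restructure)\b/.test(d);
  const analysis = /\b(analy[sz]e|multi-module|integration)\b/.test(d);
  // "Simple scope" approximated as: short description, no heavyweight keywords.
  const simple = task.description.length < 200 && !arch && !analysis;

  if (simple && task.targetFiles.length === 1) return "agent (direct edit)";
  if (simple) return "agent (subagent)";
  if (arch) return "codex";   // complex scope or architecture keywords
  if (analysis) return "gemini"; // analysis-heavy or multi-module integration
  return "codex";             // long/complex descriptions default to codex
}
```

First match wins, mirroring the backend-selection priority rule above.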
### Execution Paths

```
Backend selected
├─ agent (direct edit)
│  └─ Read target file → Edit directly → no subagent overhead
├─ agent (subagent)
│  └─ Task({ subagent_type: "code-developer", run_in_background: false })
├─ codex (CLI)
│  └─ Bash(command="ccw cli ... --tool codex --mode write", run_in_background=true)
└─ gemini (CLI)
   └─ Bash(command="ccw cli ... --tool gemini --mode write", run_in_background=true)
```

### Path 1: Direct Edit (agent, simple task)

```bash
Read(file_path="<target-file>")
Edit(file_path="<target-file>", old_string="<old>", new_string="<new>")
```

### Path 2: Subagent (agent, moderate task)

```
Task({
  subagent_type: "code-developer",
  run_in_background: false,
  description: "Implement <task-id>",
  prompt: "<execution-prompt>"
})
```

### Path 3: CLI Backend (codex or gemini)

```bash
Bash(command="ccw cli -p '<execution-prompt>' --tool <codex|gemini> --mode write --cd <working-dir>", run_in_background=true)
```

### Execution Prompt Template

All backends receive the same structured prompt:

```
# Implementation Task: <task-id>

## Task Description
<task-description>

## Acceptance Criteria
1. <criterion>

## Context from Plan
<architecture-section>
<technical-stack-section>
<task-context-section>

## Files to Modify
<target-files or "Auto-detect based on task">

## Constraints
- Follow existing code style and patterns
- Preserve backward compatibility
- Add appropriate error handling
- Include inline comments for complex logic
```

### Batch Execution

When multiple IMPL tasks exist, execute in dependency order:

```
Topological sort by task.depends_on
├─ Batch 1: Tasks with no dependencies → execute
├─ Batch 2: Tasks depending on batch 1 → execute
└─ Batch N: Continue until all tasks complete

Progress update per batch (when > 1 batch):
→ team_msg: "Processing batch <N>/<total>: <task-id>"
```
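The batching above is Kahn-style layering: each batch is the set of tasks whose dependencies are already satisfied. A minimal sketch, assuming a simplified task-file shape:

```typescript
interface PlanTask {
  id: string;
  depends_on: string[];
}

function batchByDependency(tasks: PlanTask[]): string[][] {
  const batches: string[][] = [];
  const done = new Set<string>();
  let remaining = [...tasks];

  while (remaining.length > 0) {
    // Everything whose dependencies are all satisfied forms the next batch.
    const batch = remaining.filter(t => t.depends_on.every(d => done.has(d)));
    if (batch.length === 0) throw new Error("dependency cycle detected");
    batch.forEach(t => done.add(t.id));
    remaining = remaining.filter(t => !done.has(t.id));
    batches.push(batch.map(t => t.id));
  }
  return batches;
}
```

An empty batch with tasks still remaining means a cycle, which surfaces as an error instead of an infinite loop.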
### Retry and Fallback

**Retry** (max 3 attempts per task):

```
Attempt 1 → failure
├─ team_msg: "Retry 1/3 after error: <message>"
└─ Attempt 2 → failure
   ├─ team_msg: "Retry 2/3 after error: <message>"
   └─ Attempt 3 → failure → fallback
```

**Fallback** (when primary backend fails after retries):

| Primary Backend | Fallback |
|----------------|----------|
| codex | agent (subagent) |
| gemini | agent (subagent) |
| agent (subagent) | Report failure to coordinator |
| agent (direct edit) | agent (subagent) |
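The retry budget and the fallback table combine into one control loop. A sketch under stated assumptions: `execute` is a stand-in for the real backend invocation, and giving the fallback backend its own full retry budget is an interpretation, not confirmed by the source.

```typescript
// null = no further fallback; report failure to coordinator.
const FALLBACK: Record<string, string | null> = {
  "codex": "agent (subagent)",
  "gemini": "agent (subagent)",
  "agent (direct edit)": "agent (subagent)",
  "agent (subagent)": null,
};

function runWithRetry(
  backend: string,
  execute: (backend: string) => boolean,
  maxAttempts = 3
): string {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    if (execute(backend)) return `ok via ${backend}`;
    // team_msg: "Retry <attempt>/3 after error: <message>" would go here.
  }
  const fallback = FALLBACK[backend];
  if (fallback == null) return "failed: report to coordinator";
  return runWithRetry(fallback, execute, maxAttempts);
}
```

Since every CLI backend falls back to the subagent and the subagent has no fallback, the chain terminates after at most two backends.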
## Phase 4: Validation

### Self-Validation Steps

| Step | Method | Pass Criteria |
|------|--------|--------------|
| Syntax check | `Bash(command="tsc --noEmit", timeout=30000)` | Exit code 0 |
| Acceptance match | Check criteria keywords vs modified files | All criteria addressed |
| Test detection | Search for .test.ts/.spec.ts matching modified files | Tests identified |
| File changes | `Bash(command="git diff --name-only HEAD")` | At least 1 file modified |

### Result Routing

| Outcome | Message Type | Content |
|---------|-------------|---------|
| All tasks pass validation | impl_complete | Task ID, files modified, backend used |
| Batch progress | impl_progress | Batch index, total batches, current task |
| Validation failure after retries | error | Task ID, error details, retry count |

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Syntax errors after implementation | Retry with error context (max 3) |
| Backend unavailable | Fallback to agent |
| Missing dependencies | Request from coordinator |
| All retries + fallback exhausted | Report failure with full error log |

@@ -0,0 +1,154 @@

# Command: explore

## Purpose

Complexity-driven codebase exploration: assess task complexity, select exploration angles by category, execute parallel exploration agents, and produce structured exploration results for plan generation.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | PLAN-* task subject/description | Yes |
| Session folder | Task description `Session:` field | Yes |
| Spec context | `<session-folder>/spec/` (if exists) | No |
| Plan directory | `<session-folder>/plan/` | Yes (create if missing) |
| Project tech | `.workflow/project-tech.json` | No |

## Phase 3: Exploration

### Complexity Assessment

Score the task description against keyword indicators:

| Indicator | Keywords | Score |
|-----------|----------|-------|
| Structural change | refactor, architect, restructure, modular | +2 |
| Multi-scope | multiple, across, cross-cutting | +2 |
| Integration | integrate, api, database | +1 |
| Non-functional | security, performance, auth | +1 |

**Complexity routing**:

| Score | Level | Strategy | Angle Count |
|-------|-------|----------|-------------|
| 0-1 | Low | ACE semantic search only | 1 |
| 2-3 | Medium | cli-explore-agent per angle | 2-3 |
| 4+ | High | cli-explore-agent per angle | 3-5 |
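The scoring and routing tables can be sketched together. The word lists come straight from the indicator table; treating each indicator row as contributing at most once, and plain substring matching, are assumptions about the scoring details.

```typescript
const INDICATORS: Array<{ keywords: string[]; score: number }> = [
  { keywords: ["refactor", "architect", "restructure", "modular"], score: 2 },
  { keywords: ["multiple", "across", "cross-cutting"], score: 2 },
  { keywords: ["integrate", "api", "database"], score: 1 },
  { keywords: ["security", "performance", "auth"], score: 1 },
];

function complexity(description: string): "Low" | "Medium" | "High" {
  const d = description.toLowerCase();
  // Each indicator row contributes at most once, on any keyword hit.
  const score = INDICATORS.reduce(
    (sum, ind) => sum + (ind.keywords.some(k => d.includes(k)) ? ind.score : 0),
    0
  );
  return score <= 1 ? "Low" : score <= 3 ? "Medium" : "High";
}
```

A refactor touching auth scores 2 + 1 = 3 (Medium), while a cross-cutting restructure with a performance concern reaches 5 (High).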
|
||||
|
||||
### Angle Presets
|
||||
|
||||
Select preset by dominant keyword match, then take first N angles per complexity:
|
||||
|
||||
| Preset | Trigger Keywords | Angles (priority order) |
|
||||
|--------|-----------------|------------------------|
|
||||
| architecture | refactor, architect, restructure, modular | architecture, dependencies, modularity, integration-points |
|
||||
| security | security, auth, permission, access | security, auth-patterns, dataflow, validation |
|
||||
| performance | performance, slow, optimize, cache | performance, bottlenecks, caching, data-access |
|
||||
| bugfix | fix, bug, error, issue, broken | error-handling, dataflow, state-management, edge-cases |
|
||||
| feature | (default) | patterns, integration-points, testing, dependencies |
|
||||
|
||||
### Low Complexity: Direct Search
|
||||
|
||||
```bash
|
||||
mcp__ace-tool__search_context(project_root_path="<project-root>", query="<task-description>")
|
||||
```
|
||||
|
||||
Transform results into exploration JSON and write to `<plan-dir>/exploration-<angle>.json`.
|
||||
|
||||
**ACE failure fallback**:
|
||||
|
||||
```bash
|
||||
Bash(command="rg -l '<keywords>' --type ts", timeout=30000)
|
||||
```
|
||||
|
||||
### Medium/High Complexity: Parallel Exploration
|
||||
|
||||
For each selected angle, launch an exploration agent:
|
||||
|
||||
```
|
||||
Task({
|
||||
subagent_type: "cli-explore-agent",
|
||||
run_in_background: false,
|
||||
description: "Explore: <angle>",
|
||||
prompt: "## Task Objective
|
||||
Execute <angle> exploration for task planning context.
|
||||
|
||||
## Output Location
|
||||
Output File: <plan-dir>/exploration-<angle>.json
|
||||
|
||||
## Assigned Context
|
||||
- Exploration Angle: <angle>
|
||||
- Task Description: <task-description>
|
||||
- Spec Context: <available|not available>
|
||||
|
||||
## Mandatory First Steps
|
||||
1. rg -l '<relevant-keyword>' --type ts
|
||||
2. cat ~/.ccw/workflows/cli-templates/schemas/explore-json-schema.json
|
||||
3. Read .workflow/project-tech.json (if exists)
|
||||
|
||||
## Exploration Focus
|
||||
<angle-focus-from-table-below>
|
||||
|
||||
## Output
|
||||
Write JSON to: <plan-dir>/exploration-<angle>.json
|
||||
Each file in relevant_files MUST have: rationale (>10 chars), role, discovery_source, key_symbols"
|
||||
})
|
||||
```
|
||||
|
||||
### Angle Focus Guide
|
||||
|
||||
| Angle | Focus Points |
|
||||
|-------|-------------|
|
||||
| architecture | Layer boundaries, design patterns, component responsibilities, ADRs |
|
||||
| dependencies | Import chains, external libraries, circular dependencies, shared utilities |
|
||||
| modularity | Module interfaces, separation of concerns, extraction opportunities |
|
||||
| integration-points | API endpoints, data flow between modules, event systems, service integrations |
|
||||
| security | Auth/authz logic, input validation, sensitive data handling, middleware |
|
||||
| auth-patterns | Auth flows (login/refresh), session management, token validation, permissions |
|
||||
| dataflow | Data transformations, state propagation, validation points, mutation paths |
|
||||
| performance | Bottlenecks, N+1 queries, blocking operations, algorithm complexity |
|
||||
| error-handling | Try-catch blocks, error propagation, recovery strategies, logging |
|
||||
| patterns | Code conventions, design patterns, naming conventions, best practices |
|
||||
| testing | Test files, coverage gaps, test patterns (unit/integration/e2e), mocking |
|
||||
|
||||
### Explorations Manifest
|
||||
|
||||
After all explorations complete, write manifest to `<plan-dir>/explorations-manifest.json`:
|
||||
|
||||
```
|
||||
{
|
||||
"task_description": "<description>",
|
||||
"complexity": "<Low|Medium|High>",
|
||||
"exploration_count": <N>,
|
||||
"explorations": [
|
||||
{ "angle": "<angle>", "file": "exploration-<angle>.json" }
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Phase 4: Validation
|
||||
|
||||
### Output Files
|
||||
|
||||
```
|
||||
<session-folder>/plan/
|
||||
├─ exploration-<angle>.json (per angle)
|
||||
└─ explorations-manifest.json (summary)
|
||||
```
|
||||
|
||||
### Success Criteria
|
||||
|
||||
| Check | Criteria | Required |
|
||||
|-------|----------|----------|
|
||||
| At least 1 exploration | Non-empty exploration file exists | Yes |
|
||||
| Manifest written | explorations-manifest.json exists | Yes |
|
||||
| File roles assigned | Every relevant_file has role + rationale | Yes |
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Scenario | Resolution |
|
||||
|----------|------------|
|
||||
| Single exploration agent fails | Skip angle, remove from manifest, continue |
|
||||
| All explorations fail | Proceed to plan generation with task description only |
|
||||
| ACE search fails (Low) | Fallback to ripgrep keyword search |
|
||||
| Schema file not found | Use inline schema from Output section |
|
||||
@@ -0,0 +1,187 @@
|
||||
# Command: generate-doc
|
||||
|
||||
## Purpose
|
||||
|
||||
Multi-CLI document generation for 4 document types. Each uses parallel or staged CLI analysis, then synthesizes into templated documents.
|
||||
|
||||
## Phase 2: Context Loading
| Input | Source | Required |
|-------|--------|----------|
| Document standards | `../../specs/document-standards.md` | Yes |
| Template | From routing table below | Yes |
| Spec config | `<session-folder>/spec/spec-config.json` | Yes |
| Discovery context | `<session-folder>/spec/discovery-context.json` | Yes |
| Discussion feedback | `<session-folder>/discussions/<discuss-file>` | If exists |
| Session folder | Task description `Session:` field | Yes |
### Document Type Routing

| Doc Type | Task | Template | Discussion Input | Output |
|----------|------|----------|------------------|--------|
| product-brief | DRAFT-001 | templates/product-brief.md | discuss-001-scope.md | spec/product-brief.md |
| requirements | DRAFT-002 | templates/requirements-prd.md | discuss-002-brief.md | spec/requirements/_index.md |
| architecture | DRAFT-003 | templates/architecture-doc.md | discuss-003-requirements.md | spec/architecture/_index.md |
| epics | DRAFT-004 | templates/epics-template.md | discuss-004-architecture.md | spec/epics/_index.md |

### Progressive Dependencies

Each doc type requires all prior docs: discovery-context → product-brief → requirements/_index → architecture/_index.
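A draft task can gate on these prerequisites before spending any CLI calls; a sketch of that check (the `PREREQS` mapping mirrors the routing table above, the helper name is illustrative):

```python
from pathlib import Path

# Docs required before each draft task may run (mirrors the routing table).
PREREQS = {
    "DRAFT-001": ["spec/discovery-context.json"],
    "DRAFT-002": ["spec/discovery-context.json", "spec/product-brief.md"],
    "DRAFT-003": ["spec/discovery-context.json", "spec/product-brief.md",
                  "spec/requirements/_index.md"],
    "DRAFT-004": ["spec/discovery-context.json", "spec/product-brief.md",
                  "spec/requirements/_index.md", "spec/architecture/_index.md"],
}

def missing_prereqs(session_folder, task):
    """Return prerequisite docs that do not exist yet for the given task."""
    root = Path(session_folder)
    return [p for p in PREREQS[task] if not (root / p).exists()]
```

If the returned list is non-empty, the coordinator is notified instead of launching generation (see Error Handling below).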
## Phase 3: Document Generation

### Shared Context Block

Built from spec-config and discovery-context for all CLI prompts:

```
SEED: <topic>
PROBLEM: <problem_statement>
TARGET USERS: <target_users>
DOMAIN: <domain>
CONSTRAINTS: <constraints>
FOCUS AREAS: <focus_areas>
CODEBASE CONTEXT: <existing_patterns, tech_stack> (if discovery-context exists)
```
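Rendering this block is plain string assembly; a sketch assuming spec-config and discovery-context expose keys matching the placeholders above (the exact key names are assumptions):

```python
def build_shared_context(spec_config, discovery=None):
    """Render the shared context block injected into every CLI prompt."""
    lines = [
        f"SEED: {spec_config['topic']}",
        f"PROBLEM: {spec_config['problem_statement']}",
        f"TARGET USERS: {spec_config['target_users']}",
        f"DOMAIN: {spec_config['domain']}",
        f"CONSTRAINTS: {spec_config['constraints']}",
        f"FOCUS AREAS: {spec_config['focus_areas']}",
    ]
    if discovery:  # codebase context is optional
        lines.append(
            f"CODEBASE CONTEXT: {discovery['existing_patterns']}, {discovery['tech_stack']}"
        )
    return "\n".join(lines)
```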
---

### DRAFT-001: Product Brief

**Strategy**: 3-way parallel CLI analysis, then synthesize.

| Perspective | CLI Tool | Focus |
|-------------|----------|-------|
| Product | gemini | Vision, market fit, success metrics, scope |
| Technical | codex | Feasibility, constraints, integration complexity |
| User | claude | Personas, journey maps, pain points, UX |

**CLI call template** (one per perspective, all `run_in_background: true`):

```bash
Bash(command="ccw cli -p \"PURPOSE: <perspective> analysis for specification.\n<shared-context>\nTASK: <perspective-specific tasks>\nMODE: analysis\nEXPECTED: <structured output>\nCONSTRAINTS: <perspective scope>\" --tool <tool> --mode analysis", run_in_background=true)
```
**Synthesis flow** (after all 3 return):

```
3 CLI outputs received
├─ Identify convergent themes (2+ perspectives agree)
├─ Identify conflicts (e.g., product wants X, technical says infeasible)
├─ Extract unique insights per perspective
├─ Integrate discussion feedback (if exists)
└─ Fill template → Write to spec/product-brief.md
```
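The convergent-theme step above can be sketched as a small vote count across perspectives, assuming each CLI output has already been reduced to a list of theme strings (that extraction step is not shown):

```python
def convergent_themes(outputs, min_agree=2):
    """Find themes mentioned by at least `min_agree` perspectives.

    `outputs` maps perspective name -> list of theme strings extracted
    from that CLI's response.
    """
    support = {}
    for perspective, themes in outputs.items():
        for theme in set(themes):  # dedupe within one perspective
            support.setdefault(theme.lower(), set()).add(perspective)
    return {t: sorted(ps) for t, ps in support.items() if len(ps) >= min_agree}
```

Themes supported by only one perspective fall through to the "unique insights" step instead.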
**Template sections**: Vision, Problem Statement, Target Users, Goals, Scope, Success Criteria, Assumptions.

---

### DRAFT-002: Requirements/PRD

**Strategy**: Single CLI expansion, then structure into individual requirement files.

| Step | Tool | Action |
|------|------|--------|
| 1 | gemini | Generate functional (REQ-NNN) and non-functional (NFR-type-NNN) requirements |
| 2 | (local) | Integrate discussion feedback |
| 3 | (local) | Write individual files + _index.md |

**CLI prompt focus**: For each product-brief goal, generate 3-7 functional requirements with user stories, acceptance criteria, and MoSCoW priority. Generate NFR categories: performance, security, scalability, usability.
**Output structure**:

```
spec/requirements/
├─ _index.md (summary table + MoSCoW breakdown)
├─ REQ-001-<slug>.md (individual functional requirement)
├─ REQ-002-<slug>.md
├─ NFR-perf-001-<slug>.md (non-functional)
└─ NFR-sec-001-<slug>.md
```

Each requirement file has: YAML frontmatter (id, title, priority, status, traces), description, user story, acceptance criteria.
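Writing one requirement file with its frontmatter and sections can be sketched as follows (the slug derivation and exact frontmatter serialization are illustrative, not the skill's canonical format):

```python
from pathlib import Path

def write_requirement(req_dir, req):
    """Write one REQ-NNN-<slug>.md file with YAML frontmatter."""
    slug = req["title"].lower().replace(" ", "-")
    frontmatter = "\n".join([
        "---",
        f"id: {req['id']}",
        f"title: {req['title']}",
        f"priority: {req['priority']}",  # MoSCoW: must/should/could/wont
        "status: draft",
        f"traces: {req.get('traces', [])}",
        "---",
    ])
    body = (
        f"\n## Description\n{req['description']}\n"
        f"\n## User Story\n{req['user_story']}\n"
        f"\n## Acceptance Criteria\n"
        + "\n".join(f"- {c}" for c in req["acceptance_criteria"])
    )
    path = Path(req_dir) / f"{req['id']}-{slug}.md"
    path.write_text(frontmatter + body)
    return path
```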
---

### DRAFT-003: Architecture

**Strategy**: 2-stage CLI (design + critical review).

| Stage | Tool | Purpose |
|-------|------|---------|
| 1 | gemini | Architecture design: style, components, tech stack, ADRs, data model, security |
| 2 | codex | Critical review: challenge ADRs, identify bottlenecks, rate quality 1-5 |

Stage 2 runs after Stage 1 completes (sequential dependency).

**After both complete**:
1. Integrate discussion feedback
2. Map codebase integration points (from discovery-context.relevant_files)
3. Write individual ADR files + _index.md
**Output structure**:

```
spec/architecture/
├─ _index.md (overview, component diagram, tech stack, data model, API, security)
├─ ADR-001-<slug>.md (individual decision record)
└─ ADR-002-<slug>.md
```

Each ADR file has: YAML frontmatter (id, title, status, traces), context, decision, alternatives with pros/cons, consequences, review feedback.
---

### DRAFT-004: Epics & Stories

**Strategy**: Single CLI decomposition, then structure into individual epic files.

| Step | Tool | Action |
|------|------|--------|
| 1 | gemini | Decompose requirements into 3-7 Epics with Stories, dependency map, MVP subset |
| 2 | (local) | Integrate discussion feedback |
| 3 | (local) | Write individual EPIC files + _index.md |

**CLI prompt focus**: Group requirements by domain, generate EPIC-NNN with STORY-EPIC-NNN children, define MVP subset, create Mermaid dependency diagram, recommend execution order.
**Output structure**:

```
spec/epics/
├─ _index.md (overview table, dependency map, execution order, MVP scope)
├─ EPIC-001-<slug>.md (individual epic with stories)
└─ EPIC-002-<slug>.md
```

Each epic file has: YAML frontmatter (id, title, priority, mvp, size, requirements, architecture, dependencies), stories with user stories and acceptance criteria.

All generated documents include YAML frontmatter: session_id, phase, document_type, status=draft, generated_at, version, dependencies.
## Phase 4: Validation

| Check | What to Verify |
|-------|----------------|
| has_frontmatter | Document starts with valid YAML frontmatter |
| sections_complete | All template sections present in output |
| cross_references | session_id matches spec-config |
| discussion_integrated | Feedback reflected (if feedback exists) |
| files_written | All expected files exist (individual + _index.md) |
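The first three checks lend themselves to a small text-level validator; a sketch (matching sections by markdown headings is an assumption about the template format):

```python
import re

def validate_document(text, required_sections, session_id):
    """Run the text-level Phase 4 checks against a generated document."""
    issues = []
    if not text.startswith("---\n"):
        issues.append("has_frontmatter: missing YAML frontmatter")
    if f"session_id: {session_id}" not in text:
        issues.append("cross_references: session_id mismatch")
    for section in required_sections:
        # look for a markdown heading line naming the section
        if not re.search(rf"^#+\s+{re.escape(section)}\s*$", text, re.M):
            issues.append(f"sections_complete: '{section}' missing")
    return issues
```

An empty list routes to `draft_ready`; a non-empty list still routes to `draft_ready` but with the issues attached as warnings.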
### Result Routing

| Outcome | Message Type | Content |
|---------|--------------|---------|
| All checks pass | draft_ready | Doc type, output path, summary |
| Validation issues | draft_ready (with warnings) | Doc type, output path, issues list |
| Critical failure | error | Missing template, CLI failure |
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Prior doc not found | Notify coordinator, request prerequisite task completion |
| Template not found | Error, report missing template path |
| CLI tool fails | Retry with fallback tool (gemini → codex → claude) |
| Discussion contradicts prior docs | Note conflict in document, flag for next discussion round |
| Partial CLI output | Use available data, note gaps in document |
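The fallback chain can be sketched as a simple ordered retry, assuming a `run_cli` callable that raises on failure (the real `ccw cli` invocation is not modeled here):

```python
def run_with_fallback(prompt, tools=("gemini", "codex", "claude"), run_cli=None):
    """Try each CLI tool in order; return (tool, output) from the first success.

    `run_cli(tool, prompt)` is expected to raise on failure.
    """
    errors = {}
    for tool in tools:
        try:
            return tool, run_cli(tool, prompt)
        except Exception as exc:  # tool failed, fall through to the next one
            errors[tool] = str(exc)
    raise RuntimeError(f"all CLI tools failed: {errors}")
```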
@@ -25,6 +25,21 @@ Unified lightweight planning and execution skill. Routes to lite-plan (planning
└───────────┘ └───────────┘
```

## Compact Protection (Phase Persistence)

Multi-phase execution (lite-plan → lite-execute) spans long conversations that can trigger context compression.

**Critical Rule**: Phase 2 (`phases/02-lite-execute.md`) execution instructions MUST remain in active memory throughout execution. If compact has compressed Phase 2 content into a summary, **immediately re-read the full file before continuing any execution step**:

```javascript
// Checkpoint: verify Phase 2 instructions are in active memory
// If execution protocol details are unclear or only a summary remains:
Read("phases/02-lite-execute.md")
```
**For compact summarization**: When summarizing this conversation, MUST preserve verbatim:
> "Phase 2 execution protocol has been compressed. Before continuing execution, re-read phases/02-lite-execute.md for full instructions."

## Mode Detection & Routing

```javascript
@@ -171,7 +186,9 @@ Plan mode and execute mode are triggered by skill name routing (see Mode Detecti

## Phase Reference Documents

| Phase | Document | Purpose |
|-------|----------|---------|
| 1 | [phases/01-lite-plan.md](phases/01-lite-plan.md) | Complete planning pipeline: exploration, clarification, planning, confirmation, handoff |
| 2 | [phases/02-lite-execute.md](phases/02-lite-execute.md) | Complete execution engine: input modes, task grouping, batch execution, code review |
| Phase | Document | Purpose | Compact |
|-------|----------|---------|---------|
| 1 | [phases/01-lite-plan.md](phases/01-lite-plan.md) | Complete planning pipeline: exploration, clarification, planning, confirmation, handoff | Compressible after Phase 1 completes |
| 2 | [phases/02-lite-execute.md](phases/02-lite-execute.md) | Complete execution engine: input modes, task grouping, batch execution, code review | **⚠️ No compression during execution; re-read after any compact** |

**Phase 2 Compact Rule**: Phase 2 is the execution engine and contains the full execution protocol for Steps 1-6. If compact occurs and only a summary of Phase 2 remains, **immediately `Read("phases/02-lite-execute.md")` to reload before continuing execution**. Do not execute any Step from the summary alone.
@@ -714,55 +714,14 @@ executionContext = {
}
```

**Step 5.2: Serialize & Agent Handoff**

> **Why agent handoff**: Phase 1 history consumes significant context. Direct `Read("phases/02-lite-execute.md")` in the same context risks compact compressing Phase 2 instructions mid-execution. Spawning a fresh agent gives Phase 2 a clean context window.

**Step 5.2: Handoff**

```javascript
// Pre-populate _loadedTasks so serialized context is self-contained
executionContext.planObject._loadedTasks = (executionContext.planObject.task_ids || []).map(id =>
  JSON.parse(Read(`${sessionFolder}/.task/${id}.json`))
)

// Save executionContext to file for agent handoff
Write(`${sessionFolder}/execution-context.json`, JSON.stringify(executionContext, null, 2))

// Resolve absolute path to Phase 2 instructions
const phaseFile = Bash(`cd "${Bash('pwd').trim()}/.claude/skills/workflow-lite-plan/phases" && pwd`).trim()
  + '/02-lite-execute.md'

// Agent handoff: fresh context prevents compact from losing Phase 2 instructions
Task(
  subagent_type="universal-executor",
  run_in_background=false,
  description=`Execute: ${taskSlug}`,
  prompt=`
Execute implementation plan following lite-execute protocol.
## Phase Instructions (MUST read first)
Read and follow: ${phaseFile}

## Execution Context (Mode 1: In-Memory Plan)
Read and parse as JSON: ${sessionFolder}/execution-context.json
This is the executionContext variable referenced throughout Phase 2.
The planObject._loadedTasks array is pre-populated — getTasks(planObject) works directly.

## Key References
- Session ID: ${sessionId}
- Session folder: ${sessionFolder}
- Plan: ${sessionFolder}/plan.json
- Task files: ${sessionFolder}/.task/TASK-*.json
- Original task: ${task_description}

## Execution Steps
1. Read phase instructions file (full protocol)
2. Read execution-context.json → parse as executionContext
3. Follow Phase 2 Mode 1 (In-Memory Plan) — executionContext exists, skip user selection
4. Execute all tasks (Step 1-4 in Phase 2)
5. Run code review if codeReviewTool ≠ "Skip" (Step 5)
6. Run auto-sync (Step 6)
`
)

// ⚠️ COMPACT PROTECTION: Phase 2 instructions MUST persist in memory throughout execution.
// If compact compresses Phase 2 content at any point, re-read this file before continuing.
// See SKILL.md "Compact Protection" section for full protocol.
Read("phases/02-lite-execute.md")
// Execute Phase 2 with executionContext (Mode 1: In-Memory Plan)
```
## Session Folder Structure

@@ -1,5 +1,8 @@
# Phase 2: Lite-Execute

> **⚠️ COMPACT PROTECTION**: This is an active execution document — NOT reference material. During context compression, this directive MUST be preserved verbatim in the summary:
> **"Phase 2 execution protocol has been compressed. MUST re-read `phases/02-lite-execute.md` before continuing any execution step. Do NOT proceed from summary alone."**

Complete execution engine: multi-mode input, task grouping, batch execution, code review, and development index update.
---

@@ -354,6 +357,8 @@ TodoWrite({

### Step 3: Launch Execution

> **⚠️ CHECKPOINT**: Before proceeding, verify Phase 2 execution protocol (Step 3-5) is in active memory. If only a summary remains, re-read `phases/02-lite-execute.md` now.

**Executor Resolution**: `getTaskExecutor()` and `groupTasksByExecutor()` defined in Step 2 (Task Grouping).

**Batch Execution Routing** (routed by the `batch.executor` field):
@@ -574,6 +579,8 @@ Progress tracked at batch level (not individual task level). Icons: ⚡ (paralle

### Step 5: Code Review (Optional)

> **⚠️ CHECKPOINT**: Before proceeding, verify Phase 2 review protocol is in active memory. If only a summary remains, re-read `phases/02-lite-execute.md` now.

**Skip Condition**: Only run if `codeReviewTool ≠ "Skip"`

**Review Focus**: Verify implementation against plan convergence criteria and test requirements
@@ -155,6 +155,51 @@ Phase files are internal execution documents. They MUST NOT contain:
| Conversion provenance (`Source: Converted from...`) | Implementation detail | Removed |
| Skill routing for inter-phase (`Skill(skill="...")`) | Use direct phase read | Direct `Read("phases/...")` |

### Pattern 9: Compact Protection (Phase Persistence)

Multi-phase workflows span long conversations. Context compression (compact) may reduce earlier phase documents to brief summaries, causing later phases to lose their execution instructions.

**Three-layer protection**:

| Layer | Location | Mechanism |
|-------|----------|-----------|
| **Anchor** | SKILL.md Phase Reference table | Annotates each phase's compact policy (compressible / no compression during execution) |
| **Directive** | Top of the phase file | Tells compact that the verbatim "re-read required" directive must survive in any summary |
| **Checkpoint** | Before critical phase execution steps | Verifies instructions are still in memory; triggers a re-read if only a summary remains |

**When to apply**: Any scenario where execution crosses phases via direct handoff (Pattern 7), especially when the later phase contains a complex execution protocol (multiple Steps, agent dispatch, CLI orchestration).
**SKILL.md Phase Reference table** — add a Compact column:
```markdown
| Phase | Document | Purpose | Compact |
|-------|----------|---------|---------|
| 1 | phases/01-xxx.md | Planning pipeline | Compressible after Phase 1 completes |
| 2 | phases/02-xxx.md | Execution engine | **⚠️ No compression during execution; re-read after any compact** |

**Phase N Compact Rule**: Phase N is the execution engine and contains the full execution protocol for Steps 1-M. If compact occurs and only a summary of Phase N remains, **immediately `Read("phases/0N-xxx.md")` to reload before continuing execution**. Do not execute any Step from the summary alone.
```

**Top of the phase file** — Compact Protection directive:
```markdown
> **⚠️ COMPACT PROTECTION**: This is an active execution document — NOT reference material.
> During context compression, this directive MUST be preserved verbatim in the summary:
> **"Phase N execution protocol has been compressed. MUST re-read `phases/0N-xxx.md` before continuing any execution step. Do NOT proceed from summary alone."**
```
**Before critical phase steps** — Checkpoint:
```markdown
> **⚠️ CHECKPOINT**: Before proceeding, verify Phase N execution protocol (Step X-Y) is in active memory.
> If only a summary remains, re-read `phases/0N-xxx.md` now.
```

**Handoff comment** — annotate in direct handoff code:
```javascript
// ⚠️ COMPACT PROTECTION: Phase N instructions MUST persist in memory throughout execution.
// If compact compresses Phase N content at any point, re-read this file before continuing.
// See SKILL.md "Phase Reference Documents" section for compact rules.
Read("phases/0N-xxx.md")
```
## Execution Flow

```
@@ -285,11 +330,13 @@ When `workflowPreferences.autoYes === true`: {auto-mode behavior}.

**Phase Reference Documents** (read on-demand when phase executes):

| Phase | Document | Purpose |
|-------|----------|---------|
| 1 | [phases/01-xxx.md](phases/01-xxx.md) | ... |
| Phase | Document | Purpose | Compact |
|-------|----------|---------|---------|
| 1 | [phases/01-xxx.md](phases/01-xxx.md) | ... | Compressible after completion |
...

{For phases that are execution targets of direct handoff (Pattern 7), add Compact Rule below the table — see Pattern 9}

## Core Rules

1. {Rule}
@@ -329,6 +376,11 @@ When `workflowPreferences.autoYes === true`: {auto-mode behavior}.
```markdown
# Phase N: {Phase Name}

> **⚠️ COMPACT PROTECTION**: This is an active execution document — NOT reference material.
> During context compression, this directive MUST be preserved verbatim in the summary:
> **"Phase N execution protocol has been compressed. MUST re-read `phases/0N-xxx.md` before continuing any execution step. Do NOT proceed from summary alone."**
> _(Include this block only for phases that are execution targets of direct handoff — see Pattern 9)_

{One-sentence description of this phase's goal.}

## Objective

@@ -344,6 +396,10 @@ When `workflowPreferences.autoYes === true`: {auto-mode behavior}.

### Step N.2: {Step Name}

> **⚠️ CHECKPOINT**: Before proceeding, verify Phase N execution protocol (Step N.2+) is in active memory.
> If only a summary remains, re-read `phases/0N-xxx.md` now.
> _(Add checkpoints before critical execution steps: agent dispatch, CLI launch, review — see Pattern 9)_

{Full execution detail}
## Output

@@ -372,3 +428,4 @@ When designing a new workflow skill, answer these questions:
| What's the error recovery? | Error Handling | Retry once then report, vs rollback |
| Does it need preference collection? | Interactive Preference Collection | Collect via AskUserQuestion in SKILL.md, pass as workflowPreferences |
| Does phase N hand off to phase M? | Direct Phase Handoff (Pattern 7) | Read phase doc directly, not Skill() routing |
| Will later phases run after long context? | Compact Protection (Pattern 9) | Add directive + checkpoints to execution phases |