feat: Add role specifications for 三省六部 (Three Departments and Six Ministries) architecture

- Introduced role specifications for 尚书省 (shangshu), 刑部 (xingbu), and 中书省 (zhongshu) to facilitate task management and execution flow.
- Implemented quality gates for each phase of the process to ensure compliance and quality assurance.
- Established a coordinator role to manage the overall workflow and task distribution among the departments.
- Created a team configuration file to define roles, responsibilities, and routing rules for task execution.
- Added localization support for DeepWiki in both English and Chinese, enhancing accessibility for users.
Author: catlog22 · 2026-03-06 11:26:27 +08:00
parent 56c06ecf3d · commit 33cc451b61
46 changed files with 3050 additions and 1832 deletions


@@ -0,0 +1,56 @@
---
name: skill-simplify
description: SKILL.md simplification with functional integrity verification. Analyze redundancy, optimize content, check no functionality lost. Triggers on "simplify skill", "optimize skill", "skill-simplify".
allowed-tools: AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---
# Skill Simplify
Three-phase pipeline: analyze functional inventory, apply optimization rules, verify integrity.
**Phase Reference Documents** (read on-demand):
| Phase | Document | Purpose |
|-------|----------|---------|
| 1 | [phases/01-analysis.md](phases/01-analysis.md) | Extract functional inventory, identify redundancy, validate pseudo-code format |
| 2 | [phases/02-optimize.md](phases/02-optimize.md) | Apply simplification rules, fix format issues |
| 3 | [phases/03-check.md](phases/03-check.md) | Verify functional integrity, validate format |
## Input Processing
```javascript
const targetPath = input.trim()
const targetFile = targetPath.endsWith('.md') ? targetPath : `${targetPath}/SKILL.md`
const originalContent = Read(targetFile)
const originalLineCount = originalContent.split('\n').length
```
## TodoWrite Pattern
```javascript
TodoWrite({ todos: [
{ content: `Phase 1: Analyzing ${targetFile}`, status: "in_progress", activeForm: "Extracting functional inventory" },
{ content: "Phase 2: Optimize", status: "pending" },
{ content: "Phase 3: Integrity Check", status: "pending" }
]})
```
## Core Rules
1. **Preserve ALL functional elements**: Code blocks with logic, agent calls, data structures, routing, error handling, input/output specs
2. **Only reduce descriptive content**: Flowcharts, verbose comments, duplicate sections, examples that repeat logic
3. **Never summarize algorithm logic**: If-else branches, function bodies, schemas must remain verbatim
4. **Classify code blocks**: Distinguish `functional` (logic, routing, schemas) from `descriptive` (ASCII art, examples, display templates) — only descriptive blocks may be deleted
5. **Merge equivalent variants**: Single/multi-perspective templates differing only by a parameter → one template with variant comment
6. **Fix format issues**: Nested backtick template literals in code fences → convert to prose; hardcoded option lists → flag for dynamic generation; workflow handoff references → ensure execution steps present
7. **Validate pseudo-code**: Check bracket matching, variable consistency, structural completeness
8. **Quantitative verification**: Phase 3 counts must match Phase 1 counts for functional categories; descriptive block decreases are expected
## Error Handling
| Error | Resolution |
|-------|------------|
| Target file not found | Report error, stop |
| Check FAIL (missing functional elements) | Show delta, revert to original, report which elements lost |
| Check WARN (descriptive decrease or merge) | Show delta with justification |
| Format issues found | Report in check, fix in Phase 2 |


@@ -0,0 +1,224 @@
# Phase 1: Functional Analysis
Read target file, extract functional inventory with code block classification, identify redundancy, validate pseudo-code format, and produce optimization plan.
## Objective
- Build quantitative functional inventory with code block classification (baseline for Phase 3)
- Identify redundancy categories with specific line ranges
- Detect pseudo-code format issues
- Produce optimization plan with estimated line savings
## Execution
### Step 1.1: Read & Measure Target
```javascript
const originalContent = Read(targetFile)
const lines = originalContent.split('\n')
const originalLineCount = lines.length
```
### Step 1.2: Extract Functional Inventory
Count and catalog every functional element. These counts are the **baseline** for Phase 3 verification.
```javascript
const inventory = {
// Code structures — with role classification
codeBlocks: [], // { startLine, endLine, language, purpose, role: 'functional'|'descriptive' }
agentCalls: [], // { line, agentType, description, mergeGroup?: string }
dataStructures: [], // { line, name, type: 'object'|'array'|'schema' }
// Logic elements
routingBranches: [], // { line, condition, outcomes[] }
errorHandlers: [], // { line, errorType, resolution }
conditionalLogic: [], // { line, condition, trueAction, falseAction }
// Interface elements
askUserQuestions: [], // { line, questionCount, headers[], optionType: 'static'|'dynamic' }
inputModes: [], // { line, mode, description }
outputArtifacts: [], // { line, artifact, format }
// Structural elements
todoWriteBlocks: [], // { line, phaseCount }
phaseHandoffs: [], // { line, fromPhase, toPhase }
skillInvocations: [], // { line, skillName, hasExecutionSteps: boolean }
// Reference elements
tables: [], // { startLine, endLine, columns }
schemas: [], // { line, schemaName, fields[] }
// Format issues
formatIssues: [], // { line, type, description, severity: 'error'|'warning' }
// Totals (computed)
counts: {}
}
```
**Extraction rules**:
- **Code blocks**: Match ` ```language ... ``` ` pairs, record start/end/language/first-line-as-purpose
- **Agent calls**: Match `Agent(`, `Task(`, `subagent_type=`, record type and prompt summary
- **Data structures**: Match `const xxx = {`, `const xxx = [`, JSON schema objects
- **Routing branches**: Match `if/else`, `switch/case`, ternary `? :` with meaningful branching
- **Error handlers**: Match `catch`, error table rows `| Error |`, fallback patterns
- **AskUserQuestion**: Match `AskUserQuestion({`, count questions array length
- **Input modes**: Match `Mode 1/2/3`, `--flag`, argument parsing
- **Output artifacts**: Match `Write(`, `Output:`, file path patterns in comments
- **TodoWrite**: Match `TodoWrite({`, count todo items
- **Phase handoffs**: Match `Read("phases/`, `Skill(`, `proceed_to_next_phase`
- **Tables**: Match `| header |` markdown table blocks
- **Schemas**: Match schema references, JSON structure definitions
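As a sketch, the first extraction rule (fenced code blocks) could be implemented with a simple line scan; the field names and the "first non-empty line as purpose" heuristic are illustrative assumptions, not a spec:

```javascript
// Extract fenced code blocks from markdown text, recording
// startLine, endLine, language, and first content line as "purpose".
function extractCodeBlocks(content) {
  const lines = content.split("\n");
  const blocks = [];
  let open = null; // block currently being scanned, or null
  lines.forEach((line, i) => {
    const fence = line.match(/^```(\w*)/);
    if (fence && !open) {
      // opening fence: start a new block record
      open = { startLine: i + 1, language: fence[1] || null, purpose: null };
    } else if (fence && open) {
      // closing fence: finalize the block
      open.endLine = i + 1;
      blocks.push(open);
      open = null;
    } else if (open && open.purpose === null && line.trim()) {
      // first non-empty line inside the block doubles as its purpose
      open.purpose = line.trim();
    }
  });
  return blocks;
}
```

Nested fences (the "nested backtick" format issue flagged in Step 1.2.2) would defeat this scanner, which is one reason that issue is worth fixing.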
### Step 1.2.1: Code Block Role Classification
For each code block, determine its role:
| Role | Criteria | Examples |
|------|----------|---------|
| `functional` | Contains algorithm logic, routing branches, conditional code, agent calls, schema definitions, data processing, AskUserQuestion, Skill invocations | `if/else`, `Agent({...})`, `const schema = {...}`, `Bash({...})` |
| `descriptive` | Contains ASCII art, usage examples, display templates, illustrative good/bad comparisons, folder structure trees | `┌───┐`, `# Example usage`, `❌ Bad / ✅ Good`, `├── file.ts` |
**Classification rules**:
- If block contains ANY of: `Agent(`, `Bash(`, `AskUserQuestion(`, `if (`, `switch`, `Skill(`, `Write(`, `Read(`, `TodoWrite(` → `functional`
- If block language is `bash` and content is only example invocations (no logic) → `descriptive`
- If block has no language tag and contains only ASCII box-drawing characters → `descriptive`
- If block is labeled as "Example" in surrounding markdown heading → `descriptive`
- **Default**: `functional` (conservative)
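The classification rules above can be sketched as a predicate; the `content` and `heading` fields are assumed inputs from the extraction step:

```javascript
// Markers whose presence forces the 'functional' classification (rule 1).
const FUNCTIONAL_MARKERS = ["Agent(", "Bash(", "AskUserQuestion(", "if (",
  "switch", "Skill(", "Write(", "Read(", "TodoWrite("];

function classifyCodeBlock(block) {
  // Rule 1: any functional marker wins
  if (FUNCTIONAL_MARKERS.some(m => block.content.includes(m))) return "functional";
  // Rule 2: bash block of bare example invocations (no control flow)
  if (block.language === "bash" && !/\b(if|for|while|case)\b/.test(block.content))
    return "descriptive";
  // Rule 3: untagged block containing only box-drawing characters and whitespace
  if (!block.language && /^[\s─│┌┐└┘├┤┬┴┼╌+|_-]*$/.test(block.content))
    return "descriptive";
  // Rule 4: block under an "Example" heading
  if (/example/i.test(block.heading || "")) return "descriptive";
  return "functional"; // conservative default
}
```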
### Step 1.2.2: Pseudo-Code Format Validation
Scan all `functional` code blocks for format issues:
| Check | Detection | Severity |
|-------|-----------|----------|
| **Nested backticks** | Template literal `` ` `` inside ` ```javascript ``` ` code fence | warning |
| **Unclosed brackets** | Unmatched `{`, `(`, `[` in code block | error |
| **Undefined references** | `${variable}` where variable is never declared in the block or prior blocks | warning |
| **Inconsistent indentation** | Mixed tabs/spaces or inconsistent nesting depth | warning |
| **Dead code patterns** | Commented-out code blocks (`// if (`, `/* ... */` spanning 5+ lines) | warning |
| **Missing return/output** | Function-like block with no return, Write, or console.log | warning |
```javascript
inventory.formatIssues = validatePseudoCode(inventory.codeBlocks.filter(b => b.role === 'functional'))
```
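The bracket-matching check, the one `error`-severity issue, can be sketched with a stack scan; handling of brackets inside strings and comments is deliberately omitted here:

```javascript
// Stack-based bracket matching over a code block's text.
// Returns { ok: true } or a formatIssues-style record.
function checkBrackets(code) {
  const pairs = { ")": "(", "]": "[", "}": "{" };
  const stack = [];
  for (const ch of code) {
    if ("([{".includes(ch)) stack.push(ch);
    else if (ch in pairs) {
      // closing bracket must match the most recent opener
      if (stack.pop() !== pairs[ch])
        return { ok: false, type: "unclosed-brackets", severity: "error" };
    }
  }
  // anything left on the stack was never closed
  return stack.length === 0
    ? { ok: true }
    : { ok: false, type: "unclosed-brackets", severity: "error" };
}
```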
### Step 1.2.3: Compute Totals
```javascript
inventory.counts = {
codeBlocks: inventory.codeBlocks.length,
functionalCodeBlocks: inventory.codeBlocks.filter(b => b.role === 'functional').length,
descriptiveCodeBlocks: inventory.codeBlocks.filter(b => b.role === 'descriptive').length,
agentCalls: inventory.agentCalls.length,
dataStructures: inventory.dataStructures.length,
routingBranches: inventory.routingBranches.length,
errorHandlers: inventory.errorHandlers.length,
conditionalLogic: inventory.conditionalLogic.length,
askUserQuestions: inventory.askUserQuestions.length,
inputModes: inventory.inputModes.length,
outputArtifacts: inventory.outputArtifacts.length,
todoWriteBlocks: inventory.todoWriteBlocks.length,
phaseHandoffs: inventory.phaseHandoffs.length,
skillInvocations: inventory.skillInvocations.length,
tables: inventory.tables.length,
schemas: inventory.schemas.length,
formatIssues: inventory.formatIssues.length
}
```
### Step 1.3: Identify Redundancy Categories
Scan for each category, record specific line ranges:
```javascript
const redundancyMap = {
deletable: [], // { category, startLine, endLine, reason, estimatedSave }
simplifiable: [], // { category, startLine, endLine, strategy, estimatedSave }
mergeable: [], // { items: [{startLine, endLine}], mergeStrategy, estimatedSave }
formatFixes: [], // { line, type, fix }
languageUnify: [] // { line, currentLang, targetLang }
}
```
**Deletable** (remove entirely, no functional loss):
| Pattern | Detection |
|---------|-----------|
| Duplicate Overview | `## Overview` that restates frontmatter description |
| ASCII flowchart | Flowchart that duplicates Phase Summary table or implementation structure |
| "When to use" section | Usage guidance not needed for execution |
| Best Practices section | Advisory content duplicating Core Rules |
| Duplicate examples | Code examples that repeat logic shown elsewhere |
| Folder structure duplicate | ASCII tree repeating Output Artifacts table |
| "Next Phase" paragraphs | Prose between phases when TodoWrite handles flow |
| Descriptive code blocks | Code blocks classified as `descriptive` whose content is covered by surrounding prose or tables |
**Simplifiable** (compress, preserve meaning):
| Pattern | Strategy |
|---------|----------|
| Verbose comments in code blocks | Reduce to single-line; keep only non-obvious logic comments |
| Multi-line console.log | Compress to single template literal |
| Wordy section intros | Remove "In this phase, we will..." preamble |
| Exploration prompt bloat | Trim to essential instructions, remove generic advice |
| Display-format code blocks | Convert code blocks that only define output format (console.log with template) to prose description |
**Mergeable** (combine related structures):
| Pattern | Strategy |
|---------|----------|
| Multiple similar AskUserQuestion calls | Extract shared function with mode parameter |
| Repeated Option routing | Unify into single dispatch |
| Sequential single-line operations | Combine into one code block |
| TodoWrite full blocks x N | Template once + delta comments |
| Duplicate error handling tables | Merge into single table |
| Equivalent template variants | Single/multi-perspective templates → one template with variant comment |
| Multiple output artifact tables | Merge into single combined table |
**Format fixes** (pseudo-code quality):
| Pattern | Fix |
|---------|-----|
| Nested backtick template literals | Convert surrounding code block to prose description, or use 4-backtick fence |
| Hardcoded option lists | Add comment: `// Generate dynamically from {context source}` |
| Workflow handoff without execution steps | Add execution steps referencing the target command's actual interface |
| Unclosed brackets | Fix bracket matching |
**Language unification**:
- Detect mixed Chinese/English in functional comments
- Recommend consistent language (match majority)
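A minimal sketch of majority-language detection; the Unicode range and the tie-breaking rule are assumptions, not part of the spec:

```javascript
// Count CJK vs. Latin letters to pick the majority language of comment text.
function detectMajorityLanguage(text) {
  const cjk = (text.match(/[\u4e00-\u9fff]/g) || []).length;   // CJK Unified Ideographs
  const latin = (text.match(/[a-zA-Z]/g) || []).length;
  if (cjk === 0 && latin === 0) return "unknown";
  return cjk > latin ? "zh" : "en"; // ties fall to "en" (assumption)
}
```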
### Step 1.4: Build Optimization Plan
```javascript
const optimizationPlan = {
targetFile,
originalLineCount,
estimatedReduction: redundancyMap.deletable.reduce((s, d) => s + d.estimatedSave, 0)
+ redundancyMap.simplifiable.reduce((s, d) => s + d.estimatedSave, 0)
+ redundancyMap.mergeable.reduce((s, d) => s + d.estimatedSave, 0),
categories: {
deletable: { count: redundancyMap.deletable.length, totalLines: '...' },
simplifiable: { count: redundancyMap.simplifiable.length, totalLines: '...' },
mergeable: { count: redundancyMap.mergeable.length, totalLines: '...' },
formatFixes: { count: redundancyMap.formatFixes.length },
languageUnify: { count: redundancyMap.languageUnify.length }
},
// Ordered: delete → merge → simplify → format
operations: [
...redundancyMap.deletable.map(d => ({ type: 'delete', ...d, priority: 1 })),
...redundancyMap.mergeable.map(m => ({ type: 'merge', ...m, priority: 2 })),
...redundancyMap.simplifiable.map(s => ({ type: 'simplify', ...s, priority: 3 })),
...redundancyMap.formatFixes.map(f => ({ type: 'format', ...f, priority: 4 }))
]
}
```
Display plan summary: category counts, estimated reduction percentage, sections NOT changed (functional core).
## Output
- **Variable**: `analysisResult = { inventory, redundancyMap, optimizationPlan, originalContent, originalLineCount }`
- **TodoWrite**: Mark Phase 1 completed, Phase 2 in_progress


@@ -0,0 +1,107 @@
# Phase 2: Optimize
Apply simplification rules from analysisResult to produce optimized content. Write result to disk.
## Objective
- Execute all optimization operations in priority order (delete → merge → simplify → format)
- Preserve every functional element identified in Phase 1 inventory
- Fix pseudo-code format issues
- Write optimized content back to target file
## Execution
### Step 2.1: Apply Operations in Order
Process `analysisResult.optimizationPlan.operations` sorted by priority:
**Priority 1 — Delete** (safest, highest impact):
| Target Pattern | Action |
|----------------|--------|
| Duplicate Overview section | Remove `## Overview` if it restates frontmatter `description` |
| ASCII flowchart | Remove if Phase Summary table or implementation structure covers same info |
| "When to use" / "Use Cases" section | Remove entirely |
| Best Practices section | Remove if content duplicates Core Rules |
| Duplicate folder structure | Remove ASCII tree if Output Artifacts table covers same info |
| Redundant "Next Phase" prose | Remove when TodoWrite handles flow |
| Standalone example sections | Remove if logic already demonstrated inline |
| Descriptive code blocks | Remove if content covered by surrounding prose or tables |
**Priority 2 — Merge** (structural optimization):
| Target Pattern | Action |
|----------------|--------|
| Multiple similar AskUserQuestion blocks | Extract shared function with mode parameter |
| Repeated Option A/B/C routing | Unify into single dispatch |
| Sequential single-line bash commands | Combine into single code block |
| TodoWrite full blocks x N | Template ONCE, subsequent as one-line comment |
| Duplicate error handling across sections | Merge into single `## Error Handling` table |
| Equivalent template variants | Single/multi templates → one template with `// For multi: add Perspective` comment |
| Multiple output artifact tables | Merge into single combined table with Phase column |
**Priority 3 — Simplify** (compress descriptive content):
| Target Pattern | Action |
|----------------|--------|
| Verbose inline comments | Reduce to single-line; remove obvious restatements |
| Display-format code blocks | Convert `console.log` with template literal to prose describing output format |
| Wordy section introductions | Remove preamble sentences |
| Exploration/agent prompt padding | Remove generic advice |
| Success Criteria lists > 7 items | Trim to essential 5-7, remove obvious/generic |
**Priority 4 — Format fixes** (pseudo-code quality):
| Target Pattern | Action |
|----------------|--------|
| Nested backtick template literals | Convert code block to prose description, or use 4-backtick fence |
| Hardcoded option lists | Replace with dynamic generation: describe source of options + generation logic |
| Workflow handoff without execution steps | Add concrete steps referencing target command's interface (e.g., pipe to `ccw issue create`) |
| Unclosed brackets | Fix bracket matching |
| Undefined variable references | Add declaration or link to source |
### Step 2.2: Language Unification (if applicable)
```javascript
if (analysisResult.redundancyMap.languageUnify.length > 0) {
// Detect majority language, unify non-functional text
// DO NOT change: variable names, function names, schema fields, error messages in code
}
```
### Step 2.3: Write Optimized Content
```javascript
Write(targetFile, optimizedContent)
const optimizedLineCount = optimizedContent.split('\n').length
const reduction = originalLineCount - optimizedLineCount
const reductionPct = Math.round(reduction / originalLineCount * 100)
```
### Step 2.4: Preserve Optimization Record
```javascript
const optimizationRecord = {
deletedSections: [], // section names removed
mergedGroups: [], // { from: [sections], to: description }
simplifiedAreas: [], // { section, strategy }
formatFixes: [], // { line, type, fix }
linesBefore: originalLineCount,
linesAfter: optimizedLineCount
}
```
## Key Rules
1. **Never modify functional code blocks** — only compress comments/whitespace within them
2. **Descriptive code blocks may be deleted** if their content is covered by prose or tables
3. **Never change function signatures, variable names, or schema fields**
4. **Merge preserves all branches** — unified function must handle all original cases
5. **When uncertain, keep original** — conservative approach prevents functional loss
6. **Format fixes must not alter semantics** — only presentation changes
## Output
- **File**: Target file overwritten with optimized content
- **Variable**: `optimizationRecord` (changes log for Phase 3)
- **TodoWrite**: Mark Phase 2 completed, Phase 3 in_progress


@@ -0,0 +1,224 @@
# Phase 3: Integrity Check
Re-extract functional inventory from optimized file, compare against Phase 1 baseline, validate pseudo-code format. Report PASS/FAIL with detailed delta.
## Objective
- Re-run the same inventory extraction on optimized content
- Compare counts using role-aware classification (functional vs descriptive)
- Validate pseudo-code format issues are resolved
- Report check result with actionable details
- Revert if critical functional elements are missing
## Execution
### Step 3.1: Re-Extract Inventory from Optimized File
```javascript
const optimizedContent = Read(targetFile)
const optimizedLineCount = optimizedContent.split('\n').length
// Use SAME extraction logic as Phase 1 (including role classification)
const afterInventory = extractFunctionalInventory(optimizedContent)
```
### Step 3.2: Compare Inventories (Role-Aware)
```javascript
const beforeCounts = analysisResult.inventory.counts
const afterCounts = afterInventory.counts
const delta = {}
let hasCriticalLoss = false
let hasWarning = false
// CRITICAL: Functional elements that MUST NOT decrease
const CRITICAL = ['functionalCodeBlocks', 'dataStructures', 'routingBranches',
'errorHandlers', 'conditionalLogic', 'askUserQuestions',
'inputModes', 'outputArtifacts', 'skillInvocations']
// MERGE_AWARE: May decrease due to valid merge operations — verify coverage
const MERGE_AWARE = ['agentCalls', 'codeBlocks']
// EXPECTED_DECREASE: May decrease from merge/consolidation
const EXPECTED_DECREASE = ['descriptiveCodeBlocks', 'todoWriteBlocks',
'phaseHandoffs', 'tables', 'schemas']
for (const [key, before] of Object.entries(beforeCounts)) {
const after = afterCounts[key] || 0
const diff = after - before
let category, status
if (CRITICAL.includes(key)) {
category = 'critical'
status = diff < 0 ? 'FAIL' : 'OK'
if (diff < 0) hasCriticalLoss = true
} else if (MERGE_AWARE.includes(key)) {
category = 'merge_aware'
// Decrease is WARN (needs justification), not FAIL
status = diff < 0 ? 'WARN' : 'OK'
if (diff < 0) hasWarning = true
} else {
category = 'expected'
status = 'OK' // Descriptive decreases are expected
}
delta[key] = { before, after, diff, category, status }
}
```
### Step 3.3: Deep Verification
**For CRITICAL categories with decrease** — identify exactly what was lost:
```javascript
if (hasCriticalLoss) {
const lostElements = {}
for (const [key, d] of Object.entries(delta)) {
if (d.status === 'FAIL') {
const beforeItems = analysisResult.inventory[key]
const afterItems = afterInventory[key]
lostElements[key] = beforeItems.filter(beforeItem =>
!afterItems.some(afterItem => matchesElement(beforeItem, afterItem))
)
}
}
}
```
**For MERGE_AWARE categories with decrease** — verify merged coverage:
```javascript
if (hasWarning) {
for (const [key, d] of Object.entries(delta)) {
if (d.category === 'merge_aware' && d.diff < 0) {
// Check if merged template covers all original variants
// e.g., single Agent template with "// For multi: add Perspective" covers both
const beforeItems = analysisResult.inventory[key]
const afterItems = afterInventory[key]
const unmatched = beforeItems.filter(beforeItem =>
!afterItems.some(afterItem => matchesElement(beforeItem, afterItem))
)
if (unmatched.length > 0) {
// Check if unmatched items are covered by merge comments in remaining items
const mergeComments = afterItems.flatMap(item => extractMergeComments(item))
const trulyLost = unmatched.filter(item =>
!mergeComments.some(comment => coversElement(comment, item))
)
if (trulyLost.length > 0) {
delta[key].status = 'FAIL'
hasCriticalLoss = true
delta[key].trulyLost = trulyLost
}
// else: merge-covered, WARN is correct
}
}
}
}
```
### Step 3.4: Pseudo-Code Format Validation
```javascript
const afterFormatIssues = validatePseudoCode(afterInventory.codeBlocks.filter(b => b.role === 'functional'))
const beforeFormatCount = analysisResult.inventory.formatIssues.length
const afterFormatCount = afterFormatIssues.length
const formatDelta = {
before: beforeFormatCount,
after: afterFormatCount,
resolved: beforeFormatCount - afterFormatCount,
newIssues: afterFormatIssues.filter(issue =>
!analysisResult.inventory.formatIssues.some(orig => orig.line === issue.line && orig.type === issue.type)
)
}
// New format issues introduced by optimization = FAIL
if (formatDelta.newIssues.length > 0) {
hasCriticalLoss = true
}
```
**Pseudo-code validation checks**:
| Check | Detection | Action on Failure |
|-------|-----------|-------------------|
| Bracket matching | Count `{([` vs `})]` per code block | FAIL — fix or revert |
| Variable consistency | `${var}` used but never declared | WARNING — note in report |
| Structural completeness | Function body has entry but no exit (return/Write/output) | WARNING |
| Nested backtick resolution | Backtick template literals inside code fences | WARNING if pre-existing, FAIL if newly introduced |
| Schema field preservation | Schema fields in after match before | FAIL if fields lost |
### Step 3.5: Generate Check Report
```javascript
const status = hasCriticalLoss ? 'FAIL' : (hasWarning ? 'WARN' : 'PASS')
const checkReport = {
status,
linesBefore: analysisResult.originalLineCount,
linesAfter: optimizedLineCount,
reduction: `${analysisResult.originalLineCount - optimizedLineCount} lines (-${Math.round((analysisResult.originalLineCount - optimizedLineCount) / analysisResult.originalLineCount * 100)}%)`,
delta,
formatDelta,
lostElements: hasCriticalLoss ? lostElements : null
}
// Display report table
// | Category | Before | After | Delta | Status |
// Show all categories, highlight FAIL/WARN rows
// Show format issues summary if any
```
### Step 3.6: Act on Result
```javascript
if (status === 'FAIL') {
Write(targetFile, analysisResult.originalContent)
// Report: "Critical elements lost / new format issues introduced. Reverted."
}
if (status === 'WARN') {
// Report: "Decreases from merge/descriptive removal. Verify coverage."
// Show merge justifications for MERGE_AWARE categories
}
if (status === 'PASS') {
// Report: "All functional elements preserved. Optimization successful."
}
```
## Element Matching Rules
How `matchesElement()` determines if a before-element exists in after-inventory:
| Element Type | Match Criteria |
|-------------|---------------|
| codeBlocks | Same language + first meaningful line (ignore whitespace/comments) |
| agentCalls | Same agentType + similar prompt keywords (>60% overlap) |
| dataStructures | Same variable name OR same field set |
| routingBranches | Same condition expression (normalized) |
| errorHandlers | Same error type/pattern |
| conditionalLogic | Same condition + same outcome set |
| askUserQuestions | Same question count + similar option labels |
| inputModes | Same mode identifier |
| outputArtifacts | Same file path pattern or artifact name |
| skillInvocations | Same skill name |
| todoWriteBlocks | Same phase names (order-independent) |
| phaseHandoffs | Same target phase reference |
| tables | Same column headers |
| schemas | Same schema name or field set |
**Merge coverage check** (`coversElement()`):
- Agent calls: Merged template contains `// For multi:` or `// Multi-perspective:` comment referencing the missing variant
- Code blocks: Merged block contains comment noting the alternative was folded in
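The agentCalls row above (same agentType, >60% keyword overlap) could be sketched as follows; the whitespace tokenizer and measuring overlap against the before-item's keywords are simplifying assumptions:

```javascript
// Fraction of a's keywords that also appear in b.
function keywordOverlap(a, b) {
  const tok = s => new Set(s.toLowerCase().split(/\W+/).filter(Boolean));
  const ta = tok(a), tb = tok(b);
  if (ta.size === 0) return 0;
  let shared = 0;
  for (const w of ta) if (tb.has(w)) shared++;
  return shared / ta.size;
}

// matchesElement() rule for agentCalls: same agentType + >60% keyword overlap.
function matchesAgentCall(before, after) {
  return before.agentType === after.agentType &&
         keywordOverlap(before.description, after.description) > 0.6;
}
```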
## Completion
```javascript
TodoWrite({ todos: [
{ content: `Phase 1: Analysis [${Object.keys(analysisResult.inventory.counts).length} categories]`, status: "completed" },
{ content: `Phase 2: Optimize [${checkReport.reduction}]`, status: "completed" },
{ content: `Phase 3: Check [${checkReport.status}] | Format: ${formatDelta.resolved} resolved, ${formatDelta.newIssues.length} new`, status: "completed" }
]})
```



@@ -0,0 +1,204 @@
---
name: team-edict
description: |
  三省六部 (Three Departments and Six Ministries) multi-agent collaboration framework, a full reproduction of the Edict architecture.
  The Crown Prince receives the edict -> 中书省 plans -> 门下省 reviews (multiple CLIs in parallel) -> 尚书省 dispatches -> the Six Ministries execute in parallel.
  Mandatory kanban status reporting (state/flow/progress), Blocked as a first-class state, fully observable end to end.
  Triggers on "team edict", "三省六部", "edict team".
allowed-tools: TeamCreate(*), TeamDelete(*), SendMessage(*), TaskCreate(*), TaskUpdate(*), TaskList(*), TaskGet(*), Agent(*), AskUserQuestion(*), Read(*), Write(*), Edit(*), Bash(*), Glob(*), Grep(*)
---
# Team Edict — 三省六部
A multi-agent collaboration framework inspired by the ancient Three Departments and Six Ministries system. Core design: **strict cascading approval flow + real-time kanban observability + multi-CLI parallel analysis**.
## Architecture
```
+----------------------------------------------------------+
|                Skill(skill="team-edict")                  |
|                args="task description"                    |
+------------------------+---------------------------------+
                         |
          Coordinator (Crown Prince: edict intake & triage)
                  Phase 0-5 orchestration
                         |
         +---------------+--------------+
         |                              |
 [serial approval chain]       [kanban State Bus]
         |                              |
   中书省 (PLAN)              <- all agents must report
         |                       state/flow/progress
   门下省 (REVIEW)            <- multi-CLI review
         |
   尚书省 (DISPATCH)          <- routing analysis
         |
    +----+----+----+----+----+----+
   工部  兵部  户部  礼部  吏部  刑部
  (IMPL)(OPS)(DATA)(DOC)(HR)(QA)
   [team-worker × 6, parallel on demand]
```
## Role Router
This skill is **coordinator-only**. All workers are spawned directly as `team-worker` agents.
### Input Parsing
Parse `$ARGUMENTS` directly as the task description; always route to the coordinator.
### Role Registry
| Role | Alias | Spec | Task Prefix | Inner Loop | Responsibilities |
|------|-------|------|-------------|------------|------------------|
| coordinator | 太子 | [roles/coordinator/role.md](roles/coordinator/role.md) | (none) | - | Edict intake and triage, drives the overall flow |
| zhongshu | 中书省 | [role-specs/zhongshu.md](role-specs/zhongshu.md) | PLAN-* | false | Analyzes the edict, drafts the execution plan |
| menxia | 门下省 | [role-specs/menxia.md](role-specs/menxia.md) | REVIEW-* | false | Multi-dimensional review, approve or reject |
| shangshu | 尚书省 | [role-specs/shangshu.md](role-specs/shangshu.md) | DISPATCH-* | false | Analyzes the plan, dispatches to the Six Ministries |
| gongbu | 工部 | [role-specs/gongbu.md](role-specs/gongbu.md) | IMPL-* | true | Feature development, architecture design, code implementation |
| bingbu | 兵部 | [role-specs/bingbu.md](role-specs/bingbu.md) | OPS-* | true | Infrastructure, deployment, performance monitoring |
| hubu | 户部 | [role-specs/hubu.md](role-specs/hubu.md) | DATA-* | true | Data analysis, statistics, resource management |
| libu | 礼部 | [role-specs/libu.md](role-specs/libu.md) | DOC-* | true | Documentation, conventions, UI/UX, external communication |
| libu-hr | 吏部 | [role-specs/libu-hr.md](role-specs/libu-hr.md) | HR-* | false | Agent management, training, performance evaluation |
| xingbu | 刑部 | [role-specs/xingbu.md](role-specs/xingbu.md) | QA-* | true | Code review, test acceptance, compliance audit |
### 门下省 — Multi-CLI Review Configuration
The 门下省 review runs **multiple CLIs in parallel**, evaluating the plan from several dimensions at once:
| Review Dimension | CLI Tool | Focus |
|------------------|----------|-------|
| Feasibility review | gemini | Technical approach, dependency completeness |
| Completeness review | qwen | Subtask coverage, gap identification |
| Risk assessment | gemini (second call) | Failure points, rollback plan |
| Resource assessment | codex | Workload reasonableness, department fit |
### Six Ministries Routing Rules
尚书省 (DISPATCH) routes subtasks to the matching ministry based on task content:
| Keyword Signals | Target Ministry | Notes |
|-----------------|-----------------|-------|
| Feature development, architecture, code, refactoring, implementation | 工部 (gongbu) | Engineering |
| Deployment, CI/CD, infrastructure, containers, performance monitoring | 兵部 (bingbu) | Ops and deployment |
| Data analysis, statistics, cost, reports, resources | 户部 (hubu) | Data management |
| Documentation, README, API docs, UI copy, conventions | 礼部 (libu) | Docs and conventions |
| Testing, QA, bugs, review, compliance | 刑部 (xingbu) | Quality assurance |
| Agent management, training, skill optimization, evaluation | 吏部 (libu-hr) | Personnel |
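The routing rule can be sketched as a first-match keyword lookup; the English keyword lists below are abridged illustrations of the signal table, not the authoritative set:

```javascript
// Ministry routing table: first route whose keywords appear in the subtask wins.
const ROUTES = [
  { role: "gongbu",  prefix: "IMPL", keywords: ["feature", "architecture", "refactor", "implement"] },
  { role: "bingbu",  prefix: "OPS",  keywords: ["deploy", "ci/cd", "infrastructure", "container"] },
  { role: "hubu",    prefix: "DATA", keywords: ["data", "statistics", "cost", "report"] },
  { role: "libu",    prefix: "DOC",  keywords: ["doc", "readme", "ui copy", "convention"] },
  { role: "xingbu",  prefix: "QA",   keywords: ["test", "qa", "bug", "review", "compliance"] },
  { role: "libu-hr", prefix: "HR",   keywords: ["agent management", "training", "evaluation"] }
];

// Returns the matching route record, or null when no ministry matches.
function routeSubtask(text) {
  const t = text.toLowerCase();
  return ROUTES.find(r => r.keywords.some(k => t.includes(k))) || null;
}
```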
### Dispatch
Always routes to the coordinator (太子).
### Orchestration Mode
The user provides only a task description.
**Invocation**: `Skill(skill="team-edict", args="task description")`
**Lifecycle**:
```
User provides a task description
-> coordinator Phase 1-2: triage the edict -> answer simple Q&A directly | create a PLAN task for formal work
-> coordinator Phase 3: TeamCreate -> spawn 中书省 worker (PLAN-001)
-> 中书省 executes -> produces the execution plan -> SendMessage callback
-> coordinator spawns 门下省 worker (REVIEW-001) <- multi-CLI parallel review
-> 门下省 reviews -> approve/reject -> SendMessage callback
   -> reject: coordinator asks 中书省 to revise (max 3 rounds)
   -> approve: coordinator spawns 尚书省 worker (DISPATCH-001)
-> 尚书省 analyzes routing -> produces the Six Ministries task list -> SendMessage callback
-> coordinator spawns ministry workers per the task list (parallel/serial by dependency)
-> ministries execute -> each sends a SendMessage callback
-> coordinator aggregates all ministry outputs -> Phase 5 report
```
**User commands** (wake a paused coordinator):
| Command | Action |
|---------|--------|
| `check` / `status` | Print the kanban state diagram without advancing |
| `resume` / `continue` | Check worker status, advance to the next step |
| `revise PLAN-001 <feedback>` | Trigger 中书省 to redraft (rejection loop) |
## Kanban Status Protocol
All workers must follow this status reporting convention (mandatory):
### State Machine
```
Pending -> Doing -> Done
             |
          Blocked (may be entered at any time; the reason must be reported)
```
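The state machine above can be encoded as an explicit transition table; treating Doing as the resume target after Blocked is an assumption read off the diagram:

```javascript
// Allowed kanban transitions; Done is terminal, Blocked resumes into Doing.
const TRANSITIONS = {
  Pending: ["Doing", "Blocked"],
  Doing:   ["Done", "Blocked"],
  Blocked: ["Doing"],
  Done:    []
};

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}
```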
### Status Reporting Calls
Each worker uses `team_msg` for kanban operations (replacing kanban_update.py):
```javascript
// On accepting a task
team_msg(operation="log", session_id=<session_id>, from=<role>,
         type="state_update", data={state: "Doing", current_step: "Started executing [task]"})
// Progress report (at every key step)
team_msg(operation="log", session_id=<session_id>, from=<role>,
         type="impl_progress", data={
           current: "Executing step 2: implement the API endpoint",
           plan: "Step 1 analyze ✅ | Step 2 implement 🔄 | Step 3 test"
         })
// Task handoff (flow)
team_msg(operation="log", session_id=<session_id>, from=<role>, to="coordinator",
         type="task_handoff", data={from_role: <role>, to_role: "coordinator", remark: "✅ Done: [output summary]"})
// Blocked report
team_msg(operation="log", session_id=<session_id>, from=<role>, to="coordinator",
         type="error", data={state: "Blocked", reason: "[blocking reason], requesting assistance"})
```
## Specs Reference
| File | Contents | Consumer |
|------|----------|----------|
| [specs/team-config.json](specs/team-config.json) | Role registry, Six Ministries routing rules, pipeline definition, session directory structure, artifact paths | coordinator (read at startup) |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality-gate criteria per phase, cross-phase consistency checks, message-type mapping | coordinator (Phase 8 acceptance), xingbu (QA acceptance) |
## Session Directory
```
.workflow/.team/<session-id>/
├── plan/
│   ├── zhongshu-plan.md       # Execution plan drafted by 中书省
│   └── dispatch-plan.md       # Six Ministries task list from 尚书省
├── review/
│   └── menxia-review.md       # 门下省 review report (with multi-CLI conclusions)
├── artifacts/
│   ├── gongbu-output.md       # 工部 output
│   ├── xingbu-report.md       # 刑部 test report
│   └── ...                    # Other ministry outputs
├── kanban/
│   └── state.json             # Kanban state snapshot
└── wisdom/
    └── contributions/         # Knowledge captured by each worker
```
## Spawn Template
The coordinator spawns workers with this template:
```javascript
Agent({
subagent_type: "team-worker",
name: "<role>",
team_name: "<team_name>",
prompt: `role: <role>
role_spec: .claude/skills/team-edict/role-specs/<role>.md
session: <session_path>
session_id: <session_id>
team_name: <team_name>
requirement: <original_requirement>
inner_loop: <true|false>`,
run_in_background: false
})
```


@@ -0,0 +1,56 @@
---
role: bingbu
prefix: OPS
inner_loop: true
discuss_rounds: []
message_types:
success: ops_complete
progress: ops_progress
error: error
---
# 兵部 — 基础设施与运维
基础设施运维、部署发布、CI/CD、性能监控、安全防御。
## Phase 2: 任务加载
**看板上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="bingbu",
type="state_update", data={state:"Doing", current_step:"兵部开始执行:<运维任务>"})
```
1. 读取当前任务(OPS-* task description)
2. 读取 `<session_path>/plan/dispatch-plan.md` 获取任务令
## Phase 3: 运维执行
**进度上报(每步必须)**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="bingbu",
type="ops_progress", data={current:"正在执行:<步骤>", plan:"<步骤1>✅|<步骤2>🔄|<步骤3>"})
```
**执行策略**:
| 任务类型 | 方法 | CLI 工具 |
|----------|------|---------|
| 部署脚本/CI配置 | 直接 Write/Edit | inline |
| 复杂基础设施分析 | CLI 分析 | gemini analysis |
| 性能问题诊断 | CLI 分析 | gemini --rule analysis-analyze-performance |
| 安全配置审查 | CLI 分析 | gemini --rule analysis-assess-security-risks |
## Phase 4: 产出上报
**写入** `<session_path>/artifacts/bingbu-output.md`
**看板流转 + SendMessage**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="bingbu", to="coordinator",
type="task_handoff", data={from_role:"bingbu", to_role:"coordinator",
remark:"✅ 完成:<运维产出摘要>"})
SendMessage({type:"message", recipient:"coordinator",
content:`ops_complete: task=<task_id>, artifact=artifacts/bingbu-output.md`,
summary:"兵部运维任务完成"})
```


@@ -0,0 +1,86 @@
---
role: gongbu
prefix: IMPL
inner_loop: true
discuss_rounds: []
message_types:
success: impl_complete
progress: impl_progress
error: error
---
# 工部 — 工程实现
负责功能开发、架构设计、代码实现、重构优化。
## Phase 2: 任务加载
**看板上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="gongbu",
type="state_update", data={state:"Doing", current_step:"工部开始执行:<任务内容>"})
```
1. 读取当前任务(IMPL-* task description)
2. 读取 `<session_path>/plan/dispatch-plan.md` 获取任务令详情
3. 读取 `<session_path>/plan/zhongshu-plan.md` 获取验收标准
**后端选择**:
| 条件 | 后端 | 调用方式 |
|------|------|---------|
| 复杂多文件变更 / 架构级改动 | gemini | `ccw cli --tool gemini --mode write` |
| 中等复杂度 | codex | `ccw cli --tool codex --mode write` |
| 简单单文件修改 | 直接 Edit/Write | inline |
## Phase 3: 代码实现
**进度上报(每步必须)**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="gongbu",
type="impl_progress", data={current:"正在执行:<当前步骤>",
plan:"<步骤1>✅|<步骤2>🔄|<步骤3>"})
```
**实现流程**:
1. 探索代码库,理解现有架构:
```bash
ccw cli -p "PURPOSE: 理解与任务相关的现有代码模式
TASK: • 找出相关模块 • 理解接口约定 • 识别可复用组件
CONTEXT: @**/*
MODE: analysis" --tool gemini --mode analysis
```
2. 按任务令实现功能CLI write 或 inline
3. 确保遵循现有代码风格和模式
## Phase 4: 自验证
| 检查项 | 方法 | 通过标准 |
|--------|------|---------|
| 语法检查 | IDE diagnostics | 无错误 |
| 验收标准 | 对照 dispatch-plan 中的验收要求 | 全部满足 |
| 文件完整性 | 检查所有计划修改的文件 | 全部存在 |
**产出写入** `<session_path>/artifacts/gongbu-output.md`:
```
# 工部产出报告
## 实现概述 / 修改文件 / 关键决策 / 验收自查
```
**看板流转 + SendMessage**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="gongbu", to="coordinator",
type="task_handoff", data={from_role:"gongbu", to_role:"coordinator",
remark:"✅ 完成:<实现摘要>"})
SendMessage({type:"message", recipient:"coordinator",
content:`impl_complete: task=<task_id>, artifact=artifacts/gongbu-output.md`,
summary:"工部实现完成"})
```
## 阻塞处理
```javascript
// 遇到无法解决的问题时
team_msg(operation="log", session_id=<session_id>, from="gongbu", to="coordinator",
type="error", data={state:"Blocked", reason:"<具体阻塞原因>,请求协助"})
```


@@ -0,0 +1,57 @@
---
role: hubu
prefix: DATA
inner_loop: true
discuss_rounds: []
message_types:
success: data_complete
progress: data_progress
error: error
---
# 户部 — 数据与资源管理
数据分析、统计汇总、成本分析、资源管理、报表生成。
## Phase 2: 任务加载
**看板上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="hubu",
type="state_update", data={state:"Doing", current_step:"户部开始执行:<数据任务>"})
```
1. 读取当前任务(DATA-* task description)
2. 读取 `<session_path>/plan/dispatch-plan.md` 获取任务令
## Phase 3: 数据分析执行
**进度上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="hubu",
type="data_progress", data={current:"正在执行:<步骤>", plan:"<步骤1>✅|<步骤2>🔄|<步骤3>"})
```
**执行策略**:
```bash
# 数据探索和分析
ccw cli -p "PURPOSE: <具体数据分析目标>
TASK: • 数据采集 • 清洗处理 • 统计分析 • 可视化/报表
CONTEXT: @**/*
MODE: analysis
EXPECTED: 结构化分析报告 + 关键指标" --tool gemini --mode analysis
```
## Phase 4: 产出上报
**写入** `<session_path>/artifacts/hubu-output.md`
**看板流转 + SendMessage**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="hubu", to="coordinator",
type="task_handoff", data={from_role:"hubu", to_role:"coordinator",
remark:"✅ 完成:<数据产出摘要>"})
SendMessage({type:"message", recipient:"coordinator",
content:`data_complete: task=<task_id>, artifact=artifacts/hubu-output.md`,
summary:"户部数据任务完成"})
```


@@ -0,0 +1,64 @@
---
role: libu-hr
prefix: HR
inner_loop: false
discuss_rounds: []
message_types:
success: hr_complete
progress: hr_progress
error: error
---
# 吏部 — 人事与能力管理
Agent管理、技能培训、考核评估、协作规范制定。
## Phase 2: 任务加载
**看板上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="libu-hr",
type="state_update", data={state:"Doing", current_step:"吏部开始执行:<人事任务>"})
```
1. 读取当前任务(HR-* task description)
2. 读取 `<session_path>/plan/dispatch-plan.md` 获取任务令
## Phase 3: 人事任务执行
**进度上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="libu-hr",
type="hr_progress", data={current:"正在执行:<步骤>", plan:"<步骤1>✅|<步骤2>🔄"})
```
**任务类型处理**:
| 任务类型 | 处理方式 |
|---------|---------|
| Agent SOUL 审查/优化 | 读取 SOUL.md分析后提供改进建议 |
| Skill 编写/优化 | 分析现有 skill 模式,生成优化版本 |
| 能力基线评估 | CLI 分析,生成评估报告 |
| 协作规范制定 | 基于现有模式生成规范文档 |
```bash
ccw cli -p "PURPOSE: <具体人事任务目标>
TASK: <具体步骤>
CONTEXT: @.claude/agents/**/* @.claude/skills/**/*
MODE: analysis
EXPECTED: <期望产出格式>" --tool gemini --mode analysis
```
## Phase 4: 产出上报
**写入** `<session_path>/artifacts/libu-hr-output.md`
**看板流转 + SendMessage**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="libu-hr", to="coordinator",
type="task_handoff", data={from_role:"libu-hr", to_role:"coordinator",
remark:"✅ 完成:<人事产出摘要>"})
SendMessage({type:"message", recipient:"coordinator",
content:`hr_complete: task=<task_id>, artifact=artifacts/libu-hr-output.md`,
summary:"吏部人事任务完成"})
```


@@ -0,0 +1,56 @@
---
role: libu
prefix: DOC
inner_loop: true
discuss_rounds: []
message_types:
success: doc_complete
progress: doc_progress
error: error
---
# 礼部 — 文档与规范
文档撰写、规范制定、UI/UX文案、对外沟通、API文档、Release Notes。
## Phase 2: 任务加载
**看板上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="libu",
type="state_update", data={state:"Doing", current_step:"礼部开始执行:<文档任务>"})
```
1. 读取当前任务(DOC-* task description)
2. 读取相关代码/实现产出(通常依赖工部产出)
3. 读取 `<session_path>/plan/dispatch-plan.md` 获取输出要求
## Phase 3: 文档生成
**进度上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="libu",
type="doc_progress", data={current:"正在撰写:<文档章节>", plan:"<章节1>✅|<章节2>🔄|<章节3>"})
```
**执行策略**:
| 文档类型 | 方法 |
|---------|------|
| README / API文档 | 读取代码后直接 Write |
| 复杂规范/指南 | `ccw cli --tool gemini --mode write` |
| 多语言翻译 | `ccw cli --tool qwen --mode write` |
## Phase 4: 产出上报
**写入** `<session_path>/artifacts/libu-output.md`
**看板流转 + SendMessage**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="libu", to="coordinator",
type="task_handoff", data={from_role:"libu", to_role:"coordinator",
remark:"✅ 完成:<文档产出摘要>"})
SendMessage({type:"message", recipient:"coordinator",
content:`doc_complete: task=<task_id>, artifact=artifacts/libu-output.md`,
summary:"礼部文档任务完成"})
```


@@ -0,0 +1,139 @@
---
role: menxia
prefix: REVIEW
inner_loop: false
discuss_rounds: []
message_types:
success: review_result
error: error
---
# 门下省 — 多维审议
从四个维度并行审议中书省方案,输出准奏/封驳结论。**核心特性:多 CLI 并行分析**。
## Phase 2: 接旨 + 方案加载
**看板上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="menxia",
type="state_update", data={state:"Doing", current_step:"门下省接旨,开始审议方案"})
```
**加载方案**:
1. 从 prompt 中提取 `plan_file` 路径(由 coordinator 传入)
2. `Read(plan_file)` 获取中书省方案全文
3. 若 plan_file 未指定,默认读取 `<session_path>/plan/zhongshu-plan.md`
**进度上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="menxia",
type="impl_progress", data={current:"方案加载完成,启动多维并行审议",
plan:"方案加载✅|可行性审查🔄|完整性审查🔄|风险评估🔄|资源评估🔄|综合结论"})
```
## Phase 3: 多 CLI 并行审议
**四维并行分析**(同时启动,不等待单个完成):
### 维度1 — 可行性审查 (gemini)
```bash
ccw cli -p "PURPOSE: 审查以下方案的技术可行性;成功标准=每个技术路径均有可实现依据
TASK: • 验证技术路径是否可实现 • 检查所需依赖是否已具备 • 评估技术风险
MODE: analysis
CONTEXT: @**/*
EXPECTED: 可行性结论(通过/有条件通过/不可行)+ 具体问题列表
CONSTRAINTS: 只关注技术可行性,不评估工作量
---
方案内容:
<plan_content>" --tool gemini --mode analysis --rule analysis-review-architecture
```
### 维度2 — 完整性审查 (qwen)
```bash
ccw cli -p "PURPOSE: 审查方案是否覆盖所有需求,识别遗漏;成功标准=每个需求点有对应子任务
TASK: • 逐条对比原始需求与子任务清单 • 识别未覆盖的需求 • 检查验收标准是否可量化
MODE: analysis
CONTEXT: @**/*
EXPECTED: 完整性结论(完整/有缺失)+ 遗漏清单
CONSTRAINTS: 只关注需求覆盖度,不评估实现方式
---
原始需求:<requirement>
方案子任务:<subtasks_section>" --tool qwen --mode analysis
```
### 维度3 — 风险评估 (gemini, 第二次调用)
```bash
ccw cli -p "PURPOSE: 识别方案中的潜在故障点和风险;成功标准=每个高风险点有对应缓解措施
TASK: • 识别技术风险点 • 检查是否有回滚方案 • 评估依赖失败的影响
MODE: analysis
EXPECTED: 风险矩阵(风险项/概率/影响/缓解措施)
---
方案内容:
<plan_content>" --tool gemini --mode analysis --rule analysis-assess-security-risks
```
### 维度4 — 资源评估 (codex)
```bash
ccw cli -p "PURPOSE: 评估各部门工作量分配是否合理;成功标准=工作量与各部门专长匹配
TASK: • 检查子任务与部门专长的匹配度 • 评估工作量是否均衡 • 识别超负荷或空置部门
MODE: analysis
EXPECTED: 资源分配评估表 + 调整建议
CONSTRAINTS: 只关注工作量合理性和部门匹配度
---
方案子任务:<subtasks_section>" --tool codex --mode analysis
```
**执行策略**: 四个维度设计为并行审议;若运行环境不支持并行发起 CLI 调用,则按上述顺序逐个执行,每个同步等待结果后再启动下一个。
## Phase 4: 综合结论 + 上报
**综合审议结果**:
| 维度 | 结论权重 | 否决条件 |
|------|---------|---------|
| 可行性 | 30% | 不可行 → 直接封驳 |
| 完整性 | 30% | 重大遗漏(核心需求未覆盖) → 封驳 |
| 风险 | 25% | 高风险无缓解措施 → 封驳 |
| 资源 | 15% | 部门严重错配 → 附带条件准奏 |
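综合逻辑可以概括为:先查否决条件,再按权重计分。下面是一个示意实现(80% 的准奏阈值借用质量门的通过线,属于本文档未明示的假设):

```javascript
// 四维审议综合:任一维度触发否决条件即封驳;否则按权重计分决定准奏/封驳
const DIMENSIONS = [
  { name: "可行性", weight: 0.30 },
  { name: "完整性", weight: 0.30 },
  { name: "风险", weight: 0.25 },
  { name: "资源", weight: 0.15 },
];

function aggregateReview(results) {
  // results 形如 { 可行性: { pass: true, veto: false }, ... }
  for (const d of DIMENSIONS) {
    if (results[d.name].veto) return { verdict: "封驳", score: 0 };
  }
  const score = DIMENSIONS.reduce(
    (sum, d) => sum + d.weight * (results[d.name].pass ? 1 : 0), 0);
  return { verdict: score >= 0.8 ? "准奏" : "封驳", score };
}
```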
**写入审议报告** `<session_path>/review/menxia-review.md`:
```markdown
# 门下省审议报告
## 审议结论:[准奏 ✅ / 封驳 ❌]
## 四维审议摘要
| 维度 | 结论 | 关键发现 |
|------|------|---------|
| 可行性 | 通过/不通过 | <要点> |
| 完整性 | 完整/有缺失 | <遗漏项> |
| 风险 | 可控/高风险 | <风险项> |
| 资源 | 合理/需调整 | <建议> |
## 封驳意见(若封驳)
<具体需要修改的问题,逐条列出>
## 附带条件(若有条件准奏)
<建议中书省在执行中注意的事项>
```
**进度上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="menxia",
type="impl_progress", data={current:"审议完成,结论:<准奏/封驳>",
plan:"方案加载✅|可行性审查✅|完整性审查✅|风险评估✅|资源评估✅|综合结论✅"})
```
**看板流转 + SendMessage 回调**:
```javascript
// 流转上报
team_msg(operation="log", session_id=<session_id>, from="menxia", to="coordinator",
type="task_handoff", data={from_role:"menxia", to_role:"coordinator",
remark:"<准奏✅/封驳❌>:审议报告见 review/menxia-review.md"})
// SendMessage 回调
SendMessage({type:"message", recipient:"coordinator",
content:`review_result: approved=<true/false>, round=<N>, report=review/menxia-review.md`,
summary:"门下省审议完成"})
```


@@ -0,0 +1,105 @@
---
role: shangshu
prefix: DISPATCH
inner_loop: false
discuss_rounds: []
message_types:
success: dispatch_ready
error: error
---
# 尚书省 — 执行调度
分析准奏方案,按部门职责拆解子任务,生成六部执行调度清单。
## Phase 2: 接旨 + 方案加载
**看板上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="shangshu",
type="state_update", data={state:"Doing", current_step:"尚书省接令,分析准奏方案,准备调度六部"})
```
**加载方案**:
1. 读取 `<session_path>/plan/zhongshu-plan.md`(准奏方案)
2. 读取 `<session_path>/review/menxia-review.md`(审议报告,含附带条件)
3. 解析子任务清单和验收标准
**进度上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="shangshu",
type="impl_progress", data={current:"方案解析完成,开始路由分析",
plan:"方案加载✅|路由分析🔄|任务分解|生成调度令|输出清单"})
```
## Phase 3: 路由分析 + 任务分解
**六部路由规则**:
| 关键词信号 | 目标部门 | agent role |
|-----------|---------|------------|
| 功能开发、架构设计、代码实现、重构、API、接口 | 工部 | gongbu |
| 部署、CI/CD、基础设施、容器、性能监控、安全防御 | 兵部 | bingbu |
| 数据分析、统计、成本、报表、资源管理、度量 | 户部 | hubu |
| 文档、README、UI文案、规范、对外沟通、翻译 | 礼部 | libu |
| 测试、QA、Bug定位、代码审查、合规审计 | 刑部 | xingbu |
| Agent管理、培训、技能优化、考核、知识库 | 吏部 | libu-hr |
**对每个子任务**:
1. 提取关键词,匹配目标部门
2. 若跨部门(如"实现+测试"),拆分为独立子任务
3. 分析依赖关系(哪些必须串行,哪些可并行)
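关键词路由可以写成一个简单的匹配函数(关键词表摘自上表;命中多个部门即视为跨部门任务需拆分,无信号时兜底默认工部为假设):

```javascript
// 六部路由:按关键词信号匹配目标部门,可能命中多个(跨部门任务)
const ROUTES = [
  { dept: "gongbu", keywords: ["功能开发", "架构", "代码", "重构", "API", "接口", "实现"] },
  { dept: "bingbu", keywords: ["部署", "CI/CD", "基础设施", "容器", "性能监控", "安全防御"] },
  { dept: "hubu", keywords: ["数据分析", "统计", "成本", "报表", "资源管理"] },
  { dept: "libu", keywords: ["文档", "README", "UI文案", "规范", "对外沟通", "翻译"] },
  { dept: "xingbu", keywords: ["测试", "QA", "Bug", "审查", "合规审计"] },
  { dept: "libu-hr", keywords: ["Agent管理", "培训", "技能优化", "考核"] },
];

function routeTask(description) {
  const hits = ROUTES
    .filter(r => r.keywords.some(k => description.includes(k)))
    .map(r => r.dept);
  return hits.length ? hits : ["gongbu"]; // 无信号时兜底给工部(假设)
}
```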
**进度上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="shangshu",
type="impl_progress", data={current:"路由分析完成,生成六部调度令",
plan:"方案加载✅|路由分析✅|任务分解✅|生成调度令🔄|输出清单"})
```
## Phase 4: 生成调度清单 + 上报
**写入调度清单** `<session_path>/plan/dispatch-plan.md`:
```markdown
# 尚书省调度清单
## 调度概览
- 总子任务数: N
- 涉及部门: <部门列表>
- 预计并行批次: M 批
## 调度令
### 第1批(无依赖,并行执行)
#### 工部任务令 (IMPL-001)
- **任务**: <具体任务描述>
- **输出要求**: <格式/验收标准>
- **参考文件**: <如有>
#### 礼部任务令 (DOC-001)
- **任务**: <具体任务描述>
- **输出要求**: <格式/验收标准>
### 第2批(依赖第1批,串行)
#### 刑部任务令 (QA-001)
- **任务**: 验收工部产出,执行测试
- **输出要求**: 测试报告 + 通过/不通过结论
- **前置条件**: IMPL-001 完成
## 汇总验收标准
<综合所有部门产出的最终验收指标>
## 附带条件(来自门下省审议)
<门下省要求注意的事项>
```
**看板流转 + SendMessage 回调**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="shangshu", to="coordinator",
type="task_handoff", data={from_role:"shangshu", to_role:"coordinator",
remark:"✅ 调度清单生成完毕,共<N>个子任务分配给<M>个部门"})
SendMessage({type:"message", recipient:"coordinator",
content:`dispatch_ready: plan=plan/dispatch-plan.md, departments=[<dept_list>], batches=<N>`,
summary:"尚书省调度清单就绪"})
```


@@ -0,0 +1,85 @@
---
role: xingbu
prefix: QA
inner_loop: true
discuss_rounds: []
message_types:
success: qa_complete
progress: qa_progress
error: error
fix: fix_required
---
# 刑部 — 质量保障
代码审查、测试验收、Bug定位、合规审计。
## Phase 2: 任务加载
**看板上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="xingbu",
type="state_update", data={state:"Doing", current_step:"刑部开始执行:<QA任务>"})
```
1. 读取当前任务(QA-* task description)
2. 读取 `<session_path>/plan/dispatch-plan.md` 获取验收标准
3. 读取 `.claude/skills/team-edict/specs/quality-gates.md` 获取质量门标准
4. 读取被测部门(通常为工部)的产出报告
## Phase 3: 质量审查
**进度上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="xingbu",
type="qa_progress", data={current:"正在执行:<审查步骤>",
plan:"<步骤1>✅|<步骤2>🔄|<步骤3>"})
```
**多 CLI 并行审查**(按任务类型选择):
代码审查:
```bash
ccw cli --tool codex --mode review
```
测试执行:
```bash
# 检测测试框架并运行
ccw cli -p "PURPOSE: 执行测试套件并分析结果
TASK: • 识别测试框架 • 运行所有相关测试 • 分析失败原因
CONTEXT: @**/*.test.* @**/*.spec.*
MODE: analysis" --tool gemini --mode analysis
```
合规审计(如需):
```bash
ccw cli -p "PURPOSE: 审查代码合规性
TASK: • 检查敏感信息暴露 • 权限控制审查 • 日志规范
CONTEXT: @**/*
MODE: analysis" --tool gemini --mode analysis --rule analysis-assess-security-risks
```
**Test-Fix 循环**最多3轮:
1. 运行测试 -> 分析结果
2. 通过率 >= 95% -> 退出(成功)
3. 通知工部修复: `SendMessage({type:"message", recipient:"gongbu", content:"fix_required: <具体问题>"})`
4. 等待工部修复 callback -> 重新测试
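该循环的控制骨架大致如下(示意实现;`runTests` / `notifyFix` 为假设的占位函数,实际分别对应 CLI 测试执行与对工部的 SendMessage):

```javascript
// Test-Fix 循环:最多 maxRounds 轮,通过率 >= 95% 即成功退出
function testFixLoop(runTests, notifyFix, maxRounds = 3) {
  for (let round = 1; round <= maxRounds; round++) {
    const { passRate, failures } = runTests(round);
    if (passRate >= 0.95) return { passed: true, rounds: round };
    if (round < maxRounds) notifyFix(failures); // 通知工部修复后进入下一轮
  }
  return { passed: false, rounds: maxRounds };
}
```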
## Phase 4: 审查报告
**写入** `<session_path>/artifacts/xingbu-report.md`:
```
# 刑部质量报告
## 审查结论 (通过/不通过) / 测试结果 / Bug清单 / 合规状态
```
**看板流转 + SendMessage**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="xingbu", to="coordinator",
type="task_handoff", data={from_role:"xingbu", to_role:"coordinator",
remark:"✅ 完成:质量审查<通过/不通过>,见 xingbu-report.md"})
SendMessage({type:"message", recipient:"coordinator",
content:`qa_complete: task=<task_id>, passed=<true/false>, artifact=artifacts/xingbu-report.md`,
summary:"刑部质量审查完成"})
```


@@ -0,0 +1,116 @@
---
role: zhongshu
prefix: PLAN
inner_loop: false
discuss_rounds: []
message_types:
success: plan_ready
error: error
---
# 中书省 — 规划起草
分析旨意,起草结构化执行方案,提交门下省审议。
## Phase 2: 接旨 + 上下文加载
**看板上报(必须立即执行)**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="zhongshu",
type="state_update", data={state:"Doing", current_step:"中书省接旨,开始分析任务"})
```
**加载上下文**:
1. 从 task description 提取 `session_path``requirement`
2. 若存在历史方案(封驳重来):读取 `<session_path>/review/menxia-review.md` 获取封驳意见
3. 执行代码库探索(如涉及代码任务):
```bash
ccw cli -p "PURPOSE: 理解当前代码库结构,为任务规划提供上下文
TASK: • 识别相关模块 • 理解现有架构 • 找出关键文件
CONTEXT: @**/*
EXPECTED: 关键文件列表 + 架构概述 + 依赖关系
MODE: analysis" --tool gemini --mode analysis
```
**进度上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="zhongshu",
type="impl_progress", data={current:"完成上下文分析,开始起草方案",
plan:"上下文分析✅|方案起草🔄|子任务分解|输出方案"})
```
## Phase 3: 起草执行方案
**方案结构**(写入 `<session_path>/plan/zhongshu-plan.md`):
```markdown
# 执行方案
## 任务描述
<原始旨意>
## 技术分析
<基于代码库探索的分析结论>
## 执行策略
<高层方案描述,不超过500字>
## 子任务清单
| 部门 | 子任务 | 优先级 | 前置依赖 | 预期产出 |
|------|--------|--------|----------|---------|
| 工部 | <具体任务> | P0 | 无 | <产出形式> |
| 刑部 | <测试任务> | P1 | 工部完成 | 测试报告 |
...
## 验收标准
<可量化的成功指标>
## 风险点
<潜在问题和建议回滚方案>
```
**起草原则**:
| 维度 | 要求 |
|------|------|
| 技术可行性 | 方案必须基于实际代码库现状 |
| 完整性 | 覆盖所有需求点,无遗漏 |
| 颗粒度 | 子任务可被具体部门直接执行 |
| 风险 | 每个高风险点有回滚方案 |
**进度上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="zhongshu",
type="impl_progress", data={current:"方案起草完成,准备提交审议",
plan:"上下文分析✅|方案起草✅|子任务分解✅|输出方案🔄"})
```
## Phase 4: 输出 + 上报
1. 确认方案文件已写入 `<session_path>/plan/zhongshu-plan.md`
2. **看板流转上报**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="zhongshu", to="coordinator",
type="task_handoff", data={from_role:"zhongshu", to_role:"coordinator",
remark:"✅ 完成:执行方案已起草,含<N>个子任务,提交门下省审议"})
```
3. **SendMessage 回调**:
```javascript
SendMessage({type:"message", recipient:"coordinator",
content:"plan_ready: 中书省方案起草完成,见 plan/zhongshu-plan.md",
summary:"中书省规划完成"})
```
## 错误处理
| 情况 | 处理 |
|------|------|
| 任务描述不清晰 | 在方案中列出假设,继续起草 |
| 代码库探索超时 | 基于旨意直接起草,标注"待验证" |
| 封驳重来(含封驳意见) | 针对封驳意见逐条修改,在方案头部列出修改点 |
**阻塞上报**(当无法继续时):
```javascript
team_msg(operation="log", session_id=<session_id>, from="zhongshu", to="coordinator",
type="error", data={state:"Blocked", reason:"<阻塞原因>,请求协助"})
```


@@ -0,0 +1,254 @@
# Coordinator — 太子·接旨分拣
接收用户旨意,判断消息类型,驱动三省六部全流程。
## Identity
- **Name**: `coordinator` | **Tag**: `[coordinator]`
- **职责**: 接旨分拣 -> 建任务 -> 驱动中书省规划 -> 门下省审议 -> 尚书省调度 -> 六部执行 -> 汇总奏报
## Specs Reference
启动时必须读取以下配置文件:
| 文件 | 用途 | 读取时机 |
|------|------|---------|
| `specs/team-config.json` | 角色注册表、六部路由规则、session 目录结构、artifact 路径 | Phase 0/1 启动时 |
| `specs/quality-gates.md` | 各阶段质量门标准,用于验收判断 | Phase 8 汇总奏报时 |
```javascript
// Phase 0/1 启动时执行
Read(".claude/skills/team-edict/specs/team-config.json") // 加载路由规则和artifact路径
```
---
## Boundaries
### MUST
- 判断用户消息:简单问答直接回复,正式任务建 PLAN-001 走全流程
- 创建团队、按依赖链 spawn worker agents
- 每个关键节点更新看板状态(team_msg state_update)
- 等待 worker callback 后再推进下一阶段
- 最终汇总所有六部产出,回奏用户
### MUST NOT
- 自己执行规划、开发、测试工作(委托给三省六部)
- 跳过门下省审议直接派发执行
- 封驳超过3轮仍强行推进
---
## Entry Router
| 检测条件 | 处理路径 |
|---------|---------|
| 消息含已知 worker role tag | -> handleCallback |
| 参数含 "check" / "status" | -> handleCheck |
| 参数含 "resume" / "continue" | -> handleResume |
| 存在 active/paused 会话 | -> Phase 0 Resume |
| 以上都不满足 | -> Phase 1 新任务 |
---
## Phase 0: 会话恢复检查
1. 扫描 `.workflow/.team/EDT-*/team-session.json` 中 status=active/paused 的会话
2. 若找到:展示会话摘要,询问是否恢复
3. 恢复:加载会话上下文,跳转到上次中断的阶段
4. 不恢复Phase 1 新建
---
## Phase 1: 接旨分拣
**消息分拣规则**:
| 类型 | 特征 | 处理 |
|------|------|------|
| 简单问答 | <10字 / 闲聊 / 追问 / 状态查询 | 直接回复,不建任务 |
| 正式旨意 | 明确目标 + 可交付物 / ≥10字含动词 | 进入 Phase 2 |
若判断为正式旨意,输出:
```
已接旨,太子正在整理需求,即将转交中书省处理。
```
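分拣规则可以近似为一个启发式函数(示意实现;动词表为举例假设,实际判断由 coordinator 的语义理解完成):

```javascript
// 接旨分拣:长度 >= 10 字且含行动动词 -> 正式旨意,否则按简单问答直接回复
function classifyMessage(msg) {
  const actionVerbs = ["实现", "添加", "修复", "重构", "优化", "部署", "生成", "编写"];
  const hasVerb = actionVerbs.some(v => msg.includes(v));
  return msg.length >= 10 && hasVerb ? "正式旨意" : "简单问答";
}
```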
---
## Phase 2: 建队 + 初始化看板
1. **TeamCreate**: `team_name = "edict"` (或加时间戳区分)
2. **创建会话目录**: `.workflow/.team/EDT-<timestamp>/`
3. **创建初始看板状态**:
```javascript
team_msg(operation="log", session_id=<session_id>, from="coordinator",
type="state_update", data={
state: "Planning",
task_title: <提炼的任务标题>,
pipeline: "PLAN -> REVIEW -> DISPATCH -> 六部执行"
})
```
4. **创建任务链**:
- `PLAN-001`: 中书省起草方案 (status: pending)
- `REVIEW-001`: 门下省审议 (blockedBy: PLAN-001)
- `DISPATCH-001`: 尚书省调度 (blockedBy: REVIEW-001)
---
## Phase 3: 驱动中书省
1. 更新 PLAN-001 -> in_progress
2. **Spawn 中书省 worker**:
```javascript
Agent({
subagent_type: "team-worker",
name: "zhongshu",
team_name: <team_name>,
prompt: `role: zhongshu
role_spec: .claude/skills/team-edict/role-specs/zhongshu.md
session: <session_path>
session_id: <session_id>
team_name: <team_name>
requirement: <original_requirement>
inner_loop: false`,
run_in_background: false
})
```
3. 等待 SendMessage callback (type: plan_ready)
4. STOP — 等待中书省回调
---
## Phase 4: 接收规划 -> 驱动门下省审议
**当收到 zhongshu 的 plan_ready callback**:
1. 更新 PLAN-001 -> completed
2. 更新 REVIEW-001 -> in_progress
3. 记录流转:
```javascript
team_msg(operation="log", session_id=<session_id>, from="coordinator",
type="task_handoff", data={from_role:"zhongshu", to_role:"menxia", remark:"方案提交审议"})
```
4. **Spawn 门下省 worker** (参数含方案路径):
```javascript
Agent({
subagent_type: "team-worker",
name: "menxia",
team_name: <team_name>,
prompt: `role: menxia
role_spec: .claude/skills/team-edict/role-specs/menxia.md
session: <session_path>
session_id: <session_id>
team_name: <team_name>
requirement: <original_requirement>
plan_file: <session_path>/plan/zhongshu-plan.md
inner_loop: false`,
run_in_background: false
})
```
5. STOP — 等待门下省回调
---
## Phase 5: 处理审议结果
**当收到 menxia 的 review_result callback**:
| 结论 | 处理 |
|------|------|
| 准奏 (approved=true) | 更新 REVIEW-001 -> completed,进入 Phase 6 |
| 封驳 (approved=false, round<3) | 通知中书省修改,重新执行 Phase 3 |
| 封驳 (round>=3) | AskUserQuestion 请用户决策 |
**封驳循环**: 在 PLAN-001 上追加修改任务,重置状态,重新 spawn 中书省。
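审议结果的分支逻辑可以概括为下面的纯函数(示意;实际动作由 coordinator 执行对应的 spawn / AskUserQuestion):

```javascript
// 封驳循环控制:准奏则进入调度;封驳且未满 3 轮则重派中书省;满 3 轮交用户决策
function handleReviewResult(approved, round) {
  if (approved) return { action: "spawn_shangshu" };
  if (round < 3) return { action: "respawn_zhongshu", nextRound: round + 1 };
  return { action: "ask_user" };
}
```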
---
## Phase 6: 驱动尚书省调度
1. 更新 DISPATCH-001 -> in_progress
2. 记录流转 (menxia -> shangshu)
3. **Spawn 尚书省 worker**:
```javascript
Agent({
subagent_type: "team-worker",
name: "shangshu",
team_name: <team_name>,
prompt: `role: shangshu
role_spec: .claude/skills/team-edict/role-specs/shangshu.md
session: <session_path>
session_id: <session_id>
team_name: <team_name>
requirement: <original_requirement>
plan_file: <session_path>/plan/zhongshu-plan.md
inner_loop: false`,
run_in_background: false
})
```
4. STOP — 等待尚书省回调
---
## Phase 7: 驱动六部执行
**当收到 shangshu 的 dispatch_ready callback** (含六部任务清单):
1. 更新 DISPATCH-001 -> completed
2. 读取尚书省生成的 `<session_path>/plan/dispatch-plan.md`
3. 解析六部任务清单,按依赖关系建任务
4. **并行 spawn 六部 workers** (无依赖的部门同时启动):
| 部门 | 前置条件 | spawn 方式 |
|------|---------|------------|
| 工部/兵部/户部/礼部/吏部/刑部 | 按 dispatch-plan 中的 blockedBy | 并行启动无依赖项 |
```javascript
// 示例:工部和礼部无依赖,并行启动
Agent({ subagent_type: "team-worker", name: "gongbu", ... })
Agent({ subagent_type: "team-worker", name: "libu", ... })
```
5. 每个 spawn 后 STOP 等待 callback收到后 spawn 下一批
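"按依赖分批并行"本质上是对 blockedBy 做拓扑分层,可示意如下(任务结构为假设的最小形态):

```javascript
// 将任务按 blockedBy 依赖分层:同一批内可并行,批与批之间串行
function planBatches(tasks) {
  const done = new Set();
  const batches = [];
  let remaining = [...tasks];
  while (remaining.length > 0) {
    const ready = remaining.filter(t => t.blockedBy.every(dep => done.has(dep)));
    if (ready.length === 0) throw new Error("存在循环依赖或缺失的前置任务");
    batches.push(ready.map(t => t.id));
    ready.forEach(t => done.add(t.id));
    remaining = remaining.filter(t => !done.has(t.id));
  }
  return batches;
}
```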
---
## Phase 8: 汇总奏报
**当所有六部 worker 均完成**:
1. 收集 `<session_path>/artifacts/` 下所有产出
2. 生成汇总奏报 (最终回复):
```
## 奏报·任务完成
**任务**: <task_title>
**执行路径**: 中书省规划 -> 门下省准奏 -> 尚书省调度 -> 六部执行
### 各部产出
- 工部: <gongbu 产出摘要>
- 刑部: <xingbu 测试报告>
- ...
### 质量验收
<合并刑部的 QA 报告>
```
3. TeamDelete
4. 回复用户
---
## Callback 处理协议
| Sender | Message Type | 处理 |
|--------|-------------|------|
| zhongshu | plan_ready | -> Phase 4 (驱动门下省) |
| menxia | review_result | -> Phase 5 (处理审议) |
| shangshu | dispatch_ready | -> Phase 7 (驱动六部) |
| gongbu | impl_complete | -> 标记完成,检查是否全部完成 |
| bingbu | ops_complete | -> 标记完成,检查是否全部完成 |
| hubu | data_complete | -> 标记完成,检查是否全部完成 |
| libu | doc_complete | -> 标记完成,检查是否全部完成 |
| libu-hr | hr_complete | -> 标记完成,检查是否全部完成 |
| xingbu | qa_complete | -> 标记完成,检查是否全部完成 |
| 任意 | error (Blocked) | -> 记录阻塞,AskUserQuestion 或自动协调 |


@@ -0,0 +1,133 @@
# Quality Gates — team-edict
看板强制上报、审议质量、执行验收的分级质量门控标准。
## 质量阈值
| 门控 | 分数 | 动作 |
|------|------|------|
| **通过** | >= 80% | 继续下一阶段 |
| **警告** | 60-79% | 记录警告,谨慎推进 |
| **失败** | < 60% | 必须解决后才能继续 |
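阈值判定可直接映射为一个分级函数(阈值摘自上表):

```javascript
// 质量门分级:>= 80 通过,60-79 警告,< 60 失败
function gateAction(score) {
  if (score >= 80) return "pass"; // 继续下一阶段
  if (score >= 60) return "warn"; // 记录警告,谨慎推进
  return "fail"; // 必须解决后才能继续
}
```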
---
## 各阶段质量门
### Phase 1: 接旨分拣 (coordinator)
| 检查项 | 标准 | 严重性 |
|--------|------|--------|
| 任务分类正确 | 正式旨意/简单问答判断符合规则 | Error |
| 任务标题合规 | 10-30字中文概括,无路径/URL/系统元数据 | Error |
| Session 创建 | EDT-{slug}-{date} 格式,目录结构完整 | Error |
| 初始任务链 | PLAN/REVIEW/DISPATCH 任务创建,依赖正确 | Error |
### Phase 2: 中书省规划 (zhongshu)
| 检查项 | 标准 | 严重性 |
|--------|------|--------|
| 看板上报 | 接任务/进度/完成 三个时机均已上报 | Error |
| 方案文件存在 | `plan/zhongshu-plan.md` 已写入 | Error |
| 子任务清单完整 | 覆盖所有旨意要点,含部门分配 | Error |
| 验收标准可量化 | >= 2 条可验证的成功指标 | Warning |
| 风险点识别 | >= 1 条风险及回滚方案 | Warning |
### Phase 3: 门下省审议 (menxia)
| 检查项 | 标准 | 严重性 |
|--------|------|--------|
| 四维分析均完成 | 可行性/完整性/风险/资源均有结论 | Error |
| 多CLI全部执行 | gemini×2 + qwen + codex 均调用 | Error |
| 审议报告存在 | `review/menxia-review.md` 已写入 | Error |
| 结论明确 | 准奏✅ 或 封驳❌ + 具体理由 | Error |
| 封驳意见具体 | 逐条列出需修改问题(封驳时必须) | Error(封驳时) |
| 看板上报 | 接任务/进度/完成 三个时机均已上报 | Error |
### Phase 4: 尚书省调度 (shangshu)
| 检查项 | 标准 | 严重性 |
|--------|------|--------|
| 调度清单存在 | `plan/dispatch-plan.md` 已写入 | Error |
| 每个子任务有部门归属 | 100% 覆盖,无遗漏子任务 | Error |
| 依赖关系正确 | 串行依赖标注清晰,并行任务识别正确 | Error |
| 看板上报 | 接任务/进度/完成 三个时机均已上报 | Error |
### Phase 5: 六部执行 (gongbu/bingbu/hubu/libu/libu-hr/xingbu)
| 检查项 | 标准 | 严重性 |
|--------|------|--------|
| 看板上报完整 | 接任务/每步进度/完成/阻塞 均正确上报 | Error |
| 产出文件存在 | `artifacts/<dept>-output.md` 已写入 | Error |
| 验收标准满足 | 对照 dispatch-plan 中的要求逐条验证 | Error |
| 阻塞主动上报 | 无法继续时 state=Blocked + reason | Error(阻塞时) |
### 刑部专项: 质量验收
| 检查项 | 标准 | 严重性 |
|--------|------|--------|
| 测试通过率 | >= 95% | Error |
| code review | codex review 无 Critical 问题 | Error |
| test-fix 循环 | <= 3 轮 | Warning |
| QA 报告完整 | 通过/不通过结论 + 问题清单 | Error |
---
## 跨阶段一致性检查
### 封驳循环约束
| 检查 | 规则 |
|------|------|
| 封驳轮数 | coordinator 跟踪;超过3轮必须 AskUserQuestion |
| 修改覆盖度 | 每轮中书省修改必须回应门下省的所有封驳意见 |
| 方案版本 | zhongshu-plan.md 每轮包含"本轮修改点"摘要 |
### 消息类型一致性
| Sender | message_type | Coordinator 处理 |
|--------|-------------|-----------------|
| zhongshu | plan_ready | -> spawn menxia |
| menxia | review_result (approved=true) | -> spawn shangshu |
| menxia | review_result (approved=false) | -> respawn zhongshu (round++) |
| shangshu | dispatch_ready | -> spawn 六部 workers |
| 六部 | *_complete | -> 标记完成,检查全部完成 |
| 任意 | error (Blocked) | -> 记录,AskUserQuestion 或协调 |
### Task Prefix 唯一性
| Role | Prefix | 冲突检查 |
|------|--------|---------|
| zhongshu | PLAN | ✅ 唯一 |
| menxia | REVIEW | ✅ 唯一 |
| shangshu | DISPATCH | ✅ 唯一 |
| gongbu | IMPL | ✅ 唯一 |
| bingbu | OPS | ✅ 唯一 |
| hubu | DATA | ✅ 唯一 |
| libu | DOC | ✅ 唯一 |
| libu-hr | HR | ✅ 唯一 |
| xingbu | QA | ✅ 唯一 |
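唯一性可以在装载 team-config 时自动校验(示意实现;注册表内容摘自上表):

```javascript
// 校验 task prefix 唯一性,返回冲突列表(空数组表示通过)
const PREFIXES = {
  zhongshu: "PLAN", menxia: "REVIEW", shangshu: "DISPATCH",
  gongbu: "IMPL", bingbu: "OPS", hubu: "DATA",
  libu: "DOC", "libu-hr": "HR", xingbu: "QA",
};

function findPrefixConflicts(registry) {
  const seen = new Map();
  const conflicts = [];
  for (const [role, prefix] of Object.entries(registry)) {
    if (seen.has(prefix)) conflicts.push({ prefix, roles: [seen.get(prefix), role] });
    else seen.set(prefix, role);
  }
  return conflicts;
}
```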
---
## 问题分级
### Error必须修复
- 看板上报缺失(任一强制时机未上报)
- 产出文件未写入
- 封驳超过3轮未询问用户
- 阻塞状态未上报
- task prefix 冲突
### Warning应当修复
- 进度上报粒度不足(步骤描述过于笼统)
- 验收标准不可量化
- 风险点无回滚方案
### Info建议改进
- 产出报告缺乏详细摘要
- wisdom contributions 未记录
- 调度批次可进一步优化并行度


@@ -0,0 +1,180 @@
{
"version": "5.0.0",
"team_name": "team-edict",
"team_display_name": "Team Edict — 三省六部",
"description": "完整复刻 Edict 三省六部架构:太子接旨 -> 中书省规划 -> 门下省多CLI审议 -> 尚书省调度 -> 六部并行执行。强制看板状态上报,支持 Blocked 一等公民状态,全流程可观测。",
"architecture": "team-worker agent + role-specs + 串行审批链 + 多CLI并行审议",
"worker_agent": "team-worker",
"session_prefix": "EDT",
"roles": {
"coordinator": {
"alias": "太子",
"task_prefix": null,
"responsibility": "接旨分拣、驱动八阶段流程、封驳循环控制、六部并行调度、最终汇总奏报",
"message_types": ["plan_ready", "review_result", "dispatch_ready", "impl_complete", "ops_complete", "data_complete", "doc_complete", "hr_complete", "qa_complete", "error"]
},
"zhongshu": {
"alias": "中书省",
"task_prefix": "PLAN",
"role_spec": "role-specs/zhongshu.md",
"responsibility": "分析旨意、代码库探索gemini CLI、起草结构化执行方案",
"inner_loop": false,
"message_types": ["plan_ready", "error"]
},
"menxia": {
"alias": "门下省",
"task_prefix": "REVIEW",
"role_spec": "role-specs/menxia.md",
"responsibility": "四维并行审议gemini×2 + qwen + codex、输出准奏/封驳结论",
"inner_loop": false,
"multi_cli": {
"enabled": true,
"dimensions": [
{"name": "可行性", "tool": "gemini", "rule": "analysis-review-architecture"},
{"name": "完整性", "tool": "qwen"},
{"name": "风险评估", "tool": "gemini", "rule": "analysis-assess-security-risks"},
{"name": "资源评估", "tool": "codex"}
]
},
"message_types": ["review_result", "error"]
},
"shangshu": {
"alias": "尚书省",
"task_prefix": "DISPATCH",
"role_spec": "role-specs/shangshu.md",
"responsibility": "解析准奏方案、按六部路由规则拆解子任务、生成调度令清单",
"inner_loop": false,
"message_types": ["dispatch_ready", "error"]
},
"gongbu": {
"alias": "工部",
"task_prefix": "IMPL",
"role_spec": "role-specs/gongbu.md",
"responsibility": "功能开发、架构设计、代码实现、重构优化",
"inner_loop": true,
"message_types": ["impl_complete", "impl_progress", "error"]
},
"bingbu": {
"alias": "兵部",
"task_prefix": "OPS",
"role_spec": "role-specs/bingbu.md",
"responsibility": "基础设施运维、部署发布、CI/CD、性能监控、安全防御",
"inner_loop": true,
"message_types": ["ops_complete", "ops_progress", "error"]
},
"hubu": {
"alias": "户部",
"task_prefix": "DATA",
"role_spec": "role-specs/hubu.md",
"responsibility": "数据分析、统计汇总、成本分析、资源管理、报表生成",
"inner_loop": true,
"message_types": ["data_complete", "data_progress", "error"]
},
"libu": {
"alias": "礼部",
"task_prefix": "DOC",
"role_spec": "role-specs/libu.md",
"responsibility": "文档撰写、规范制定、UI/UX文案、API文档、对外沟通",
"inner_loop": true,
"message_types": ["doc_complete", "doc_progress", "error"]
},
"libu-hr": {
"alias": "吏部",
"task_prefix": "HR",
"role_spec": "role-specs/libu-hr.md",
"responsibility": "Agent管理、技能培训与优化、考核评估、协作规范制定",
"inner_loop": false,
"message_types": ["hr_complete", "error"]
},
"xingbu": {
"alias": "刑部",
"task_prefix": "QA",
"role_spec": "role-specs/xingbu.md",
"responsibility": "代码审查、测试验收、Bug定位修复、合规审计(test-fix 循环最多 3 轮)",
"inner_loop": true,
"message_types": ["qa_complete", "qa_progress", "fix_required", "error"]
}
},
"pipeline": {
"type": "cascade_with_parallel_execution",
"description": "串行审批链 + 六部按依赖并行执行",
"stages": [
{
"stage": 1,
"name": "规划",
"roles": ["zhongshu"],
"blockedBy": []
},
{
"stage": 2,
"name": "审议",
"roles": ["menxia"],
"blockedBy": ["zhongshu"],
"retry": {"max_rounds": 3, "on_reject": "respawn zhongshu with feedback"}
},
{
"stage": 3,
"name": "调度",
"roles": ["shangshu"],
"blockedBy": ["menxia"]
},
{
"stage": 4,
"name": "执行",
"roles": ["gongbu", "bingbu", "hubu", "libu", "libu-hr", "xingbu"],
"blockedBy": ["shangshu"],
"parallel": true,
"note": "实际并行度由 dispatch-plan.md 中的 blockedBy 决定"
}
],
"diagram": "PLAN-001 -> REVIEW-001 -> DISPATCH-001 -> [IMPL/OPS/DATA/DOC/HR/QA 按需并行]"
},
"kanban_protocol": {
"description": "所有 worker 强制遵守的看板状态上报规范",
"state_machine": ["Pending", "Doing", "Blocked", "Done"],
"mandatory_events": [
{"event": "接任务时", "type": "state_update", "data": "state=Doing + current_step"},
{"event": "每个关键步骤", "type": "impl_progress", "data": "current + plan(步骤1✅|步骤2🔄|步骤3)"},
{"event": "完成时", "type": "task_handoff", "data": "from_role -> coordinator + remark"},
{"event": "阻塞时", "type": "error", "data": "state=Blocked + reason"}
],
"implementation": "team_msg(operation='log', session_id=<session_id>, from=<role>, ...)"
},
"routing_rules": {
"description": "尚书省六部路由规则",
"rules": [
{"keywords": ["功能开发", "架构", "代码", "重构", "API", "接口", "实现"], "department": "gongbu"},
{"keywords": ["部署", "CI/CD", "基础设施", "容器", "性能监控", "安全防御"], "department": "bingbu"},
{"keywords": ["数据分析", "统计", "成本", "报表", "资源管理"], "department": "hubu"},
{"keywords": ["文档", "README", "UI文案", "规范", "API文档", "对外沟通"], "department": "libu"},
{"keywords": ["测试", "QA", "Bug", "审查", "合规审计"], "department": "xingbu"},
{"keywords": ["Agent管理", "培训", "技能优化", "考核"], "department": "libu-hr"}
]
},
"session_dirs": {
"base": ".workflow/.team/EDT-{slug}-{YYYY-MM-DD}/",
"plan": "plan/",
"review": "review/",
"artifacts": "artifacts/",
"kanban": "kanban/",
"wisdom": "wisdom/contributions/",
"messages": ".msg/"
},
"artifacts": {
"zhongshu": "plan/zhongshu-plan.md",
"menxia": "review/menxia-review.md",
"shangshu": "plan/dispatch-plan.md",
"gongbu": "artifacts/gongbu-output.md",
"bingbu": "artifacts/bingbu-output.md",
"hubu": "artifacts/hubu-output.md",
"libu": "artifacts/libu-output.md",
"libu-hr": "artifacts/libu-hr-output.md",
"xingbu": "artifacts/xingbu-report.md"
}
}


@@ -6,83 +6,78 @@ allowed-tools: Skill, Agent, AskUserQuestion, TodoWrite, Read, Write, Edit, Bash
# Workflow-Lite-Execute
Complete execution engine: multi-mode input, task grouping, batch execution, code review, and development index update.
Execution engine for workflow-lite-plan handoff and standalone task execution.
---
## Overview
Flexible task execution command supporting three input modes: in-memory plan (from lite-plan), direct prompt description, or file content. Handles execution orchestration, progress tracking, and optional code review.
**Core capabilities:**
- Multi-mode input (in-memory plan, prompt description, or file path)
- Execution orchestration (Agent or Codex) with full context
- Live progress tracking via TodoWrite at execution call level
- Optional code review with selected tool (Gemini, Agent, or custom)
- Context continuity across multiple executions
- Intelligent format detection (Enhanced Task JSON vs plain text)
## Usage
### Input
```
<input> Task description string, or path to file (required)
```
### Flags
| Flag | Description |
|------|-------------|
| `--in-memory` | Mode 1: Use executionContext from workflow-lite-plan handoff (via `Skill({ skill: "workflow-lite-execute", args: "--in-memory" })`) |
Mode 1 (In-Memory) is triggered by `--in-memory` flag or when `executionContext` global variable is available.
## Input Modes
### Mode 1: In-Memory Plan
**Trigger**: `--in-memory` flag, or `executionContext` global variable set by workflow-lite-plan direct handoff after LP-Phase 4 approval
**Input Source**: `executionContext` global variable set by workflow-lite-plan
**Content**: Complete execution context (see Data Structures section)
**Behavior**:
- Skip execution method and code review selection (already chosen in LP-Phase 4)
- Directly proceed to execution with full context
- All planning artifacts available (exploration, clarifications, plan)
> **Note**: LP-Phase 4 is the single confirmation gate. Mode 1 invocation means user already approved — no further prompts.
### Mode 2: Prompt Description
**Trigger**: User calls with a task description string
**Input**: Simple task description (e.g., "Add unit tests for auth module")
**Behavior**:
- Store prompt as `originalUserInput`
- Create simple execution plan from prompt
- Run `selectExecutionOptions()` to choose execution method and code review tool
- Proceed to execution with `originalUserInput` included
### Mode 3: File Content
**Trigger**: User calls with file path (ends with .md/.json/.txt)
```javascript
fileContent = Read(filePath)
try {
jsonData = JSON.parse(fileContent)
// plan.json detection: two-layer format with task_ids[]
if (jsonData.summary && jsonData.approach && jsonData.task_ids) {
planObject = jsonData
originalUserInput = jsonData.summary
isPlanJson = true
const planDir = filePath.replace(/[/\\][^/\\]+$/, '')
planObject._loadedTasks = loadTaskFiles(planDir, jsonData.task_ids)
} else {
originalUserInput = fileContent
isPlanJson = false
}
} catch {
originalUserInput = fileContent
isPlanJson = false
}
```
- `isPlanJson === true`: Use `planObject` directly → run `selectExecutionOptions()`
- `isPlanJson === false`: Treat as prompt (same as Mode 2)
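The three-way dispatch above can be sketched as a small function; `detectMode` and its signature are illustrative, not the skill's actual internals:

```javascript
// Hypothetical sketch of the mode detection described above.
// Mode 1: in-memory handoff; Mode 3: file path; Mode 2: everything else.
function detectMode(input, hasExecutionContext) {
  if (hasExecutionContext || input.trim() === '--in-memory') return 1
  if (/\.(md|json|txt)$/.test(input.trim())) return 3
  return 2
}
```

Mode 3 then falls back to Mode 2 behavior whenever the file content fails plan.json detection.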
### User Selection (Mode 2/3 shared)
```javascript
function selectExecutionOptions() {
// autoYes: set by -y flag (standalone only; Mode 1 never reaches here)
const autoYes = workflowPreferences?.autoYes ?? false
if (autoYes) {
return { execution_method: "Auto", code_review_tool: "Skip" }
}
return AskUserQuestion({
questions: [
{
question: "Select execution method:",
}
```
## Execution Process
```
Input Parsing:
└─ Decision (mode detection):
   ├─ executionContext exists → Mode 1: Load executionContext → Skip user selection
   ├─ Ends with .md/.json/.txt → Mode 3: Read file → Detect format
   │  ├─ Valid plan.json → Use planObject → User selects method + review
   │  └─ Not plan.json → Treat as prompt → User selects method + review
   └─ Other → Mode 2: Prompt description → User selects method + review
Execution:
├─ Step 1: Initialize result tracking (previousExecutionResults = [])
├─ Step 2: Task grouping & batch creation
│  ├─ Extract explicit depends_on (no file/keyword inference)
│  ├─ Group: independent tasks → per-executor parallel batches (one CLI per batch)
│  ├─ Group: dependent tasks → sequential phases (respect dependencies)
│  └─ Create TodoWrite list for batches
├─ Step 3: Launch execution & track progress (TodoWrite updates per batch)
├─ Step 4: Code review (if codeReviewTool ≠ "Skip")
└─ Step 5: Auto-sync project state
Output:
└─ Execution complete with results in previousExecutionResults[]
```
## Execution Steps
### Step 1: Initialize & Echo Strategy
```javascript
// Initialize result tracking
previousExecutionResults = []
// Mode 1: echo strategy for transparency
if (executionContext) {
console.log(`
Execution Strategy (from lite-plan):
Method: ${executionContext.executionMethod}
Review: ${executionContext.codeReviewTool}
Tasks: ${getTasks(executionContext.planObject).length}
${executionContext.executorAssignments ? ` Assignments: ${JSON.stringify(executionContext.executorAssignments)}` : ''}
`)
}
// Helper: load .task/*.json files (two-layer format)
function loadTaskFiles(planDir, taskIds) {
return taskIds.map(id => JSON.parse(Read(`${planDir}/.task/${id}.json`)))
}
function getTasks(planObject) {
return planObject._loadedTasks || []
}
```
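A minimal sketch of the two-layer loading above, with the `Read` tool mocked by an in-memory map (paths and task fields are invented for illustration):

```javascript
// Mock of the Read tool; the real skill reads .task/*.json from disk.
const files = {
  'plan/.task/T1.json': '{"id":"T1","title":"Add login"}',
  'plan/.task/T2.json': '{"id":"T2","title":"Add tests","depends_on":["T1"]}'
}
const Read = (path) => files[path]

// Same shape as the helper above: plan.json lists task_ids, each resolved
// to .task/<id>.json and cached on _loadedTasks.
function loadTaskFiles(planDir, taskIds) {
  return taskIds.map(id => JSON.parse(Read(`${planDir}/.task/${id}.json`)))
}

const plan = { summary: 'Auth work', task_ids: ['T1', 'T2'] }
plan._loadedTasks = loadTaskFiles('plan', plan.task_ids)
```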
### Step 2: Task Grouping & Batch Creation
**Dependency Analysis & Grouping Algorithm**:
```javascript
// Dependency extraction: explicit depends_on only (no file/keyword inference)
function extractDependencies(tasks) {
const taskIdToIndex = {}
tasks.forEach((t, i) => { taskIdToIndex[t.id] = i })
return tasks.map((task, i) => {
// Only use explicit depends_on from plan.json
const deps = (task.depends_on || [])
.map(depId => taskIdToIndex[depId])
.filter(idx => idx !== undefined && idx < i)
})
}
// Executor resolution: executorAssignments[taskId] > executionMethod > Auto fallback
function getTaskExecutor(task) {
const assignments = executionContext?.executorAssignments || {}
if (assignments[task.id]) return assignments[task.id].executor // 'gemini' | 'codex' | 'agent'
const method = executionContext?.executionMethod || 'Auto'
if (method === 'Agent') return 'agent'
if (method === 'Codex') return 'codex'
return planObject.complexity === 'Low' ? 'agent' : 'codex' // Auto fallback
}
// Group tasks by executor (core grouping component)
function groupTasksByExecutor(tasks) {
const groups = { gemini: [], codex: [], agent: [] }
tasks.forEach(task => { groups[getTaskExecutor(task)].push(task) })
return groups
}
// Batch creation: independent → per-executor parallel, dependent → sequential phases
function createExecutionCalls(tasks, executionMethod) {
const tasksWithDeps = extractDependencies(tasks)
const processed = new Set()
const calls = []
// Phase 1: Independent tasks → per-executor parallel batches
const independentTasks = tasksWithDeps.filter(t => t.dependencies.length === 0)
if (independentTasks.length > 0) {
const executorGroups = groupTasksByExecutor(independentTasks)
let parallelIndex = 1
for (const [executor, tasks] of Object.entries(executorGroups)) {
if (tasks.length === 0) continue
tasks.forEach(t => processed.add(t.taskIndex))
calls.push({
method: executionMethod, executor, executionType: "parallel",
groupId: `P${parallelIndex++}`,
taskSummary: tasks.map(t => t.title).join(' | '), tasks
})
}
}
// Phase 2+: Dependent tasks → respect dependency order
let sequentialIndex = 1
let remaining = tasksWithDeps.filter(t => !processed.has(t.taskIndex))
while (remaining.length > 0) {
let ready = remaining.filter(t => t.dependencies.every(d => processed.has(d)))
if (ready.length === 0) { console.warn('Circular dependency detected, forcing remaining'); ready = [...remaining] }
if (ready.length > 1) {
// Multiple ready tasks → per-executor batches (parallel within this phase)
const executorGroups = groupTasksByExecutor(ready)
for (const [executor, tasks] of Object.entries(executorGroups)) {
if (tasks.length === 0) continue
tasks.forEach(t => processed.add(t.taskIndex))
calls.push({
method: executionMethod, executor, executionType: "parallel",
groupId: `P${calls.length + 1}`,
taskSummary: tasks.map(t => t.title).join(' | '), tasks
})
}
} else {
// Single ready task → sequential batch
ready.forEach(t => processed.add(t.taskIndex))
calls.push({
method: executionMethod, executor: getTaskExecutor(ready[0]),
executionType: "sequential", groupId: `S${sequentialIndex++}`,
taskSummary: ready[0].title, tasks: ready
})
}
remaining = remaining.filter(t => !processed.has(t.taskIndex))
}
return calls
}
executionCalls = createExecutionCalls(getTasks(planObject), executionMethod).map(c => ({ ...c, id: `[${c.groupId}]` }))
TodoWrite({
todos: executionCalls.map((c, i) => ({
content: `${c.executionType === "parallel" ? "⚡" : `→ [${i+1}/${executionCalls.filter(x=>x.executionType==="sequential").length}]`} ${c.id} [${c.executor}] ${c.tasks.map(t=>t.id).join(', ')}`,
status: "pending",
activeForm: `Waiting: ${c.tasks.length} task(s) via ${c.executor}`
}))
})
```
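The grouping rules above reduce to a small, runnable loop. This is a simplified sketch (executor resolution collapsed to a plain field; task IDs are invented), not the skill's full implementation:

```javascript
// Phase loop: take tasks whose depends_on are satisfied, split the ready set
// per executor (one batch = one CLI/Agent call), repeat until none remain.
function createBatches(tasks) {
  const done = new Set()
  const batches = []
  let remaining = [...tasks]
  while (remaining.length > 0) {
    let ready = remaining.filter(t => (t.depends_on || []).every(d => done.has(d)))
    if (ready.length === 0) ready = [...remaining] // forced progress on circular deps
    const groups = {}
    for (const t of ready) (groups[t.executor] = groups[t.executor] || []).push(t)
    for (const [executor, group] of Object.entries(groups)) {
      batches.push({
        executor,
        type: ready.length > 1 ? 'parallel' : 'sequential',
        ids: group.map(t => t.id)
      })
      group.forEach(t => done.add(t.id))
    }
    remaining = remaining.filter(t => !done.has(t.id))
  }
  return batches
}

const batches = createBatches([
  { id: 'T1', executor: 'agent', depends_on: [] },
  { id: 'T2', executor: 'codex', depends_on: [] },
  { id: 'T3', executor: 'codex', depends_on: ['T1', 'T2'] }
])
// T1/T2 form two parallel per-executor batches; T3 runs after both complete
```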
### Step 3: Launch Execution & Track Progress
> **CHECKPOINT**: Verify Phase 2 execution protocol (Step 3-5) is in active memory. If only a summary remains, re-read `phases/02-lite-execute.md` now.
**Executor Resolution**: `getTaskExecutor()` and `groupTasksByExecutor()` defined in Step 2 (Task Grouping).
**Batch Routing** (by `batch.executor` field):
```javascript
function executeBatch(batch) {
const executor = batch.executor || getTaskExecutor(batch.tasks[0])
const sessionId = executionContext?.session?.id || 'standalone'
const fixedId = `${sessionId}-${batch.groupId}`
if (executor === 'agent') {
return Agent({ subagent_type: "code-developer", run_in_background: false,
description: batch.taskSummary, prompt: buildExecutionPrompt(batch) })
} else {
// CLI execution (codex/gemini): background with fixed ID
const tool = executor // 'codex' | 'gemini'
const mode = executor === 'gemini' ? 'analysis' : 'write'
const previousCliId = batch.resumeFromCliId || null
const cmd = previousCliId
? `ccw cli -p "${buildExecutionPrompt(batch)}" --tool ${tool} --mode ${mode} --id ${fixedId} --resume ${previousCliId}`
: `ccw cli -p "${buildExecutionPrompt(batch)}" --tool ${tool} --mode ${mode} --id ${fixedId}`
return Bash(cmd, { run_in_background: true })
// STOP - wait for task hook callback
}
}
```
**Parallel execution rules**:
- Each batch = one independent CLI instance or Agent call
- Parallel = multiple Bash(run_in_background=true) or multiple Agent() in single message
- Never merge independent tasks into one CLI prompt
- Agent: run_in_background=false, but multiple Agent() calls can be concurrent in single message
**Execution Flow**: Parallel batches concurrently → Sequential batches in order
```javascript
const parallel = executionCalls.filter(c => c.executionType === "parallel")
const sequential = executionCalls.filter(c => c.executionType === "sequential")
// Phase 1: All parallel batches (single message, multiple tool calls)
if (parallel.length > 0) {
TodoWrite({ todos: executionCalls.map(c => ({
status: c.executionType === "parallel" ? "in_progress" : "pending",
activeForm: c.executionType === "parallel" ? `Running [${c.executor}]: ${c.tasks.map(t=>t.id).join(', ')}` : `Blocked by parallel phase`
})) })
parallelResults = await Promise.all(parallel.map(c => executeBatch(c)))
previousExecutionResults.push(...parallelResults)
TodoWrite({ todos: executionCalls.map(c => ({
status: parallel.includes(c) ? "completed" : "pending",
activeForm: parallel.includes(c) ? `Done [${c.executor}]` : `Ready`
})) })
}
// Phase 2: Sequential batches one by one
for (const call of sequential) {
TodoWrite({ todos: executionCalls.map(c => ({
status: c === call ? "in_progress" : (c.status === "completed" ? "completed" : "pending"),
activeForm: c === call ? `Running [${c.executor}]: ${c.tasks.map(t=>t.id).join(', ')}` : undefined
})) })
result = await executeBatch(call)
previousExecutionResults.push(result)
TodoWrite({ todos: executionCalls.map(c => ({
status: sequential.indexOf(c) <= sequential.indexOf(call) ? "completed" : "pending"
})) })
}
```
**Resume on Failure**:
```javascript
if (bash_result.status === 'failed' || bash_result.status === 'timeout') {
// fixedId = `${sessionId}-${groupId}` (predictable, no auto-generated timestamps)
console.log(`Execution incomplete. Resume: ccw cli -p "Continue" --resume ${fixedId} --tool codex --mode write --id ${fixedId}-retry`)
batch.resumeFromCliId = fixedId
}
```
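Because the fixed ID is plain string concatenation, the retry command can be reconstructed without any history lookup; the session ID below is an example value:

```javascript
// Fixed ID pattern: ${sessionId}-${groupId}; retry runs append "-retry".
const sessionId = 'implement-auth-2025-12-13' // example session ID
const groupId = 'P1'
const fixedId = `${sessionId}-${groupId}`
const retryCmd = `ccw cli -p "Continue" --resume ${fixedId} --tool codex --mode write --id ${fixedId}-retry`
```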
Progress tracked at batch level. Icons: ⚡ parallel (concurrent), → sequential (one-by-one).
### Unified Task Prompt Builder
Each task is a self-contained checklist. Same template for Agent and CLI.
```javascript
function buildExecutionPrompt(batch) {
// Task template (6 parts: Files → Why → How → Reference → Risks → Done)
const formatTask = (t) => `
## ${t.title}
### Files
${(t.files || []).map(f => `- **${f.path}** → \`${f.target || ''}\`: ${f.change || (f.changes || []).join(', ') || ''}`).join('\n')}
${t.rationale ? `### Why this approach (Medium/High)
${t.rationale.chosen_approach}
${t.rationale.decision_factors?.length > 0 ? `Key factors: ${t.rationale.decision_factors.join(', ')}` : ''}
${t.rationale.tradeoffs ? `Tradeoffs: ${t.rationale.tradeoffs}` : ''}` : ''}
### How to do it
${t.description}
${t.implementation.map(step => `- ${step}`).join('\n')}
${t.code_skeleton ? `### Code skeleton (High)
${t.code_skeleton.interfaces?.length > 0 ? `**Interfaces**: ${t.code_skeleton.interfaces.map(i => `\`${i.name}\` - ${i.purpose}`).join(', ')}` : ''}
${t.code_skeleton.key_functions?.length > 0 ? `**Functions**: ${t.code_skeleton.key_functions.map(f => `\`${f.signature}\` - ${f.purpose}`).join(', ')}` : ''}
${t.code_skeleton.classes?.length > 0 ? `**Classes**: ${t.code_skeleton.classes.map(c => `\`${c.name}\` - ${c.purpose}`).join(', ')}` : ''}` : ''}
### Reference
- Pattern: ${t.reference?.pattern || 'N/A'}
- Files: ${t.reference?.files?.join(', ') || 'N/A'}
${t.reference?.examples ? `- Notes: ${t.reference.examples}` : ''}
${t.risks?.length > 0 ? `### Risk mitigations (High)
${t.risks.map(r => `- ${r.description} → **${r.mitigation}**`).join('\n')}` : ''}
### Done when
${(t.convergence?.criteria || []).map(c => `- [ ] ${c}`).join('\n')}
${(t.test?.success_metrics || []).length > 0 ? `**Success metrics**: ${t.test.success_metrics.join(', ')}` : ''}`
// Build prompt
const sections = []
if (originalUserInput) sections.push(`## Goal\n${originalUserInput}`)
sections.push(`## Tasks\n${batch.tasks.map(formatTask).join('\n\n---\n')}`)
// Context (reference only)
const context = []
if (previousExecutionResults.length > 0)
context.push(`### Previous Work\n${previousExecutionResults.map(r => `- ${r.tasksSummary}: ${r.status}`).join('\n')}`)
if (clarificationContext)
context.push(`### Clarifications\n${Object.entries(clarificationContext).map(([q, a]) => `- ${q}: ${a}`).join('\n')}`)
if (executionContext?.planObject?.data_flow?.diagram)
context.push(`### Data Flow\n${executionContext.planObject.data_flow.diagram}`)
if (executionContext?.session?.artifacts?.plan)
context.push(`### Artifacts\nPlan: ${executionContext.session.artifacts.plan}`)
// Project guidelines (user-defined constraints from /workflow:session:solidify)
// Loaded via: ccw spec load --category planning
context.push(`### Project Guidelines\n(Loaded via ccw spec load --category planning)`)
if (context.length > 0) sections.push(`## Context\n${context.join('\n\n')}`)
sections.push(`Complete each task according to its "Done when" checklist.`)
return sections.join('\n\n')
}
```
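A stripped-down version of the template for a Low-complexity task can help picture the rendered prompt. Only the sections every task carries are shown (Why/Code skeleton/Risks render for Medium/High only), and the sample task is invented:

```javascript
// Reduced task template: Files → How → Done, mirroring the field names above.
const formatTaskLite = (t) => [
  `## ${t.title}`,
  '### Files',
  ...t.files.map(f => `- **${f.path}** → \`${f.target}\`: ${f.change}`),
  '### How to do it',
  t.description,
  ...t.implementation.map(s => `- ${s}`),
  '### Done when',
  ...t.convergence.criteria.map(c => `- [ ] ${c}`)
].join('\n')

const prompt = formatTaskLite({
  title: 'Add login endpoint',
  files: [{ path: 'src/auth.ts', target: 'login()', change: 'add POST handler' }],
  description: 'Implement POST /login with JWT issuance',
  implementation: ['Validate request body', 'Issue signed JWT'],
  convergence: { criteria: ['POST /login returns 200 with a token'] }
})
```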
### Step 4: Code Review (Optional)
> **CHECKPOINT**: Verify Phase 2 review protocol is in active memory. If only a summary remains, re-read `phases/02-lite-execute.md` now.
**Skip Condition**: Only run if `codeReviewTool !== "Skip"`
**Review Focus**: Verify implementation against plan convergence criteria and test requirements
- Read plan.json + .task/*.json for task convergence criteria and test checklist
- Check each convergence criterion is fulfilled
- Verify success metrics from test field (Medium/High complexity)
- Run unit/integration tests specified in test field
- Validate code quality and identify issues
- Ensure alignment with planned approach and risk mitigations
**Review Criteria** (all tools use same standard):
- **Convergence Criteria**: Verify each criterion from task convergence.criteria
- **Test Checklist** (Medium/High): Check unit, integration, success_metrics from task test
- **Code Quality**: Analyze quality, identify issues, suggest improvements
- **Plan Alignment**: Validate implementation matches planned approach and risk mitigations
**Shared Prompt Template**:
```
PURPOSE: Code review for implemented changes against plan convergence criteria and test requirements
TASK: • Verify plan convergence criteria fulfillment • Check test requirements (unit, integration, success_metrics) • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence and risk mitigations
MODE: analysis
CONTEXT: @**/* @{plan.json} @{.task/*.json} [@{exploration.json}] | Memory: Review lite-execute changes against plan requirements including test checklist
EXPECTED: Quality assessment with: convergence criteria verification, test checklist validation, issue identification, recommendations. Explicitly check each convergence criterion and test item from .task/*.json.
CONSTRAINTS: Focus on plan convergence criteria, test requirements, and plan adherence | analysis=READ-ONLY
```
**Tool-Specific Execution** (apply shared prompt template above):
| Tool | Command | Notes |
|------|---------|-------|
| Agent Review | Current agent reads plan.json + applies review criteria directly | No CLI call |
| Gemini Review | `ccw cli -p "[template]" --tool gemini --mode analysis` | Recommended |
| Qwen Review | `ccw cli -p "[template]" --tool qwen --mode analysis` | Alternative |
| Codex Review (A) | `ccw cli -p "[template]" --tool codex --mode review` | With prompt, for complex reviews |
| Codex Review (B) | `ccw cli --tool codex --mode review --uncommitted` | No prompt, quick review |
> Codex: `-p` prompt and target flags (`--uncommitted`/`--base`/`--commit`) are **mutually exclusive**.
**Multi-Round Review**:
```javascript
const reviewId = `${sessionId}-review`
const reviewResult = Bash(`ccw cli -p "[template]" --tool gemini --mode analysis --id ${reviewId}`)
if (hasUnresolvedIssues(reviewResult)) {
Bash(`ccw cli -p "Clarify concerns" --resume ${reviewId} --tool gemini --mode analysis --id ${reviewId}-followup`)
}
```
**Artifact Substitution**: Replace `@{plan.json}``@${executionContext.session.artifacts.plan}`, `[@{exploration.json}]` → exploration files from artifacts (if exists).
### Step 5: Auto-Sync Project State
**Trigger**: After all executions complete (regardless of code review)
**Operation**: `/workflow:session:sync -y "{summary}"`
Summary priority: `originalUserInput``planObject.summary` → git log auto-infer.
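The fallback chain is an ordinary short-circuit pick; `pickSummary` and `gitLogInfer` are hypothetical names standing in for the skill's actual logic:

```javascript
// Priority: explicit user input > plan summary > inference from git log.
function pickSummary(originalUserInput, planObject, gitLogInfer) {
  return originalUserInput || planObject?.summary || gitLogInfer()
}
```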
### Step 6: Post-Completion Expansion
Ask user whether to expand into issues (test/enhance/refactor/doc). Selected items call `/issue:new "{summary} - {dimension}"`.
## Error Handling
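As an illustration of the table above, the dispatch could be sketched as follows. The error codes and the `err` shape are hypothetical names for this example; the real skill matches on runtime conditions, not these identifiers:

```javascript
// Hypothetical error dispatch mirroring the resolution table above
function resolveError(err) {
  switch (err.code) {
    case 'NO_CONTEXT':
      return { action: 'abort', message: 'No execution context found. Only available when called by lite-plan.' }
    case 'FILE_NOT_FOUND':
      return { action: 'abort', message: `File not found: ${err.path}. Check file path.` }
    case 'EMPTY_FILE':
      return { action: 'abort', message: `File is empty: ${err.path}. Provide task description.` }
    case 'MALFORMED_JSON':
      // Expected for non-JSON files: fall back to plain-text handling
      return { action: 'plain-text', message: 'Treating as plain text.' }
    case 'EXEC_FAILED':
    case 'EXEC_TIMEOUT':
      // The fixed ID enables a deterministic resume, no timestamp lookup
      return { action: 'resume', command: `ccw cli -p "Continue" --resume ${err.fixedId} --id ${err.fixedId}-retry` }
    default:
      return { action: 'abort', message: `Unhandled error: ${err.code}` }
  }
}
```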
## Data Structures
### executionContext (Input - Mode 1)
Passed from lite-plan via global variable:
```javascript
{
planObject: {
summary: string,
approach: string,
    task_ids: string[],    // Task IDs referencing .task/*.json files
    task_count: number,
    _loadedTasks: [...],   // populated at runtime from .task/*.json
estimated_time: string,
recommended_execution: string,
complexity: string
},
// Task file paths (populated for two-layer format)
taskFiles: [{id: string, path: string}] | null,
explorationsContext: {...} | null, // Multi-angle explorations
explorationAngles: string[], // List of exploration angles
explorationManifest: {...} | null, // Exploration manifest
clarificationContext: {...} | null,
  executionMethod: "Agent" | "Codex" | "Auto",  // global default
codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
originalUserInput: string,
  executorAssignments: {  // per-task override, takes priority over executionMethod
[taskId]: { executor: "gemini" | "codex" | "agent", reason: string }
},
// Session artifacts location (saved by lite-plan)
session: {
    id: string,       // {taskSlug}-{shortTimestamp}
    folder: string,   // .workflow/.lite-plan/{session-id}
artifacts: {
      explorations: [{angle, path}],    // exploration-{angle}.json paths
      explorations_manifest: string,    // explorations-manifest.json path
      plan: string                      // plan.json path (always present)
}
}
}
```
**Artifact Usage**:
- Artifact files contain detailed planning context
- Pass artifact paths to CLI tools and agents for enhanced context
- See execution options below for usage examples
### executionResult (Output)
Collected after each execution call completes:
```javascript
{
executionId: string, // e.g., "[Agent-1]", "[Codex-1]"
status: "completed" | "partial" | "failed",
  tasksSummary: string,       // brief description of tasks handled
  completionSummary: string,  // what was completed
  keyOutputs: string,         // files created/modified, key changes
  notes: string,              // important context for next execution
  fixedCliId: string | null   // fixed CLI ID (e.g., "implement-auth-2025-12-13-P1"); lookup via: ccw cli detail ${fixedCliId}
}
```
Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.
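A minimal sketch of this accumulation, assuming the field names above; the summary line format is an illustrative choice, not the skill's actual wording:

```javascript
// Accumulate executionResult objects and build context for the next execution
const previousExecutionResults = []

function recordResult(result) {
  previousExecutionResults.push(result)
}

function buildContextForNext() {
  // One summary line per prior execution, passed to the next CLI/agent call
  return previousExecutionResults
    .map(r => `${r.executionId} [${r.status}]: ${r.completionSummary}; outputs: ${r.keyOutputs}`)
    .join('\n')
}

recordResult({
  executionId: '[Agent-1]', status: 'completed',
  tasksSummary: 'TASK-001', completionSummary: 'Added endpoint',
  keyOutputs: 'src/api.ts', notes: '', fixedCliId: null
})
const context = buildContextForNext()
// → "[Agent-1] [completed]: Added endpoint; outputs: src/api.ts"
```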
**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.
**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
```bash
# Lookup previous execution
ccw cli detail ${fixedCliId}
# Resume with new fixed ID for retry
ccw cli -p "Continue from where we left off" --resume ${fixedCliId} --tool codex --mode write --id ${fixedCliId}-retry
```
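The same resume flow can be sketched in JavaScript. The command string mirrors the bash above; the status handling is an assumption for illustration:

```javascript
// Derive the predictable fixed ID and decide whether to resume
function fixedId(sessionId, groupId) {
  return `${sessionId}-${groupId}` // no auto-generated timestamp to look up
}

function resumeCommand(result) {
  if (result.status === 'completed') return null // nothing to resume
  return `ccw cli -p "Continue from where we left off" --resume ${result.fixedCliId} --tool codex --mode write --id ${result.fixedCliId}-retry`
}

const id = fixedId('implement-auth-2025-12-13', 'P1') // "implement-auth-2025-12-13-P1"
const cmd = resumeCommand({ status: 'partial', fixedCliId: id })
```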


# Workflow-Lite-Plan
Planning pipeline: explore → clarify → plan → confirm → handoff to lite-execute.
---
## Overview
Intelligent lightweight planning command with dynamic workflow adaptation based on task complexity. Focuses on planning phases (exploration, clarification, planning, confirmation) and delegates execution to workflow-lite-execute skill.
**Core capabilities:**
- Intelligent task analysis with automatic exploration detection
- Dynamic code exploration (cli-explore-agent) when codebase understanding needed
- Interactive clarification after exploration to gather missing information
- Adaptive planning: Low complexity → Direct Claude; Medium/High → cli-lite-planning-agent
- Two-step confirmation: plan display → multi-dimensional input collection
- Execution handoff with complete context to workflow-lite-execute
## Context Isolation
> **CRITICAL**: If invoked from analyze-with-file (via "执行任务"), the analyze-with-file session is **COMPLETE** and all its phase instructions are FINISHED and MUST NOT be referenced. Only follow LP-Phase 1-5 defined in THIS document. Phase numbers are INDEPENDENT of any prior workflow.
## Input
<task-description> Task description or path to .md file (required)
```
### Flags
| Flag | Description |
|------|-------------|
| `-y`, `--yes` | Auto mode: Skip clarification, auto-confirm plan, auto-select execution, skip review (entire plan+execute workflow) |
| `--force-explore` | Force code exploration even when task has prior analysis |
**Note**: Workflow preferences (`autoYes`, `forceExplore`) must be initialized at skill start. If not provided by caller, skill will prompt user for workflow mode selection.
## Output Artifacts
**Output Directory**: `.workflow/.lite-plan/{task-slug}-{YYYY-MM-DD}/`
**Agent Usage**: Low → Direct Claude planning (no agent) | Medium/High → `cli-lite-planning-agent`
**Schema Reference**: `~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json`
## Auto Mode Defaults
When `workflowPreferences.autoYes === true` (entire plan+execute workflow):
- **Clarification**: Skipped | **Plan Confirmation**: Allow & Execute | **Execution**: Auto | **Review**: Skip

Auto mode authorizes the complete plan-and-execute workflow with a single confirmation. No further prompts.
## Phase Summary
| Phase | Core Action | Output |
|-------|-------------|--------|
| LP-0 | Initialize workflowPreferences | autoYes, forceExplore |
| LP-1 | Complexity assessment → parallel cli-explore-agents (1-4) | exploration-*.json + manifest |
| LP-2 | Aggregate + dedup clarification_needs → multi-round AskUserQuestion | clarificationContext (in-memory) |
| LP-3 | Low: Direct Claude planning / Medium+High: cli-lite-planning-agent | plan.json + .task/TASK-*.json |
| LP-4 | Display plan → AskUserQuestion (Confirm + Execution + Review) | userSelection |
| LP-5 | Build executionContext → Skill("lite-execute") | handoff (Mode 1) |
## Implementation
### LP-Phase 0: Workflow Preferences Initialization
```javascript
if (typeof workflowPreferences === 'undefined' || workflowPreferences === null) {
workflowPreferences = {
autoYes: false, // false: show LP-Phase 2/4 prompts | true (-y): skip all prompts
forceExplore: false
}
}
```
### LP-Phase 1: Intelligent Multi-Angle Exploration
**Session Setup** (MANDATORY):
```javascript
// Helper: Get UTC+8 (China Standard Time) ISO string
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const taskSlug = task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)  // e.g., "2025-11-29"
const sessionId = `${taskSlug}-${dateStr}`           // e.g., "implement-jwt-refresh-2025-11-29"
const sessionFolder = `.workflow/.lite-plan/${sessionId}`
bash(`mkdir -p ${sessionFolder} && test -d ${sessionFolder} && echo "SUCCESS: ${sessionFolder}" || echo "FAILED: ${sessionFolder}"`)
```
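The session-ID scheme above is directly runnable; a quick check of the expected format (the sample task text is illustrative):

```javascript
// Runnable check of the slug + UTC+8 date scheme
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

function makeSessionId(taskDescription, nowIso = getUtc8ISOString()) {
  // Lowercase, collapse non-alphanumeric runs to '-', cap at 40 chars
  const taskSlug = taskDescription.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
  return `${taskSlug}-${nowIso.substring(0, 10)}`
}

const sessionId = makeSessionId('Implement JWT refresh', '2025-11-29T12:00:00.000Z')
// → "implement-jwt-refresh-2025-11-29"
```

Note that trailing punctuation in the description would leave a double hyphen in the slug; descriptions ending in a word avoid this.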
**TodoWrite Template** (initial state — subsequent phases update status progressively):
```javascript
// Pattern: set phases[0..N-1].status="completed", phases[N].status="in_progress"
// Only full block shown here; subsequent updates follow same structure with status changes
TodoWrite({ todos: [
{ content: `LP-Phase 1: Exploration [${complexity}] ${selectedAngles.length} angles`, status: "in_progress", activeForm: `Exploring: ${selectedAngles.join(', ')}` },
{ content: "LP-Phase 2: Clarification", status: "pending" },
{ content: `LP-Phase 3: Planning [${planningStrategy}]`, status: "pending" },
{ content: "LP-Phase 4: Confirmation", status: "pending" },
{ content: "LP-Phase 5: Execution", status: "pending" }
]})
```
**Exploration Decision Logic**:
```javascript
// Check if task description already contains prior analysis context (from analyze-with-file)
const hasPriorAnalysis = /##\s*Prior Analysis/i.test(task_description)
needsExploration = workflowPreferences.forceExplore ? true
task.modifies_existing_code)
if (!needsExploration) {
  // Skip exploration — analysis context already in task description (or not needed)
  // manifest absent; LP-Phase 3 loads with safe fallback
proceed_to_next_phase()
}
```
**Context Protection**: File reading >=50k chars → force `needsExploration=true` (delegate to cli-explore-agent)
**Complexity Assessment**:
```javascript
// Assesses scope (systems affected), depth (surface vs architectural), risk, and dependencies
const complexity = analyzeTaskComplexity(task_description)
// Returns: 'Low' | 'Medium' | 'High'
// 'Low': single file, single function, zero cross-module impact (fix typo, rename var, adjust constant)
// 'Medium': multiple files OR integration point OR new pattern (add endpoint, implement feature, refactor)
// 'High': cross-module, architectural, systemic (new subsystem, migration, security overhaul)
// Default bias: uncertain between Low/Medium → choose Medium
// Angle assignment based on task type (orchestrator decides, not agent)
const ANGLE_PRESETS = {
architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
security: ['security', 'auth-patterns', 'dataflow', 'validation'],
function selectAngles(taskDescription, count) {
const text = taskDescription.toLowerCase()
  let preset = 'feature' // default
if (/refactor|architect|restructure|modular/.test(text)) preset = 'architecture'
else if (/security|auth|permission|access/.test(text)) preset = 'security'
else if (/performance|slow|optimi|cache/.test(text)) preset = 'performance'
else if (/fix|bug|error|issue|broken/.test(text)) preset = 'bugfix'
return ANGLE_PRESETS[preset].slice(0, count)
}
const selectedAngles = selectAngles(task_description, complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1))
// Planning strategy: agent for anything beyond a trivial single-file change
// - hasPriorAnalysis → always agent | multi-angle exploration → agent | Medium/High → agent
// Direct Claude planning ONLY for: Low + no prior analysis + single angle
const planningStrategy = (
  complexity === 'Low' && !hasPriorAnalysis && selectedAngles.length <= 1
) ? 'Direct Claude Planning' : 'cli-lite-planning-agent'
console.log(`Exploration Plan: ${complexity} | ${selectedAngles.join(', ')} | ${planningStrategy}`)
```
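The angle-selection logic above is runnable as-is; the `performance`, `bugfix`, and `feature` presets below are illustrative assumptions, since the excerpt elides them:

```javascript
// Runnable sketch of preset-based angle selection
const ANGLE_PRESETS = {
  architecture: ['architecture', 'dependencies', 'modularity', 'integration-points'],
  security: ['security', 'auth-patterns', 'dataflow', 'validation'],
  performance: ['performance', 'hotpaths', 'caching', 'dataflow'],   // assumed
  bugfix: ['bugfix', 'dataflow', 'dependencies', 'tests'],           // assumed
  feature: ['feature', 'integration-points', 'patterns', 'dependencies'] // assumed
}

function selectAngles(taskDescription, count) {
  const text = taskDescription.toLowerCase()
  let preset = 'feature' // default
  if (/refactor|architect|restructure|modular/.test(text)) preset = 'architecture'
  else if (/security|auth|permission|access/.test(text)) preset = 'security'
  else if (/performance|slow|optimi|cache/.test(text)) preset = 'performance'
  else if (/fix|bug|error|issue|broken/.test(text)) preset = 'bugfix'
  return ANGLE_PRESETS[preset].slice(0, count)
}

const complexity = 'Medium'
const angles = selectAngles('Add JWT auth to the API', complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1))
// → ['security', 'auth-patterns', 'dataflow']
```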
**Launch Parallel Explorations** (orchestrator assigns angle to each agent):

**CRITICAL**: MUST NOT use `run_in_background: true` — exploration results are REQUIRED before planning.
```javascript
// Launch agents with pre-assigned angles
const explorationTasks = selectedAngles.map((angle, index) =>
Task(
subagent_type="cli-explore-agent",
    run_in_background=false, // MANDATORY: must wait for results
description=`Explore: ${angle}`,
prompt=`
## Task Objective
Execute **${angle}** exploration for task planning context. Analyze codebase from this specific angle to discover relevant structure, patterns, and constraints.
## Output Location
**Session Folder**: ${sessionFolder}
**Output File**: ${sessionFolder}/exploration-${angle}.json
- **Task Description**: ${task_description}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
## Agent Initialization
cli-explore-agent autonomously handles: project structure discovery, schema loading, project context loading (project-tech.json, specs/*.md), and keyword search. These steps execute automatically.
## Exploration Strategy (${angle} focus)
**Step 1: Structural Scan** (Bash)
- Identify ${angle}-specific clarification needs
## Expected Output
**Schema**: explore-json-schema.json (auto-loaded by agent during initialization)
**Required Fields** (all ${angle} focused):
- All fields scoped to ${angle} perspective
- Ensure rationale is specific and >10 chars (not generic)
- Include file:line locations in integration_points
## Success Criteria
- [ ] get_modules_by_depth.sh executed
- [ ] At least 3 relevant files with specific rationale (>10 chars) + role classification
- [ ] Patterns are actionable (code examples, not generic advice)
- [ ] Integration points include file:line locations
- [ ] Constraints are project-specific to ${angle}
- [ ] JSON follows schema; clarification_needs includes options + recommended
- [ ] Files with relevance >= 0.7 have key_code array + topic_relation
## Execution
**Write**: \`${sessionFolder}/exploration-${angle}.json\`
`
)
)
// Execute all exploration tasks in parallel
```
**Auto-discover & Build Manifest**:
```javascript
// After explorations complete, auto-discover all exploration-*.json files
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n').filter(f => f.trim())
// Read metadata to build manifest
const explorationManifest = {
session_id: sessionId,
task_description: task_description,
timestamp: getUtc8ISOString(),
complexity: complexity,
exploration_count: explorationFiles.length,
explorations: explorationFiles.map(file => {
const data = JSON.parse(Read(file))
return {
angle: data._metadata.exploration_angle,
file: path.basename(file),
path: file,
index: data._metadata.exploration_index
}
}
Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationManifest, null, 2))
console.log(`Exploration complete: ${explorationManifest.explorations.map(e => e.angle).join(', ')}`)
```
**TodoWrite**: Phase 1 → completed, Phase 2 → in_progress (same structure as the template above)
**Output**: `exploration-{angle}.json` (1-4 files based on complexity) + `explorations-manifest.json`
---
**Skip if**: No exploration or `clarification_needs` is empty across all explorations
**CRITICAL**: AskUserQuestion limits max 4 questions per call. **MUST execute multiple rounds** to exhaust all clarification needs; do NOT stop at round 1.
**Aggregate clarification needs from all exploration angles**:
```javascript
// Load manifest and all exploration files (may not exist if exploration was skipped)
const manifest = file_exists(`${sessionFolder}/explorations-manifest.json`)
? JSON.parse(Read(`${sessionFolder}/explorations-manifest.json`))
: { exploration_count: 0, explorations: [] }
data: JSON.parse(Read(exp.path))
}))
// Aggregate from all explorations
const allClarifications = []
explorations.forEach(exp => {
if (exp.data.clarification_needs?.length > 0) {
exp.data.clarification_needs.forEach(need => {
      allClarifications.push({ ...need, source_angle: exp.angle })
})
}
})
// Intelligent dedup: merge similar intent across angles, combine options
const dedupedClarifications = intelligentMerge(allClarifications)
if (workflowPreferences.autoYes) {
  // Auto mode: skip clarification phase
console.log(`[Auto] Skipping ${dedupedClarifications.length} clarification questions`)
console.log(`Proceeding to planning with exploration results...`)
// Continue to LP-Phase 3
} else if (dedupedClarifications.length > 0) {
// Interactive mode: Multi-round clarification
const BATCH_SIZE = 4
const totalRounds = Math.ceil(dedupedClarifications.length / BATCH_SIZE)
for (let i = 0; i < dedupedClarifications.length; i += BATCH_SIZE) {
const batch = dedupedClarifications.slice(i, i + BATCH_SIZE)
const currentRound = Math.floor(i / BATCH_SIZE) + 1
console.log(`### Clarification Round ${currentRound}/${totalRounds}`)
AskUserQuestion({
}))
}))
})
// Store batch responses in clarificationContext before next round
}
}
### LP-Phase 3: Planning
**Planning Strategy Selection** (based on LP-Phase 1 complexity):
**IMPORTANT**: LP-Phase 3 is **planning only** — NO code execution. All execution happens in LP-Phase 5 via lite-execute.

**Executor Assignment** (Claude assigns after plan generation):
```javascript
// Assignment priority (high to low):
// 1. User explicit: "use gemini to analyze..." → gemini, "codex implement..." → codex
// 2. Default → agent
const executorAssignments = {} // { taskId: { executor: 'gemini'|'codex'|'agent', reason: string } }
// Load tasks from .task/ directory for executor assignment
const taskFiles = Glob(`${sessionFolder}/.task/TASK-*.json`)
taskFiles.forEach(taskPath => {
const task = JSON.parse(Read(taskPath))
  // Claude assigns an executor to each task via semantic analysis of the rules above
executorAssignments[task.id] = { executor: '...', reason: '...' }
})
```
**Low Complexity** Direct planning by Claude:
```javascript
// Step 1: Read schema
const schema = Bash(`cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json`)
// Step 2: Read exploration files if available
const manifest = file_exists(`${sessionFolder}/explorations-manifest.json`)
? JSON.parse(Read(`${sessionFolder}/explorations-manifest.json`))
: { explorations: [] }
manifest.explorations.forEach(exp => {
  console.log(`\n### Exploration: ${exp.angle}\n${Read(exp.path)}`)
})
// Step 3: Generate task objects directly (Claude, no agent) — tasks MUST incorporate exploration insights
// Field names: convergence.criteria (not acceptance), files[].change (not modification_points), test (not verification)
const tasks = [
{
    id: "TASK-001", title: "...", description: "...", depends_on: [],
convergence: { criteria: ["..."] },
files: [{ path: "...", change: "..." }],
    implementation: ["..."], test: "..."
  }
  // ... more tasks
]
// Step 4: Write task files to .task/ directory
const taskDir = `${sessionFolder}/.task`
Bash(`mkdir -p "${taskDir}"`)
tasks.forEach(task => Write(`${taskDir}/${task.id}.json`, JSON.stringify(task, null, 2)))
// Step 5: Generate plan overview (NO embedded tasks[])
const plan = {
  summary: "...", approach: "...",
  task_ids: tasks.map(t => t.id), task_count: tasks.length,
  complexity: "Low", estimated_time: "...", recommended_execution: "Agent",
  _metadata: { timestamp: getUtc8ISOString(), source: "direct-planning", planning_mode: "direct", plan_type: "feature" }
}
// Step 6: Write plan overview to session folder
Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))
// MUST continue to LP-Phase 4 (Confirmation) — DO NOT execute code here
```
**Medium/High Complexity** Invoke cli-lite-planning-agent:
```javascript
Task(
Generate implementation plan and write plan.json.
## Output Location
**Session Folder**: ${sessionFolder}
**Output Files**:
- ${sessionFolder}/planning-context.md (evidence + understanding)
- ${sessionFolder}/plan.json (plan overview NO embedded tasks[])
- ${sessionFolder}/.task/TASK-*.json (independent task files, one per task)
## Schema Reference
Execute: cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json
## Project Context (MANDATORY)
Execute: ccw spec load --category planning
This loads technology stack, architecture, key components, and user-defined constraints/conventions.
**CRITICAL**: All generated tasks MUST comply with constraints in specs/*.md
## Task Description
${manifest.explorations.length > 0
? manifest.explorations.map(exp => `### Exploration: ${exp.angle} (${exp.file})
Path: ${exp.path}
Read this file for detailed ${exp.angle} analysis.`).join('\n\n') + `
Total: ${manifest.exploration_count} | Angles: ${manifest.explorations.map(e => e.angle).join(', ')}
Manifest: ${sessionFolder}/explorations-manifest.json`
: `No exploration files. Task Description contains "## Prior Analysis" — use as primary planning context.`}
## User Clarifications
${JSON.stringify(clarificationContext) || "None"}
${complexity}
## Requirements
Generate plan.json and .task/*.json following the schema obtained above. Key constraints:
- _metadata.exploration_angles: ${JSON.stringify(manifest.explorations.map(e => e.angle))}
- Two-layer output: plan.json (task_ids[], NO tasks[]) + .task/TASK-*.json
- Follow plan-overview-base-schema.json for plan.json, task-schema.json for .task/*.json
- Field names: files[].change (not modification_points), convergence.criteria (not acceptance)
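As an illustration, the two-layer structure could look like this minimal sketch (all titles, paths, and field values are hypothetical; the authoritative shapes come from plan-overview-base-schema.json and task-schema.json):

```javascript
// Hypothetical example of the two-layer output — values are illustrative only.
const plan = {                       // plan.json: overview, NO tasks[] array
  summary: "Add JWT refresh flow",
  approach: "Extend existing auth middleware",
  complexity: "Medium",
  task_ids: ["TASK-001", "TASK-002"] // references into .task/
}

const task001 = {                    // .task/TASK-001.json: independent task file
  id: "TASK-001",
  title: "Add refresh endpoint",
  files: [
    // "change" (not "modification_points") per the field-name constraint
    { path: "src/auth/refresh.ts", change: "add POST /auth/refresh handler" }
  ],
  // "convergence.criteria" (not "acceptance") per the field-name constraint
  convergence: { criteria: ["refresh returns a new token pair"] },
  depends_on: []                     // independent, so it can run in parallel
}
```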
## Task Grouping Rules
1. **Group by feature**: All changes for one feature = one task (even if 3-5 files)
2. **Group by context**: Related functional changes can be grouped together
3. **Minimize agent count**: Group simple unrelated tasks to reduce overhead
4. **Avoid file-per-task**: Do NOT create separate tasks for each file
5. **Substantial tasks**: Each task = 15-60 minutes of work
6. **True dependencies only**: depends_on only when Task B needs Task A's output
7. **Prefer parallel**: Most tasks should be independent
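The rules above can be sanity-checked mechanically. A sketch of such a check (lintGrouping is a hypothetical helper, not part of this skill):

```javascript
// Hypothetical lint for the grouping rules: flags file-per-task plans and
// depends_on entries that reference tasks which do not exist.
function lintGrouping(tasks) {
  const issues = []
  // Rule 4: many tasks, each touching exactly one file, suggests file-per-task
  const singleFile = tasks.filter(t => (t.files || []).length === 1)
  if (tasks.length > 3 && singleFile.length === tasks.length) {
    issues.push("file-per-task: group related changes by feature instead")
  }
  // Rule 6: every dependency must point at a real task
  for (const t of tasks) {
    for (const dep of t.depends_on || []) {
      if (!tasks.some(o => o.id === dep)) {
        issues.push(`${t.id}: depends_on references unknown task ${dep}`)
      }
    }
  }
  return issues
}
```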
## Execution
1. Read schema → 2. ccw spec load → 3. Read ALL exploration files → 4. Synthesize + generate
5. Write: planning-context.md, .task/TASK-*.json, plan.json (task_ids[], NO tasks[])
6. Return brief completion summary
`
)
```
**Output**: `${sessionFolder}/plan.json`
// TodoWrite: Phase 3 completed, Phase 4 → in_progress
---
### LP-Phase 4: Task Confirmation & Execution Selection
**Display Plan**:
```javascript
const plan = JSON.parse(Read(`${sessionFolder}/plan.json`))
const tasks = (plan.task_ids || []).map(id => JSON.parse(Read(`${sessionFolder}/.task/${id}.json`)))
console.log(`
## Implementation Plan
**Summary**: ${plan.summary}
**Approach**: ${plan.approach}
**Tasks** (${tasks.length}):
${tasks.map((t, i) => `${i+1}. ${t.title} (${t.scope || t.files?.[0]?.path || ''})`).join('\n')}
**Complexity**: ${plan.complexity} | **Time**: ${plan.estimated_time} | **Recommended**: ${plan.recommended_execution}
`)
```
**Collect Confirmation**:
```javascript
let userSelection
if (workflowPreferences.autoYes) {
console.log(`[Auto] Allow & Execute | Auto | Skip`)
userSelection = { confirmation: "Allow", execution_method: "Auto", code_review_tool: "Skip" }
} else {
// "Other" in Execution allows specifying CLI tools from ~/.claude/cli-tools.json
userSelection = AskUserQuestion({
questions: [
{
question: `Confirm plan and authorize execution? (${tasks.length} tasks, ${plan.complexity})`,
header: "Confirm",
multiSelect: false,
options: [
{ label: "Allow", description: "Proceed as-is" },
{ label: "Allow & Execute", description: "Approve plan and begin execution immediately (no further prompts)" },
{ label: "Modify", description: "Adjust before execution" },
{ label: "Cancel", description: "Abort workflow" }
      ]
    }
    // ...additional questions (execution_method, code_review_tool) follow the same pattern
  ]
})
}
```
// TodoWrite: Phase 4 → completed `[${userSelection.execution_method} + ${userSelection.code_review_tool}]`, Phase 5 → in_progress
---
### LP-Phase 5: Handoff to Execution
**CRITICAL**: lite-plan NEVER executes code directly. ALL execution goes through lite-execute.
**Build executionContext**:
```javascript
// Load manifest and all exploration files (may not exist if exploration was skipped)
const manifest = file_exists(`${sessionFolder}/explorations-manifest.json`)
? JSON.parse(Read(`${sessionFolder}/explorations-manifest.json`))
: { exploration_count: 0, explorations: [] }
const explorations = {}
manifest.explorations.forEach(exp => {
if (file_exists(exp.path)) explorations[exp.angle] = JSON.parse(Read(exp.path))
})
const plan = JSON.parse(Read(`${sessionFolder}/plan.json`))
executionContext = {
planObject: plan,
taskFiles: (plan.task_ids || []).map(id => ({ id, path: `${sessionFolder}/.task/${id}.json` })),
explorationsContext: explorations,
explorationAngles: manifest.explorations.map(e => e.angle),
explorationManifest: manifest,
clarificationContext: clarificationContext || null,
executionMethod: userSelection.execution_method,
codeReviewTool: userSelection.code_review_tool,
originalUserInput: task_description,
executorAssignments: executorAssignments, // { taskId: { executor, reason } } — overrides executionMethod
session: {
id: sessionId,
folder: sessionFolder,
artifacts: {
explorations: manifest.explorations.map(exp => ({ angle: exp.angle, path: exp.path })),
explorations_manifest: `${sessionFolder}/explorations-manifest.json`,
plan: `${sessionFolder}/plan.json`,
task_dir: `${sessionFolder}/.task`
    }
  }
}
```
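The override semantics implied by executorAssignments can be sketched as follows (resolveExecutor is a hypothetical helper, not part of the executionContext contract):

```javascript
// Hypothetical resolution: a task-level entry in executorAssignments
// overrides the global executionMethod chosen at confirmation time.
function resolveExecutor(ctx, taskId) {
  const assigned = ctx.executorAssignments && ctx.executorAssignments[taskId]
  return assigned ? assigned.executor : ctx.executionMethod
}
```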
**Handoff**:
```javascript
if (!workflowPreferences.autoYes) {
console.log(`Handing off to execution engine. No further prompts.`)
}
// TodoWrite: Phase 5 → completed, add LE-Phase 1 → in_progress
const taskCount = (plan.task_ids || []).length
TodoWrite({ todos: [
{ content: "LP-Phase 1: Exploration", status: "completed", activeForm: "Exploring codebase" },
{ content: "LP-Phase 2: Clarification", status: "completed", activeForm: "Collecting clarifications" },
{ content: "LP-Phase 3: Planning", status: "completed", activeForm: "Generating plan" },
{ content: `LP-Phase 4: Confirmed [${executionLabel}]`, status: "completed", activeForm: "Confirmed" },
{ content: `LP-Phase 5: Handoff → lite-execute`, status: "completed", activeForm: "Handoff to execution" },
{ content: "LP-Phase 1: Exploration", status: "completed" },
{ content: "LP-Phase 2: Clarification", status: "completed" },
{ content: "LP-Phase 3: Planning", status: "completed" },
{ content: `LP-Phase 4: Confirmed [${userSelection.execution_method}]`, status: "completed" },
{ content: `LP-Phase 5: Handoff → lite-execute`, status: "completed" },
{ content: `LE-Phase 1: Task Loading [${taskCount} tasks]`, status: "in_progress", activeForm: "Loading tasks" }
]})
Skill("lite-execute")
// executionContext passed as global variable (Mode 1: In-Memory Plan)
```
## Session Folder Structure
```
.workflow/.lite-plan/{task-slug}-{YYYY-MM-DD}/
├── exploration-{angle}.json (1-4) # Per-angle exploration
├── explorations-manifest.json # Exploration index
├── planning-context.md # Evidence paths + understanding
├── plan.json # Plan overview (task_ids[])
└── .task/
├── TASK-001.json
├── TASK-002.json
└── ...
```
## Error Handling
| Error | Recovery |
|-------|----------|
| Clarification timeout | Use exploration findings as-is |
| Confirmation timeout | Save context, display resume instructions |
| Modify loop > 3 times | Suggest breaking task or using /workflow-plan |
## Next Phase
After LP-Phase 5 handoff, execution continues in the workflow-lite-execute skill.