refactor: unify agent API to Codex v4 across all skills

Migrate all .codex/skills from old agent API (wait/send_input/ids) to
Codex v4 API (wait_agent/assign_task/targets) across 31 files in 11
skills: spec-generator, brainstorm, clean, issue-discover,
parallel-dev-cycle, review-cycle, roadmap-with-file, workflow-plan,
workflow-tdd-plan, workflow-test-fix-cycle, and spec-setup.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: catlog22
Date: 2026-03-30 15:25:22 +08:00
parent 2ca87087f1
commit b88bf5e0f6
31 changed files with 377 additions and 275 deletions
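The mechanical parameter mapping that every hunk below applies can be sketched as two small helpers. These are illustrative only (the migration edited each call site directly; no such codemod exists in the repo):

```javascript
// Illustrative mapping from the old agent API to the Codex v4 call shapes.
// wait({ id }/{ ids }) becomes wait_agent({ targets }), and
// send_input({ id, message }) becomes assign_task({ target, items }).
function migrateWait({ id, ids, timeout_ms }) {
  const args = { targets: ids ?? [id] };
  if (timeout_ms !== undefined) args.timeout_ms = timeout_ms;
  return { tool: "wait_agent", args };
}

function migrateSendInput({ id, message }) {
  return {
    tool: "assign_task",
    args: { target: id, items: [{ type: "text", text: message }] },
  };
}
```

The `allowed-tools` frontmatter lines change accordingly: `wait, send_input` are replaced by `wait_agent, send_message, assign_task`.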

View File

@@ -5,7 +5,7 @@ description: |
(spawn_agents_on_csv) → cross-role synthesis. Single role mode: individual role analysis.
CSV-driven parallel coordination with NDJSON discovery board.
argument-hint: "[-y|--yes] [--count N] [--session ID] [--skip-questions] [--style-skill PKG] \"topic\" | <role-name> [--session ID]"
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
allowed-tools: spawn_agents_on_csv, spawn_agent, wait_agent, send_message, assign_task, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
---
## Auto Mode
@@ -556,7 +556,7 @@ Follow the same analysis protocol as wave role analysis but with interactive ref
`
})
wait({ id: agentId })
wait_agent({ targets: [agentId] })
close_agent({ id: agentId })
console.log(`\n✓ ${roleName} analysis complete: ${roleDir}/analysis.md`)
@@ -631,7 +631,7 @@ Evaluate complexity score (0-8):
`
})
wait({ id: synthesisAgent })
wait_agent({ targets: [synthesisAgent] })
close_agent({ id: synthesisAgent })
```

View File

@@ -204,11 +204,11 @@ Format:
})
// Wait with timeout handling
let result = wait({ ids: [exploreAgent], timeout_ms: 600000 })
let result = wait_agent({ targets: [exploreAgent], timeout_ms: 600000 })
if (result.timed_out) {
send_input({ id: exploreAgent, message: 'Complete now and write cleanup-manifest.json.' })
result = wait({ ids: [exploreAgent], timeout_ms: 300000 })
assign_task({ target: exploreAgent, items: [{ type: "text", text: "Complete now and write cleanup-manifest.json." }] })
result = wait_agent({ targets: [exploreAgent], timeout_ms: 300000 })
if (result.timed_out) throw new Error('Agent timeout')
}

View File

@@ -1,7 +1,7 @@
---
name: issue-discover
description: "Unified issue discovery and creation. Create issues from GitHub/text, discover issues via multi-perspective analysis, or prompt-driven iterative exploration. Triggers on \"issue:new\", \"issue:discover\", \"issue:discover-by-prompt\", \"create issue\", \"discover issues\", \"find issues\"."
allowed-tools: spawn_agent, wait, send_input, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep, mcp__ace-tool__search_context, mcp__exa__search
allowed-tools: spawn_agent, wait_agent, send_message, assign_task, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep, mcp__ace-tool__search_context, mcp__exa__search
---
# Issue Discover
@@ -54,7 +54,7 @@ Unified issue discovery and creation skill covering three entry points: manual i
2. **Progressive Phase Loading**: Only read the selected phase document
3. **CLI-First Data Access**: All issue CRUD via `ccw issue` CLI commands
4. **Auto Mode Support**: `-y` flag skips action selection with auto-detection
5. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait → close_agent
5. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait_agent → close_agent
6. **Role Path Loading**: Subagent roles loaded via path reference in MANDATORY FIRST STEPS
## Auto Mode
@@ -130,7 +130,7 @@ Post-Phase:
5. **Auto-Detect Input**: Smart input parsing reduces need for explicit --action flag
6. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After completing each phase, immediately proceed to next
7. **Progressive Phase Loading**: Read phase docs ONLY when that phase is about to execute
8. **Explicit Lifecycle**: Always close_agent after wait completes to free resources
8. **Explicit Lifecycle**: Always close_agent after wait_agent completes to free resources
## Input Processing
@@ -246,18 +246,18 @@ ${deliverables}
})
```
### wait
### wait_agent
Get results from subagent (only way to retrieve results).
```javascript
const result = wait({
ids: [agentId],
const result = wait_agent({
targets: [agentId],
timeout_ms: 600000 // 10 minutes
})
if (result.timed_out) {
// Handle timeout - can continue waiting or send_input to prompt completion
// Handle timeout - can use assign_task to prompt completion
}
// Check completion status
@@ -266,20 +266,20 @@ if (result.status[agentId].completed) {
}
```
### send_input
### assign_task
Continue interaction with active subagent (for clarification or follow-up).
Assign new work to active subagent (for clarification or follow-up).
```javascript
send_input({
id: agentId,
message: `
assign_task({
target: agentId,
items: [{ type: "text", text: `
## CLARIFICATION ANSWERS
${answers}
## NEXT STEP
Continue with plan generation.
`
` }]
})
```

View File

@@ -164,8 +164,8 @@ ${getPerspectiveGuidance(perspective)}
// Step 2: Batch wait for all agents
const agentIds = perspectiveAgents.map(a => a.agentId);
const results = wait({
ids: agentIds,
const results = wait_agent({
targets: agentIds,
timeout_ms: 600000 // 10 minutes
});
@@ -220,8 +220,8 @@ Research industry best practices for ${perspective} using Exa search
`
});
const exaResult = wait({
ids: [exaAgentId],
const exaResult = wait_agent({
targets: [exaAgentId],
timeout_ms: 300000 // 5 minutes
});

View File

@@ -242,8 +242,8 @@ while (shouldContinue && iteration < maxIterations) {
// Step 2: Batch wait for all dimension agents
const dimensionAgentIds = dimensionAgents.map(a => a.agentId);
const iterationResults = wait({
ids: dimensionAgentIds,
const iterationResults = wait_agent({
targets: dimensionAgentIds,
timeout_ms: 600000 // 10 minutes
});
@@ -297,7 +297,7 @@ while (shouldContinue && iteration < maxIterations) {
│ │
│ 2. Execute: Spawn agents for this iteration │
│ └─ Each agent: explore → collect → return summary │
│ └─ Lifecycle: spawn_agent → batch wait → close_agent │
│ └─ Lifecycle: spawn_agent → batch wait_agent → close_agent │
│ │
│ 3. Analyze: Process iteration results │
│ └─ New findings? Gaps? Contradictions? │

View File

@@ -1,7 +1,7 @@
---
name: parallel-dev-cycle
description: Multi-agent parallel development cycle with requirement analysis, exploration planning, code development, and validation. Orchestration runs inline in main flow (no separate orchestrator agent). Supports continuous iteration with markdown progress documentation. Triggers on "parallel-dev-cycle".
allowed-tools: spawn_agent, wait, send_input, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
allowed-tools: spawn_agent, wait_agent, send_message, assign_task, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
---
# Parallel Dev Cycle
@@ -111,7 +111,7 @@ Phase 3: Result Aggregation & Iteration
├─ Parse PHASE_RESULT from each agent
├─ Detect issues (test failures, blockers)
├─ Decision: Issues found AND iteration < max?
│ ├─ Yes → Send feedback via send_input, loop back to Phase 2
│ ├─ Yes → Send feedback via assign_task, loop back to Phase 2
│ └─ No → Proceed to Phase 4
└─ Output: parsedResults, iteration status
@@ -330,7 +330,7 @@ PHASE_RESULT:
### Main Flow → Agent Communication
Feedback via `send_input` (file refs + issue summary, never full content):
Feedback via `assign_task` (file refs + issue summary, never full content):
```
## FEEDBACK FROM [Source]
[Issue summary with file:line references]
@@ -357,7 +357,7 @@ Feedback via `send_input` (file refs + issue summary, never full content):
| Error Type | Recovery |
|------------|----------|
| Agent timeout | send_input requesting convergence, then retry |
| Agent timeout | assign_task requesting convergence, then retry |
| State corrupted | Rebuild from progress markdown files and changes.log |
| Agent failed | Re-spawn agent with previous context |
| Conflicting results | Main flow sends reconciliation request |

View File

@@ -408,8 +408,8 @@ const agents = {
// Wait for all agents to complete
console.log('Waiting for all agents...')
const results = wait({
ids: [agents.ra, agents.ep, agents.cd, agents.vas],
const results = wait_agent({
targets: [agents.ra, agents.ep, agents.cd, agents.vas],
timeout_ms: 1800000 // 30 minutes
})
```
@@ -421,16 +421,16 @@ if (results.timed_out) {
console.log('Some agents timed out, sending convergence request...')
Object.entries(agents).forEach(([name, id]) => {
if (!results.status[id].completed) {
send_input({
id: id,
message: `
assign_task({
target: id,
items: [{ type: "text", text: `
## TIMEOUT NOTIFICATION
Execution timeout reached. Please:
1. Output current progress to markdown file
2. Save all state updates
3. Return completion status
`
` }]
})
}
})
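The convergence pattern in this hunk (batch wait, nudge each incomplete agent via `assign_task`, then re-wait) can be sketched as a plain function with the tool calls injected, so the control flow can run outside the Codex runtime. The `tools` object and its methods are stand-ins, not real bindings:

```javascript
// Batch-wait with one convergence retry, as in the timeout handler above.
// `tools.wait_agent` and `tools.assign_task` are injected stand-ins for
// the real Codex v4 tools.
function waitWithConvergence(agentIds, tools, timeoutMs, retryTimeoutMs) {
  let result = tools.wait_agent({ targets: agentIds, timeout_ms: timeoutMs });
  if (result.timed_out) {
    for (const id of agentIds) {
      if (!result.status[id].completed) {
        tools.assign_task({
          target: id,
          items: [{ type: "text", text: "Timeout reached: save state and return completion status." }],
        });
      }
    }
    result = tools.wait_agent({ targets: agentIds, timeout_ms: retryTimeoutMs });
  }
  return result;
}
```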

View File

@@ -148,29 +148,29 @@ Update requirements.md if needed.
}
```
### Step 3.6: Send Feedback via send_input
### Step 3.6: Send Feedback via assign_task
```javascript
const feedback = generateFeedback(parsedResults)
// Send feedback to relevant agents
if (feedback.ra) {
send_input({
id: agents.ra,
message: feedback.ra
assign_task({
target: agents.ra,
items: [{ type: "text", text: feedback.ra }]
})
}
if (feedback.cd) {
send_input({
id: agents.cd,
message: feedback.cd
assign_task({
target: agents.cd,
items: [{ type: "text", text: feedback.cd }]
})
}
// Wait for agents to process feedback and update
const updatedResults = wait({
ids: [agents.ra, agents.cd].filter(Boolean),
const updatedResults = wait_agent({
targets: [agents.ra, agents.cd].filter(Boolean),
timeout_ms: 900000 // 15 minutes for fixes
})
@@ -213,7 +213,7 @@ Phase 3: Result Aggregation
│ ├─ No → Phase 4 (Complete)
│ └─ Yes
│ ├─ iteration < max?
│ │ ├─ Yes → Generate feedback → send_input → Wait → Back to Phase 2
│ │ ├─ Yes → Generate feedback → assign_task → Wait → Back to Phase 2
│ │ └─ No → Phase 4 (Complete with issues)
```

View File

@@ -316,18 +316,18 @@ ${deliverables}
})
```
### wait
### wait_agent
Get results from subagent (only way to retrieve results).
```javascript
const result = wait({
ids: [agentId],
const result = wait_agent({
targets: [agentId],
timeout_ms: 600000 // 10 minutes
})
if (result.timed_out) {
// Handle timeout - can continue waiting or send_input to prompt completion
// Handle timeout - can use assign_task to prompt completion
}
// Check completion status
@@ -336,20 +336,20 @@ if (result.status[agentId].completed) {
}
```
### send_input
### assign_task
Continue interaction with active subagent (for clarification or follow-up).
Assign new work to active subagent (for clarification or follow-up).
```javascript
send_input({
id: agentId,
message: `
assign_task({
target: agentId,
items: [{ type: "text", text: `
## CLARIFICATION ANSWERS
${answers}
## NEXT STEP
Continue with analysis generation.
`
` }]
})
```

View File

@@ -173,8 +173,8 @@ ${getDimensionGuidance(dimension)}
});
// Step 2: Batch wait for all 7 agents
const reviewResults = wait({
ids: reviewAgents,
const reviewResults = wait_agent({
targets: reviewAgents,
timeout_ms: 3600000 // 60 minutes
});
@@ -296,8 +296,8 @@ ${getDimensionGuidance(dimension)}
});
// Step 2: Batch wait for all 7 agents
const reviewResults = wait({
ids: reviewAgents,
const reviewResults = wait_agent({
targets: reviewAgents,
timeout_ms: 3600000 // 60 minutes
});
@@ -407,8 +407,8 @@ Then apply **Deep Scan mode** for semantic analysis:
});
// Wait for completion
const deepDiveResult = wait({
ids: [deepDiveAgentId],
const deepDiveResult = wait_agent({
targets: [deepDiveAgentId],
timeout_ms: 2400000 // 40 minutes
});

View File

@@ -163,8 +163,8 @@ Then apply **Deep Scan mode** for semantic analysis:
});
// Step 2: Batch wait for all deep-dive agents
const deepDiveResults = wait({
ids: deepDiveAgents,
const deepDiveResults = wait_agent({
targets: deepDiveAgents,
timeout_ms: 2400000 // 40 minutes
});
@@ -275,8 +275,8 @@ Then apply **Deep Scan mode** for semantic analysis:
});
// Step 2: Batch wait for all deep-dive agents
const deepDiveResults = wait({
ids: deepDiveAgents,
const deepDiveResults = wait_agent({
targets: deepDiveAgents,
timeout_ms: 2400000 // 40 minutes
});

View File

@@ -43,8 +43,8 @@ for (let i = 0; i < batches.length; i += MAX_PARALLEL) {
console.log(`Spawned ${agentIds.length} planning agents...`);
// Step 2: Batch wait for all agents in this chunk
const chunkResults = wait({
ids: agentIds.map(a => a.agentId),
const chunkResults = wait_agent({
targets: agentIds.map(a => a.agentId),
timeout_ms: 600000 // 10 minutes
});
@@ -204,8 +204,8 @@ Before finalizing outputs:
});
// Wait for completion
const result = wait({
ids: [agentId],
const result = wait_agent({
targets: [agentId],
timeout_ms: 600000 // 10 minutes
});

View File

@@ -221,8 +221,8 @@ Use fix_strategy.test_pattern to run affected tests:
});
// Wait for completion
const execResult = wait({
ids: [execAgentId],
const execResult = wait_agent({
targets: [execAgentId],
timeout_ms: 1200000 // 20 minutes per group
});

View File

@@ -56,33 +56,33 @@ ${deliverables}
})
```
### wait
### wait_agent
Get results from subagent (only way to retrieve results).
```javascript
const result = wait({
ids: [agentId],
const result = wait_agent({
targets: [agentId],
timeout_ms: 600000 // 10 minutes
})
if (result.timed_out) {
// Handle timeout - can continue waiting or send_input to prompt completion
// Handle timeout - can use assign_task to prompt completion
}
```
### send_input
Continue interaction with active subagent (for clarification or follow-up).
### assign_task
Assign new work to active subagent (for clarification or follow-up).
```javascript
send_input({
id: agentId,
message: `
assign_task({
target: agentId,
items: [{ type: "text", text: `
## CLARIFICATION ANSWERS
${answers}
## NEXT STEP
Continue with plan generation.
`
` }]
})
```
@@ -537,8 +537,8 @@ Return findings as JSON with schema:
`
})
const exploreResult = wait({
ids: [exploreAgentId],
const exploreResult = wait_agent({
targets: [exploreAgentId],
timeout_ms: 120000
})
@@ -629,8 +629,8 @@ ${selectedMode === 'progressive' ? `**Progressive Mode**:
`
})
const decompositionResult = wait({
ids: [decompositionAgentId],
const decompositionResult = wait_agent({
targets: [decompositionAgentId],
timeout_ms: 300000 // 5 minutes for complex decomposition
})

View File

@@ -107,4 +107,4 @@ After Phase 6, choose execution path:
- **Type-specialized**: Profiles adapt templates to service/api/library/platform requirements
- **Iterative quality**: Phase 6.5 auto-fix repairs issues, max 2 iterations before handoff
- **Terminology-first**: glossary.json ensures consistent terminology across all documents
- **Agent-delegated**: Heavy document phases (2-5, 6.5) run in doc-generator agents to minimize main context usage
- **Agent-delegated**: Heavy document phases (2-5, 6.5) run in doc-generator agents via `spawn_agent/wait_agent/close_agent` (Codex v4 API) to minimize main context usage

View File

@@ -1,7 +1,8 @@
---
name: spec-generator
description: "Specification generator - 7 phase document chain producing product brief, PRD, architecture, epics, and issues. Agent-delegated heavy phases (2-5, 6.5) with Codex review gates. Triggers on \"generate spec\", \"create specification\", \"spec generator\", \"workflow:spec\"."
allowed-tools: Agent, request_user_input, TaskCreate, TaskUpdate, TaskList, Read, Write, Edit, Bash, Glob, Grep, Skill
agents: doc-generator
phases: 9
---
# Spec Generator
@@ -27,22 +28,22 @@ Phase 5: Epics & Stories -> epics/ (_index.md + EPIC-*.md)
| (Gemini + Codex review)
Phase 6: Readiness Check -> readiness-report.md + spec-summary.md [Inline]
| (Gemini + Codex dual validation + per-req verification)
├── Pass (>=80%): Handoff or Phase 7
├── Review (60-79%): Handoff with caveats or Phase 7
└── Fail (<60%): Phase 6.5 Auto-Fix (max 2 iterations)
+-- Pass (>=80%): Handoff or Phase 7
+-- Review (60-79%): Handoff with caveats or Phase 7
+-- Fail (<60%): Phase 6.5 Auto-Fix (max 2 iterations)
|
Phase 6.5: Auto-Fix -> Updated Phase 2-5 documents [Agent]
|
└── Re-run Phase 6 validation
+-- Re-run Phase 6 validation
|
Phase 7: Issue Export -> issue-export-report.md [Inline]
(Epic→Issue mapping, ccw issue create, wave assignment)
(Epic->Issue mapping, ccw issue create, wave assignment)
```
## Key Design Principles
1. **Document Chain**: Each phase builds on previous outputs, creating a traceable specification chain from idea to executable issues
2. **Agent-Delegated**: Heavy document phases (2-5, 6.5) run in `doc-generator` agents, keeping main context lean (summaries only)
2. **Agent-Delegated**: Heavy document phases (2-5, 6.5) run in `doc-generator` agents via `spawn_agent`, keeping main context lean (summaries only)
3. **Multi-Perspective Analysis**: CLI tools (Gemini/Codex/Claude) provide product, technical, and user perspectives in parallel
4. **Codex Review Gates**: Phases 3, 5, 6 include Codex CLI review for quality validation before output
5. **Interactive by Default**: Each phase offers user confirmation points; `-y` flag enables full auto mode
@@ -55,6 +56,36 @@ Phase 7: Issue Export -> issue-export-report.md
---
## Agent Registry
| Agent | task_name | Role File | Responsibility | Pattern | fork_context |
|-------|-----------|-----------|----------------|---------|-------------|
| doc-generator (Phase 2) | `doc-gen-p2` | ~/.codex/agents/doc-generator.toml | Product brief + glossary generation | 2.1 Standard | false |
| doc-generator (Phase 3) | `doc-gen-p3` | ~/.codex/agents/doc-generator.toml | Requirements / PRD generation | 2.1 Standard | false |
| doc-generator (Phase 4) | `doc-gen-p4` | ~/.codex/agents/doc-generator.toml | Architecture + ADR generation | 2.1 Standard | false |
| doc-generator (Phase 5) | `doc-gen-p5` | ~/.codex/agents/doc-generator.toml | Epics & Stories generation | 2.1 Standard | false |
| doc-generator (Phase 6.5) | `doc-gen-fix` | ~/.codex/agents/doc-generator.toml | Auto-fix readiness issues | 2.1 Standard | false |
| cli-explore-agent (Phase 1) | `spec-explorer` | ~/.codex/agents/cli-explore-agent.toml | Codebase exploration | 2.1 Standard | false |
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs and agent instructions are reduced to summaries, **you MUST immediately `Read` the corresponding agent file to reload before continuing execution**.
---
## Fork Context Strategy
| Agent | task_name | fork_context | fork_from | Rationale |
|-------|-----------|-------------|-----------|-----------|
| cli-explore-agent | `spec-explorer` | false | — | Independent utility: codebase scan, isolated task |
| doc-generator (P2) | `doc-gen-p2` | false | — | Sequential pipeline: context passed via file paths in message |
| doc-generator (P3) | `doc-gen-p3` | false | — | Sequential pipeline: reads P2 output files from disk |
| doc-generator (P4) | `doc-gen-p4` | false | — | Sequential pipeline: reads P2-P3 output files from disk |
| doc-generator (P5) | `doc-gen-p5` | false | — | Sequential pipeline: reads P2-P4 output files from disk |
| doc-generator (P6.5) | `doc-gen-fix` | false | — | Utility fix: reads readiness-report.md + affected phase files |
**Why all `fork_context: false`**: This is a Pipeline pattern (2.5) — each phase produces files on disk and the next phase reads them. No agent needs the orchestrator's conversation history; all context is explicitly passed via file paths in the spawn message.
---
## Mandatory Prerequisites
> **Do NOT skip**: Before performing any operations, you **must** completely read the following documents. Proceeding without reading the specifications will result in outputs that do not meet quality standards.
@@ -92,6 +123,9 @@ Phase 1: Discovery & Seed Analysis
|- Parse input (text or file reference)
|- Gemini CLI seed analysis (problem, users, domain, dimensions)
|- Codebase exploration (conditional, if project detected)
| |- spawn_agent({ task_name: "spec-explorer", fork_context: false, message: ... })
| |- wait_agent({ targets: ["spec-explorer"], timeout_ms: 300000 })
| |- close_agent({ target: "spec-explorer" })
|- Spec type selection: service|api|library|platform (interactive, -y defaults to service)
|- User confirmation (interactive, -y skips)
|- Output: spec-config.json, discovery-context.json (optional)
@@ -107,35 +141,39 @@ Phase 1.5: Requirement Expansion & Clarification
|- Output: refined-requirements.json
Phase 2: Product Brief [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- spawn_agent({ task_name: "doc-gen-p2", fork_context: false, message: <context envelope> })
|- Agent reads: phases/02-product-brief.md
|- Agent executes: 3 parallel CLI analyses + synthesis + glossary generation
|- Agent writes: product-brief.md, glossary.json
|- Agent returns: JSON summary {files_created, quality_notes, key_decisions}
|- wait_agent({ targets: ["doc-gen-p2"], timeout_ms: 600000 })
|- close_agent({ target: "doc-gen-p2" })
|- Orchestrator validates: files exist, spec-config.json updated
Phase 3: Requirements / PRD [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- spawn_agent({ task_name: "doc-gen-p3", fork_context: false, message: <context envelope> })
|- Agent reads: phases/03-requirements.md
|- Agent executes: Gemini expansion + Codex review (Step 2.5) + priority sorting
|- Agent writes: requirements/ directory (_index.md + REQ-*.md + NFR-*.md)
|- Agent returns: JSON summary {files_created, codex_review_integrated, key_decisions}
|- wait_agent({ targets: ["doc-gen-p3"], timeout_ms: 600000 })
|- close_agent({ target: "doc-gen-p3" })
|- Orchestrator validates: directory exists, file count matches
Phase 4: Architecture [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- spawn_agent({ task_name: "doc-gen-p4", fork_context: false, message: <context envelope> })
|- Agent reads: phases/04-architecture.md
|- Agent executes: Gemini analysis + Codex review + codebase mapping
|- Agent writes: architecture/ directory (_index.md + ADR-*.md)
|- Agent returns: JSON summary {files_created, codex_review_rating, key_decisions}
|- wait_agent({ targets: ["doc-gen-p4"], timeout_ms: 600000 })
|- close_agent({ target: "doc-gen-p4" })
|- Orchestrator validates: directory exists, ADR files present
Phase 5: Epics & Stories [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- spawn_agent({ task_name: "doc-gen-p5", fork_context: false, message: <context envelope> })
|- Agent reads: phases/05-epics-stories.md
|- Agent executes: Gemini decomposition + Codex review (Step 2.5) + validation
|- Agent writes: epics/ directory (_index.md + EPIC-*.md)
|- Agent returns: JSON summary {files_created, codex_review_integrated, mvp_epic_count}
|- wait_agent({ targets: ["doc-gen-p5"], timeout_ms: 600000 })
|- close_agent({ target: "doc-gen-p5" })
|- Orchestrator validates: directory exists, MVP epics present
Phase 6: Readiness Check [INLINE + ENHANCED]
@@ -150,16 +188,17 @@ Phase 6: Readiness Check [INLINE + ENHANCED]
|- Handoff options: Phase 7 (issue export), lite-plan, req-plan, plan, iterate
Phase 6.5: Auto-Fix (conditional) [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- spawn_agent({ task_name: "doc-gen-fix", fork_context: false, message: <context envelope> })
|- Agent reads: phases/06-5-auto-fix.md + readiness-report.md
|- Agent executes: fix affected Phase 2-5 documents
|- Agent returns: JSON summary {files_modified, issues_fixed, phases_touched}
|- wait_agent({ targets: ["doc-gen-fix"], timeout_ms: 600000 })
|- close_agent({ target: "doc-gen-fix" })
|- Re-run Phase 6 validation
|- Max 2 iterations, then force handoff
Phase 7: Issue Export [INLINE]
|- Ref: phases/07-issue-export.md
|- Read EPIC-*.md files, assign waves (MVP→wave-1, others→wave-2)
|- Read EPIC-*.md files, assign waves (MVP->wave-1, others->wave-2)
|- Create issues via ccw issue create (one per Epic)
|- Map Epic dependencies to issue dependencies
|- Generate issue-export-report.md
@@ -168,21 +207,21 @@ Phase 7: Issue Export [INLINE]
Complete: Full specification package with issues ready for execution
Phase 6/7 → Handoff Bridge (conditional, based on user selection):
├─ team-planex: Execute issues via coordinated team workflow
├─ lite-plan: Extract first MVP Epic description → direct text input
├─ plan / req-plan: Create WFS session + .brainstorming/ bridge files
│  ├── guidance-specification.md (synthesized from spec outputs)
│  ├── feature-specs/feature-index.json (Epic → Feature mapping)
│  └── feature-specs/F-{num}-{slug}.md (one per Epic)
└─ context-search-agent auto-discovers .brainstorming/
→ context-package.json.brainstorm_artifacts populated
→ action-planning-agent consumes: guidance_spec (P1) → feature_index (P2)
Phase 6/7 -> Handoff Bridge (conditional, based on user selection):
+- team-planex: Execute issues via coordinated team workflow
+- lite-plan: Extract first MVP Epic description -> direct text input
+- plan / req-plan: Create WFS session + .brainstorming/ bridge files
| +- guidance-specification.md (synthesized from spec outputs)
| +- feature-specs/feature-index.json (Epic -> Feature mapping)
| +-- feature-specs/F-{num}-{slug}.md (one per Epic)
+- context-search-agent auto-discovers .brainstorming/
-> context-package.json.brainstorm_artifacts populated
-> action-planning-agent consumes: guidance_spec (P1) -> feature_index (P2)
```
## Directory Setup
```javascript
```
// Session ID generation
const slug = topic.toLowerCase().replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-').slice(0, 40);
const date = new Date().toISOString().slice(0, 10);
@@ -196,24 +235,24 @@ Bash(`mkdir -p "${workDir}"`);
```
.workflow/.spec/SPEC-{slug}-{YYYY-MM-DD}/
├── spec-config.json # Session configuration + phase state
├── discovery-context.json # Codebase exploration results (optional)
├── refined-requirements.json # Phase 1.5: Confirmed requirements after discussion
├── glossary.json # Phase 2: Terminology glossary for cross-doc consistency
├── product-brief.md # Phase 2: Product brief
├── requirements/ # Phase 3: Detailed PRD (directory)
├── _index.md # Summary, MoSCoW table, traceability, links
├── REQ-NNN-{slug}.md # Individual functional requirement
└── NFR-{type}-NNN-{slug}.md # Individual non-functional requirement
├── architecture/ # Phase 4: Architecture decisions (directory)
├── _index.md # Overview, components, tech stack, links
└── ADR-NNN-{slug}.md # Individual Architecture Decision Record
├── epics/ # Phase 5: Epic/Story breakdown (directory)
├── _index.md # Epic table, dependency map, MVP scope
└── EPIC-NNN-{slug}.md # Individual Epic with Stories
├── readiness-report.md # Phase 6: Quality report (+ per-req verification table)
├── spec-summary.md # Phase 6: One-page executive summary
└── issue-export-report.md # Phase 7: Issue mapping table + spec links
+-- spec-config.json # Session configuration + phase state
+-- discovery-context.json # Codebase exploration results (optional)
+-- refined-requirements.json # Phase 1.5: Confirmed requirements after discussion
+-- glossary.json # Phase 2: Terminology glossary for cross-doc consistency
+-- product-brief.md # Phase 2: Product brief
+-- requirements/ # Phase 3: Detailed PRD (directory)
| +-- _index.md # Summary, MoSCoW table, traceability, links
| +-- REQ-NNN-{slug}.md # Individual functional requirement
| +-- NFR-{type}-NNN-{slug}.md # Individual non-functional requirement
+-- architecture/ # Phase 4: Architecture decisions (directory)
| +-- _index.md # Overview, components, tech stack, links
| +-- ADR-NNN-{slug}.md # Individual Architecture Decision Record
+-- epics/ # Phase 5: Epic/Story breakdown (directory)
| +-- _index.md # Epic table, dependency map, MVP scope
| +-- EPIC-NNN-{slug}.md # Individual Epic with Stories
+-- readiness-report.md # Phase 6: Quality report (+ per-req verification table)
+-- spec-summary.md # Phase 6: One-page executive summary
+-- issue-export-report.md # Phase 7: Issue mapping table + spec links
```
## State Management
@@ -255,79 +294,134 @@ Bash(`mkdir -p "${workDir}"`);
## Core Rules
1. **Start Immediately**: First action is TaskCreate initialization, then Phase 0 (spec study), then Phase 1
1. **Start Immediately**: First action is Phase 0 (spec study), then Phase 1
2. **Progressive Phase Loading**: Read phase docs ONLY when that phase is about to execute
3. **Auto-Continue**: All phases run autonomously; check TaskList to execute next pending phase
3. **Auto-Continue**: All phases run autonomously; proceed to next phase after current completes
4. **Parse Every Output**: Extract required data from each phase for next phase context
5. **DO NOT STOP**: Continuous 7-phase pipeline until all phases complete or user exits
6. **Respect -y Flag**: When auto mode, skip all request_user_input calls, use recommended defaults
6. **Respect -y Flag**: When auto mode, skip all user interaction calls, use recommended defaults
7. **Respect -c Flag**: When continue mode, load spec-config.json and resume from checkpoint
8. **Inject Glossary**: From Phase 3 onward, inject glossary.json terms into every CLI prompt
9. **Load Profile**: Read templates/profiles/{spec_type}-profile.md and inject requirements into Phase 2-5 prompts
10. **Iterate on Failure**: When Phase 6 score < 60%, auto-trigger Phase 6.5 (max 2 iterations)
11. **Agent Delegation**: Phase 2-5 and 6.5 MUST be delegated to `doc-generator` agents via Task tool — never execute inline
12. **Lean Context**: Orchestrator only sees agent return summaries (JSON), never the full document content
13. **Validate Agent Output**: After each agent returns, verify files exist on disk and spec-config.json was updated
11. **Agent Delegation**: Phase 2-5 and 6.5 MUST be delegated to `doc-generator` agents via `spawn_agent` — never execute inline
12. **Lean Context**: Orchestrator only sees agent return summaries from `wait_agent`, never the full document content
13. **Validate Agent Output**: After each `wait_agent` returns, verify files exist on disk and spec-config.json was updated
14. **Lifecycle Balance**: Every `spawn_agent` MUST have a matching `close_agent` after `wait_agent` retrieves results
## Agent Delegation Protocol
For Phase 2-5 and 6.5, the orchestrator delegates to a `doc-generator` agent via the Task tool. The orchestrator builds a lean context envelope — passing only paths, never file content.
For Phase 2-5 and 6.5, the orchestrator delegates to a `doc-generator` agent via `spawn_agent`. The orchestrator builds a lean context envelope — passing only paths, never file content.
### Context Envelope Template
```javascript
Task({
subagent_type: "doc-generator",
run_in_background: false,
description: `Spec Phase ${N}: ${phaseName}`,
prompt: `
## Spec Generator - Phase ${N}: ${phaseName}
```
spawn_agent({
task_name: "doc-gen-p<N>",
fork_context: false,
message: `
## Spec Generator - Phase <N>: <phase-name>
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/doc-generator.toml (MUST read first)
2. Read: <skill-dir>/phases/<phase-file>
---
### Session
- ID: ${sessionId}
- Work Dir: ${workDir}
- Auto Mode: ${autoMode}
- Spec Type: ${specType}
- ID: <session-id>
- Work Dir: <work-dir>
- Auto Mode: <auto-mode>
- Spec Type: <spec-type>
### Input (read from disk)
${inputFilesList} // Only file paths — agent reads content itself
<input-files-list>
### Instructions
Read: ${skillDir}/phases/${phaseFile} // Agent reads the phase doc for full instructions
Apply template: ${skillDir}/templates/${templateFile}
Read: <skill-dir>/phases/<phase-file>
Apply template: <skill-dir>/templates/<template-file>
### Glossary (Phase 3+ only)
Read: ${workDir}/glossary.json
Read: <work-dir>/glossary.json
### Output
Write files to: ${workDir}/${outputPath}
Update: ${workDir}/spec-config.json (phasesCompleted)
Write files to: <work-dir>/<output-path>
Update: <work-dir>/spec-config.json (phasesCompleted)
Return: JSON summary { files_created, quality_notes, key_decisions }
`
});
})
```
### Orchestrator Post-Agent Validation
After each agent returns:
After each agent phase, the orchestrator validates output:
```javascript
// 1. Parse agent return summary
const summary = JSON.parse(agentResult);
```
// 1. Wait for agent completion
const result = wait_agent({ targets: ["doc-gen-p<N>"], timeout_ms: 600000 })
// 2. Validate files exist
// 2. Handle timeout
if (result.timed_out) {
assign_task({
target: "doc-gen-p<N>",
items: [{ type: "text", text: "Please finalize current work and output results immediately." }]
})
const retryResult = wait_agent({ targets: ["doc-gen-p<N>"], timeout_ms: 120000 })
if (retryResult.timed_out) {
close_agent({ target: "doc-gen-p<N>" })
// Fall back to inline execution for this phase
}
}
// 3. Close agent (lifecycle balance)
close_agent({ target: "doc-gen-p<N>" })
// 4. Parse agent return summary
const summary = parseJSON(result.status["doc-gen-p<N>"].completed)
// 5. Validate files exist
summary.files_created.forEach(file => {
const exists = Glob(`${workDir}/${file}`);
if (!exists.length) throw new Error(`Agent claimed to create ${file} but file not found`);
});
const exists = Glob(`<work-dir>/${file}`)
if (!exists.length) → Error: agent claimed file but not found
})
// 3. Verify spec-config.json updated
const config = JSON.parse(Read(`${workDir}/spec-config.json`));
const phaseComplete = config.phasesCompleted.some(p => p.phase === N);
if (!phaseComplete) throw new Error(`Agent did not update phasesCompleted for Phase ${N}`);
// 6. Verify spec-config.json updated
const config = JSON.parse(Read(`<work-dir>/spec-config.json`))
const phaseComplete = config.phasesCompleted.some(p => p.phase === N)
if (!phaseComplete) → Error: agent did not update phasesCompleted
// 4. Store summary for downstream context (do NOT read full documents)
phasesSummaries[N] = summary;
// 7. Store summary for downstream context (do NOT read full documents)
phasesSummaries[N] = summary
```
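Steps 5 and 6 above reduce to a pure check that can be exercised in isolation. The helper below is a hypothetical distillation (not part of the skill's actual code): the Glob results and the parsed spec-config.json are passed in as plain values so the logic is testable without touching disk.

```javascript
// Hypothetical helper sketching validation steps 5-6 above.
// existingFiles stands in for Glob() results; config for parsed spec-config.json.
function validatePhaseOutput(summary, existingFiles, config, phase) {
  const missing = summary.files_created.filter(f => !existingFiles.includes(f));
  if (missing.length) {
    throw new Error(`Agent claimed to create ${missing.join(", ")} but file not found`);
  }
  if (!config.phasesCompleted.some(p => p.phase === phase)) {
    throw new Error(`Agent did not update phasesCompleted for Phase ${phase}`);
  }
  return summary; // safe to store in phasesSummaries[phase]
}
```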
---
## Lifecycle Management
### Timeout Protocol
| Phase | task_name | Default Timeout | On Timeout |
|-------|-----------|-----------------|------------|
| Phase 1 (explore) | `spec-explorer` | 300000ms (5min) | assign_task "finalize" → re-wait 120s → close |
| Phase 2 | `doc-gen-p2` | 600000ms (10min) | assign_task "finalize" → re-wait 120s → close + inline fallback |
| Phase 3 | `doc-gen-p3` | 600000ms (10min) | assign_task "finalize" → re-wait 120s → close + inline fallback |
| Phase 4 | `doc-gen-p4` | 600000ms (10min) | assign_task "finalize" → re-wait 120s → close + inline fallback |
| Phase 5 | `doc-gen-p5` | 600000ms (10min) | assign_task "finalize" → re-wait 120s → close + inline fallback |
| Phase 6.5 | `doc-gen-fix` | 600000ms (10min) | assign_task "finalize" → re-wait 120s → close + force handoff |
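The finalize-then-close flow shared by every row above can be sketched as one reusable helper. This is a pattern sketch, not part of the Codex v4 API surface: the tool functions (`waitAgent`, `assignTask`, `closeAgent`) are injected as parameters so the flow can be exercised with stubs.

```javascript
// Hypothetical wrapper for the timeout protocol: wait, prompt finalize on
// timeout, re-wait with a grace period, then always close the agent.
function waitWithFinalize(tools, target, timeoutMs, retryMs = 120000) {
  let result = tools.waitAgent({ targets: [target], timeout_ms: timeoutMs });
  if (result.timed_out) {
    // Prompt the agent to wrap up, then grant a short grace period
    tools.assignTask({
      target,
      items: [{ type: "text", text: "Please finalize current work and output results immediately." }],
    });
    result = tools.waitAgent({ targets: [target], timeout_ms: retryMs });
  }
  // Lifecycle balance: close even after a second timeout
  try { tools.closeAgent({ target }); } catch (e) { /* agent may have self-terminated */ }
  return result; // caller falls back to inline execution if still timed_out
}
```

If the returned result is still `timed_out`, the orchestrator applies the per-phase fallback from the table (inline execution or forced handoff).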
### Cleanup Protocol
At the end of each agent-delegated phase, close the agent immediately after retrieving results. Each phase spawns a fresh agent — no agent persists across phases.
```
// Standard per-phase cleanup (after wait_agent succeeds)
close_agent({ target: "doc-gen-p<N>" })
// On workflow abort / user cancellation
const activeAgents = ["doc-gen-p2", "doc-gen-p3", "doc-gen-p4", "doc-gen-p5", "doc-gen-fix", "spec-explorer"]
activeAgents.forEach(name => {
try { close_agent({ target: name }) } catch { /* not active */ }
})
```
---
@@ -387,7 +481,7 @@ phasesSummaries[N] = summary;
### Phase 7: Issue Export
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/07-issue-export.md](phases/07-issue-export.md) | Epic→Issue mapping and export | Phase start |
| [phases/07-issue-export.md](phases/07-issue-export.md) | Epic->Issue mapping and export | Phase start |
| [specs/quality-gates.md](specs/quality-gates.md) | Issue export quality criteria | Validation |
### Debugging & Troubleshooting
@@ -403,6 +497,7 @@ phasesSummaries[N] = summary;
|-------|-------|-----------|--------|
| Phase 1 | Empty input | Yes | Error and exit |
| Phase 1 | CLI seed analysis fails | No | Use basic parsing fallback |
| Phase 1 | Codebase explore agent timeout | No | close_agent, proceed without discovery-context |
| Phase 1.5 | Gap analysis CLI fails | No | Skip to user questions with basic prompts |
| Phase 1.5 | User skips discussion | No | Proceed with seed_analysis as-is |
| Phase 1.5 | Max rounds reached (5) | No | Force confirmation with current state |
@@ -417,8 +512,9 @@ phasesSummaries[N] = summary;
| Phase 7 | ccw issue create fails for one Epic | No | Log error, continue with remaining Epics |
| Phase 7 | No EPIC files found | Yes | Error and return to Phase 5 |
| Phase 7 | All issue creations fail | Yes | Error with CLI diagnostic, suggest manual creation |
| Phase 2-5 | Agent fails to return | Yes | Retry once, then fall back to inline execution |
| Phase 2-5 | Agent timeout (wait_agent timed_out) | No | assign_task "finalize" → re-wait → close + inline fallback |
| Phase 2-5 | Agent returns incomplete files | No | Log gaps, attempt inline completion for missing files |
| Any | close_agent on non-existent agent | No | Catch error, continue (agent may have self-terminated) |
### CLI Fallback Chain

View File

@@ -90,9 +90,8 @@ EXPECTED: JSON output:
}
CONSTRAINTS: Questions must be open-ended, suggestions must be specific and actionable, respond in the language of the user's input
" --tool gemini --mode analysis`,
run_in_background: true
});
// Wait for CLI result before continuing
// Parse CLI result before continuing
```
Parse the CLI output into structured data:
@@ -186,10 +185,9 @@ EXPECTED: JSON output:
}
}
CONSTRAINTS: Avoid repeating already-answered questions; focus on uncovered areas
" --tool gemini --mode analysis`,
run_in_background: true
" --tool gemini --mode analysis`
});
// Wait for CLI result, parse and continue
// Parse CLI result and continue
// If status === "ready_for_confirmation", break to confirmation step
// If status === "need_more_discussion", present follow-up questions
@@ -284,8 +282,7 @@ TASK:
MODE: analysis
EXPECTED: JSON output matching refined-requirements.json schema
CONSTRAINTS: Infer conservatively; only add high-confidence expansions
" --tool gemini --mode analysis`,
run_in_background: true
" --tool gemini --mode analysis`
});
// Parse output directly into refined-requirements.json
}

View File

@@ -91,9 +91,8 @@ MODE: analysis
EXPECTED: JSON output with fields: problem_statement, target_users[], domain, constraints[], dimensions[], complexity
CONSTRAINTS: Be specific and actionable, not vague
" --tool gemini --mode analysis`,
run_in_background: true
});
// Wait for CLI result before continuing
// Parse CLI result before continuing
```
Parse the CLI output into structured `seedAnalysis`:
@@ -117,19 +116,29 @@ const hasCodebase = Glob('**/*.{ts,js,py,java,go,rs}').length > 0
|| Glob('Cargo.toml').length > 0;
if (hasCodebase) {
Agent({
subagent_type: "cli-explore-agent",
run_in_background: false,
description: `Explore codebase for spec: ${slug}`,
prompt: `
spawn_agent({
task_name: "spec-explorer",
fork_context: false,
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.toml (MUST read first)
2. Search for code related to topic keywords
3. Read project config files (package.json, pyproject.toml, etc.) if they exist
---
## Spec Generator Context
Topic: ${seedInput}
Dimensions: ${seedAnalysis.dimensions.join(', ')}
Session: ${workDir}
## MANDATORY FIRST STEPS
1. Search for code related to topic keywords
2. Read project config files (package.json, pyproject.toml, etc.) if they exist
Goal: Explore codebase to inform specification decisions
Scope:
- Include: Source code files, config files, existing architecture
- Exclude: node_modules, dist, build artifacts
## Exploration Focus
- Identify existing implementations related to the topic
@@ -151,6 +160,16 @@ Schema:
}
`
});
const exploreResult = wait_agent({ targets: ["spec-explorer"], timeout_ms: 300000 });
if (exploreResult.timed_out) {
assign_task({
target: "spec-explorer",
items: [{ type: "text", text: "Finalize current findings and write discovery-context.json immediately." }]
});
wait_agent({ targets: ["spec-explorer"], timeout_ms: 120000 });
}
close_agent({ target: "spec-explorer" });
}
```
@@ -247,4 +266,4 @@ Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
## Next Phase
Proceed to [Phase 2: Product Brief](02-product-brief.md) with the generated spec-config.json.
Proceed to [Phase 1.5: Requirement Expansion](01-5-requirement-clarification.md) with the generated spec-config.json.

View File

@@ -1,7 +1,7 @@
# Phase 2: Product Brief
> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.
> **Execution Mode: Agent Delegated (Codex v4)**
> This phase is executed by a `doc-generator` agent. The orchestrator spawns the agent via `spawn_agent({ task_name: "doc-gen-p2", fork_context: false })` and retrieves results via `wait_agent`. The agent reads this file as part of its MANDATORY FIRST STEPS, executes all steps, writes output files, and returns a JSON summary.
Generate a product brief through multi-perspective CLI analysis, establishing "what" and "why".
@@ -98,7 +98,6 @@ MODE: analysis
EXPECTED: Structured product analysis with: vision, goals with metrics, scope, competitive positioning, assumptions
CONSTRAINTS: Focus on 'what' and 'why', not 'how'
" --tool gemini --mode analysis`,
run_in_background: true
});
```
@@ -122,7 +121,6 @@ MODE: analysis
EXPECTED: Technical analysis with: feasibility assessment, constraints, integration complexity, tech recommendations, risks
CONSTRAINTS: Focus on feasibility and constraints, not detailed architecture
" --tool codex --mode analysis`,
run_in_background: true
});
```
@@ -146,7 +144,6 @@ MODE: analysis
EXPECTED: User analysis with: personas, journey map, pain points, UX criteria, interaction recommendations
CONSTRAINTS: Focus on user needs and experience, not implementation
" --tool claude --mode analysis`,
run_in_background: true
});
// STOP: Wait for all 3 CLI results before continuing

View File

@@ -1,7 +1,7 @@
# Phase 3: Requirements (PRD)
> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.
> **Execution Mode: Agent Delegated (Codex v4)**
> This phase is executed by a `doc-generator` agent. The orchestrator spawns the agent via `spawn_agent({ task_name: "doc-gen-p3", fork_context: false })` and retrieves results via `wait_agent`. The agent reads this file as part of its MANDATORY FIRST STEPS, executes all steps, writes output files, and returns a JSON summary.
Generate a detailed Product Requirements Document with functional/non-functional requirements, acceptance criteria, and MoSCoW prioritization.
@@ -73,10 +73,9 @@ MODE: analysis
EXPECTED: Structured requirements with: ID, title, description, user story, acceptance criteria, priority, traceability to goals
CONSTRAINTS: Every requirement must be specific enough to estimate and test. No vague requirements like 'system should be fast'.
" --tool gemini --mode analysis`,
run_in_background: true
});
// Wait for CLI result
// Parse CLI result
```
### Step 2.5: Codex Requirements Review
@@ -106,10 +105,9 @@ MODE: analysis
EXPECTED: Requirements review with: per-requirement feedback, testability assessment, scope violations, data model gaps, quality rating
CONSTRAINTS: Be genuinely critical. Focus on requirements that would block implementation if left vague.
" --tool codex --mode analysis`,
run_in_background: true
});
// Wait for Codex review result
// Parse Codex review result
// Integrate feedback into requirements before writing files:
// - Fix vague acceptance criteria flagged by Codex
// - Correct RFC 2119 keyword misuse

View File

@@ -1,7 +1,7 @@
# Phase 4: Architecture
> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.
> **Execution Mode: Agent Delegated (Codex v4)**
> This phase is executed by a `doc-generator` agent. The orchestrator spawns the agent via `spawn_agent({ task_name: "doc-gen-p4", fork_context: false })` and retrieves results via `wait_agent`. The agent reads this file as part of its MANDATORY FIRST STEPS, executes all steps, writes output files, and returns a JSON summary.
Generate technical architecture decisions, component design, and technology selections based on requirements.
@@ -109,10 +109,9 @@ MODE: analysis
EXPECTED: Complete architecture with: style justification, component diagram, tech stack table, ADRs, data model, security controls, API overview
CONSTRAINTS: Architecture must support all Must-have requirements. Prefer proven technologies over cutting-edge.
" --tool gemini --mode analysis`,
run_in_background: true
});
// Wait for CLI result
// Parse CLI result
```
### Step 3: Architecture Review via Codex CLI
@@ -142,10 +141,9 @@ MODE: analysis
EXPECTED: Architecture review with: per-ADR feedback, scalability concerns, security gaps, technology risks, quality rating
CONSTRAINTS: Be genuinely critical, not just validating. Focus on actionable improvements.
" --tool codex --mode analysis`,
run_in_background: true
});
// Wait for CLI result
// Parse CLI result
```
### Step 4: Interactive ADR Decisions (Optional)

View File

@@ -1,7 +1,7 @@
# Phase 5: Epics & Stories
> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.
> **Execution Mode: Agent Delegated (Codex v4)**
> This phase is executed by a `doc-generator` agent. The orchestrator spawns the agent via `spawn_agent({ task_name: "doc-gen-p5", fork_context: false })` and retrieves results via `wait_agent`. The agent reads this file as part of its MANDATORY FIRST STEPS, executes all steps, writes output files, and returns a JSON summary.
Decompose the specification into executable Epics and Stories with dependency mapping.
@@ -83,10 +83,9 @@ CONSTRAINTS:
- Dependencies should be minimized across Epics
\${glossary ? \`- Maintain terminology consistency with glossary: \${glossary.terms.map(t => t.term).join(', ')}\` : ''}
" --tool gemini --mode analysis`,
run_in_background: true
});
// Wait for CLI result
// Parse CLI result
```
### Step 2.5: Codex Epics Review
@@ -117,10 +116,9 @@ MODE: analysis
EXPECTED: Epic review with: coverage gaps, oversized stories, dependency issues, traceability gaps, quality rating
CONSTRAINTS: Focus on issues that would block execution planning. Be specific about which Story/Epic has problems.
" --tool codex --mode analysis`,
run_in_background: true
});
// Wait for Codex review result
// Parse Codex review result
// Integrate feedback into epics before writing files:
// - Add missing Stories for uncovered Must requirements
// - Split XL stories in MVP epics into smaller units

View File

@@ -1,7 +1,7 @@
# Phase 6.5: Auto-Fix
> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent when triggered by the orchestrator after Phase 6 identifies issues. The agent reads this file for instructions, applies fixes to affected documents, and returns a JSON summary.
> **Execution Mode: Agent Delegated (Codex v4)**
> This phase is executed by a `doc-generator` agent when triggered by the orchestrator after Phase 6 identifies issues. The orchestrator spawns via `spawn_agent({ task_name: "doc-gen-fix", fork_context: false })` and retrieves results via `wait_agent`. The agent reads this file as part of its MANDATORY FIRST STEPS, applies fixes to affected documents, and returns a JSON summary.
Automatically repair specification issues identified in Phase 6 Readiness Check.
@@ -96,11 +96,10 @@ TASK:
MODE: analysis
EXPECTED: Corrected document content addressing all listed issues
CONSTRAINTS: Minimal changes - only fix flagged issues, do not restructure unflagged sections
" --tool gemini --mode analysis`,
run_in_background: true
" --tool gemini --mode analysis`
});
// Wait for result, apply fixes to document
// Parse result, apply fixes to document
// Update document version in frontmatter
}
```

View File

@@ -102,10 +102,9 @@ MODE: analysis
EXPECTED: JSON-compatible output with: dimension scores, overall score, gate, issues list (severity + description + location), traceability matrix
CONSTRAINTS: Be thorough but fair. Focus on actionable issues.
" --tool gemini --mode analysis`,
run_in_background: true
});
// Wait for CLI result
// Parse CLI result
```
### Step 2b: Codex Technical Depth Review
@@ -139,7 +138,6 @@ MODE: analysis
EXPECTED: Technical depth review with: per-dimension scores (1-5), specific gaps, improvement recommendations, overall technical readiness assessment
CONSTRAINTS: Focus on gaps that would cause implementation ambiguity. Ignore cosmetic issues.
" --tool codex --mode analysis`,
run_in_background: true
});
// Codex result merged with Gemini result in Step 3
@@ -350,7 +348,9 @@ if (selection === "Execute via lite-plan") {
const epicContent = Read(firstMvpFile);
const title = extractTitle(epicContent); // First # heading
const description = extractSection(epicContent, "Description");
Skill(skill="workflow-lite-plan", args=`"${title}: ${description}"`)
// Invoke workflow-lite-plan skill with Epic description
// Codex: use skill invocation mechanism for the target skill
invoke_skill("workflow-lite-plan", `"${title}: ${description}"`)
}
if (selection === "Full planning" || selection === "Create roadmap") {
@@ -374,8 +374,8 @@ SCOPE: ${extractScope(specSummary)}
CONTEXT: Generated from spec session ${specConfig.session_id}. Source: ${workDir}/`;
// Step C: Create WFS session (provides session directory + .brainstorming/)
Skill(skill="workflow:session:start", args=`--auto "${structuredDesc}"`)
// Produces sessionId (WFS-xxx) and session directory at .workflow/active/{sessionId}/
invoke_skill("workflow:session:start", `--auto "${structuredDesc}"`)
// -> Produces sessionId (WFS-xxx) and session directory at .workflow/active/{sessionId}/
// Step D: Create .brainstorming/ bridge files
const brainstormDir = `.workflow/active/${sessionId}/.brainstorming`;
@@ -476,9 +476,9 @@ ${extractSection(epicContent, "Architecture")}
// → context-package.json.brainstorm_artifacts populated
// → action-planning-agent loads guidance_specification (P1) + feature_index (P2)
if (selection === "Full planning") {
Skill(skill="workflow-plan", args=`"${structuredDesc}"`)
invoke_skill("workflow-plan", `"${structuredDesc}"`)
} else {
Skill(skill="workflow:req-plan-with-file", args=`"${extractGoal(specSummary)}"`)
invoke_skill("workflow:req-plan-with-file", `"${extractGoal(specSummary)}"`)
}
}

View File

@@ -281,12 +281,12 @@ const answer = request_user_input({
const selection = answer.answers.next_step.answers[0];
if (selection === "Execute via team-planex(Recommended)") {
const issueIds = createdIssues.map(i => i.issue_id).join(',');
Skill({ skill: "team-planex", args: `--issues ${issueIds}` });
invoke_skill("team-planex", `--issues ${issueIds}`);
}
if (selection === "Wave 1 only") {
const wave1Ids = createdIssues.filter(i => i.wave === 1).map(i => i.issue_id).join(',');
Skill({ skill: "team-planex", args: `--issues ${wave1Ids}` });
invoke_skill("team-planex", `--issues ${wave1Ids}`);
}
if (selection === "Done") {

View File

@@ -2,7 +2,7 @@
name: spec-setup
description: Initialize project-level state and configure specs via interactive questionnaire.
argument-hint: "[--regenerate] [--skip-specs] [--reset]"
allowed-tools: spawn_agent, wait, send_input, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
allowed-tools: spawn_agent, wait_agent, send_message, assign_task, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
---
# Workflow Spec Setup Command
@@ -184,11 +184,11 @@ Project root: ${projectRoot}
})
// Wait for completion
const result = wait({ ids: [exploreAgent], timeout_ms: 600000 })
const result = wait_agent({ targets: [exploreAgent], timeout_ms: 600000 })
if (result.timed_out) {
send_input({ id: exploreAgent, message: 'Complete analysis now and write project-tech.json.' })
const retry = wait({ ids: [exploreAgent], timeout_ms: 300000 })
assign_task({ target: exploreAgent, items: [{ type: "text", text: "Complete analysis now and write project-tech.json." }] })
const retry = wait_agent({ targets: [exploreAgent], timeout_ms: 300000 })
if (retry.timed_out) throw new Error('Agent timeout')
}

View File

@@ -6,7 +6,7 @@ description: |
(spawn_agent or N+1 parallel agents) → plan verification → interactive replan.
Produces IMPL_PLAN.md, task JSONs, TODO_LIST.md.
argument-hint: "[-y|--yes] [--session ID] \"task description\" | verify [--session ID] | replan [--session ID] [IMPL-N] \"changes\""
allowed-tools: spawn_agent, wait, send_input, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
allowed-tools: spawn_agent, wait_agent, send_message, assign_task, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
---
## Auto Mode
@@ -271,7 +271,7 @@ Format: {
`
})
wait({ id: ctxAgent })
wait_agent({ targets: [ctxAgent] })
close_agent({ id: ctxAgent })
// Parse outputs
@@ -403,7 +403,7 @@ ${contextPkg.conflict_risk === 'medium' || contextPkg.conflict_risk === 'high'
`
})
wait({ id: planAgent })
wait_agent({ targets: [planAgent] })
close_agent({ id: planAgent })
}
```
@@ -439,7 +439,7 @@ Mark cross-module dependencies as CROSS::${'{module}'}::${'{task}'}
}
// Wait for all module planners
wait({ ids: moduleAgents.map(a => a.id) })
wait_agent({ targets: moduleAgents.map(a => a.id) })
moduleAgents.forEach(a => close_agent({ id: a.id }))
// +1 Coordinator: integrate all modules
@@ -460,7 +460,7 @@ Integrate ${uniqueModules.length} module plans into unified IMPL_PLAN.md.
`
})
wait({ id: coordAgent })
wait_agent({ targets: [coordAgent] })
close_agent({ id: coordAgent })
}
```
@@ -601,7 +601,7 @@ ${replanTaskId ? `**Target Task**: ${sessionFolder}/.task/${replanTaskId}.json`
`
})
wait({ id: replanAgent })
wait_agent({ targets: [replanAgent] })
close_agent({ id: replanAgent })
console.log(` Replan complete. Review: ${sessionFolder}/IMPL_PLAN.md`)

View File

@@ -7,7 +7,7 @@ description: |
interactive verification. Produces IMPL_PLAN.md with Red-Green-Refactor cycles,
task JSONs, TODO_LIST.md.
argument-hint: "[-y|--yes] [--session ID] \"task description\" | verify [--session ID]"
allowed-tools: spawn_agent, wait, send_input, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
allowed-tools: spawn_agent, wait_agent, send_message, assign_task, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
---
## Auto Mode
@@ -298,7 +298,7 @@ Format: {
`
})
wait({ id: ctxAgent })
wait_agent({ targets: [ctxAgent] })
close_agent({ id: ctxAgent })
// Parse outputs
@@ -367,7 +367,7 @@ Format: {
`
})
wait({ id: testAgent })
wait_agent({ targets: [testAgent] })
close_agent({ id: testAgent })
const testContext = JSON.parse(Read(`${sessionFolder}/.process/test-context-package.json`) || '{}')
@@ -500,7 +500,7 @@ Each task MUST include Red-Green-Refactor cycle:
`
})
wait({ id: planAgent })
wait_agent({ targets: [planAgent] })
close_agent({ id: planAgent })
console.log(` TDD tasks generated`)
@@ -690,7 +690,7 @@ BLOCKED: Critical failures, must fix before execution
`
})
wait({ id: verifyAgent })
wait_agent({ targets: [verifyAgent] })
close_agent({ id: verifyAgent })
const report = Read(`${sessionFolder}/.process/TDD_COMPLIANCE_REPORT.md`)

View File

@@ -1,7 +1,7 @@
---
name: workflow-test-fix-cycle
description: "End-to-end test-fix workflow generate test sessions with progressive layers (L0-L3), then execute iterative fix cycles until pass rate >= 95%. Combines test-fix-gen and test-cycle-execute into a unified pipeline. Triggers on \"workflow:test-fix-cycle\"."
allowed-tools: spawn_agent, wait, send_input, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
allowed-tools: spawn_agent, wait_agent, send_message, assign_task, close_agent, request_user_input, Read, Write, Edit, Bash, Glob, Grep
---
# Workflow Test-Fix Cycle
@@ -64,7 +64,7 @@ Task Pipeline:
1. **Two-Phase Pipeline**: Generation (Phase 1) creates session + tasks, Execution (Phase 2) runs iterative fix cycles
2. **Pure Orchestrator**: Dispatch to phase docs, parse outputs, pass context between phases
3. **Phase 1 Auto-Continue**: Sub-phases within Phase 1 run autonomously
4. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait → close_agent
4. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait_agent → close_agent
5. **Progressive Test Layers**: L0 (Static) → L1 (Unit) → L2 (Integration) → L3 (E2E)
6. **AI Code Issue Detection**: Validates against common AI-generated code problems
7. **Intelligent Strategy Engine**: conservative → aggressive → surgical based on iteration context
@@ -99,33 +99,33 @@ ${deliverables}
})
```
### wait
### wait_agent
Retrieve results from a subagent (the only way to obtain them).
```javascript
const result = wait({
ids: [agentId],
const result = wait_agent({
targets: [agentId],
timeout_ms: 600000 // 10 minutes
})
if (result.timed_out) {
// Handle timeout - can continue waiting or send_input to prompt completion
// Handle timeout - can use assign_task to prompt completion
}
```
### send_input
Continue interaction with active subagent (for clarification or follow-up).
### assign_task
Assign new work to an active subagent (for clarification or follow-up).
```javascript
send_input({
id: agentId,
message: `
assign_task({
target: agentId,
items: [{ type: "text", text: `
## CLARIFICATION ANSWERS
${answers}
## NEXT STEP
Continue with plan generation.
`
` }]
})
```
@@ -225,7 +225,7 @@ Phase 2: Test-Cycle Execution (phases/02-test-cycle-execute.md)
6. **Task Attachment Model**: Sub-tasks ATTACH → execute → COLLAPSE
7. **MANDATORY CONFIRMATION GATE**: After Phase 1 completes, you MUST stop and present the generated plan to the user. Wait for explicit user approval via request_user_input before starting Phase 2. NEVER auto-proceed from Phase 1 to Phase 2
8. **Phase 2 Continuous**: Once user approves, Phase 2 runs continuously until pass rate >= 95% or max iterations reached
9. **Explicit Lifecycle**: Always close_agent after wait completes to free resources
9. **Explicit Lifecycle**: Always close_agent after wait_agent completes to free resources
## Phase Execution
@@ -235,9 +235,9 @@ Phase 2: Test-Cycle Execution (phases/02-test-cycle-execute.md)
5 sub-phases that create a test session and generate task JSONs:
1. Create Test Session → `testSessionId`
2. Gather Test Context (spawn_agent → wait → close_agent) → `contextPath`
3. Test Generation Analysis (spawn_agent → wait → close_agent) → `TEST_ANALYSIS_RESULTS.md`
4. Generate Test Tasks (spawn_agent → wait → close_agent) → `IMPL-001.json`, `IMPL-001.3.json`, `IMPL-001.5.json`, `IMPL-002.json`, `IMPL_PLAN.md`, `TODO_LIST.md`
2. Gather Test Context (spawn_agent → wait_agent → close_agent) → `contextPath`
3. Test Generation Analysis (spawn_agent → wait_agent → close_agent) → `TEST_ANALYSIS_RESULTS.md`
4. Generate Test Tasks (spawn_agent → wait_agent → close_agent) → `IMPL-001.json`, `IMPL-001.3.json`, `IMPL-001.5.json`, `IMPL-002.json`, `IMPL_PLAN.md`, `TODO_LIST.md`
5. Phase 1 Summary → **⛔ MANDATORY: Present plan and wait for user confirmation before Phase 2**
**Agents Used** (via spawn_agent):
@@ -343,7 +343,7 @@ Phase 2: Test-Cycle Execution (phases/02-test-cycle-execute.md)
```javascript
try {
const agentId = spawn_agent({ message: "..." });
const result = wait({ ids: [agentId], timeout_ms: 600000 });
const result = wait_agent({ targets: [agentId], timeout_ms: 600000 });
// ... process result ...
close_agent({ id: agentId });
} catch (error) {
@@ -358,7 +358,7 @@ try {
- Detect input type (session ID / description / file path / resume)
- Initialize progress tracking with 2 top-level phases
- Read `phases/01-test-fix-gen.md` for detailed sub-phase execution
- Execute 5 sub-phases with spawn_agent → wait → close_agent lifecycle
- Execute 5 sub-phases with spawn_agent → wait_agent → close_agent lifecycle
- Verify all Phase 1 outputs (4+ task JSONs, IMPL_PLAN.md, TODO_LIST.md)
- **Ensure all agents are closed** after each sub-phase completes
- **⛔ MANDATORY: Present plan summary and request_user_input for confirmation**
@@ -371,7 +371,7 @@ try {
**Phase 2 (Execution)**:
- Read `phases/02-test-cycle-execute.md` for detailed execution logic
- Load session state and task queue
- Execute iterative test-fix cycles with spawn_agent → wait → close_agent
- Execute iterative test-fix cycles with spawn_agent → wait_agent → close_agent
- Track iterations in progress tracking
- Auto-complete session on success (pass rate >= 95%)
- **Ensure all agents are closed** after each iteration

View File

@@ -92,7 +92,7 @@ Gather test context for session [testSessionId]
`
});
const contextResult = wait({ ids: [contextAgentId], timeout_ms: 600000 });
const contextResult = wait_agent({ targets: [contextAgentId], timeout_ms: 600000 });
close_agent({ id: contextAgentId });
// Prompt Mode - gather from codebase via context-search-agent
@@ -119,7 +119,7 @@ Gather project context for session [testSessionId]: [task_description]
`
});
const contextResult = wait({ ids: [contextAgentId], timeout_ms: 600000 });
const contextResult = wait_agent({ targets: [contextAgentId], timeout_ms: 600000 });
close_agent({ id: contextAgentId });
```
@@ -203,7 +203,7 @@ ${projectRoot}/.workflow/active/[testSessionId]/.process/TEST_ANALYSIS_RESULTS.m
`
});
const analysisResult = wait({ ids: [analysisAgentId], timeout_ms: 1200000 });
const analysisResult = wait_agent({ targets: [analysisAgentId], timeout_ms: 1200000 });
close_agent({ id: analysisAgentId });
```
@@ -272,7 +272,7 @@ Generate test-specific IMPL_PLAN.md and task JSONs for session [testSessionId]
`
});
const taskGenResult = wait({ ids: [taskGenAgentId], timeout_ms: 600000 });
const taskGenResult = wait_agent({ targets: [taskGenAgentId], timeout_ms: 600000 });
close_agent({ id: taskGenAgentId });
```
@@ -446,7 +446,7 @@ Sub-Phase 1.5: Phase 1 Summary
```javascript
try {
const agentId = spawn_agent({ message: "..." });
const result = wait({ ids: [agentId], timeout_ms: 600000 });
const result = wait_agent({ targets: [agentId], timeout_ms: 600000 });
// ... process result ...
close_agent({ id: agentId });
} catch (error) {

View File

@@ -139,8 +139,8 @@ const analysisAgentId = spawn_agent({
});
// Wait for analysis completion
const analysisResult = wait({
ids: [analysisAgentId],
const analysisResult = wait_agent({
targets: [analysisAgentId],
timeout_ms: 2400000 // 40 minutes (CLI analysis timeout)
});
@@ -199,8 +199,8 @@ const fixAgentId = spawn_agent({
});
// Wait for execution completion
const fixResult = wait({
ids: [fixAgentId],
const fixResult = wait_agent({
targets: [fixAgentId],
timeout_ms: 600000 // 10 minutes
});
@@ -390,7 +390,7 @@ Fallback is triggered when any of these conditions occur:
```javascript
try {
const agentId = spawn_agent({ message: "..." });
const result = wait({ ids: [agentId], timeout_ms: 2400000 });
const result = wait_agent({ targets: [agentId], timeout_ms: 2400000 });
// ... process result ...
close_agent({ id: agentId });
} catch (error) {