Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-02-28 09:23:08 +08:00)

---
name: req-plan-with-file
description: Requirement-level progressive roadmap planning with issue creation. Decomposes requirements into convergent layers or task sequences, creates issues via ccw issue create, and generates roadmap.md for human review. Issues stored in .workflow/issues/issues.jsonl (single source of truth).
argument-hint: "[-y|--yes] [-c|--continue] [-m|--mode progressive|direct|auto] \"requirement description\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---

## Auto Mode

When `--yes` or `-y`: auto-confirm strategy selection, use the recommended mode, and skip interactive validation rounds.

# Workflow Req-Plan Command (/workflow:req-plan-with-file)
## Quick Start

```bash
# Basic usage
/workflow:req-plan-with-file "Implement user authentication system with OAuth and 2FA"

# With mode selection
/workflow:req-plan-with-file -m progressive "Build real-time notification system"  # Layered MVP→iterations
/workflow:req-plan-with-file -m direct "Refactor payment module"                   # Topologically-sorted task sequence
/workflow:req-plan-with-file -m auto "Add data export feature"                     # Auto-select strategy

# Continue existing session
/workflow:req-plan-with-file --continue "user authentication system"

# Auto mode
/workflow:req-plan-with-file -y "Implement caching layer"
```
**Context Source**: cli-explore-agent (optional) + requirement analysis
**Output Directory**: `.workflow/.req-plan/{session-id}/`
**Core Innovation**: Requirement decomposition → issue creation via `ccw issue create`. Issues are stored in `.workflow/issues/issues.jsonl` (single source of truth). Wave/dependency info is embedded in issue tags (`wave-N`) and `extended_context.notes.depends_on_issues`. team-planex consumes issues directly by ID or tag query.

## Overview

Requirement-level layered roadmap planning command. Decomposes a requirement into **convergent layers or task sequences** and creates issues via `ccw issue create`. Issues are the single source of truth in `.workflow/issues/issues.jsonl`; wave and dependency info is embedded in issue tags and `extended_context.notes`.

**Modes**:
- **Progressive**: Layered MVP→iterations, suitable for high-uncertainty requirements (validate first, then refine)
- **Direct**: Topologically-sorted task sequence, suitable for low-uncertainty requirements (clear tasks, directly ordered)
- **Auto**: Automatically selects one of the above based on uncertainty level

**Core Workflow**: Requirement Understanding → Strategy Selection → Context Collection (optional) → Decomposition + Issue Creation → Validation → team-planex Handoff
```
┌─────────────────────────────────────────────────────────────────────────┐
│                       REQ-PLAN ROADMAP WORKFLOW                         │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  Phase 1: Requirement Understanding & Strategy Selection                │
│  ├─ Parse requirement: goal / constraints / stakeholders                │
│  ├─ Assess uncertainty level                                            │
│  │   ├─ High uncertainty → recommend progressive                        │
│  │   └─ Low uncertainty → recommend direct                              │
│  ├─ User confirms strategy (-m skips, -y auto-selects recommended)      │
│  └─ Initialize strategy-assessment.json + roadmap.md skeleton           │
│                                                                         │
│  Phase 2: Context Collection (Optional)                                 │
│  ├─ Detect codebase: package.json / go.mod / src / ...                  │
│  ├─ Has codebase → cli-explore-agent explores relevant modules          │
│  └─ No codebase → skip, pure requirement decomposition                  │
│                                                                         │
│  Phase 3: Decomposition & Issue Creation (cli-roadmap-plan-agent)       │
│  ├─ Progressive: define 2-4 layers, each with full convergence          │
│  ├─ Direct: vertical slicing + topological sort, each with convergence  │
│  ├─ Create issues via ccw issue create (ISS-xxx IDs)                    │
│  └─ Generate roadmap.md (with issue ID references)                      │
│                                                                         │
│  Phase 4: Validation & team-planex Handoff                              │
│  ├─ Display decomposition results (tabular + convergence criteria)      │
│  ├─ User feedback loop (up to 5 rounds)                                 │
│  └─ Next steps: team-planex full execution / wave-by-wave / view        │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```
## Output

```
.workflow/.req-plan/RPLAN-{slug}-{YYYY-MM-DD}/
├── roadmap.md                  # Human-readable roadmap with issue ID references
├── strategy-assessment.json    # Strategy assessment result
└── exploration-codebase.json   # Codebase context (optional)
```

| File | Phase | Description |
|------|-------|-------------|
| `strategy-assessment.json` | 1 | Uncertainty analysis + mode recommendation + extracted goal/constraints/stakeholders |
| `roadmap.md` (skeleton) | 1 | Initial skeleton with placeholders, finalized in Phase 3 |
| `exploration-codebase.json` | 2 | Codebase context: relevant modules, patterns, integration points (only when a codebase exists) |
| `roadmap.md` (final) | 3 | Human-readable roadmap with issue ID references, convergence details, team-planex execution guide |
**roadmap.md template**:

```markdown
# Requirement Roadmap

**Session**: RPLAN-{slug}-{date}
**Requirement**: {requirement}
**Strategy**: {progressive|direct}
**Generated**: {timestamp}

## Strategy Assessment
- Uncertainty level: {high|medium|low}
- Decomposition mode: {progressive|direct}
- Assessment basis: {factors summary}

## Roadmap
{Tabular display of layers/tasks}

## Convergence Criteria Details
{Expanded convergence for each layer/task}

## Risks
{Aggregated risks}

## Next Steps
{Execution guidance}
```
## Configuration

| Flag | Default | Description |
|------|---------|-------------|
| `-y, --yes` | false | Auto-confirm all decisions |
| `-c, --continue` | false | Continue existing session |
| `-m, --mode` | auto | Decomposition strategy: progressive / direct / auto |

**Session ID format**: `RPLAN-{slug}-{YYYY-MM-DD}`
- slug: lowercase, alphanumeric + CJK characters, max 40 chars
- date: YYYY-MM-DD (UTC+8)
- Auto-detect continue: if the session folder and roadmap.md exist → continue mode
## JSONL Schema Design

### Issue Format

Each line in `issues.jsonl` follows the standard `issues-jsonl-schema.json` (see `.ccw/workflows/cli-templates/schemas/issues-jsonl-schema.json`).

**Key fields per issue**:

| Field | Source | Description |
|-------|--------|-------------|
| `id` | `ccw issue create` | Formal ISS-YYYYMMDD-NNN ID |
| `title` | Layer/task mapping | `[LayerName] goal` or `[TaskType] title` |
| `context` | Convergence fields | Markdown with goal, scope, convergence criteria, verification, DoD |
| `priority` | Effort mapping | small→4, medium→3, large→2 |
| `source` | Fixed | `"text"` |
| `tags` | Auto-generated | `["req-plan", mode, name/type, "wave-N"]` |
| `extended_context.notes` | Metadata JSON | session, strategy, original_id, wave, depends_on_issues |
| `lifecycle_requirements` | Fixed | test_strategy, regression_scope, acceptance_type, commit_strategy |
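
As a concrete illustration of the field mapping above, here is a minimal sketch that turns one decomposed layer into an issue payload. The helper name and the item shape are hypothetical, not part of the command spec; the actual creation goes through `ccw issue create`.

```javascript
// Sketch (assumption): map a decomposed layer/task to an issue payload
// following the key-fields table. `toIssuePayload` is a hypothetical helper.
const EFFORT_PRIORITY = { small: 4, medium: 3, large: 2 }

function toIssuePayload(item, { session, mode }) {
  return {
    title: `[${item.name}] ${item.goal}`,
    priority: EFFORT_PRIORITY[item.effort] ?? 3,
    source: 'text',
    tags: ['req-plan', mode, item.name.toLowerCase(), `wave-${item.wave}`],
    extended_context: {
      notes: {
        session,
        strategy: mode,
        original_id: item.id,
        wave: item.wave,
        depends_on_issues: item.depends_on ?? []
      }
    }
  }
}

const payload = toIssuePayload(
  { id: 'L0', name: 'MVP', goal: 'Login works end-to-end', effort: 'large', wave: 1 },
  { session: 'RPLAN-auth-2026-02-28', mode: 'progressive' }
)
console.log(payload.priority)         // 2 (effort "large")
console.log(payload.tags.join(','))   // req-plan,progressive,mvp,wave-1
```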

### Convergence Criteria (in issue context)

Each issue's `context` field contains convergence information:

| Section | Purpose | Requirement |
|---------|---------|-------------|
| `## Convergence Criteria` | List of checkable, specific conditions | **Testable** (can be written as assertions or manual steps) |
| `## Verification` | How to verify these conditions | **Executable** (command, script, or explicit steps) |
| `## Definition of Done` | One-sentence completion definition | **Business language** (a non-technical person can judge) |
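
A minimal sketch of how the presence of these three sections could be checked mechanically. The helper is hypothetical; the real quality check is performed inside cli-roadmap-plan-agent.

```javascript
// Hypothetical check: every issue context should contain the three convergence sections.
const REQUIRED_SECTIONS = ['## Convergence Criteria', '## Verification', '## Definition of Done']

function missingSections(context) {
  return REQUIRED_SECTIONS.filter(section => !context.includes(section))
}

const context = [
  '## Convergence Criteria',
  '- Login succeeds with a valid OAuth token',
  '## Verification',
  '- Run: npm test -- auth.e2e',
  '## Definition of Done',
  'A user can sign in and reach the dashboard.'
].join('\n')

console.log(missingSections(context))                 // []
console.log(missingSections('## Verification\n...'))  // ['## Convergence Criteria', '## Definition of Done']
```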

## Implementation

### Session Initialization

**Objective**: Create session context and directory structure.

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue') || $ARGUMENTS.includes('-c')
const modeMatch = $ARGUMENTS.match(/(?:--mode|-m)\s+(progressive|direct|auto)/)
const requestedMode = modeMatch ? modeMatch[1] : 'auto'

// Clean requirement text (remove flags)
const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|-c|--mode\s+\w+|-m\s+\w+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `RPLAN-${slug}-${dateStr}`
const sessionFolder = `.workflow/.req-plan/${sessionId}`

// Auto-detect continue: session folder + roadmap.md exists → continue mode
Bash(`mkdir -p ${sessionFolder}`)
```
### Phase 1: Requirement Understanding & Strategy Selection

**Objective**: Parse requirement, assess uncertainty, select decomposition strategy.

**Prerequisites**: Session initialized, requirement description available.

**Steps**:
1. **Parse Requirement**
   - Extract core goal (what to achieve)
   - Identify constraints (tech stack, timeline, compatibility, etc.)
   - Identify stakeholders (users, admins, developers, etc.)
   - Identify keywords to determine the domain

2. **Assess Uncertainty Level**

```javascript
const uncertaintyFactors = {
  scope_clarity: 'low|medium|high',
  technical_risk: 'low|medium|high',
  dependency_unknown: 'low|medium|high',
  domain_familiarity: 'low|medium|high',
  requirement_stability: 'low|medium|high'
}
// high uncertainty (>=3 high) → progressive
// low uncertainty (>=3 low) → direct
// otherwise → ask user preference
```

3. **Strategy Selection** (skip if `-m` already specified)
```javascript
if (requestedMode !== 'auto') {
  selectedMode = requestedMode
} else if (autoYes) {
  selectedMode = recommendedMode
} else {
  AskUserQuestion({
    questions: [{
      question: `Decomposition strategy selection:\n\nUncertainty assessment: ${uncertaintyLevel}\nRecommended strategy: ${recommendedMode}\n\nSelect decomposition strategy:`,
      header: "Strategy",
      multiSelect: false,
      options: [
        {
          label: recommendedMode === 'progressive' ? "Progressive (Recommended)" : "Progressive",
          description: "Layered MVP→iterations; validate the core first, then refine progressively. Suitable for high-uncertainty requirements needing quick validation"
        },
        {
          label: recommendedMode === 'direct' ? "Direct (Recommended)" : "Direct",
          description: "Topologically-sorted task sequence with explicit dependencies. Suitable for clear requirements with a confirmed technical approach"
        }
      ]
    }]
  })
}
```
4. **Generate strategy-assessment.json**

```javascript
const strategyAssessment = {
  session_id: sessionId,
  requirement: requirement,
  timestamp: getUtc8ISOString(),
  uncertainty_factors: uncertaintyFactors,
  uncertainty_level: uncertaintyLevel, // 'high' | 'medium' | 'low'
  recommended_mode: recommendedMode,
  selected_mode: selectedMode,
  goal: extractedGoal,
  constraints: extractedConstraints,
  stakeholders: extractedStakeholders,
  domain_keywords: extractedKeywords
}
Write(`${sessionFolder}/strategy-assessment.json`, JSON.stringify(strategyAssessment, null, 2))
```
5. **Initialize roadmap.md skeleton** (placeholder sections, finalized in Phase 3)

```javascript
const roadmapMdSkeleton = `# Requirement Roadmap

**Session**: ${sessionId}
**Requirement**: ${requirement}
**Strategy**: ${selectedMode}
**Status**: Planning
**Created**: ${getUtc8ISOString()}

## Strategy Assessment
- Uncertainty level: ${uncertaintyLevel}
- Decomposition mode: ${selectedMode}

## Roadmap
> To be populated after Phase 3 decomposition

## Convergence Criteria Details
> To be populated after Phase 3 decomposition

## Risk Items
> To be populated after Phase 3 decomposition

## Next Steps
> To be populated after Phase 4 validation
`
Write(`${sessionFolder}/roadmap.md`, roadmapMdSkeleton)
```
**Success Criteria**:
- Requirement goal, constraints, stakeholders identified
- Uncertainty level assessed
- Strategy selected (progressive or direct)
- strategy-assessment.json generated
- roadmap.md skeleton initialized
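
The `>=3` thresholds described in Step 2 can be sketched as a small scoring helper. This is illustrative only; the factor names follow `uncertaintyFactors` above, and the `ask-user` marker is an assumption standing in for the interactive path.

```javascript
// Sketch: derive the recommended mode from the five uncertainty factors.
// Thresholds follow the Step 2 comments: >=3 'high' → progressive, >=3 'low' → direct.
function recommendMode(factors) {
  const values = Object.values(factors)
  const highs = values.filter(v => v === 'high').length
  const lows = values.filter(v => v === 'low').length
  if (highs >= 3) return { uncertaintyLevel: 'high', recommendedMode: 'progressive' }
  if (lows >= 3) return { uncertaintyLevel: 'low', recommendedMode: 'direct' }
  return { uncertaintyLevel: 'medium', recommendedMode: 'ask-user' }
}

const rec = recommendMode({
  scope_clarity: 'high',
  technical_risk: 'high',
  dependency_unknown: 'high',
  domain_familiarity: 'medium',
  requirement_stability: 'low'
})
console.log(rec.recommendedMode) // 'progressive' (3 of 5 factors are 'high')
```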

### Phase 2: Context Collection (Optional)

**Objective**: If a codebase exists, collect relevant context to enhance decomposition quality.

**Prerequisites**: Phase 1 complete.

**Steps**:

1. **Detect Codebase**
```javascript
// Note: a flat `test ... && echo ... || test ...` chain evaluates left-to-right
// and can echo more than one label, so use an explicit if/elif chain instead.
const hasCodebase = Bash(`
  if test -f package.json; then echo "nodejs"
  elif test -f go.mod; then echo "golang"
  elif test -f Cargo.toml; then echo "rust"
  elif test -f pyproject.toml; then echo "python"
  elif test -f pom.xml; then echo "java"
  elif test -d src; then echo "generic"
  else echo "none"
  fi
`).trim()
```
2. **Codebase Exploration** (only when hasCodebase !== 'none')

```javascript
if (hasCodebase !== 'none') {
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Explore codebase: ${slug}`,
    prompt: `
## Exploration Context
Requirement: ${requirement}
Strategy: ${selectedMode}
Project Type: ${hasCodebase}
Session: ${sessionFolder}

## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}'
2. Execute relevant searches based on requirement keywords
3. Read: .workflow/project-tech.json (if exists)
4. Read: .workflow/specs/*.md (if exists)

## Exploration Focus
- Identify modules/components related to the requirement
- Find existing patterns that should be followed
- Locate integration points for new functionality
- Assess current architecture constraints

## Output
Write findings to: ${sessionFolder}/exploration-codebase.json

Schema: {
  project_type: "${hasCodebase}",
  relevant_modules: [{name, path, relevance}],
  existing_patterns: [{pattern, files, description}],
  integration_points: [{location, description, risk}],
  architecture_constraints: [string],
  tech_stack: {languages, frameworks, tools},
  _metadata: {timestamp, exploration_scope}
}
`
  })
}
// No codebase → skip, proceed directly to Phase 3
```
**Success Criteria**:
- Codebase detection complete
- When a codebase exists, exploration-codebase.json generated
- When no codebase, skipped and logged
### Phase 3: Decomposition & Issue Creation

**Objective**: Execute requirement decomposition via `cli-roadmap-plan-agent`, creating issues and generating roadmap.md.

**Prerequisites**: Phases 1 and 2 complete. Strategy selected. Context collected (if applicable).

**Agent**: `cli-roadmap-plan-agent` (dedicated requirement roadmap planning agent; supports CLI-assisted decomposition, issue creation, and built-in quality checks)

**Steps**:
1. **Prepare Context**

```javascript
const strategy = JSON.parse(Read(`${sessionFolder}/strategy-assessment.json`))
let explorationContext = null
if (file_exists(`${sessionFolder}/exploration-codebase.json`)) {
  explorationContext = JSON.parse(Read(`${sessionFolder}/exploration-codebase.json`))
}
```
2. **Invoke cli-roadmap-plan-agent**

The agent internally executes a 5-phase flow:
- Phase 1: Context loading + requirement analysis
- Phase 2: CLI-assisted decomposition (Gemini → Qwen → manual fallback)
- Phase 3: Record enhancement + validation (schema compliance, dependency checks, convergence quality)
- Phase 4: Issue creation + roadmap generation (ccw issue create → roadmap.md)
- Phase 5: CLI decomposition quality check (**MANDATORY**: requirement coverage, convergence criteria quality, dependency correctness)
```javascript
Task({
  subagent_type: "cli-roadmap-plan-agent",
  run_in_background: false,
  description: `Roadmap decomposition: ${slug}`,
  prompt: `
## Roadmap Decomposition Task

### Input Context
- **Requirement**: ${requirement}
- **Selected Mode**: ${selectedMode}
- **Session ID**: ${sessionId}
- **Session Folder**: ${sessionFolder}

### Strategy Assessment
${JSON.stringify(strategy, null, 2)}

### Codebase Context
${explorationContext
  ? `File: ${sessionFolder}/exploration-codebase.json\n${JSON.stringify(explorationContext, null, 2)}`
  : 'No codebase detected - pure requirement decomposition'}

### Issue Creation
- Use \`ccw issue create\` for each decomposed item
- Issue format: issues-jsonl-schema (id, title, status, priority, context, source, tags, extended_context)
- Update \`roadmap.md\` with issue ID references

### CLI Configuration
- Primary tool: gemini
- Fallback: qwen
- Timeout: 60000ms

### Expected Output
1. **${sessionFolder}/roadmap.md** - Human-readable roadmap with issue references
2. Issues created in \`.workflow/issues/issues.jsonl\` via ccw issue create

### Mode-Specific Requirements

${selectedMode === 'progressive' ? `**Progressive Mode**:
- 2-4 layers from MVP to full implementation
- Each layer: id (L0-L3), name, goal, scope, excludes, convergence, risks, effort, depends_on
- L0 (MVP) must be a self-contained closed loop with no dependencies
- Scope: each feature belongs to exactly ONE layer (no overlap)
- Layer names: MVP / Usable / Refined / Optimized` :

`**Direct Mode**:
- Topologically-sorted task sequence
- Each task: id (T1-Tn), title, type, scope, inputs, outputs, convergence, depends_on, parallel_group
- Inputs must come from preceding task outputs or existing resources
- Tasks in the same parallel_group must be truly independent`}

### Convergence Quality Requirements
- criteria[]: MUST be testable (can write assertions or manual verification steps)
- verification: MUST be executable (command, script, or explicit steps)
- definition_of_done: MUST use business language (a non-technical person can judge)

### Execution
1. Analyze the requirement and build decomposition context
2. Execute CLI-assisted decomposition (Gemini, fallback Qwen)
3. Parse output, validate records, enhance convergence quality
4. Create issues via ccw issue create, generate roadmap.md
5. Execute the mandatory quality check (Phase 5)
6. Return a brief completion summary
`
})
```
**Success Criteria**:
- Issues created via `ccw issue create`, each with a formal ISS-xxx ID
- roadmap.md generated with issue ID references
- Agent's internal quality check passed
- No circular dependencies
- Progressive: 2-4 layers, no scope overlap
- Direct: tasks have explicit inputs/outputs, parallel_group assigned
### Phase 4: Validation & team-planex Handoff

**Objective**: Display decomposition results, collect user feedback, provide team-planex execution options.

**Prerequisites**: Phase 3 complete, issues created, roadmap.md generated.

**Steps**:

1. **Display Decomposition Results** (tabular format)
```javascript
// Use issueIdMap from Phase 3 for display
const issueIds = Object.values(issueIdMap)
```
**Progressive Mode**:
```markdown
## Roadmap Overview

| Wave | Issue ID | Name | Goal | Priority |
|------|----------|------|------|----------|
| 1 | ISS-xxx | MVP | ... | 2 |
| 2 | ISS-yyy | Usable | ... | 3 |

### Convergence Criteria
**Wave 1 - MVP (ISS-xxx)**:
- Criteria: [criteria list]
- Verification: [verification]
- Definition of Done: [definition_of_done]
```

**Direct Mode**:
```markdown
## Task Sequence

| Wave | Issue ID | Title | Type | Dependencies |
|------|----------|-------|------|--------------|
| 1 | ISS-xxx | ... | infrastructure | - |
| 2 | ISS-yyy | ... | feature | ISS-xxx |

### Convergence Criteria
**Wave 1 - ISS-xxx**:
- Criteria: [criteria list]
- Verification: [verification]
- Definition of Done: [definition_of_done]
```
2. **User Feedback Loop** (up to 5 rounds, skipped when autoYes)

```javascript
if (!autoYes) {
  let round = 0
  let continueLoop = true

  while (continueLoop && round < 5) {
    round++
    const feedback = AskUserQuestion({
      questions: [{
        question: `Roadmap validation (round ${round}):\nAny feedback on the current decomposition?`,
        header: "Feedback",
        multiSelect: false,
        options: [
          { label: "Approve", description: "Decomposition is reasonable, proceed to next steps" },
          { label: "Adjust Scope", description: "Some issue scopes need adjustment" },
          { label: "Modify Convergence", description: "Convergence criteria are not specific or testable enough" },
          { label: "Re-decompose", description: "Overall strategy or layering approach needs change" }
        ]
      }]
    })

    if (feedback === 'Approve') {
      continueLoop = false
    } else {
      // Handle adjustment based on feedback type
      // After adjustment, re-display and return to loop top
    }
  }
}
```
3. **Post-Completion Options**

```javascript
if (!autoYes) {
  AskUserQuestion({
    questions: [{
      question: `Roadmap generated, ${issueIds.length} issues created. Next step:`,
      header: "Next Step",
      multiSelect: false,
      options: [
        { label: "Execute with team-planex", description: `Launch team-planex to execute all ${issueIds.length} issues` },
        { label: "Execute first wave", description: "Execute Wave 1 only (filtered by the wave-1 tag)" },
        { label: "View issues", description: "View details of the created issues" },
        { label: "Done", description: "Save the roadmap, execute later" }
      ]
    }]
  })
}
```
| Selection | Action |
|-----------|--------|
| Execute with team-planex | `Skill(skill="team-planex", args="${issueIds.join(' ')}")` |
| Execute first wave | Filter issues by the `wave-1` tag, pass to team-planex |
| View issues | Display issues summary from `.workflow/issues/issues.jsonl` |
| Done | Display file paths, end |
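
The "Execute first wave" action reduces to a tag filter over `issues.jsonl` lines. A sketch with inline sample records (the IDs and tags are illustrative):

```javascript
// Sketch: select Wave-1 issue IDs from issues.jsonl content by the wave-1 tag.
const jsonlLines = [
  '{"id":"ISS-20260228-001","tags":["req-plan","progressive","mvp","wave-1"]}',
  '{"id":"ISS-20260228-002","tags":["req-plan","progressive","usable","wave-2"]}'
]

const wave1Ids = jsonlLines
  .map(line => JSON.parse(line))          // one JSON object per line
  .filter(issue => issue.tags.includes('wave-1'))
  .map(issue => issue.id)

console.log(wave1Ids) // ['ISS-20260228-001']
```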

**Success Criteria**:
- User feedback processed (or skipped via autoYes)
- Post-completion options provided
- team-planex handoff available via issue IDs
## Error Handling

| Error | Resolution |
|-------|------------|
| cli-explore-agent failure | Skip code exploration, proceed with pure requirement decomposition |
| No codebase | Normal flow, skip Phase 2 |
| Circular dependency detected | Prompt user to adjust dependencies, re-decompose |
| User feedback timeout | Save current state, display `--continue` recovery command |
| Max feedback rounds reached | Use current version to generate final artifacts |
| Session folder conflict | Append timestamp suffix |
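
The "circular dependency detected" check can be sketched with Kahn's algorithm over the decomposed items' `depends_on` edges. The task shape is hypothetical; the actual check runs inside cli-roadmap-plan-agent's validation phase.

```javascript
// Sketch: detect circular dependencies among decomposed tasks using Kahn's algorithm.
// Returns true when at least one cycle exists (topological sort cannot consume all nodes).
function hasCycle(tasks) {
  const indegree = new Map(tasks.map(t => [t.id, 0]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const dep of t.depends_on) {
      indegree.set(t.id, indegree.get(t.id) + 1) // edge dep → t
      dependents.get(dep).push(t.id)
    }
  }
  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  let visited = 0
  while (queue.length) {
    const id = queue.shift()
    visited++
    for (const next of dependents.get(id)) {
      indegree.set(next, indegree.get(next) - 1)
      if (indegree.get(next) === 0) queue.push(next)
    }
  }
  return visited !== tasks.length
}

console.log(hasCycle([
  { id: 'T1', depends_on: [] },
  { id: 'T2', depends_on: ['T1'] }
])) // false

console.log(hasCycle([
  { id: 'T1', depends_on: ['T2'] },
  { id: 'T2', depends_on: ['T1'] }
])) // true
```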

## Best Practices

1. **Clear requirement description**: A detailed description yields a more accurate uncertainty assessment and decomposition
2. **Validate MVP first**: In progressive mode, L0 should be the minimum verifiable closed loop
3. **Testable convergence**: criteria must be writable as assertions or manual steps; definition_of_done should be judgeable by non-technical stakeholders (see Convergence Criteria in JSONL Schema Design)
4. **Agent-first exploration**: Delegate codebase exploration to cli-explore-agent; do not analyze directly in the main flow
5. **Incremental validation**: Use `--continue` to iterate on existing roadmaps
6. **team-planex integration**: Created issues follow the standard issues-jsonl-schema and are directly consumable by team-planex via issue IDs and tags

---

**Now execute req-plan-with-file for**: $ARGUMENTS

---

**File**: `.claude/commands/workflow/roadmap-with-file.md` (new file, 539 lines)

---
name: roadmap-with-file
description: Strategic requirement roadmap with iterative decomposition and issue creation. Outputs roadmap.md (human-readable, single source) + issues.jsonl (machine-executable). Handoff to team-planex.
argument-hint: "[-y|--yes] [-c|--continue] [-m progressive|direct|auto] \"requirement description\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---

## Auto Mode

When `--yes` or `-y`: auto-confirm strategy selection, use the recommended mode, and skip interactive rounds.

# Workflow Roadmap Command (/workflow:roadmap-with-file)
## Quick Start

```bash
# Basic usage
/workflow:roadmap-with-file "Implement user authentication system with OAuth and 2FA"

# With mode selection
/workflow:roadmap-with-file -m progressive "Build real-time notification system"  # MVP→iterations
/workflow:roadmap-with-file -m direct "Refactor payment module"                   # Topological sequence
/workflow:roadmap-with-file -m auto "Add data export feature"                     # Auto-select

# Continue existing session
/workflow:roadmap-with-file --continue "auth system"

# Auto mode
/workflow:roadmap-with-file -y "Implement caching layer"
```
**Context Source**: cli-explore-agent (optional) + requirement analysis
**Output Directory**: `.workflow/.roadmap/{session-id}/`
**Core Output**: `roadmap.md` (single source, human-readable) + `issues.jsonl` (global, machine-executable)

## Output Artifacts

### Single Source of Truth

| Artifact | Purpose | Consumer |
|----------|---------|----------|
| `roadmap.md` | ⭐ Human-readable strategic roadmap with all context | Human review, team-planex handoff |
| `.workflow/issues/issues.jsonl` | Global issue store (appended) | team-planex, issue commands |

### Why No Separate JSON Files?

| Original File | Why Removed | Where Content Goes |
|---------------|-------------|-------------------|
| `strategy-assessment.json` | Duplicates roadmap.md content | Embedded in the `roadmap.md` Strategy Assessment section |
| `exploration-codebase.json` | Single-use intermediate | Embedded in the `roadmap.md` Codebase Context appendix |
## Overview

Strategic requirement roadmap with **iterative decomposition**. Creates a single `roadmap.md` that evolves through discussion, with issues persisted to the global `issues.jsonl` for execution.

**Core workflow**: Understand → Decompose → Iterate → Validate → Handoff
```
┌─────────────────────────────────────────────────────────────────────────┐
│                      ROADMAP ITERATIVE WORKFLOW                         │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  Phase 1: Requirement Understanding & Strategy                          │
│  ├─ Parse requirement: goal / constraints / stakeholders                │
│  ├─ Assess uncertainty level → recommend mode                           │
│  ├─ User confirms strategy (-m skips, -y auto-selects)                  │
│  └─ Initialize roadmap.md with Strategy Assessment                      │
│                                                                         │
│  Phase 2: Decomposition & Issue Creation                                │
│  ├─ cli-roadmap-plan-agent executes decomposition                       │
│  ├─ Progressive: 2-4 layers (MVP→Optimized) with convergence            │
│  ├─ Direct: Topological task sequence with convergence                  │
│  ├─ Create issues via ccw issue create → issues.jsonl                   │
│  └─ Update roadmap.md with Roadmap table + Issue references             │
│                                                                         │
│  Phase 3: Iterative Refinement (Multi-Round)                            │
│  ├─ Present roadmap to user                                             │
│  ├─ Feedback: Approve | Adjust Scope | Modify Convergence | Replan      │
│  ├─ Update roadmap.md with each round                                   │
│  └─ Repeat until approved (max 5 rounds)                                │
│                                                                         │
│  Phase 4: Handoff                                                       │
│  ├─ Final roadmap.md with Issue ID references                           │
│  ├─ Options: team-planex | first wave | view issues | done              │
│  └─ Issues ready in .workflow/issues/issues.jsonl                       │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```
## Dual Modes

| Mode | Strategy | Best For | Decomposition |
|------|----------|----------|---------------|
| **Progressive** | MVP → Usable → Refined → Optimized | High uncertainty, need validation | 2-4 layers, each with full convergence |
| **Direct** | Topological task sequence | Clear requirements, confirmed tech | Tasks with explicit inputs/outputs |

**Auto-selection logic**:
- ≥3 high uncertainty factors → Progressive
- ≥3 low uncertainty factors → Direct
- Otherwise → ask user preference
## Output Structure
|
||||
|
||||
```
|
||||
.workflow/.roadmap/RMAP-{slug}-{date}/
|
||||
└── roadmap.md # ⭐ Single source of truth
|
||||
# - Strategy Assessment (embedded)
|
||||
# - Roadmap Table
|
||||
# - Convergence Criteria per Issue
|
||||
# - Codebase Context (appendix, if applicable)
|
||||
# - Iteration History
|
||||
|
||||
.workflow/issues/issues.jsonl # Global issue store (appended)
|
||||
# - One JSON object per line
|
||||
# - Consumed by team-planex, issue commands
|
||||
```
|
||||
|
||||
## roadmap.md Template

```markdown
# Requirement Roadmap

**Session**: RMAP-{slug}-{date}
**Requirement**: {requirement}
**Strategy**: {progressive|direct}
**Status**: {Planning|Refining|Ready}
**Created**: {timestamp}

---

## Strategy Assessment

- **Uncertainty Level**: {high|medium|low}
- **Decomposition Mode**: {progressive|direct}
- **Assessment Basis**: {factors summary}
- **Goal**: {extracted goal}
- **Constraints**: {extracted constraints}
- **Stakeholders**: {extracted stakeholders}

---

## Roadmap

### Progressive Mode
| Wave | Issue ID | Layer | Goal | Priority | Dependencies |
|------|----------|-------|------|----------|--------------|
| 1 | ISS-xxx | MVP | ... | 2 | - |
| 2 | ISS-yyy | Usable | ... | 3 | ISS-xxx |

### Direct Mode
| Wave | Issue ID | Title | Type | Dependencies |
|------|----------|-------|------|--------------|
| 1 | ISS-xxx | ... | infrastructure | - |
| 2 | ISS-yyy | ... | feature | ISS-xxx |

---

## Convergence Criteria

### ISS-xxx: {Issue Title}
- **Criteria**: [testable conditions]
- **Verification**: [executable steps/commands]
- **Definition of Done**: [business language, non-technical]

### ISS-yyy: {Issue Title}
...

---

## Risks

| Risk | Severity | Mitigation |
|------|----------|------------|
| ... | ... | ... |

---

## Iteration History

### Round 1 - {timestamp}
**User Feedback**: {feedback summary}
**Changes Made**: {adjustments}
**Status**: {approved|continue iteration}

---

## Codebase Context (Optional)

*Included when codebase exploration was performed*

- **Relevant Modules**: [...]
- **Existing Patterns**: [...]
- **Integration Points**: [...]
```

## Issues JSONL Specification

### Location & Format

```
Path:     .workflow/issues/issues.jsonl
Format:   JSONL (one complete JSON object per line)
Encoding: UTF-8
Mode:     Append-only (new issues appended to end)
```

### Record Schema

```json
{
  "id": "ISS-YYYYMMDD-NNN",
  "title": "[LayerName] goal or [TaskType] title",
  "status": "pending",
  "priority": 2,
  "context": "Markdown with goal, scope, convergence, verification, DoD",
  "source": "text",
  "tags": ["roadmap", "progressive|direct", "wave-N", "layer-name"],
  "extended_context": {
    "notes": {
      "session": "RMAP-{slug}-{date}",
      "strategy": "progressive|direct",
      "wave": 1,
      "depends_on_issues": []
    }
  },
  "lifecycle_requirements": {
    "test_strategy": "unit",
    "regression_scope": "affected",
    "acceptance_type": "automated",
    "commit_strategy": "per-issue"
  }
}
```
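
A minimal shape check over the fields above can look as follows. This is a sketch only — `ccw issue create` performs the authoritative validation, and checking `status` as a free-form string is an assumption here:

```javascript
// Validate the required top-level fields of a roadmap issue record
function isValidIssue(rec) {
  return typeof rec === 'object' && rec !== null
    && typeof rec.id === 'string' && /^ISS-\d{8}-\d{3}$/.test(rec.id)
    && typeof rec.title === 'string' && rec.title.length > 0
    && typeof rec.status === 'string'
    && Number.isInteger(rec.priority)
    && Array.isArray(rec.tags);
}
```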

### Query Interface

```bash
# By ID
ccw issue get ISS-20260227-001

# By tag
ccw issue list --tag wave-1
ccw issue list --tag roadmap

# By session
ccw issue list --session RMAP-auth-2026-02-27

# Execute
ccw issue execute ISS-20260227-001 ISS-20260227-002
```

### Consumers

| Consumer | Usage |
|----------|-------|
| `team-planex` | Load by ID or tag, execute in wave order |
| `issue-manage` | CRUD operations on issues |
| `issue:execute` | DAG-based parallel execution |
| `issue:queue` | Form execution queue from solutions |

## Implementation

### Session Initialization

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const autoYes = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue') || $ARGUMENTS.includes('-c')
const modeMatch = $ARGUMENTS.match(/(?:--mode|-m)\s+(progressive|direct|auto)/)
const requestedMode = modeMatch ? modeMatch[1] : 'auto'

// Clean requirement text (word boundary keeps hyphenated words like "my-cli" intact)
const requirement = $ARGUMENTS
  .replace(/(?:--yes|-y|--continue|-c|(?:--mode|-m)\s+\w+)\b/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `RMAP-${slug}-${dateStr}`
const sessionFolder = `.workflow/.roadmap/${sessionId}`

// Auto-detect continue
if (continueMode || file_exists(`${sessionFolder}/roadmap.md`)) {
  // Resume existing session
}

Bash(`mkdir -p ${sessionFolder}`)
```
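
The slug derivation above can be isolated as a pure function (a sketch; the edge-dash trim is a small hardening step not present in the original logic):

```javascript
// Lowercase, collapse non-alphanumeric runs into dashes (CJK preserved),
// trim edge dashes, cap at 40 characters
function toSlug(requirement) {
  return requirement.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
    .replace(/^-+|-+$/g, '')
    .substring(0, 40);
}
// toSlug('Implement OAuth2 Login!') → 'implement-oauth2-login'
```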

### Phase 1: Requirement Understanding & Strategy

**Objective**: Parse requirement, assess uncertainty, select decomposition strategy, initialize roadmap.md.

**Steps**:

1. **Parse Requirement**
   - Extract: goal, constraints, stakeholders, keywords

2. **Assess Uncertainty**
   ```javascript
   const uncertaintyFactors = {
     scope_clarity: 'low|medium|high',
     technical_risk: 'low|medium|high',
     dependency_unknown: 'low|medium|high',
     domain_familiarity: 'low|medium|high',
     requirement_stability: 'low|medium|high'
   }
   // ≥3 high → progressive, ≥3 low → direct, else → ask
   ```

3. **Strategy Selection** (skip if `-m` specified or autoYes)
   ```javascript
   AskUserQuestion({
     questions: [{
       question: `Decomposition strategy:\nUncertainty: ${uncertaintyLevel}\nRecommended: ${recommendedMode}`,
       header: "Strategy",
       multiSelect: false,
       options: [
         { label: recommendedMode === 'progressive' ? "Progressive (Recommended)" : "Progressive",
           description: "MVP→iterations, validate core first" },
         { label: recommendedMode === 'direct' ? "Direct (Recommended)" : "Direct",
           description: "Topological task sequence, clear dependencies" }
       ]
     }]
   })
   ```

4. **Initialize roadmap.md** with Strategy Assessment section

**Success Criteria**:
- roadmap.md created with Strategy Assessment
- Strategy selected (progressive or direct)
- Uncertainty factors documented
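
The ≥3-high / ≥3-low rule from step 2 can be sketched as a pure function (illustrative only; `recommendMode` is not an API of this command):

```javascript
// Count factor ratings and apply the threshold rule
function recommendMode(factors) {
  const ratings = Object.values(factors);
  const high = ratings.filter(r => r === 'high').length;
  const low = ratings.filter(r => r === 'low').length;
  if (high >= 3) return 'progressive';
  if (low >= 3) return 'direct';
  return 'ask'; // fall through to AskUserQuestion
}
```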

### Phase 2: Decomposition & Issue Creation

**Objective**: Execute decomposition via `cli-roadmap-plan-agent`, create issues, update roadmap.md.

**Agent**: `cli-roadmap-plan-agent`

**Agent Tasks**:
1. Analyze requirement with strategy context
2. Execute CLI-assisted decomposition (Gemini → Qwen fallback)
3. Create issues via `ccw issue create`
4. Generate roadmap table with Issue ID references
5. Update roadmap.md

**Agent Prompt Template**:
```javascript
Task({
  subagent_type: "cli-roadmap-plan-agent",
  run_in_background: false,
  description: `Roadmap decomposition: ${slug}`,
  prompt: `
## Roadmap Decomposition Task

### Input Context
- **Requirement**: ${requirement}
- **Strategy**: ${selectedMode}
- **Session**: ${sessionId}
- **Folder**: ${sessionFolder}

### Mode-Specific Requirements

${selectedMode === 'progressive' ? `**Progressive Mode**:
- 2-4 layers: MVP / Usable / Refined / Optimized
- Each layer: goal, scope, excludes, convergence, risks, effort
- L0 (MVP) must be self-contained, no dependencies
- Scope: each feature in exactly ONE layer (no overlap)` :

`**Direct Mode**:
- Topologically-sorted task sequence
- Each task: title, type, scope, inputs, outputs, convergence, depends_on
- Inputs from preceding outputs or existing resources
- parallel_group for truly independent tasks`}

### Convergence Quality Requirements
- criteria[]: MUST be testable
- verification: MUST be executable
- definition_of_done: MUST use business language

### Output
1. **${sessionFolder}/roadmap.md** - Update with Roadmap table + Convergence sections
2. **Append to .workflow/issues/issues.jsonl** via ccw issue create

### CLI Configuration
- Primary: gemini, Fallback: qwen, Timeout: 60000ms
`
})
```

**Success Criteria**:
- Issues created in `.workflow/issues/issues.jsonl`
- roadmap.md updated with Issue references
- No circular dependencies
- Convergence criteria testable
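
The "no circular dependencies" criterion can be checked with a depth-first search over `depends_on_issues` (a sketch over already-loaded records; field access follows the JSONL schema):

```javascript
// Three-state DFS: detect a back edge in the issue dependency graph
function hasCycle(issues) {
  const deps = new Map(issues.map(i =>
    [i.id, (i.extended_context && i.extended_context.notes
            && i.extended_context.notes.depends_on_issues) || []]));
  const state = new Map(); // undefined = unvisited, 1 = on stack, 2 = done
  const visit = (id) => {
    if (state.get(id) === 2) return false;
    if (state.get(id) === 1) return true; // back edge -> cycle
    state.set(id, 1);
    for (const dep of deps.get(id) || []) {
      if (visit(dep)) return true;
    }
    state.set(id, 2);
    return false;
  };
  return issues.some(i => visit(i.id));
}
```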

### Phase 3: Iterative Refinement

**Objective**: Multi-round user feedback to refine roadmap.

**Workflow Steps**:

1. **Present Roadmap**
   - Display Roadmap table + key Convergence criteria
   - Show issue count and wave breakdown

2. **Gather Feedback** (skip if autoYes)
   ```javascript
   const feedback = AskUserQuestion({
     questions: [{
       question: `Roadmap validation (round ${round}):\n${issueCount} issues across ${waveCount} waves. Feedback?`,
       header: "Feedback",
       multiSelect: false,
       options: [
         { label: "Approve", description: "Proceed to handoff" },
         { label: "Adjust Scope", description: "Modify issue scopes" },
         { label: "Modify Convergence", description: "Refine criteria/verification" },
         { label: "Re-decompose", description: "Change strategy/layering" }
       ]
     }]
   })
   ```

3. **Process Feedback**
   - **Approve**: Exit loop, proceed to Phase 4
   - **Adjust Scope**: Modify issue context, update roadmap.md
   - **Modify Convergence**: Refine criteria/verification, update roadmap.md
   - **Re-decompose**: Return to Phase 2 with new strategy

4. **Update roadmap.md**
   - Append to Iteration History section
   - Update Roadmap table if changed
   - Increment round counter

5. **Loop** (max 5 rounds, then force proceed)

**Success Criteria**:
- User approved OR max rounds reached
- All changes recorded in Iteration History
- roadmap.md reflects final state
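
The round loop can be sketched as follows (`askFeedback` stands in for the AskUserQuestion call above; the real command also rewrites roadmap.md between rounds):

```javascript
// Iterate until approval or the round cap, then force-proceed
function refineLoop(askFeedback, maxRounds = 5) {
  for (let round = 1; round <= maxRounds; round++) {
    const choice = askFeedback(round);
    if (choice === 'Approve') return { approved: true, rounds: round };
    // 'Adjust Scope' / 'Modify Convergence' / 'Re-decompose' each update
    // roadmap.md and append a Round entry to Iteration History here
  }
  return { approved: false, rounds: maxRounds }; // max rounds: force proceed
}
```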

### Phase 4: Handoff

**Objective**: Present final roadmap, offer execution options.

**Steps**:

1. **Display Summary**
   ```markdown
   ## Roadmap Complete

   - **Session**: RMAP-{slug}-{date}
   - **Strategy**: {progressive|direct}
   - **Issues Created**: {count} across {waves} waves
   - **Roadmap**: .workflow/.roadmap/RMAP-{slug}-{date}/roadmap.md

   | Wave | Issue Count | Layer/Type |
   |------|-------------|------------|
   | 1 | 2 | MVP / infrastructure |
   | 2 | 3 | Usable / feature |
   ```

2. **Offer Options** (skip if autoYes)
   ```javascript
   AskUserQuestion({
     questions: [{
       question: `${issueIds.length} issues ready. Next step:`,
       header: "Next Step",
       multiSelect: false,
       options: [
         { label: "Execute with team-planex (Recommended)",
           description: `Run all ${issueIds.length} issues via team-planex` },
         { label: "Execute first wave",
           description: "Run wave-1 issues only" },
         { label: "View issues",
           description: "Display issue details from issues.jsonl" },
         { label: "Done",
           description: "Save and exit, execute later" }
       ]
     }]
   })
   ```

3. **Execute Selection**

   | Selection | Action |
   |-----------|--------|
   | Execute with team-planex | `Skill(skill="team-planex", args="${issueIds.join(' ')}")` |
   | Execute first wave | Filter by `wave-1` tag, pass to team-planex |
   | View issues | Display from `.workflow/issues/issues.jsonl` |
   | Done | Output paths, end |
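
The "Execute first wave" branch reduces to a tag filter over the loaded records (a sketch; tags follow the `wave-N` convention from the JSONL schema):

```javascript
// Select the issues belonging to a given wave via their wave-N tag
function filterByWave(issues, wave) {
  return issues.filter(i => Array.isArray(i.tags) && i.tags.includes(`wave-${wave}`));
}
```

The resulting IDs are then passed to team-planex in place of the full set.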

## Configuration

| Flag | Default | Description |
|------|---------|-------------|
| `-y, --yes` | false | Auto-confirm all decisions |
| `-c, --continue` | false | Continue existing session |
| `-m, --mode` | auto | Strategy: progressive / direct / auto |

**Session ID format**: `RMAP-{slug}-{YYYY-MM-DD}`

## Error Handling

| Error | Resolution |
|-------|------------|
| cli-roadmap-plan-agent fails | Retry once, fall back to manual decomposition |
| No codebase | Skip exploration, pure requirement decomposition |
| Circular dependency detected | Prompt user, re-decompose |
| User feedback timeout | Save roadmap.md, show `--continue` command |
| Max rounds reached | Force proceed with current roadmap |
| Session folder conflict | Append timestamp suffix |

## Best Practices

1. **Clear Requirements**: Detailed description → better decomposition
2. **Iterate on Roadmap**: Use feedback rounds to refine convergence criteria
3. **Testable Convergence**: criteria = assertions, DoD = business language
4. **Use Continue Mode**: Resume to iterate on existing roadmap
5. **Wave Execution**: Start with wave-1 (MVP) to validate before full execution

## Usage Recommendations

**When to Use Roadmap vs Other Commands:**

| Scenario | Recommended Command |
|----------|---------------------|
| Strategic planning, need issue tracking | `/workflow:roadmap-with-file` |
| Quick task breakdown, immediate execution | `/workflow:lite-plan` |
| Collaborative multi-agent planning | `/workflow:collaborative-plan-with-file` |
| Full specification documents | `spec-generator` skill |
| Code implementation from existing plan | `/workflow:lite-execute` |

---

**Now execute roadmap-with-file for**: $ARGUMENTS

@@ -150,6 +150,8 @@ Task completion with optional fast-advance to skip coordinator round-trip:
| No ready tasks + nothing running | SendMessage to coordinator (pipeline may be complete) |
| Checkpoint task (e.g., spec->impl transition) | SendMessage to coordinator (needs user confirmation) |

**Fast-advance failure recovery**: If a fast-advanced task fails (worker exits without completing), the coordinator detects it as an orphaned in_progress task on next `resume`/`check` and resets it to pending for re-spawn. Self-healing, no manual intervention required. See [monitor.md](roles/coordinator/commands/monitor.md) Fast-Advance Failure Recovery.

### Inline Discuss Protocol (produce roles: analyst, writer, reviewer)

After completing their primary output, produce roles call the discuss subagent inline:

@@ -176,13 +178,36 @@ Task({

The discuss subagent writes its record to `discussions/` and returns the verdict. The calling role includes the discuss result in its Phase 5 report.

**Consensus-blocked handling** (produce role responsibility):

| Verdict | Severity | Role Action |
|---------|----------|-------------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | SendMessage with `consensus_blocked=true, severity=HIGH`, include divergence details + action items. Coordinator creates revision task or pauses. |
| consensus_blocked | MEDIUM | SendMessage with `consensus_blocked=true, severity=MEDIUM`. Proceed to Phase 5 normally. Coordinator logs warning to wisdom. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. Proceed normally. |

**SendMessage format for consensus_blocked**:

```
[<role>] <task-id> complete. Discuss <round-id>: consensus_blocked (severity=<HIGH|MEDIUM>)
Divergences: <divergence-summary>
Action items: <top-3-items>
Recommendation: <revise|proceed-with-caution|escalate>
```

**Coordinator response** (see monitor.md Consensus-Blocked Handling for full flow):
- HIGH -> revision task (max 1 per task) or pause for user decision
- MEDIUM -> proceed with warning, log to wisdom/issues.md
- DISCUSS-006 HIGH -> always pause for user (final sign-off gate)

### Shared Explore Utility

Any role needing codebase context calls the explore subagent:

```
Task({
  subagent_type: "Explore",
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore <angle>",
  prompt: <see subagents/explore-subagent.md for prompt template>

@@ -84,7 +84,7 @@ EXPECTED: JSON with: problem_statement, target_users[], domain, constraints[], e

```
Task({
  subagent_type: "Explore",
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore general context",
  prompt: "Explore codebase for: <topic>

@@ -139,10 +139,25 @@ Task({
```

**Discuss result handling**:
- `consensus_reached` -> include in report, proceed normally
- `consensus_blocked` -> flag in SendMessage, coordinator decides next step

**Report**: complexity, codebase presence, problem statement, exploration dimensions, discuss verdict, output paths.
| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | Phase 5 SendMessage includes structured consensus_blocked format (see below). Do NOT self-revise. |
| consensus_blocked | MEDIUM | Phase 5 SendMessage includes warning. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |

**consensus_blocked SendMessage format**:
```
[analyst] RESEARCH-001 complete. Discuss DISCUSS-001: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <session-folder>/spec/discovery-context.json
Discussion: <session-folder>/discussions/DISCUSS-001-discussion.md
```

**Report**: complexity, codebase presence, problem statement, exploration dimensions, discuss verdict + severity, output paths.

**Success**: Both JSON files created; discuss record written; design-intelligence.json created if UI mode.

@@ -170,6 +170,80 @@ When a worker has unexpected status (not completed, not in_progress):
2. Log via team_msg (type: error)
3. Report to user: task reset, will retry on next resume

### Fast-Advance Failure Recovery

When coordinator detects a fast-advanced task has failed (task in_progress but no callback and worker gone):

```
handleCallback / handleResume detects:
+- Task is in_progress (was fast-advanced by predecessor)
+- No active_worker entry for this task
+- Original fast-advancing worker has already completed and exited
+- Resolution:
   1. TaskUpdate -> reset task to pending
   2. Remove stale active_worker entry (if any)
   3. Log via team_msg (type: error, summary: "Fast-advanced task <ID> failed, resetting for retry")
   4. -> handleSpawnNext (will re-spawn the task normally)
```

**Detection in handleResume**:

```
For each in_progress task in TaskList():
+- Has matching active_worker? -> normal, skip
+- No matching active_worker? -> orphaned (likely fast-advance failure)
   +- Check creation time: if > 5 minutes with no progress callback
   +- Reset to pending -> handleSpawnNext
```

**Prevention**: Fast-advance failures are self-healing. The coordinator reconciles orphaned tasks on every `resume`/`check` cycle. No manual intervention required unless the same task fails repeatedly (3+ resets -> escalate to user).

### Consensus-Blocked Handling

When a produce role reports `consensus_blocked` in its callback:

```
handleCallback receives message with consensus_blocked flag
+- Extract: divergence_severity, blocked_round, action_recommendation
+- Route by severity:
   |
   +- severity = HIGH (rating <= 2 or critical risk)
   |  +- Is this DISCUSS-006 (final sign-off)?
   |  |  +- YES -> PAUSE: output warning, wait for user `resume` with decision
   |  |  +- NO -> Create REVISION task:
   |  |     +- Same role, same doc type, incremented suffix (e.g., DRAFT-001-R1)
   |  |     +- Description includes: divergence details + action items from discuss
   |  |     +- blockedBy: none (immediate execution)
   |  |     +- Max 1 revision per task (DRAFT-001 -> DRAFT-001-R1, no R2)
   |  |     +- If already revised once -> PAUSE, escalate to user
   |  +- Update session: mark task as "revised", log revision chain
   |
   +- severity = MEDIUM (rating spread or single low rating)
   |  +- Proceed with warning: include divergence in next task's context
   |  +- Log action items to wisdom/issues.md for downstream awareness
   |  +- Normal handleSpawnNext
   |
   +- severity = LOW (minor suggestions only)
      +- Proceed normally: treat as consensus_reached with notes
      +- Normal handleSpawnNext
```

**Revision task template** (for HIGH severity):

```
TaskCreate({
  subject: "<ORIGINAL-ID>-R1",
  description: "Revision of <ORIGINAL-ID>: address consensus-blocked divergences.\n
    Session: <session-folder>\n
    Original artifact: <artifact-path>\n
    Divergences:\n<divergence-details>\n
    Action items:\n<action-items-from-discuss>\n
    InlineDiscuss: <same-round-id>",
  owner: "<same-role>",
  status: "pending"
})
```

## Phase 4: Validation

| Check | Criteria |

@@ -179,6 +253,8 @@ When a worker has unexpected status (not completed, not in_progress):
| Pipeline completeness | All expected tasks exist per mode |
| Completion detection | readySubjects=0 + inProgressSubjects=0 -> PIPELINE_COMPLETE |
| Fast-advance tracking | Detect tasks already in_progress via fast-advance, sync to active_workers |
| Fast-advance orphan check | in_progress tasks without active_worker entry -> reset to pending |
| Consensus-blocked routing | HIGH -> revision/pause, MEDIUM -> warn+proceed, LOW -> proceed |

## Error Handling

@@ -189,3 +265,7 @@ When a worker has unexpected status (not completed, not in_progress):
| All workers still running on resume | Report status, suggest check later |
| Pipeline stall (no ready, no running) | Check for missing tasks, report to user |
| Fast-advance conflict | Coordinator reconciles, no duplicate spawns |
| Fast-advance task orphaned | Reset to pending, re-spawn via handleSpawnNext |
| consensus_blocked HIGH | Create revision task (max 1) or pause for user |
| consensus_blocked MEDIUM | Proceed with warning, log to wisdom/issues.md |
| Revision task also blocked | Escalate to user, pause pipeline |

@@ -83,7 +83,7 @@ For each uncached angle, call the shared explore subagent:

```
Task({
  subagent_type: "Explore",
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore: <angle>",
  prompt: "Explore codebase for: <task-description>

@@ -105,15 +105,16 @@ Requirements: 2-7 tasks with id, title, files[].change, convergence.criteria, de

**Session files**:
```
<session-folder>/explorations/ (shared cache, written by explore subagent)
+-- cache-index.json
+-- explore-<angle>.json

<session-folder>/plan/
+-- exploration-<angle>.json (per angle, from shared cache)
+-- explorations-manifest.json (summary)
+-- explorations-manifest.json (summary, references ../explorations/)
+-- plan.json
+-- .task/TASK-*.json
```

Note: exploration files may be symlinked or referenced from `<session-folder>/explorations/` (shared cache location).

---

## Error Handling

@@ -118,8 +118,25 @@ Task({
```

**Discuss result handling**:
- `consensus_reached` -> include in quality report as final endorsement
- `consensus_blocked` -> flag in SendMessage, report specific divergences

| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include as final endorsement in quality report, proceed to Phase 5 |
| consensus_blocked | HIGH | **DISCUSS-006 is final sign-off gate**. Phase 5 SendMessage includes structured format. Coordinator always pauses for user decision. |
| consensus_blocked | MEDIUM | Phase 5 SendMessage includes warning. Proceed to Phase 5. Coordinator logs to wisdom. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |

**consensus_blocked SendMessage format**:
```
[reviewer] QUALITY-001 complete. Discuss DISCUSS-006: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <session-folder>/spec/readiness-report.md
Discussion: <session-folder>/discussions/DISCUSS-006-discussion.md
```

> **Note**: DISCUSS-006 HIGH always triggers user pause regardless of revision count, since this is the spec->impl gate.

---

@@ -19,10 +19,10 @@ Multi-CLI document generation for 4 document types. Each uses parallel or staged

| Doc Type | Task | Template | Discussion Input | Output |
|----------|------|----------|-----------------|--------|
| product-brief | DRAFT-001 | templates/product-brief.md | discuss-001-scope.md | spec/product-brief.md |
| requirements | DRAFT-002 | templates/requirements-prd.md | discuss-002-brief.md | spec/requirements/_index.md |
| architecture | DRAFT-003 | templates/architecture-doc.md | discuss-003-requirements.md | spec/architecture/_index.md |
| epics | DRAFT-004 | templates/epics-template.md | discuss-004-architecture.md | spec/epics/_index.md |
| product-brief | DRAFT-001 | templates/product-brief.md | DISCUSS-001-discussion.md | spec/product-brief.md |
| requirements | DRAFT-002 | templates/requirements-prd.md | DISCUSS-002-discussion.md | spec/requirements/_index.md |
| architecture | DRAFT-003 | templates/architecture-doc.md | DISCUSS-003-discussion.md | spec/architecture/_index.md |
| epics | DRAFT-004 | templates/epics-template.md | DISCUSS-004-discussion.md | spec/epics/_index.md |

### Progressive Dependencies


@@ -118,10 +118,25 @@ Task({
```

**Discuss result handling**:
- `consensus_reached` -> include action items in report
- `consensus_blocked` -> flag in SendMessage, include divergence details

**Report**: doc type, validation status, discuss verdict, average rating, summary, output path.
| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include action items in report, proceed to Phase 5 |
| consensus_blocked | HIGH | Phase 5 SendMessage includes structured consensus_blocked format (see below). Do NOT self-revise -- coordinator creates revision task. |
| consensus_blocked | MEDIUM | Phase 5 SendMessage includes warning. Proceed to Phase 5 normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |

**consensus_blocked SendMessage format**:
```
[writer] <task-id> complete. Discuss <DISCUSS-NNN>: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <output-path>
Discussion: <session-folder>/discussions/<DISCUSS-NNN>-discussion.md
```

**Report**: doc type, validation status, discuss verdict + severity, average rating, summary, output path.

---

@@ -77,7 +77,7 @@
},
"explore": {
  "spec": "subagents/explore-subagent.md",
  "type": "Explore",
  "type": "cli-explore-agent",
  "callable_by": ["analyst", "planner", "any"],
  "purpose": "Codebase exploration with centralized cache"
}

@@ -87,12 +87,27 @@ Task({
| <name> | <n>/5 |

### Return Value

**When consensus_reached**:
Return a summary string with:
- Verdict: consensus_reached or consensus_blocked
- Verdict: consensus_reached
- Average rating
- Key action items (top 3)
- Discussion record path

**When consensus_blocked**:
Return a structured summary with:
- Verdict: consensus_blocked
- Severity: HIGH | MEDIUM | LOW
  - HIGH: any rating <= 2, critical risk identified, or missing_requirements non-empty
  - MEDIUM: rating spread >= 3, or single perspective rated <= 2 with others >= 3
  - LOW: minor suggestions only, all ratings >= 3 but divergent views exist
- Average rating
- Divergence summary: top 3 divergent points with perspective attribution
- Action items: prioritized list of required changes
- Recommendation: revise | proceed-with-caution | escalate
- Discussion record path

### Error Handling
- Single CLI fails -> fallback to direct Claude analysis for that perspective
- All CLI fail -> generate basic discussion from direct artifact reading
@@ -118,9 +133,29 @@ The calling role is responsible for:
1. **Before calling**: Complete primary artifact output
2. **Calling**: Invoke discuss subagent with correct round config
3. **After calling**:
   - Include discuss verdict in Phase 5 report
   - If `consensus_blocked` with high-severity divergences -> flag in SendMessage to coordinator
   - Discussion record is written by the subagent, no further action needed

| Verdict | Severity | Role Action |
|---------|----------|-------------|
| consensus_reached | - | Include action items in Phase 5 report, proceed normally |
| consensus_blocked | HIGH | Include divergence details in Phase 5 SendMessage with structured format (see below). **Do NOT self-revise** -- coordinator decides. |
| consensus_blocked | MEDIUM | Include warning in Phase 5 SendMessage. Proceed to Phase 5 normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. Proceed normally. |

**SendMessage format for consensus_blocked (HIGH or MEDIUM)**:

```
[<role>] <task-id> complete. Discuss <round-id>: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <artifact-path>
Discussion: <discussion-record-path>
```

The coordinator receives this and routes per severity (see monitor.md Consensus-Blocked Handling):
- HIGH -> creates revision task (max 1) or pauses for user
- MEDIUM -> proceeds with warning logged to wisdom/issues.md
- DISCUSS-006 HIGH -> always pauses for user decision (final sign-off gate)

## Comparison with v3


@@ -15,7 +15,7 @@ Results were scattered across different directories and never shared. In v4, all

```
Task({
  subagent_type: "Explore",
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: "Explore <angle>",
  prompt: `Explore codebase for: <query>
@@ -136,7 +136,7 @@ For complex queries, call cli-explore-agent per angle. The calling role determin
```
# After seed analysis, explore codebase context
Task({
  subagent_type: "Explore",
  subagent_type: "cli-explore-agent",
  description: "Explore general context",
|
||||
prompt: "Explore codebase for: <topic>\nFocus angle: general\n..."
|
||||
})
|
||||
@@ -149,7 +149,7 @@ Task({
|
||||
# Multi-angle exploration before plan generation
|
||||
for angle in selected_angles:
|
||||
Task({
|
||||
subagent_type: "Explore",
|
||||
subagent_type: "cli-explore-agent",
|
||||
description: "Explore <angle>",
|
||||
prompt: "Explore codebase for: <task>\nFocus angle: <angle>\n..."
|
||||
})
|
||||
|
||||
File diff suppressed because it is too large
Load Diff
893 .codex/skills/roadmap-with-file/SKILL.md (new file)
@@ -0,0 +1,893 @@
---
name: roadmap-with-file
description: Strategic requirement roadmap with iterative decomposition and issue creation. Outputs roadmap.md (human-readable, single source) + issues.jsonl (machine-executable). Handoff to team-planex.
argument-hint: "[-y|--yes] [-c|--continue] [-m progressive|direct|auto] \"requirement description\""
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm strategy selection, use the recommended mode, skip interactive rounds.

# Roadmap-with-file Skill

## Usage

```bash
$roadmap-with-file "Implement user authentication system with OAuth and 2FA"
$roadmap-with-file -m progressive "Build real-time notification system"
$roadmap-with-file -m direct "Refactor payment module"
$roadmap-with-file -m auto "Add data export feature"
$roadmap-with-file --continue "auth system"
$roadmap-with-file -y "Implement caching layer"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --continue`: Continue an existing session
- `-m, --mode`: Strategy selection (progressive / direct / auto)

**Context Source**: cli-explore-agent (optional) + requirement analysis
**Output Directory**: `.workflow/.roadmap/{session-id}/`
**Core Output**: `roadmap.md` (single source, human-readable) + `issues.jsonl` (global, machine-executable)

---

## Subagent API Reference

### spawn_agent
Create a new subagent with a task assignment.

```javascript
const agentId = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

## TASK CONTEXT
${taskContext}

## DELIVERABLES
${deliverables}
`
})
```

### wait
Retrieve results from a subagent (the only way to obtain results).

```javascript
const result = wait({
  ids: [agentId],
  timeout_ms: 600000 // 10 minutes
})

if (result.timed_out) {
  // Handle timeout: keep waiting, or send_input to prompt completion
}
```

### send_input
Continue interaction with an active subagent (for clarification or follow-up).

```javascript
send_input({
  id: agentId,
  message: `
## CLARIFICATION ANSWERS
${answers}

## NEXT STEP
Continue with plan generation.
`
})
```

### close_agent
Clean up subagent resources (irreversible).

```javascript
close_agent({ id: agentId })
```

---

## Output Artifacts

### Single Source of Truth

| Artifact | Purpose | Consumer |
|----------|---------|----------|
| `roadmap.md` | ⭐ Human-readable strategic roadmap with all context | Human review, team-planex handoff |
| `.workflow/issues/issues.jsonl` | Global issue store (appended) | team-planex, issue commands |

### Why No Separate JSON Files?

| Original File | Why Removed | Where Content Goes |
|---------------|-------------|-------------------|
| `strategy-assessment.json` | Duplicates roadmap.md content | Embedded in `roadmap.md` Strategy Assessment section |
| `exploration-codebase.json` | Single-use intermediate | Embedded in `roadmap.md` Codebase Context appendix |

---

## Overview

Strategic requirement roadmap with **iterative decomposition**. Creates a single `roadmap.md` that evolves through discussion, with issues persisted to the global `issues.jsonl` for execution.

**Core workflow**: Understand → Decompose → Iterate → Validate → Handoff

```
┌──────────────────────────────────────────────────────────────────────┐
│                      ROADMAP ITERATIVE WORKFLOW                      │
├──────────────────────────────────────────────────────────────────────┤
│                                                                      │
│  Phase 1: Requirement Understanding & Strategy                       │
│    ├─ Parse requirement: goal / constraints / stakeholders           │
│    ├─ Assess uncertainty level → recommend mode                      │
│    ├─ User confirms strategy (-m skips, -y auto-selects)             │
│    └─ Initialize roadmap.md with Strategy Assessment                 │
│                                                                      │
│  Phase 2: Decomposition & Issue Creation                             │
│    ├─ cli-roadmap-plan-agent executes decomposition                  │
│    ├─ Progressive: 2-4 layers (MVP→Optimized) with convergence       │
│    ├─ Direct: Topological task sequence with convergence             │
│    ├─ Create issues via ccw issue create → issues.jsonl              │
│    └─ Update roadmap.md with Roadmap table + Issue references        │
│                                                                      │
│  Phase 3: Iterative Refinement (Multi-Round)                         │
│    ├─ Present roadmap to user                                        │
│    ├─ Feedback: Approve | Adjust Scope | Modify Convergence | Replan │
│    ├─ Update roadmap.md with each round                              │
│    └─ Repeat until approved (max 5 rounds)                           │
│                                                                      │
│  Phase 4: Handoff                                                    │
│    ├─ Final roadmap.md with Issue ID references                      │
│    ├─ Options: team-planex | first wave | view issues | done         │
│    └─ Issues ready in .workflow/issues/issues.jsonl                  │
│                                                                      │
└──────────────────────────────────────────────────────────────────────┘
```

---

## Dual Modes

| Mode | Strategy | Best For | Decomposition |
|------|----------|----------|---------------|
| **Progressive** | MVP → Usable → Refined → Optimized | High uncertainty, need validation | 2-4 layers, each with full convergence |
| **Direct** | Topological task sequence | Clear requirements, confirmed tech | Tasks with explicit inputs/outputs |

**Auto-selection logic**:
- ≥3 high-uncertainty factors → Progressive
- ≥3 low-uncertainty factors → Direct
- Otherwise → Ask user preference
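The selection rule above can be sketched as a pure function (the factor names and the `'ask'` fallback value are illustrative; Phase 1 below implements the same counts inline):

```javascript
// Sketch of the auto-selection rule; `factors` maps factor name -> 'low' | 'medium' | 'high'.
function recommendMode(factors) {
  const values = Object.values(factors);
  const high = values.filter(v => v === 'high').length;
  const low = values.filter(v => v === 'low').length;
  if (high >= 3) return 'progressive';
  if (low >= 3) return 'direct';
  return 'ask'; // neither threshold met: ask the user
}
```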

---

## Output Structure

```
.workflow/.roadmap/RMAP-{slug}-{date}/
└── roadmap.md                    # ⭐ Single source of truth
                                  #    - Strategy Assessment (embedded)
                                  #    - Roadmap Table
                                  #    - Convergence Criteria per Issue
                                  #    - Codebase Context (appendix, if applicable)
                                  #    - Iteration History

.workflow/issues/issues.jsonl     # Global issue store (appended)
                                  #    - One JSON object per line
                                  #    - Consumed by team-planex, issue commands
```

---

## roadmap.md Template

```markdown
# Requirement Roadmap

**Session**: RMAP-{slug}-{date}
**Requirement**: {requirement}
**Strategy**: {progressive|direct}
**Status**: {Planning|Refining|Ready}
**Created**: {timestamp}

---

## Strategy Assessment

- **Uncertainty Level**: {high|medium|low}
- **Decomposition Mode**: {progressive|direct}
- **Assessment Basis**: {factors summary}
- **Goal**: {extracted goal}
- **Constraints**: {extracted constraints}
- **Stakeholders**: {extracted stakeholders}

---

## Roadmap

### Progressive Mode
| Wave | Issue ID | Layer | Goal | Priority | Dependencies |
|------|----------|-------|------|----------|--------------|
| 1 | ISS-xxx | MVP | ... | 2 | - |
| 2 | ISS-yyy | Usable | ... | 3 | ISS-xxx |

### Direct Mode
| Wave | Issue ID | Title | Type | Dependencies |
|------|----------|-------|------|--------------|
| 1 | ISS-xxx | ... | infrastructure | - |
| 2 | ISS-yyy | ... | feature | ISS-xxx |

---

## Convergence Criteria

### ISS-xxx: {Issue Title}
- **Criteria**: [testable conditions]
- **Verification**: [executable steps/commands]
- **Definition of Done**: [business language, non-technical]

### ISS-yyy: {Issue Title}
...

---

## Risks

| Risk | Severity | Mitigation |
|------|----------|------------|
| ... | ... | ... |

---

## Iteration History

### Round 1 - {timestamp}
**User Feedback**: {feedback summary}
**Changes Made**: {adjustments}
**Status**: {approved|continue iteration}

---

## Codebase Context (Optional)

*Included when codebase exploration was performed*

- **Relevant Modules**: [...]
- **Existing Patterns**: [...]
- **Integration Points**: [...]
```

---

## Issues JSONL Specification

### Location & Format

```
Path: .workflow/issues/issues.jsonl
Format: JSONL (one complete JSON object per line)
Encoding: UTF-8
Mode: Append-only (new issues appended to end)
```

### Record Schema

```json
{
  "id": "ISS-YYYYMMDD-NNN",
  "title": "[LayerName] goal or [TaskType] title",
  "status": "pending",
  "priority": 2,
  "context": "Markdown with goal, scope, convergence, verification, DoD",
  "source": "text",
  "tags": ["roadmap", "progressive|direct", "wave-N", "layer-name"],
  "extended_context": {
    "notes": {
      "session": "RMAP-{slug}-{date}",
      "strategy": "progressive|direct",
      "wave": 1,
      "depends_on_issues": []
    }
  },
  "lifecycle_requirements": {
    "test_strategy": "unit",
    "regression_scope": "affected",
    "acceptance_type": "automated",
    "commit_strategy": "per-issue"
  }
}
```

### Query Interface

```bash
# By ID
ccw issue get ISS-20260227-001

# By tag
ccw issue list --tag wave-1
ccw issue list --tag roadmap

# By session
ccw issue list --session RMAP-auth-2026-02-27

# Execute
ccw issue execute ISS-20260227-001 ISS-20260227-002
```

---

## Implementation

### Session Initialization

```javascript
// Shift the clock to UTC+8 for naming; note the string still ends in "Z".
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue') || $ARGUMENTS.includes('-c')
const modeMatch = $ARGUMENTS.match(/(?:--mode|-m)\s+(progressive|direct|auto)/)
const requestedMode = modeMatch ? modeMatch[1] : 'auto'

// Clean requirement text (flags anchored to whitespace so hyphens inside words survive)
const requirement = $ARGUMENTS
  .replace(/(^|\s)(--yes|--continue|--mode\s+\w+|-y|-c|-m\s+\w+)(?=\s|$)/g, ' ')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `RMAP-${slug}-${dateStr}`
const sessionFolder = `.workflow/.roadmap/${sessionId}`

// Auto-detect continue mode
if (continueMode || file_exists(`${sessionFolder}/roadmap.md`)) {
  // Resume existing session
  const existingRoadmap = Read(`${sessionFolder}/roadmap.md`)
  // Extract current phase and continue from there
}

Bash(`mkdir -p ${sessionFolder}`)
```
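For reference, the slug rules above behave like this when extracted into a pure function (the helper name is illustrative):

```javascript
// Same slug logic as above, isolated for illustration.
function makeSessionId(requirement, dateStr) {
  const slug = requirement.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
    .substring(0, 40);
  return `RMAP-${slug}-${dateStr}`;
}

// makeSessionId("Implement caching layer", "2026-02-27")
//   -> "RMAP-implement-caching-layer-2026-02-27"
```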

---

### Phase 1: Requirement Understanding & Strategy

**Objective**: Parse the requirement, assess uncertainty, select a decomposition strategy, initialize roadmap.md.

**Steps**:

1. **Parse Requirement**
   - Extract: goal, constraints, stakeholders, keywords

2. **Assess Uncertainty**
   ```javascript
   const uncertaintyFactors = {
     scope_clarity: 'low|medium|high',
     technical_risk: 'low|medium|high',
     dependency_unknown: 'low|medium|high',
     domain_familiarity: 'low|medium|high',
     requirement_stability: 'low|medium|high'
   }

   // Calculate recommendation
   const highCount = Object.values(uncertaintyFactors).filter(v => v === 'high').length
   const lowCount = Object.values(uncertaintyFactors).filter(v => v === 'low').length

   let recommendedMode
   if (highCount >= 3) recommendedMode = 'progressive'
   else if (lowCount >= 3) recommendedMode = 'direct'
   else recommendedMode = 'progressive' // default safer choice
   ```

3. **Strategy Selection** (skip if `-m` specified or AUTO_YES)
   ```javascript
   let selectedMode

   if (requestedMode !== 'auto') {
     selectedMode = requestedMode
   } else if (AUTO_YES) {
     selectedMode = recommendedMode
   } else {
     const answer = ASK_USER([
       {
         id: "strategy",
         type: "choice",
         prompt: `Decomposition strategy:\nUncertainty: ${uncertaintyLevel}\nRecommended: ${recommendedMode}`,
         options: [
           { value: "progressive", label: recommendedMode === 'progressive' ? "Progressive (Recommended)" : "Progressive" },
           { value: "direct", label: recommendedMode === 'direct' ? "Direct (Recommended)" : "Direct" }
         ],
         default: recommendedMode
       }
     ]) // BLOCKS (wait for user response)

     selectedMode = answer.strategy
   }
   ```

4. **Initialize roadmap.md**
   ```javascript
   const roadmapContent = `# Requirement Roadmap

**Session**: ${sessionId}
**Requirement**: ${requirement}
**Strategy**: ${selectedMode}
**Status**: Planning
**Created**: ${getUtc8ISOString()}

---

## Strategy Assessment

- **Uncertainty Level**: ${uncertaintyLevel}
- **Decomposition Mode**: ${selectedMode}
- **Assessment Basis**: ${factorsSummary}
- **Goal**: ${extractedGoal}
- **Constraints**: ${extractedConstraints}
- **Stakeholders**: ${extractedStakeholders}

---

## Roadmap

> To be populated after Phase 2 decomposition

---

## Convergence Criteria Details

> To be populated after Phase 2 decomposition

---

## Risks

> To be populated after Phase 2 decomposition

---

## Iteration History

> To be populated during Phase 3 refinement

---

## Codebase Context (Optional)

> To be populated if codebase exploration was performed
`

   Write(`${sessionFolder}/roadmap.md`, roadmapContent)
   ```

**Success Criteria**:
- roadmap.md created with Strategy Assessment
- Strategy selected (progressive or direct)
- Uncertainty factors documented

---

### Phase 2: Decomposition & Issue Creation

**Objective**: Execute decomposition via `cli-roadmap-plan-agent`, create issues, update roadmap.md.

**Steps**:

1. **Optional Codebase Exploration** (if codebase detected)
   ```javascript
   const hasCodebase = Bash(`
     test -f package.json && echo "nodejs" ||
     test -f go.mod && echo "golang" ||
     test -f Cargo.toml && echo "rust" ||
     test -f pyproject.toml && echo "python" ||
     test -d src && echo "generic" ||
     echo "none"
   `).trim()

   let codebaseContext = null

   if (hasCodebase !== 'none') {
     const exploreAgentId = spawn_agent({
       message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-explore-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

## Exploration Context
- **Requirement**: ${requirement}
- **Strategy**: ${selectedMode}
- **Project Type**: ${hasCodebase}
- **Session**: ${sessionFolder}

## Exploration Focus
- Identify modules/components related to the requirement
- Find existing patterns that should be followed
- Locate integration points for new functionality
- Assess current architecture constraints

## Output
Return findings as JSON with schema:
{
  "project_type": "${hasCodebase}",
  "relevant_modules": [{name, path, relevance}],
  "existing_patterns": [{pattern, files, description}],
  "integration_points": [{location, description, risk}],
  "architecture_constraints": [string],
  "tech_stack": {languages, frameworks, tools}
}
`
     })

     const exploreResult = wait({
       ids: [exploreAgentId],
       timeout_ms: 120000
     })

     close_agent({ id: exploreAgentId })

     if (exploreResult.status[exploreAgentId].completed) {
       codebaseContext = exploreResult.status[exploreAgentId].completed
     }
   }
   ```

2. **Execute Decomposition Agent**
   ```javascript
   const decompositionAgentId = spawn_agent({
     message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/cli-roadmap-plan-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

## Roadmap Decomposition Task

### Input Context
- **Requirement**: ${requirement}
- **Selected Mode**: ${selectedMode}
- **Session ID**: ${sessionId}
- **Session Folder**: ${sessionFolder}

### Strategy Assessment
${JSON.stringify(strategyAssessment, null, 2)}

### Codebase Context
${codebaseContext
  ? JSON.stringify(codebaseContext, null, 2)
  : 'No codebase detected - pure requirement decomposition'}

---

### Mode-Specific Requirements

${selectedMode === 'progressive' ? `**Progressive Mode**:
- 2-4 layers from MVP to full implementation
- Each layer: id (L0-L3), name, goal, scope, excludes, convergence, risks, effort, depends_on
- L0 (MVP) must be a self-contained closed loop with no dependencies
- Scope: each feature belongs to exactly ONE layer (no overlap)
- Layer names: MVP / Usable / Refined / Optimized` :

`**Direct Mode**:
- Topologically-sorted task sequence
- Each task: id (T1-Tn), title, type, scope, inputs, outputs, convergence, depends_on, parallel_group
- Inputs must come from preceding task outputs or existing resources
- Tasks in the same parallel_group must be truly independent`}

---

### Convergence Quality Requirements
- criteria[]: MUST be testable (can write assertions or manual verification steps)
- verification: MUST be executable (command, script, or explicit steps)
- definition_of_done: MUST use business language (a non-technical person can judge)

---

### Expected Output
1. **Update ${sessionFolder}/roadmap.md** with Roadmap table + Convergence sections
2. **Create issues via ccw issue create** - append to .workflow/issues/issues.jsonl

### Issue Format (for ccw issue create)
- id: ISS-YYYYMMDD-NNN (auto-generated)
- title: [LayerName] goal or [TaskType] title
- context: Markdown with goal, scope, convergence criteria, verification, DoD
- priority: small→4, medium→3, large→2
- tags: ["roadmap", mode, wave-N, layer-name]
- extended_context.notes: {session, strategy, wave, depends_on_issues}

### Execution Steps
1. Analyze requirement and build decomposition context
2. Execute decomposition (internal reasoning)
3. Validate records, check convergence quality
4. For each decomposed item:
   - Run: ccw issue create --title "..." --context "..." --tags "..." --priority N
   - Record the returned Issue ID
5. Update roadmap.md with Issue ID references
6. Return a brief completion summary with Issue IDs
`
   })

   const decompositionResult = wait({
     ids: [decompositionAgentId],
     timeout_ms: 300000 // 5 minutes for complex decomposition
   })

   close_agent({ id: decompositionAgentId })

   if (!decompositionResult.status[decompositionAgentId].completed) {
     throw new Error('Decomposition agent failed to complete')
   }

   const issueIds = decompositionResult.status[decompositionAgentId].completed.issueIds || []
   ```

**Success Criteria**:
- Issues created in `.workflow/issues/issues.jsonl`
- roadmap.md updated with Issue references
- No circular dependencies
- Convergence criteria testable
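The "no circular dependencies" criterion can be checked with a standard topological pass (a sketch: here each issue carries an inline `depends_on` list, whereas the real records keep it under `extended_context.notes.depends_on_issues`):

```javascript
// Kahn's algorithm over issue dependencies: true when the graph has no cycle.
function hasNoCycles(issues) {
  const indegree = new Map(issues.map(i => [i.id, 0]));
  const dependents = new Map(issues.map(i => [i.id, []]));
  for (const issue of issues) {
    for (const dep of issue.depends_on) {
      if (!dependents.has(dep)) continue; // external dependency, not in this roadmap
      indegree.set(issue.id, indegree.get(issue.id) + 1);
      dependents.get(dep).push(issue.id);
    }
  }
  const queue = issues.filter(i => indegree.get(i.id) === 0).map(i => i.id);
  let visited = 0;
  while (queue.length > 0) {
    const id = queue.shift();
    visited += 1;
    for (const next of dependents.get(id)) {
      indegree.set(next, indegree.get(next) - 1);
      if (indegree.get(next) === 0) queue.push(next);
    }
  }
  return visited === issues.length; // any unvisited issue sits on a cycle
}
```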
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Iterative Refinement
|
||||
|
||||
**Objective**: Multi-round user feedback to refine roadmap.
|
||||
|
||||
**Steps**:
|
||||
|
||||
1. **Display Current Roadmap**
|
||||
- Read and display Roadmap table + key Convergence criteria
|
||||
- Show issue count and wave breakdown
|
||||
|
||||
2. **Feedback Loop** (skip if AUTO_YES)
|
||||
```javascript
|
||||
let round = 0
|
||||
let approved = false
|
||||
|
||||
while (!approved && round < 5) {
|
||||
round++
|
||||
|
||||
const feedback = ASK_USER([
|
||||
{
|
||||
id: "feedback",
|
||||
type: "choice",
|
||||
prompt: `Roadmap validation (round ${round}):\n${issueIds.length} issues across ${waveCount} waves. Feedback?`,
|
||||
options: [
|
||||
{ value: "approve", label: "Approve", description: "Proceed to handoff" },
|
||||
{ value: "scope", label: "Adjust Scope", description: "Modify issue scopes" },
|
||||
{ value: "convergence", label: "Modify Convergence", description: "Refine criteria/verification" },
|
||||
{ value: "replan", label: "Re-decompose", description: "Change strategy/layering" }
|
||||
]
|
||||
}
|
||||
]) // BLOCKS (wait for user response)
|
||||
|
||||
if (feedback.feedback === 'approve') {
|
||||
approved = true
|
||||
} else {
|
||||
// Handle feedback type
|
||||
switch (feedback.feedback) {
|
||||
case 'scope':
|
||||
// Collect scope adjustments
|
||||
const scopeAdjustments = ASK_USER([
|
||||
{ id: "adjustments", type: "text", prompt: "Describe scope adjustments needed:" }
|
||||
]) // BLOCKS
|
||||
|
||||
// Update roadmap.md and issues
|
||||
// ... implementation ...
|
||||
|
||||
break
|
||||
|
||||
case 'convergence':
|
||||
// Collect convergence refinements
|
||||
const convergenceRefinements = ASK_USER([
|
||||
{ id: "refinements", type: "text", prompt: "Describe convergence refinements needed:" }
|
||||
]) // BLOCKS
|
||||
|
||||
// Update roadmap.md
|
||||
// ... implementation ...
|
||||
|
||||
break
|
||||
|
||||
case 'replan':
|
||||
// Return to Phase 2 with new strategy
|
||||
const newStrategy = ASK_USER([
|
||||
{
|
||||
id: "strategy",
|
||||
type: "choice",
|
||||
prompt: "Select new decomposition strategy:",
|
||||
options: [
|
||||
{ value: "progressive", label: "Progressive" },
|
||||
{ value: "direct", label: "Direct" }
|
||||
]
|
||||
}
|
||||
]) // BLOCKS
|
||||
|
||||
selectedMode = newStrategy.strategy
|
||||
// Re-execute Phase 2
|
||||
// ... goto Phase 2 ...
|
||||
break
|
||||
}
|
||||
|
||||
// Update Iteration History in roadmap.md
|
||||
const iterationEntry = `
|
||||
### Round ${round} - ${getUtc8ISOString()}
|
||||
**User Feedback**: ${feedback.feedback}
|
||||
**Changes Made**: ${changesMade}
|
||||
**Status**: continue iteration
|
||||
`
|
||||
Edit({
|
||||
path: `${sessionFolder}/roadmap.md`,
|
||||
old_string: "## Iteration History\n\n> To be populated during Phase 3 refinement",
|
||||
new_string: `## Iteration History\n${iterationEntry}`
|
||||
})
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
3. **Finalize Iteration History**
|
||||
```javascript
|
||||
// Update final status in roadmap.md
|
||||
Edit({
|
||||
path: `${sessionFolder}/roadmap.md`,
|
||||
old_string: "**Status**: Planning",
|
||||
new_string: "**Status**: Ready"
|
||||
})
|
||||
```
|
||||
|
||||
**Success Criteria**:
|
||||
- User approved OR max rounds reached
|
||||
- All changes recorded in Iteration History
|
||||
- roadmap.md reflects final state
|
||||
|
||||
---
|
||||
|
||||
### Phase 4: Handoff
|
||||
|
||||
**Objective**: Present final roadmap, offer execution options.
|
||||
|
||||
**Steps**:
|
||||
|
||||
1. **Display Summary**
|
||||
```markdown
|
||||
## Roadmap Complete
|
||||
|
||||
- **Session**: RMAP-{slug}-{date}
|
||||
- **Strategy**: {progressive|direct}
|
||||
- **Issues Created**: {count} across {waves} waves
|
||||
- **Roadmap**: .workflow/.roadmap/RMAP-{slug}-{date}/roadmap.md
|
||||
|
||||
| Wave | Issue Count | Layer/Type |
|
||||
|------|-------------|------------|
|
||||
| 1 | 2 | MVP / infrastructure |
|
||||
| 2 | 3 | Usable / feature |
|
||||
```
|
||||
|
||||
2. **Offer Options** (skip if AUTO_YES)
|
||||
```javascript
|
||||
let nextStep
|
||||
|
||||
if (AUTO_YES) {
|
||||
nextStep = "done" // Default to done in auto mode
|
||||
} else {
|
||||
const answer = ASK_USER([
|
||||
{
|
||||
id: "next",
|
||||
type: "choice",
|
||||
prompt: `${issueIds.length} issues ready. Next step:`,
|
||||
options: [
|
||||
{ value: "planex", label: "Execute with team-planex (Recommended)", description: `Run all ${issueIds.length} issues via team-planex` },
|
||||
{ value: "wave1", label: "Execute first wave", description: "Run wave-1 issues only" },
|
||||
{ value: "view", label: "View issues", description: "Display issue details" },
|
||||
{ value: "done", label: "Done", description: "Save and exit" }
|
||||
]
|
||||
}
|
||||
]) // BLOCKS (wait for user response)
|
||||
|
||||
nextStep = answer.next
|
||||
}
|
||||
```
|
||||
|
||||
3. **Execute Selection**
|
||||
```javascript
|
||||
switch (nextStep) {
|
||||
case 'planex':
|
||||
// Launch team-planex with all issue IDs
|
||||
Bash(`ccw skill team-planex ${issueIds.join(' ')}`)
|
||||
break
|
||||
|
||||
case 'wave1':
|
||||
// Filter issues by wave-1 tag
|
||||
Bash(`ccw skill team-planex --tag wave-1 --session ${sessionId}`)
|
||||
break
|
||||
|
||||
case 'view':
|
||||
// Display issues from issues.jsonl
|
||||
Bash(`ccw issue list --session ${sessionId}`)
|
||||
break
|
||||
|
||||
case 'done':
|
||||
// Output paths and end
|
||||
console.log(`
|
||||
Roadmap saved: ${sessionFolder}/roadmap.md
|
||||
Issues created: ${issueIds.length}
|
||||
|
||||
To execute later:
|
||||
$team-planex ${issueIds.slice(0, 3).join(' ')}...
|
||||
ccw issue list --session ${sessionId}
|
||||
`)
    break
}
```

**Success Criteria**:

- User selection executed
- Session complete
- All artifacts accessible

---

## Error Handling

| Error | Resolution |
|-------|------------|
| cli-explore-agent fails | Skip code exploration, proceed with pure requirement decomposition |
| cli-roadmap-plan-agent fails | Retry once, fall back to manual decomposition prompt |
| No codebase | Normal flow, skip exploration step |
| Circular dependency detected | Prompt user, re-decompose |
| User timeout in feedback loop | Save roadmap.md, show `--continue` command |
| Max rounds reached | Force proceed with current roadmap |
| Session folder conflict | Append timestamp suffix |

---

## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 1 execution
2. **Single Source**: All context embedded in roadmap.md, no separate JSON files
3. **Iterate on Roadmap**: Use feedback rounds to refine, not recreate
4. **Testable Convergence**: criteria = assertions, DoD = business language
5. **Explicit Lifecycle**: Always close_agent after wait completes to free resources
6. **DO NOT STOP**: Continuous workflow until handoff complete

---

## Best Practices

1. **Clear Requirements**: A detailed description yields better decomposition
2. **Iterate on Roadmap**: Use feedback rounds to refine convergence criteria
3. **Testable Convergence**: criteria = assertions, DoD = business language
4. **Use Continue Mode**: Resume to iterate on an existing roadmap
5. **Wave Execution**: Start with wave-1 (MVP) to validate before full execution

---

## Usage Recommendations

**When to Use Roadmap vs Other Skills:**

| Scenario | Recommended Skill |
|----------|------------------|
| Strategic planning, need issue tracking | `$roadmap-with-file` |
| Quick task breakdown, immediate execution | `$lite-plan` |
| Collaborative multi-agent planning | `$collaborative-plan-with-file` |
| Full specification documents | `$spec-generator` |
| Code implementation from existing plan | `$lite-execute` |
424
.codex/skills/team-lifecycle/agents/analyst.md
Normal file
@@ -0,0 +1,424 @@
# Analyst Agent

Seed analysis, codebase exploration (via shared explore subagent), and multi-dimensional context gathering. Includes inline discuss (DISCUSS-001) after research output.

## Identity

- **Type**: `produce`
- **Role File**: `~/.codex/skills/team-lifecycle/agents/analyst.md`
- **Prefix**: `RESEARCH-*`
- **Tag**: `[analyst]`
- **Responsibility**: Seed Analysis -> Codebase Exploration -> Context Packaging -> Inline Discuss -> Report

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Only process RESEARCH-* tasks
- Generate discovery-context.json and spec-config.json
- Support file reference input (@ prefix or .md/.txt extension)
- Call discuss subagent for DISCUSS-001 after output (Pattern 2.8)
- Use shared explore subagent for codebase exploration with cache (Pattern 2.9)
- Produce structured output following template
- Include file:line references in findings when applicable

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Create tasks for other roles
- Directly contact other workers
- Modify spec documents (only create discovery artifacts)
- Skip seed analysis step
- Produce unstructured output
- Use Claude-specific patterns (Task, TaskOutput, resume, SendMessage, TaskCreate)

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `ccw cli --tool gemini --mode analysis` | CLI | Seed analysis via Gemini |
| `explore-agent.md` | Subagent (Pattern 2.9) | Codebase exploration with shared cache |
| `discuss-agent.md` | Subagent (Pattern 2.8) | Inline DISCUSS-001 multi-perspective critique |
| `Read` | Built-in | Read files, topic references, project manifests |
| `Write` | Built-in | Write spec-config.json, discovery-context.json, design-intelligence.json |
| `Bash` | Built-in | Shell commands, CLI execution, project detection |
| `Glob` | Built-in | File pattern matching for project detection |

### Tool Usage Patterns

**Read Pattern**: Load context files and topic references

```
Read("<session-folder>/spec/spec-config.json")
Read("<topic-file>") -- when topic starts with @ or ends with .md/.txt
```

**Write Pattern**: Generate discovery artifacts

```
Write("<session-folder>/spec/spec-config.json", <content>)
Write("<session-folder>/spec/discovery-context.json", <content>)
Write("<session-folder>/analysis/design-intelligence.json", <content>) -- UI mode only
```

---

## Execution

### Phase 1: Task Discovery

**Objective**: Parse task assignment from orchestrator message.

| Source | Required | Description |
|--------|----------|-------------|
| Orchestrator message | Yes | Contains topic, session folder path, task ID |

**Steps**:

1. Extract session folder from task message (`Session: <path>`)
2. Extract task ID (RESEARCH-NNN pattern)
3. Parse topic from task message (first non-metadata line)
4. Determine if topic is a file reference:

| Detection | Condition | Action |
|-----------|-----------|--------|
| File reference | Topic starts with `@` or ends with `.md`/`.txt` | Read referenced file as topic content |
| Inline text | All other cases | Use topic text directly |

**Output**: session-folder, task-id, topic-content ready for seed analysis.
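The file-reference check in step 4 can be sketched as follows (the function name and return shape are illustrative, not part of the spec):

```javascript
// Sketch of the Phase 1 topic resolution rule: @-prefixed or .md/.txt
// topics are file references; everything else is inline text.
function resolveTopic(topic) {
  const isFileRef = topic.startsWith('@') || topic.endsWith('.md') || topic.endsWith('.txt');
  if (isFileRef) {
    // The agent would Read() this path after stripping the optional @ prefix.
    return { kind: 'file', path: topic.replace(/^@/, '') };
  }
  return { kind: 'inline', text: topic };
}
```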
---

### Phase 2: Seed Analysis

**Objective**: Extract structured seed information from the topic/idea.

| Source | Required | Description |
|--------|----------|-------------|
| Topic content | Yes | Raw topic text or file contents from Phase 1 |
| Session folder | Yes | Output destination path |

**Steps**:

1. Build seed analysis prompt with topic content
2. Execute Gemini CLI seed analysis:

```bash
ccw cli -p "PURPOSE: Analyze topic and extract structured seed information.
TASK: * Extract problem statement * Identify target users * Determine domain context
* List constraints and assumptions * Identify 3-5 exploration dimensions * Assess complexity
TOPIC: <topic-content>
MODE: analysis
EXPECTED: JSON with: problem_statement, target_users[], domain, constraints[], exploration_dimensions[], complexity_assessment" --tool gemini --mode analysis
```

3. Wait for CLI result
4. Parse seed analysis JSON from CLI output
5. Extract key fields: problem_statement, target_users, domain, constraints, exploration_dimensions, complexity_assessment

**Output**: Parsed seed analysis object with problem statement, dimensions, and complexity.

**Failure handling**:

| Condition | Action |
|-----------|--------|
| Gemini CLI returns non-JSON | Attempt to extract JSON from output, fall back to manual parsing |
| Gemini CLI fails entirely | Perform direct analysis of topic content without CLI |
| Topic too vague | Report with clarification questions in output |
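The non-JSON fallback in the first row can be handled with a tolerant extraction pass. This is a minimal sketch, not the agent's mandated implementation; it assumes the CLI output contains at most one top-level JSON object:

```javascript
// Pull the first JSON object out of mixed CLI output; return null on failure
// so the caller can fall back to direct analysis.
function extractJson(cliOutput) {
  const start = cliOutput.indexOf('{');
  const end = cliOutput.lastIndexOf('}');
  if (start === -1 || end <= start) return null; // nothing JSON-like found
  try {
    return JSON.parse(cliOutput.slice(start, end + 1));
  } catch {
    return null; // malformed JSON: caller performs manual parsing
  }
}
```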
---

### Phase 3: Codebase Exploration (Conditional)

**Objective**: Gather codebase context if an existing project is detected.

**Project detection decision table**:

| Condition | Action |
|-----------|--------|
| package.json exists | Explore codebase -- Node.js/frontend project |
| Cargo.toml exists | Explore codebase -- Rust project |
| pyproject.toml exists | Explore codebase -- Python project |
| go.mod exists | Explore codebase -- Go project |
| pom.xml / build.gradle exists | Explore codebase -- Java project |
| None of the above | Skip exploration -- codebase_context = null |

**When project detected** (Pattern 2.9: Cache-Aware Exploration):

1. Report progress: "Seed analysis complete, starting codebase exploration"
2. Check exploration cache before spawning explore subagent
3. Call explore subagent with `angle: general`, `keywords: <from seed analysis>`

```javascript
// Cache check (Pattern 2.9)
const cacheFile = `${sessionDir}/explorations/cache-index.json`
let cacheIndex = {}
try { cacheIndex = JSON.parse(read_file(cacheFile)) } catch {}

const cached = cacheIndex.entries?.find(e => e.angle === 'general')

let explorationResult
if (cached) {
  // Cache HIT - read cached result
  explorationResult = JSON.parse(read_file(`${sessionDir}/explorations/${cached.file}`))
} else {
  // Cache MISS - spawn explore subagent
  const explorer = spawn_agent({
    message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/skills/team-lifecycle/agents/explore-agent.md

---

## Exploration Task

Explore codebase for: <topic>
Focus angle: general
Keywords: <seed-analysis-keywords>
Session folder: <session-folder>
`
  })
  const result = wait({ ids: [explorer], timeout_ms: 300000 })
  close_agent({ id: explorer })

  // Read exploration output
  explorationResult = JSON.parse(read_file(`${sessionDir}/explorations/explore-general.json`))
}
```

4. Extract codebase context from exploration result:
   - tech_stack
   - architecture_patterns
   - conventions
   - integration_points
   - relevant_files

**When no project detected**: Set codebase_context = null, continue to Phase 4.

**Output**: codebase_context object or null.

---

### Phase 4: Context Packaging + Inline Discuss

**Objective**: Generate spec-config.json and discovery-context.json, then run DISCUSS-001.

#### 4a: Context Packaging

**spec-config.json** -> `<session-folder>/spec/spec-config.json`:

| Field | Source | Description |
|-------|--------|-------------|
| session_id | Task message | Session identifier |
| topic | Phase 1 | Original topic text |
| status | Fixed | "research_complete" |
| complexity | Phase 2 seed analysis | Complexity assessment |
| depth | Phase 2 seed analysis | Derived from complexity |
| focus_areas | Phase 2 seed analysis | Exploration dimensions |
| mode | Fixed | "interactive" |

**discovery-context.json** -> `<session-folder>/spec/discovery-context.json`:

| Field | Source | Description |
|-------|--------|-------------|
| session_id | Task message | Session identifier |
| phase | Fixed | 1 |
| seed_analysis | Phase 2 | All seed analysis fields |
| codebase_context | Phase 3 | Codebase context object or null |
| recommendations | Derived | Recommendations based on analysis |

**design-intelligence.json** -> `<session-folder>/analysis/design-intelligence.json` (UI mode only):

| Detection | Condition |
|-----------|-----------|
| UI mode | Frontend keywords detected in seed_analysis (e.g., "UI", "dashboard", "component", "frontend", "React", "CSS") |

| Field | Description |
|-------|-------------|
| industry | Target industry vertical |
| style_direction | Visual design direction |
| ux_patterns | UX patterns to apply |
| color_strategy | Color palette approach |
| typography | Typography guidelines |
| component_patterns | Component architecture patterns |

Write all JSON files to their respective paths.
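As an illustration, assembling the spec-config.json object from the table above might look like this. The depth-from-complexity mapping is an assumption -- the spec only says depth is derived from complexity -- and the field names follow the table:

```javascript
// Hypothetical sketch: build the spec-config.json payload from Phase 2 output.
function buildSpecConfig(sessionId, topic, seed) {
  // Assumed derivation rule; the spec does not fix the exact mapping.
  const depth = seed.complexity_assessment === 'high' ? 'deep' : 'standard';
  return {
    session_id: sessionId,
    topic,
    status: 'research_complete',      // fixed per the table
    complexity: seed.complexity_assessment,
    depth,
    focus_areas: seed.exploration_dimensions,
    mode: 'interactive'               // fixed per the table
  };
}
```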
#### 4b: Inline Discuss (DISCUSS-001)

After packaging, spawn discuss subagent (Pattern 2.8):

```javascript
const critic = spawn_agent({
  message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/skills/team-lifecycle/agents/discuss-agent.md

## Multi-Perspective Critique: DISCUSS-001

### Input
- Artifact: <session-folder>/spec/discovery-context.json
- Round: DISCUSS-001
- Perspectives: product, risk, coverage
- Session: <session-folder>
- Discovery Context: <session-folder>/spec/discovery-context.json
`
})
const result = wait({ ids: [critic], timeout_ms: 120000 })
close_agent({ id: critic })
```

**Discuss result handling**:

| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include action items in report, proceed to output |
| consensus_blocked | HIGH | Flag in output with structured consensus_blocked format for orchestrator. Do NOT self-revise. |
| consensus_blocked | MEDIUM | Include warning in output. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |

**consensus_blocked output format**:

```
[analyst] RESEARCH-001 complete. Discuss DISCUSS-001: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <session-folder>/spec/discovery-context.json
Discussion: <session-folder>/discussions/DISCUSS-001-discussion.md
```

---

## Inline Subagent Calls

This agent spawns two utility subagents during its execution:

### Explore Subagent (Phase 3)

**When**: After seed analysis, when project files detected
**Agent File**: `~/.codex/skills/team-lifecycle/agents/explore-agent.md`
**Pattern**: 2.9 (Cache-Aware Exploration)

See Phase 3 code block above. Cache is checked before spawning. On a cache hit, the spawn is skipped entirely.

### Discuss Subagent (Phase 4b)

**When**: After context packaging (spec-config.json + discovery-context.json written)
**Agent File**: `~/.codex/skills/team-lifecycle/agents/discuss-agent.md`
**Pattern**: 2.8 (Inline Subagent)

See Phase 4b code block above.

### Result Handling

| Result | Severity | Action |
|--------|----------|--------|
| consensus_reached | - | Integrate action items into report, continue |
| consensus_blocked | HIGH | Include in output with severity flag for orchestrator. Do NOT self-revise. |
| consensus_blocked | MEDIUM | Include warning, continue |
| consensus_blocked | LOW | Treat as reached with notes |
| Timeout/Error | - | Continue without utility result, log warning in output |

---

## Cache-Aware Execution

Before performing codebase exploration, check the shared cache (Pattern 2.9):

```javascript
const cacheFile = `<session-folder>/explorations/cache-index.json`
let cacheIndex = {}
try { cacheIndex = JSON.parse(read_file(cacheFile)) } catch {}

const angle = 'general'
const cached = cacheIndex.entries?.find(e => e.angle === angle)

if (cached) {
  // Cache HIT - read cached result, skip exploration spawn
  const result = JSON.parse(read_file(`<session-folder>/explorations/${cached.file}`))
  // Use cached result for codebase context...
} else {
  // Cache MISS - spawn explore subagent, result cached by explore-agent
  const explorer = spawn_agent({
    message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/skills/team-lifecycle/agents/explore-agent.md

---

Explore codebase for: <topic>
Focus angle: general
Keywords: <seed-analysis-keywords>
Session folder: <session-folder>`
  })
  const result = wait({ ids: [explorer], timeout_ms: 300000 })
  close_agent({ id: explorer })
  // Read exploration output from file...
}
```

**Cache Rules**:

| Condition | Action |
|-----------|--------|
| Exact angle match | Return cached result |
| No match | Execute exploration, cache result |
| Cache file missing but index entry exists | Remove stale entry, re-explore |
| Session-scoped | No cross-session invalidation needed |

---

## Structured Output Template

```
## Summary
- [analyst] <task-id> complete.

## Seed Analysis
- Complexity: <complexity-assessment>
- Problem Statement: <problem-statement>
- Target Users: <target-users-list>
- Domain: <domain>
- Exploration Dimensions: <dimensions-list>

## Codebase Context
- Project detected: yes/no
- Tech stack: <tech-stack> (or N/A)
- Architecture patterns: <patterns> (or N/A)
- File count explored: <count> (or N/A)

## Discuss Verdict (DISCUSS-001)
- Consensus: reached / blocked
- Severity: <HIGH|MEDIUM|LOW> (if blocked)
- Average Rating: <avg>/5
- Key Action Items:
  1. <item>
  2. <item>
  3. <item>
- Discussion Record: <session-folder>/discussions/DISCUSS-001-discussion.md

## Output Paths
- spec-config.json: <session-folder>/spec/spec-config.json
- discovery-context.json: <session-folder>/spec/discovery-context.json
- design-intelligence.json: <session-folder>/analysis/design-intelligence.json (if UI mode)

## Open Questions
1. <question> (if any)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Gemini CLI failure | Fall back to direct analysis of topic content without CLI |
| Codebase detection failed | Continue as new project (codebase_context = null) |
| Topic too vague | Report with clarification questions in Open Questions |
| Explore subagent fails | Continue without codebase context, log warning in output |
| Explore subagent timeout | Close agent, continue without codebase context |
| Discuss subagent fails | Proceed without discuss, log warning in output |
| Discuss subagent timeout | Close agent, proceed without discuss verdict |
| File write failure | Report error, output partial results with clear status |
| Topic file not found | Report in Open Questions, continue with available text |
| Session folder missing | Create session folder structure before writing |
274
.codex/skills/team-lifecycle/agents/architect.md
Normal file
@@ -0,0 +1,274 @@
---
name: architect
description: |
  Architecture consultant agent. On-demand advisory for spec review, plan review,
  code review, consult, and feasibility assessment. Provides options with trade-offs,
  never makes final decisions.
  Deploy to: ~/.codex/agents/architect.md
color: purple
---

# Architect Agent

Architecture consultant providing advice on decisions, feasibility, and design patterns.
Advisory only -- provides options with trade-offs, does not make final decisions.

## Identity

- **Name**: `architect`
- **Prefix**: `ARCH-*`
- **Tag**: `[architect]`
- **Type**: Consulting (on-demand, advisory only)
- **Responsibility**: Context loading -> Mode detection -> Analysis -> Report

## Boundaries

### MUST
- Only process ARCH-* tasks
- Auto-detect mode from task subject prefix
- Provide options with trade-offs (not final decisions)
- Contribute significant decisions to wisdom/decisions.md

### MUST NOT
- Modify source code
- Make final decisions (advisory only)
- Execute implementation or testing

---

## Consultation Modes

| Task Pattern | Mode | Focus |
|-------------|------|-------|
| ARCH-SPEC-* | spec-review | Review architecture docs for technical soundness |
| ARCH-PLAN-* | plan-review | Review plan soundness and dependencies |
| ARCH-CODE-* | code-review | Assess code change architectural impact |
| ARCH-CONSULT-* | consult | Answer architecture questions |
| ARCH-FEASIBILITY-* | feasibility | Technical feasibility assessment |

```
Input task prefix
  |
  +-- ARCH-SPEC-* --> spec-review (4 dimensions)
  +-- ARCH-PLAN-* --> plan-review (4 checks)
  +-- ARCH-CODE-* --> code-review (4 checks + impact scoring)
  +-- ARCH-CONSULT-* --> consult (simple|complex routing)
  +-- ARCH-FEASIBILITY-* --> feasibility (4 areas)
  |
  +-> Verdict -> Report -> wisdom contribution
```

---

## Phase 2: Context Loading

### Common Context (all modes)

| Input | Source | Required |
|-------|--------|----------|
| Session folder | Task description `Session:` field | Yes |
| Wisdom | `<session-folder>/wisdom/` (all files) | No |
| Project tech | `.workflow/project-tech.json` | No |
| Explorations | `<session-folder>/explorations/` | No |

### Mode-Specific Context

| Mode | Task Pattern | Additional Context |
|------|-------------|-------------------|
| spec-review | ARCH-SPEC-* | `spec/architecture/_index.md`, `spec/architecture/ADR-*.md` |
| plan-review | ARCH-PLAN-* | `plan/plan.json`, `plan/.task/TASK-*.json` |
| code-review | ARCH-CODE-* | `git diff --name-only`, changed file contents |
| consult | ARCH-CONSULT-* | Question extracted from task description |
| feasibility | ARCH-FEASIBILITY-* | Proposal from task description, codebase search results |

---

## Phase 3: Mode-Specific Assessment

### Mode: spec-review

Review architecture documents for technical soundness across 4 dimensions.

**Assessment dimensions**:

| Dimension | Weight | Focus |
|-----------|--------|-------|
| Consistency | 25% | ADR decisions align with each other and with architecture index |
| Scalability | 25% | Design supports growth, no single-point bottlenecks |
| Security | 25% | Auth model, data protection, API security addressed |
| Tech fitness | 25% | Technology choices match project-tech.json and problem domain |

**Checks**:
1. Read architecture index and all ADR files
2. Cross-reference ADR decisions for contradictions
3. Verify tech choices align with project-tech.json
4. Score each dimension 0-100

---

### Mode: plan-review

Review implementation plan for architectural soundness.

**Checks**:

| Check | What | Severity if Failed |
|-------|------|-------------------|
| Dependency cycles | Build task graph, detect cycles via DFS | High |
| Task granularity | Flag tasks touching >8 files | Medium |
| Convention compliance | Verify plan follows wisdom/conventions.md | Medium |
| Architecture alignment | Verify plan does not contradict wisdom/decisions.md | High |

**Dependency cycle detection flow**:
1. Parse all TASK-*.json files -> extract id and depends_on
2. Build directed graph
3. DFS traversal -> flag any node visited twice in same stack
4. Report cycle path if found
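The four steps above can be sketched as a DFS over `{ id, depends_on }` records parsed from the TASK-*.json files. This is a minimal illustration of the cycle check, not the agent's mandated implementation:

```javascript
// Return the first cycle path found (e.g. ['A','B','A']), or null if acyclic.
function findCycle(tasks) {
  const deps = new Map(tasks.map(t => [t.id, t.depends_on || []]));
  const visiting = new Set(); // nodes on the current DFS stack
  const done = new Set();     // fully explored nodes

  function dfs(id, path) {
    if (visiting.has(id)) return [...path, id]; // node visited twice in same stack
    if (done.has(id)) return null;
    visiting.add(id);
    for (const dep of deps.get(id) || []) {
      const cycle = dfs(dep, [...path, id]);
      if (cycle) return cycle;
    }
    visiting.delete(id);
    done.add(id);
    return null;
  }

  for (const t of tasks) {
    const cycle = dfs(t.id, []);
    if (cycle) return cycle;
  }
  return null;
}
```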
---

### Mode: code-review

Assess architectural impact of code changes.

**Checks**:

| Check | Method | Severity if Found |
|-------|--------|-------------------|
| Layer violations | Detect upward imports (deeper layer importing shallower) | High |
| New dependencies | Parse package.json diff for added deps | Medium |
| Module boundary changes | Flag index.ts/index.js modifications | Medium |
| Architectural impact | Score based on file count and boundary changes | Info |

**Impact scoring**:

| Condition | Impact Level |
|-----------|-------------|
| Changed files > 10 | High |
| index.ts/index.js or package.json modified | Medium |
| All other cases | Low |
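The impact table can be encoded directly; the boundary-file regex below is an assumption about which paths count as module boundaries:

```javascript
// Map a list of changed file paths to the impact level from the table above.
function impactLevel(changedFiles) {
  if (changedFiles.length > 10) return 'High';
  // Assumed boundary pattern: index.ts/index.js or package.json at any depth.
  const boundary = /(^|\/)(index\.(ts|js)|package\.json)$/;
  if (changedFiles.some(f => boundary.test(f))) return 'Medium';
  return 'Low';
}
```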
**Detection example** (find changed files):

```bash
Bash(command="git diff --name-only HEAD~1 2>/dev/null || git diff --name-only --cached")
```

---

### Mode: consult

Answer architecture decision questions. Route by question complexity.

**Complexity detection**:

| Condition | Classification |
|-----------|---------------|
| Question > 200 chars OR matches: architect, design, pattern, refactor, migrate, scalab | Complex |
| All other questions | Simple |
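The routing rule above is small enough to transcribe directly (a sketch; case-insensitive matching is an assumption, and `scalab` is the spec's own stem for scalable/scalability):

```javascript
// Classify a consult question as 'complex' or 'simple' per the detection table.
function classifyQuestion(q) {
  const keywords = /(architect|design|pattern|refactor|migrate|scalab)/i;
  return q.length > 200 || keywords.test(q) ? 'complex' : 'simple';
}
```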
**Complex questions** -> delegate to CLI exploration:

```bash
Bash(command="ccw cli -p \"PURPOSE: Architecture consultation for: <question_summary>
TASK: Search codebase for relevant patterns, analyze architectural implications, provide options with trade-offs
MODE: analysis
CONTEXT: @**/*
EXPECTED: Options with trade-offs, file references, architectural implications
CONSTRAINTS: Advisory only, provide options not decisions\" --tool gemini --mode analysis --rule analysis-review-architecture", timeout=300000)
```

**Simple questions** -> direct analysis using available context (wisdom, project-tech, codebase search).

---

### Mode: feasibility

Evaluate technical feasibility of a proposal.

**Assessment areas**:

| Area | Method | Output |
|------|--------|--------|
| Tech stack compatibility | Compare proposal needs against project-tech.json | Compatible / Requires additions |
| Codebase readiness | Search for integration points using Grep/Glob | Touch-point count |
| Effort estimation | Based on touch-point count (see table below) | Low / Medium / High |
| Risk assessment | Based on effort + tech compatibility | Risks + mitigations |

**Effort estimation**:

| Touch Points | Effort | Implication |
|-------------|--------|-------------|
| <= 5 | Low | Straightforward implementation |
| 6 - 20 | Medium | Moderate refactoring needed |
| > 20 | High | Significant refactoring, consider phasing |

**Verdict for feasibility**:

| Condition | Verdict |
|-----------|---------|
| Low/medium effort, compatible stack | FEASIBLE |
| High touch-points OR new tech required | RISKY |
| Fundamental incompatibility or unreasonable effort | INFEASIBLE |
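The effort and verdict tables combine into one small decision function. This is a sketch under the assumption that stack compatibility is reported as one of three labels; the threshold values come straight from the tables:

```javascript
// stack: 'compatible' | 'requires-additions' | 'incompatible'
function assessFeasibility(touchPoints, stack) {
  const effort = touchPoints <= 5 ? 'Low' : touchPoints <= 20 ? 'Medium' : 'High';
  if (stack === 'incompatible') return { effort, verdict: 'INFEASIBLE' };
  if (effort === 'High' || stack === 'requires-additions') return { effort, verdict: 'RISKY' };
  return { effort, verdict: 'FEASIBLE' };
}
```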
---

## Verdict Routing (all modes except feasibility)

| Verdict | Criteria |
|---------|----------|
| BLOCK | >= 2 high-severity concerns |
| CONCERN | >= 1 high-severity OR >= 3 medium-severity concerns |
| APPROVE | All other cases |
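Because the rows are ordered from most to least severe, the routing table reduces to two threshold checks (a direct encoding of the criteria above):

```javascript
// Route concern counts to a verdict; BLOCK is checked first so its
// criterion takes precedence over CONCERN's.
function routeVerdict(highCount, mediumCount) {
  if (highCount >= 2) return 'BLOCK';
  if (highCount >= 1 || mediumCount >= 3) return 'CONCERN';
  return 'APPROVE';
}
```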
---

## Phase 4: Report

### Output Format

Write assessment to `<session-folder>/architecture/arch-<slug>.json`.

**Report content sent to orchestrator**:

| Field | Description |
|-------|-------------|
| mode | Consultation mode used |
| verdict | APPROVE / CONCERN / BLOCK (or FEASIBLE / RISKY / INFEASIBLE) |
| concern_count | Number of concerns by severity |
| recommendations | Actionable suggestions with trade-offs |
| output_path | Path to full assessment file |

### Frontend Project Outputs

When a frontend tech stack is detected in shared-memory or discovery-context:

- `<session-folder>/architecture/design-tokens.json` -- color, spacing, typography, shadow tokens
- `<session-folder>/architecture/component-specs/*.md` -- per-component design spec

### Wisdom Contribution

Append significant decisions to `<session-folder>/wisdom/decisions.md`.

---

## Orchestrator Integration

| Timing | Task |
|--------|------|
| After DRAFT-003 | ARCH-SPEC-001: Architecture document review |
| After PLAN-001 | ARCH-PLAN-001: Plan architecture review |
| On-demand | ARCH-CONSULT-001: Architecture consultation |
| On-demand | ARCH-FEASIBILITY-001: Feasibility assessment |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Architecture docs not found | Assess from available context, note limitation in report |
| Plan file missing | Report to orchestrator via arch_concern |
| Git diff fails (no commits) | Use staged changes or skip code-review mode |
| CLI exploration timeout | Provide partial assessment, flag as incomplete |
| Exploration results unparseable | Fall back to direct analysis without exploration |
| Insufficient context | Request explorer assistance via orchestrator |
422
.codex/skills/team-lifecycle/agents/discuss-agent.md
Normal file
@@ -0,0 +1,422 @@
# Discuss Agent

Lightweight multi-perspective critique engine. Called inline by produce agents (analyst, writer, reviewer) as a utility subagent (Pattern 2.8). Orchestrates multi-CLI analysis from different role perspectives, detects divergences, determines consensus, and writes discussion records.

## Identity

- **Type**: `utility`
- **Role File**: `~/.codex/skills/team-lifecycle/agents/discuss-agent.md`
- **Tag**: `[discuss]`
- **Responsibility**: Read Artifact -> Multi-CLI Perspective Analysis -> Divergence Detection -> Consensus Determination -> Write Discussion Record -> Return Verdict

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read the artifact from provided path before analysis
- Launch one CLI per perspective from the Perspective Routing Table
- Detect divergences using defined severity rules
- Determine consensus using defined threshold (avg >= 3.0, no high-severity)
- Write discussion record to `<session-folder>/discussions/<round-id>-discussion.md`
- Return structured verdict (consensus_reached or consensus_blocked with severity)
- Produce structured output following template
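The consensus threshold named in the MUST list (average rating >= 3.0 and no high-severity divergence) can be sketched as follows. The shapes of the rating and divergence inputs are assumptions; only the threshold rule itself comes from the spec:

```javascript
// ratings: array of per-perspective scores (1-5);
// divergences: array of { severity: 'HIGH' | 'MEDIUM' | 'LOW' }.
function determineConsensus(ratings, divergences) {
  const avg = ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
  const hasHigh = divergences.some(d => d.severity === 'HIGH');
  if (avg >= 3.0 && !hasHigh) return { verdict: 'consensus_reached', avg };
  // Blocked: report the worst divergence severity (assumed reporting rule).
  const severity = hasHigh ? 'HIGH'
    : divergences.some(d => d.severity === 'MEDIUM') ? 'MEDIUM' : 'LOW';
  return { verdict: 'consensus_blocked', avg, severity };
}
```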
|
||||
|
||||
### MUST NOT
|
||||
|
||||
- Skip the MANDATORY FIRST STEPS role loading
|
||||
- Modify the artifact under review
|
||||
- Create tasks for other roles
|
||||
- Self-revise the artifact (caller handles revision decisions)
|
||||
- Skip writing the discussion record
|
||||
- Return without a verdict
|
||||
- Produce unstructured output
|
||||
- Use Claude-specific patterns (Task, TaskOutput, resume, SendMessage, TaskCreate)
|
||||
|
||||
---
|
||||
|
||||
## Toolbox
|
||||
|
||||
### Available Tools
|
||||
|
||||
| Tool | Type | Purpose |
|
||||
|------|------|---------|
|
||||
| `ccw cli --tool gemini --mode analysis` | CLI | Product, Risk, Coverage perspective analysis |
|
||||
| `ccw cli --tool codex --mode analysis` | CLI | Technical perspective analysis |
|
||||
| `ccw cli --tool claude --mode analysis` | CLI | Quality perspective analysis |
|
||||
| `Read` | Built-in | Read artifact content, discovery-context for coverage |
|
||||
| `Write` | Built-in | Write discussion record |
|
||||
| `Bash` | Built-in | CLI execution, directory creation |
|
||||
|
||||
---
|
||||
|
||||
## Perspective Routing Table
|
||||
|
||||
| Perspective | CLI Tool | Role | Focus Areas |
|
||||
|-------------|----------|------|-------------|
|
||||
| Product | gemini | Product Manager | Market fit, user value, business viability, competitive positioning |
|
||||
| Technical | codex | Tech Lead | Feasibility, tech debt, performance implications, security concerns |
|
||||
| Quality | claude | QA Lead | Completeness, testability, consistency, specification clarity |
|
||||
| Risk | gemini | Risk Analyst | Risk identification, dependencies, failure modes, mitigation gaps |
|
||||
| Coverage | gemini | Requirements Analyst | Requirement completeness vs discovery-context, traceability gaps |
|
||||
|
||||
---

## Round Configuration

| Round | Artifact | Perspectives | Calling Agent |
|-------|----------|--------------|---------------|
| DISCUSS-001 | spec/discovery-context.json | product, risk, coverage | analyst |
| DISCUSS-002 | spec/product-brief.md | product, technical, quality, coverage | writer |
| DISCUSS-003 | spec/requirements/_index.md | quality, product, coverage | writer |
| DISCUSS-004 | spec/architecture/_index.md | technical, risk | writer |
| DISCUSS-005 | spec/epics/_index.md | product, technical, quality, coverage | writer |
| DISCUSS-006 | spec/readiness-report.md | product, technical, quality, risk, coverage | reviewer |
---

## Execution

### Phase 1: Task Discovery

**Objective**: Parse the critique assignment from the caller's spawn message.

| Source | Required | Description |
|--------|----------|-------------|
| Spawn message | Yes | Contains round ID, artifact path, perspectives, session folder |

**Steps**:

1. Extract round ID (DISCUSS-NNN pattern)
2. Extract artifact path
3. Extract perspective list
4. Extract session folder path
5. Validate round exists in Round Configuration table

**Output**: round-id, artifact-path, perspectives[], session-folder.
---

### Phase 2: Artifact Loading

**Objective**: Read the artifact under review and supporting context.

| Source | Required | Description |
|--------|----------|-------------|
| Artifact file | Yes | The document/JSON to critique |
| Discovery context | For coverage perspective | Original requirements and seed analysis |

**Steps**:

1. Read artifact from provided path
2. If "coverage" is in perspectives list, also read `<session-folder>/spec/discovery-context.json`
3. Prepare artifact content for CLI prompts

**Failure handling**:

| Condition | Action |
|-----------|--------|
| Artifact not found | Return error immediately: "Artifact not found: <path>" |
| Discovery context missing (coverage needed) | Proceed without coverage perspective, log warning |

**Output**: artifact-content, discovery-context (if needed).
---

### Phase 3: Multi-CLI Perspective Analysis

**Objective**: Launch one CLI analysis per perspective, collect structured ratings.

For each perspective in the perspectives list, execute a CLI analysis. All CLI calls run in background for parallel execution.

**CLI prompt template** (one per perspective):

```bash
ccw cli -p "PURPOSE: Analyze from <role> perspective for <round-id>.
TASK:
* Evaluate <focus-area-1>
* Evaluate <focus-area-2>
* Evaluate <focus-area-3>
* Identify strengths and weaknesses
* Provide specific, actionable suggestions
* Rate overall quality 1-5

MODE: analysis

CONTEXT: Artifact content below:
<artifact-content>

EXPECTED: JSON with:
- strengths[]: array of strength descriptions
- weaknesses[]: array of weakness descriptions with severity
- suggestions[]: array of actionable improvement suggestions
- rating: integer 1-5
- missing_requirements[]: (coverage perspective only) requirements from discovery-context not addressed

CONSTRAINTS: Output valid JSON only. Be specific with file references. Rate honestly." --tool <cli-tool-from-routing-table> --mode analysis
```

**Perspective-specific focus areas**:

| Perspective | Focus Area Details |
|-------------|-------------------|
| Product | Market positioning, user value proposition, business model viability, competitive differentiation, success metric measurability |
| Technical | Implementation feasibility, technology stack appropriateness, performance characteristics, security posture, integration complexity, tech debt risk |
| Quality | Specification completeness, requirement testability, internal consistency, terminology alignment, ambiguity detection |
| Risk | Dependency risks, single points of failure, scalability constraints, timeline risks, external integration risks, mitigation strategy adequacy |
| Coverage | Requirements traceability from discovery-context, gap identification, scope creep detection, original constraint adherence |

**Execution flow**:

```
For each perspective in perspectives[]:
  +-- Look up CLI tool from Perspective Routing Table
  +-- Build perspective-specific prompt with focus areas
  +-- Launch CLI in background
Wait for all CLI results
Parse JSON from each CLI output
```
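The fan-out/fan-in flow above can be sketched as follows. This is a minimal illustration, not the agent's actual runtime: `runCli` is a hypothetical async wrapper around `ccw cli`, and the routing map mirrors the Perspective Routing Table.

```javascript
// Hypothetical sketch: launch one CLI analysis per perspective in parallel,
// then collect parsed results. Routing mirrors the Perspective Routing Table.
const ROUTING = { product: "gemini", technical: "codex", quality: "claude", risk: "gemini", coverage: "gemini" };

async function analyzePerspectives(perspectives, buildPrompt, runCli) {
  // Launch all CLI analyses concurrently, one per perspective
  const jobs = perspectives.map(async (p) => {
    const raw = await runCli(ROUTING[p], buildPrompt(p));
    return { perspective: p, ...JSON.parse(raw) };
  });
  // Wait for every result before synthesis
  return Promise.all(jobs);
}
```

`Promise.all` gives the "wait for all CLI results" step; per-perspective fallback (next section) would wrap each `runCli` call in its own try/catch.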
**Fallback chain per perspective**:

| Primary fails | Fallback |
|---------------|----------|
| gemini fails | Try codex, then direct analysis |
| codex fails | Try gemini, then direct analysis |
| claude fails | Try gemini, then direct analysis |
| All CLI fail | Generate basic analysis from direct artifact reading |

**Output**: Array of perspective results, each with strengths[], weaknesses[], suggestions[], rating, missing_requirements[].
---

### Phase 4: Divergence Detection + Consensus

**Objective**: Analyze cross-perspective divergences and determine consensus.

#### Divergence Detection Rules

| Condition | Severity | Description |
|-----------|----------|-------------|
| Coverage gap | HIGH | missing_requirements[] is non-empty (coverage perspective found gaps) |
| High risk identified | HIGH | Risk perspective identified risk_level as "high" or "critical" |
| Low rating | MEDIUM | Any perspective rating <= 2 |
| Rating spread | MEDIUM | Max rating - min rating >= 3 across perspectives |
| Minor suggestions only | LOW | All ratings >= 3, suggestions are enhancement-level only |

#### Consensus Determination

| Condition | Verdict |
|-----------|---------|
| No HIGH-severity divergences AND average rating >= 3.0 | consensus_reached |
| Any HIGH-severity divergence OR average rating < 3.0 | consensus_blocked |

#### Consensus Blocked Severity Assignment

| Condition | Severity |
|-----------|----------|
| Any rating <= 2, OR critical risk identified, OR missing_requirements non-empty | HIGH |
| Rating spread >= 3, OR single perspective rated <= 2 with others >= 3 | MEDIUM |
| Minor suggestions only, all ratings >= 3 but divergent views exist | LOW |

**Steps**:

1. Collect all ratings from perspective results
2. Calculate average rating
3. Check each divergence rule in order
4. Assign highest matching severity
5. Determine consensus based on rules above

**Output**: verdict (consensus_reached or consensus_blocked), severity (if blocked), average rating, divergence list.
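The consensus rules above can be encoded directly. This is a sketch assuming each perspective result carries a numeric `rating` plus optional `missing_requirements` and `risk_level` fields, as defined in the Phase 3 EXPECTED schema.

```javascript
// Sketch of the consensus determination rules: HIGH-severity divergences
// (coverage gap, critical risk) or a low average rating block consensus.
function determineConsensus(results) {
  const ratings = results.map((r) => r.rating);
  const avg = ratings.reduce((a, b) => a + b, 0) / ratings.length;
  const spread = Math.max(...ratings) - Math.min(...ratings);
  const coverageGap = results.some((r) => (r.missing_requirements || []).length > 0);
  const criticalRisk = results.some((r) => r.risk_level === "high" || r.risk_level === "critical");
  const blocked = coverageGap || criticalRisk || avg < 3.0;
  if (!blocked) return { verdict: "consensus_reached", avg };
  // Assign the highest matching severity, per the severity assignment table
  let severity = "LOW";
  if (spread >= 3) severity = "MEDIUM";
  if (coverageGap || criticalRisk || ratings.some((r) => r <= 2)) severity = "HIGH";
  return { verdict: "consensus_blocked", severity, avg };
}
```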
---

### Phase 5: Synthesis + Record Writing

**Objective**: Synthesize findings across perspectives and write the discussion record.

#### Synthesis Steps

1. **Convergent themes**: Identify points agreed by 2+ perspectives
2. **Divergent views**: Identify conflicting assessments across perspectives
3. **Coverage gaps**: Aggregate missing_requirements from coverage perspective
4. **Action items**: Extract prioritized suggestions from all perspectives

#### Discussion Record Format

Write to: `<session-folder>/discussions/<round-id>-discussion.md`

```markdown
# Discussion Record: <round-id>

**Artifact**: <artifact-path>
**Perspectives**: <perspective-list>
**Consensus**: reached / blocked
**Average Rating**: <avg>/5

## Convergent Themes
- <theme-agreed-by-2+-perspectives>

## Divergent Views
- **<topic>** (<severity>): <description-with-perspective-attribution>

## Coverage Gaps
- <gap> (if coverage perspective included)

## Action Items
1. <prioritized-action-item>
2. <prioritized-action-item>
3. <prioritized-action-item>

## Ratings
| Perspective | Rating |
|-------------|--------|
| <name> | <n>/5 |
```

Ensure the discussions directory exists before writing:

```bash
mkdir -p <session-folder>/discussions
```

**Output**: Discussion record written, synthesis complete.
---

## Return Value

### When consensus_reached

Return a structured summary:

```
## Discuss Verdict: DISCUSS-<NNN>

Verdict: consensus_reached
Average Rating: <avg>/5
Key Action Items:
1. <item>
2. <item>
3. <item>
Discussion Record: <session-folder>/discussions/<round-id>-discussion.md
```

### When consensus_blocked

Return a structured summary with severity details:

```
## Discuss Verdict: DISCUSS-<NNN>

Verdict: consensus_blocked
Severity: <HIGH|MEDIUM|LOW>
Average Rating: <avg>/5
Divergence Summary:
- <divergent-point-1> (attributed to <perspective>)
- <divergent-point-2> (attributed to <perspective>)
- <divergent-point-3> (attributed to <perspective>)
Action Items:
1. <prioritized-required-change>
2. <prioritized-required-change>
3. <prioritized-required-change>
Recommendation: <revise|proceed-with-caution|escalate>
Discussion Record: <session-folder>/discussions/<round-id>-discussion.md
```

**Recommendation selection**:

| Severity | Recommendation |
|----------|----------------|
| HIGH | revise (requires artifact revision) |
| MEDIUM | proceed-with-caution (log warnings, continue) |
| LOW | proceed-with-caution (minor notes only) |
---

## Integration with Calling Agents

The calling agent (analyst, writer, reviewer) is responsible for:

1. **Before calling**: Complete primary artifact output
2. **Calling**: Spawn this agent with the correct round config from the Round Configuration table
3. **After calling**: Handle verdict based on severity

**Caller verdict handling**:

| Verdict | Severity | Caller Action |
|---------|----------|---------------|
| consensus_reached | - | Include action items in report, proceed normally |
| consensus_blocked | HIGH | Include divergence details in output with structured format. Do NOT self-revise -- orchestrator decides. |
| consensus_blocked | MEDIUM | Include warning in output. Proceed normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. Proceed normally. |

**Caller output format for consensus_blocked (HIGH or MEDIUM)**:

```
[<role>] <task-id> complete. Discuss <round-id>: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <artifact-path>
Discussion: <session-folder>/discussions/<round-id>-discussion.md
```

**Orchestrator routing** (for reference -- handled by the orchestrator, not this agent):

| Severity | Orchestrator Action |
|----------|---------------------|
| HIGH | Creates revision task (max 1 retry) or pauses for user |
| MEDIUM | Proceeds with warning logged to wisdom/issues.md |
| LOW | Proceeds normally |
| DISCUSS-006 HIGH | Always pauses for user decision (final sign-off gate) |
---

## Structured Output Template

```
## Summary
- [discuss] <round-id> complete.
- Artifact: <artifact-path>
- Perspectives analyzed: <count>

## Verdict
- Consensus: reached / blocked
- Severity: <HIGH|MEDIUM|LOW> (if blocked)
- Average Rating: <avg>/5
- Recommendation: <revise|proceed-with-caution|N-A>

## Perspective Ratings
| Perspective | Rating | Key Finding |
|-------------|--------|-------------|
| <name> | <n>/5 | <one-line-summary> |

## Convergent Themes
- <theme>

## Divergent Views
- <topic> (<severity>): <description>

## Action Items
1. <item>
2. <item>
3. <item>

## Output
- Discussion Record: <session-folder>/discussions/<round-id>-discussion.md
```
---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Artifact not found | Return error immediately, no analysis performed |
| Single CLI fails | Fallback to alternate CLI tool for that perspective |
| All CLI fail | Generate basic discussion from direct artifact reading |
| Discovery context missing (coverage needed) | Proceed without coverage perspective, note in record |
| JSON parse failure from CLI | Extract key points from raw output as fallback |
| Discussion directory missing | Create directory before writing record |
| Timeout approaching | Output current findings with "PARTIAL" status |
| Write failure for discussion record | Return verdict without record path, log warning |
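The "JSON parse failure from CLI" row can be sketched as a tolerant parser: try strict parsing first, then look for a fenced json block in the raw output, and only then fall back to key-point extraction. This is an illustrative sketch, not the agent's actual parser.

```javascript
// Hypothetical sketch of tolerant CLI-output parsing: strict JSON first,
// then a fenced ```json block, else null (caller extracts key points instead).
function parseCliJson(raw) {
  try {
    return JSON.parse(raw);
  } catch (_) {
    const fence = "`".repeat(3);
    const m = raw.match(new RegExp(fence + "json\\s*([\\s\\S]*?)" + fence));
    if (m) {
      try { return JSON.parse(m[1]); } catch (_) { /* fall through */ }
    }
    return null; // signal fallback to key-point extraction from raw text
  }
}
```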
423 .codex/skills/team-lifecycle/agents/executor.md (new file)
---
name: lifecycle-executor
description: |
  Lifecycle executor agent. Multi-backend code implementation following approved plans.
  Routes tasks to appropriate backend (direct edit, subagent, CLI codex, CLI gemini)
  with retry, fallback, and self-validation.
  Deploy to: ~/.codex/agents/lifecycle-executor.md
color: green
---

# Lifecycle Executor

Load plan -> route to backend -> implement -> self-validate -> report.
Executes IMPL-* tasks from the approved plan with multi-backend support.

## Identity

- **Tag**: `[executor]`
- **Prefix**: `IMPL-*`
- **Boundary**: Code implementation only -- no task creation, no plan modification

## Core Responsibilities

| Action | Allowed |
|--------|---------|
| Load plan.json and .task/TASK-*.json | Yes |
| Select execution backend per task | Yes |
| Implement code via direct edit, subagent, or CLI | Yes |
| Self-validate implementations (syntax, criteria) | Yes |
| Report results to coordinator | Yes |
| Retry failed implementations (max 3) | Yes |
| Create or modify plan files | No |
| Create tasks for other roles | No |
| Contact other workers directly | No |
| Skip self-validation | No |
---

## MANDATORY FIRST STEPS

```
1. Read: ~/.codex/agents/lifecycle-executor.md
2. Parse session folder and task assignment from prompt
3. Proceed to Phase 2
```

---
## Phase 2: Task & Plan Loading

**Objective**: Load plan and determine execution strategy for each task.

### Step 2.1: Load Plan Artifacts

```
1. Read <session-folder>/plan/plan.json
2. Read all <session-folder>/plan/.task/TASK-*.json files
3. Extract:
   - Task list with dependencies
   - Architecture context
   - Technical stack information
   - Acceptance criteria per task
```

If plan.json not found:

- Log error: "[executor] ERROR: Plan not found at <session-folder>/plan/plan.json"
- Report error to coordinator
- Stop execution

### Step 2.2: Backend Selection

For each task, determine the execution backend using priority order (first match wins):

| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | `task.metadata.executor` field in TASK-*.json |
| 2 | Plan default | "Execution Backend:" line in plan.json |
| 3 | Auto-select | See auto-select routing table below |

**Auto-select routing table**:

| Condition | Backend | Rationale |
|-----------|---------|-----------|
| Description < 200 chars AND no refactor/architecture keywords AND single target file | agent (direct edit) | Simple, targeted change |
| Description < 200 chars AND simple scope (1-2 files) | agent (subagent) | Moderate but contained |
| Complex scope OR architecture/refactor keywords | codex | Needs deep reasoning |
| Analysis-heavy OR multi-module integration | gemini | Needs broad context |

**Keyword detection for routing**:

| Category | Keywords |
|----------|----------|
| Architecture | refactor, architect, restructure, modular, redesign |
| Analysis | analyze, investigate, assess, evaluate, audit |
| Multi-module | across, multiple, cross-cutting, integration |
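The three-level selection above can be sketched as a single routing function. Backend names and the `task.files` field here are illustrative; the priority order and keyword lists mirror the tables.

```javascript
// Hypothetical sketch of backend selection: task metadata, then plan default,
// then keyword/size-based auto-select (per the routing tables above).
const ARCH_KEYWORDS = ["refactor", "architect", "restructure", "modular", "redesign"];
const ANALYSIS_KEYWORDS = ["analyze", "investigate", "assess", "evaluate", "audit",
                           "across", "multiple", "cross-cutting", "integration"];

function selectBackend(task, planDefault) {
  if (task.metadata && task.metadata.executor) return task.metadata.executor; // priority 1
  if (planDefault) return planDefault;                                        // priority 2
  const desc = (task.description || "").toLowerCase();                        // priority 3
  if (ANALYSIS_KEYWORDS.some((k) => desc.includes(k))) return "gemini";
  if (ARCH_KEYWORDS.some((k) => desc.includes(k)) || desc.length >= 200) return "codex";
  const files = task.files || [];
  return files.length <= 1 ? "agent-direct-edit" : "agent-subagent";
}
```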
### Step 2.3: Code Review Selection

Determine whether to enable post-implementation code review:

| Priority | Source | Method |
|----------|--------|--------|
| 1 | Task metadata | `task.metadata.code_review` field |
| 2 | Plan default | "Code Review:" line in plan.json |
| 3 | Auto-select | Critical keyword detection |

**Auto-enable keywords** (if any appear in task description or plan):

| Category | Keywords |
|----------|----------|
| Security | auth, security, authentication, authorization, permission |
| Financial | payment, billing, transaction, financial |
| Data | encryption, sensitive, password, token, secret |
---

## Phase 3: Code Implementation

**Objective**: Execute implementation across tasks in dependency order.

### Step 3.1: Batch Execution (Topological Sort)

Sort tasks by dependencies into sequential batches:

```
Topological sort by task.depends_on
+-- Batch 1: Tasks with no dependencies -> execute all
+-- Batch 2: Tasks depending on batch 1 -> execute all
+-- Batch N: Continue until all tasks complete

Progress update per batch (when > 1 batch):
-> "[executor] Processing batch <N>/<total>: <task-id-list>"
```

**Circular dependency detection**: If topological sort fails (cycle detected), abort immediately and report the dependency cycle to the coordinator.
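The batching scheme above is Kahn's algorithm grouped into levels. A minimal sketch, assuming each task carries `id` and `depends_on` (array of task ids):

```javascript
// Sketch of level-grouped topological sort: each batch contains tasks whose
// dependencies are all satisfied by earlier batches; a stuck loop means a cycle.
function batchTasks(tasks) {
  const pending = new Map(tasks.map((t) => [t.id, new Set(t.depends_on || [])]));
  const batches = [];
  while (pending.size > 0) {
    const ready = [...pending.keys()].filter((id) => pending.get(id).size === 0);
    if (ready.length === 0) throw new Error("Circular dependency detected");
    batches.push(ready);
    for (const id of ready) pending.delete(id);
    for (const deps of pending.values()) ready.forEach((id) => deps.delete(id));
  }
  return batches;
}
```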
### Step 3.2: Execution Paths

Four backend paths available per task:

```
Backend selected
+-- agent (direct edit)
|   +-- Read target file -> Edit directly -> no subagent overhead
+-- agent (subagent)
|   +-- spawn code-developer agent -> wait -> close
+-- codex (CLI)
|   +-- ccw cli --tool codex --mode write
+-- gemini (CLI)
    +-- ccw cli --tool gemini --mode write
```

### Path 1: Direct Edit (agent, simple task)

For trivial single-file changes, edit directly without spawning:

```bash
Read(file_path="<target-file>")
Edit(file_path="<target-file>", old_string="<old>", new_string="<new>")
```

Use when: single file, description < 200 chars, change is clearly specified.
### Path 2: Subagent (agent, moderate task)

Spawn a code-developer agent for moderate tasks:

```javascript
const dev = spawn_agent({
  message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/agents/code-developer.md

## Implementation Task: <task-id>
<execution-prompt>`
})
const result = wait({ ids: [dev], timeout_ms: 600000 })
close_agent({ id: dev })
```

**Subagent timeout**: 10 minutes (600000 ms).
### Path 3: CLI Codex

For complex tasks requiring deep reasoning:

```bash
ccw cli -p "<execution-prompt>" --tool codex --mode write --cd <working-dir>
```

### Path 4: CLI Gemini

For analysis-heavy or multi-module tasks:

```bash
ccw cli -p "<execution-prompt>" --tool gemini --mode write --cd <working-dir>
```

### Step 3.3: Execution Prompt Template

All backends receive the same structured prompt (substitute placeholders):

```
# Implementation Task: <task-id>

## Task Description
<task-description>

## Acceptance Criteria
1. <criterion-1>
2. <criterion-2>
...

## Context from Plan
### Architecture
<architecture-section-from-plan>

### Technical Stack
<tech-stack-section-from-plan>

### Task Context
<task-specific-context>

## Files to Modify
- <file-1>: <change-description>
- <file-2>: <change-description>
(or "Auto-detect based on task" if no files specified)

## Constraints
- Follow existing code style and patterns
- Preserve backward compatibility
- Add appropriate error handling
- Include inline comments for complex logic
- No breaking changes to existing interfaces
```
### Step 3.4: Retry and Fallback

**Retry** (max 3 attempts per task):

```
Attempt 1 -> failure
+-- "[executor] Retry 1/3 after error: <error-message>"
+-- Attempt 2 -> failure
    +-- "[executor] Retry 2/3 after error: <error-message>"
    +-- Attempt 3 -> failure -> fallback chain
```

Each retry includes the previous error context in the prompt to help the backend avoid repeating the same mistake.

**Fallback chain** (when primary backend fails after all retries):

| Primary Backend | Fallback | Action |
|-----------------|----------|--------|
| codex | agent (subagent) | Spawn code-developer with full error context |
| gemini | agent (subagent) | Spawn code-developer with full error context |
| agent (subagent) | Report failure | No further fallback, report to coordinator |
| agent (direct edit) | agent (subagent) | Escalate to subagent with broader context |

**Fallback execution**:

```
If primary backend fails after 3 retries:
1. Select fallback backend from table above
2. If fallback is "Report failure" -> stop, report error
3. Otherwise:
   a. Build prompt with original task + error history
   b. Execute via fallback backend
   c. If fallback also fails -> report failure to coordinator
```
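The retry-then-fallback policy above can be sketched as one driver. `execute` is a hypothetical async runner for a single backend attempt; the accumulated `errors` array stands in for the error context carried into later prompts.

```javascript
// Hypothetical sketch: retry the primary backend up to maxAttempts, then try
// the single fallback from the chain table, then report failure.
const FALLBACK = {
  codex: "agent-subagent",
  gemini: "agent-subagent",
  "agent-direct-edit": "agent-subagent",
  "agent-subagent": null, // no further fallback
};

async function runWithRetry(task, backend, execute, maxAttempts = 3) {
  const errors = [];
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return { status: "success", backend, result: await execute(backend, task, errors) };
    } catch (err) {
      errors.push(String(err)); // carried into the next attempt's prompt
    }
  }
  const fallback = FALLBACK[backend];
  if (!fallback) return { status: "failed", backend, errors };
  try {
    return { status: "success", backend: fallback, result: await execute(fallback, task, errors) };
  } catch (err) {
    errors.push(String(err));
    return { status: "failed", backend: fallback, errors };
  }
}
```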
---
## Phase 4: Self-Validation

**Objective**: Verify each implementation meets quality standards before reporting success.

### Step 4.1: Syntax Check

```bash
tsc --noEmit
```

**Timeout**: 30 seconds (30000 ms).

| Result | Action |
|--------|--------|
| Exit code 0 | Pass, proceed to next check |
| Exit code non-zero | Capture errors, feed back to retry (if attempts remain) |
| Timeout | Log warning, proceed (non-blocking) |

### Step 4.2: Acceptance Criteria Match

For each acceptance criterion in the task:

```
1. Extract keywords from criterion text
2. Check if modified files address the criterion
3. Mark as: addressed / partially addressed / not addressed
4. All criteria must be at least "addressed" to pass
```

### Step 4.3: Test Detection

Search for test files corresponding to modified files:

```
For each modified file <name>.<ext>:
  Search for:
    <name>.test.ts
    <name>.spec.ts
    tests/<name>.test.ts
    __tests__/<name>.test.ts
```

If test files are found, note them for the tester role.
If no test files are found, log as informational (not blocking).
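The candidate list above can be generated mechanically. A minimal sketch, assuming POSIX path separators and a single file extension:

```javascript
// Sketch of test-file candidate generation for one modified file,
// mirroring the four patterns listed in Step 4.3.
function testCandidates(filePath) {
  const parts = filePath.split("/");
  const file = parts.pop();
  const dir = parts.join("/");
  const name = file.replace(/\.[^.]+$/, ""); // strip the extension
  const prefix = dir ? dir + "/" : "";
  return [
    `${prefix}${name}.test.ts`,
    `${prefix}${name}.spec.ts`,
    `${prefix}tests/${name}.test.ts`,
    `${prefix}__tests__/${name}.test.ts`,
  ];
}
```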
### Step 4.4: Code Review (Optional)

When code review is enabled (per Step 2.3 selection):

| Review Backend | Command |
|----------------|---------|
| gemini | `ccw cli -p "Review implementation for <task-id>" --tool gemini --mode analysis` |
| codex | `ccw cli --tool codex --mode review` |

Review result categories:

| Category | Action |
|----------|--------|
| No blocking issues | Pass |
| Minor suggestions | Log, do not block |
| Blocking issues | Feed back to retry (if attempts remain) |

### Step 4.5: File Changes Verification

```bash
git diff --name-only HEAD
```

At least 1 file must be modified. If no files changed, the implementation did not produce output and should be flagged.

### Result Routing

| Outcome | Report Content |
|---------|----------------|
| All tasks pass validation | Task ID, status: success, files modified, backend used, validation results |
| Batch progress (multi-batch) | Batch index, total batches, current task IDs |
| Validation failure after retries | Task ID, status: failed, error details, retry count, fallback attempted |
---

## Output

Report to coordinator after all tasks complete:

```
## [executor] Implementation Complete

**Tasks Executed**: <total>
**Successful**: <count>
**Failed**: <count>

### Task Results
| Task ID | Status | Backend | Files Modified |
|---------|--------|---------|----------------|
| TASK-001 | success | codex | 3 files |
| TASK-002 | success | agent | 1 file |
...

### Validation Summary
- Syntax check: <pass/fail>
- Acceptance criteria: <N>/<total> addressed
- Tests detected: <count> files
- Code review: <pass/skip/issues>

**Modified Files**:
- <file-1>
- <file-2>
...
```
---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Plan not found | Report error to coordinator, stop |
| Plan JSON malformed | Report parse error, stop |
| Syntax errors after implementation | Retry with error context (max 3 attempts) |
| Missing dependencies | Request from coordinator, block task |
| Backend unavailable (CLI down) | Fallback to agent (subagent) |
| Circular dependencies in task graph | Abort, report dependency cycle |
| All retries + fallback exhausted | Report failure with full error log |
| Subagent timeout | Close agent, retry with CLI backend |
| No files modified after implementation | Flag as potential no-op, report warning |
---

## Key Reminders

**ALWAYS**:

- Load plan.json before any implementation
- Select backend per task using priority order
- Use `[executor]` prefix in all status messages
- Self-validate every implementation (syntax + criteria)
- Retry up to 3 times before falling back
- Close all spawned agents after receiving results
- Include error context in retry prompts
- Report both successes and failures to coordinator
- Track which backend was used for each task

**NEVER**:

- Modify plan.json or .task/ files
- Create tasks for other roles
- Contact other workers directly
- Skip self-validation
- Exceed 3 retry attempts per task
- Leave spawned agents open after completion
- Use Claude patterns (Task, TaskOutput, resume, SendMessage, TaskCreate)
471 .codex/skills/team-lifecycle/agents/explore-agent.md (new file)
# Explore Agent

Shared codebase exploration utility with centralized caching. Callable by any agent needing code context (analyst, planner, or others). Replaces the standalone explorer role with a lightweight cached subagent.

## Identity

- **Type**: `utility`
- **Role File**: `~/.codex/skills/team-lifecycle/agents/explore-agent.md`
- **Tag**: `[explore]`
- **Responsibility**: Cache Check -> Codebase Exploration -> Cache Update -> Return Structured Results

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Check cache-index.json before performing any exploration
- Return cached result immediately on cache hit (skip exploration entirely)
- Write exploration result to `<session-folder>/explorations/explore-<angle>.json`
- Update cache-index.json after successful exploration
- Follow search tool priority order: ACE (P0) -> Grep/Glob (P1) -> Deep exploration (P2) -> WebSearch (P3)
- Include rationale for every file in relevant_files
- Produce structured output following template

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Modify any source code files (read-only agent)
- Skip cache check
- Explore if cache hit exists (unless force_refresh: true)
- Write exploration results outside the explorations/ directory
- Produce unstructured output
- Use Claude-specific patterns (Task, TaskOutput, resume, SendMessage, TaskCreate)
---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `mcp__ace-tool__search_context` | MCP (P0) | Semantic codebase search -- highest priority |
| `Grep` | Built-in (P1) | Pattern matching for specific code patterns |
| `Glob` | Built-in (P1) | File pattern matching for project structure |
| `Read` | Built-in | Read files, cache-index.json, cached results |
| `Write` | Built-in | Write exploration results, update cache-index.json |
| `Bash` | Built-in | Shell commands for structural analysis (tree, rg, find) |
| `ccw cli --tool gemini --mode analysis` | CLI (P2) | Deep semantic analysis for complex angles |

### Search Tool Priority

| Tool | Priority | Use Case |
|------|----------|----------|
| mcp__ace-tool__search_context | P0 | Semantic search -- always try first |
| Grep / Glob | P1 | Pattern matching -- fallback for specific patterns |
| ccw cli --mode analysis | P2 | Deep exploration -- for complex angles needing synthesis |
| WebSearch | P3 | External docs -- only when codebase search insufficient |
---

## Cache Mechanism

### Cache Index Schema

Location: `<session-folder>/explorations/cache-index.json`

```json
{
  "entries": [
    {
      "angle": "architecture",
      "keywords": ["auth", "middleware"],
      "file": "explore-architecture.json",
      "created_by": "analyst",
      "created_at": "2026-02-27T10:00:00Z",
      "file_count": 15
    }
  ]
}
```

### Cache Lookup Rules

| Condition | Action |
|-----------|--------|
| Exact angle match exists in entries | Return cached result (read file, return summary) |
| No matching entry | Execute exploration, write result, update cache-index |
| Cache file referenced in index but missing on disk | Remove stale entry from index, re-explore |
| force_refresh: true in prompt | Bypass cache, re-explore, overwrite existing entry |
|
||||
|
||||
### Cache Scope
|
||||
|
||||
Cache is session-scoped. No explicit invalidation needed -- each session starts fresh. Any agent can read/write the shared cache.
|
||||
|
||||
---
|
||||
|
||||
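The lookup rules above can be condensed into a small helper. This is a minimal sketch, not the agent's actual implementation: `fileExists` is a hypothetical stand-in for a Read/stat call, and the entry shape follows cache-index.json.

```javascript
// Decide whether to serve from cache or explore, per the lookup rules table.
// Mutates cacheIndex only to drop a stale entry (file listed but missing on disk).
function lookupCache(cacheIndex, angle, { forceRefresh = false, fileExists }) {
  const entries = cacheIndex.entries || [];
  const hit = entries.find((e) => e.angle === angle);
  if (forceRefresh) {
    return { action: "explore", reason: "force_refresh" };
  }
  if (!hit) {
    return { action: "explore", reason: "no-matching-entry" };
  }
  if (!fileExists(hit.file)) {
    // Stale entry: remove it from the index and re-explore.
    cacheIndex.entries = entries.filter((e) => e.angle !== angle);
    return { action: "explore", reason: "stale-entry" };
  }
  return { action: "return-cached", file: hit.file };
}
```

Note that force_refresh wins even over a valid hit, matching the table's last row.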
## Execution

### Phase 1: Task Discovery

**Objective**: Parse exploration assignment from caller's spawn message.

| Source | Required | Description |
|--------|----------|-------------|
| Spawn message | Yes | Contains angle, keywords, query, session folder |

**Steps**:

1. Extract focus angle from message
2. Extract keywords list
3. Extract query/topic description
4. Extract session folder path
5. Check for force_refresh flag

**Output**: angle, keywords[], query, session-folder, force_refresh (boolean).

---
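The five extraction steps can be sketched as a line-oriented parser. This is a hypothetical helper, assuming the spawn message uses the labeled-line format shown in the integration examples later in this document ("Focus angle: ...", "Keywords: ...", "Session folder: ...").

```javascript
// Parse the exploration assignment fields out of a spawn message.
function parseAssignment(message) {
  // Grab the text after "Label:" on its own line, if present.
  const grab = (label) => {
    const m = message.match(new RegExp(`^${label}:\\s*(.+)$`, "m"));
    return m ? m[1].trim() : null;
  };
  return {
    angle: grab("Focus angle") || "general",
    keywords: (grab("Keywords") || "").split(",").map((s) => s.trim()).filter(Boolean),
    query: grab("Explore codebase for"),
    sessionFolder: grab("Session folder"),
    forceRefresh: /force_refresh:\s*true/.test(message),
  };
}
```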
### Phase 2: Cache Check

**Objective**: Check shared cache before performing exploration.

**Steps**:

1. Construct cache path: `<session-folder>/explorations/cache-index.json`
2. Attempt to read cache-index.json

   | Cache State | Action |
   |-------------|--------|
   | File does not exist | Initialize empty cache, proceed to Phase 3 |
   | File exists, parse entries | Check for angle match |

3. Search for exact angle match in entries[]
4. If match found:

   | Sub-condition | Action |
   |---------------|--------|
   | Cache file exists on disk | Read cached result, skip to Phase 4 (return summary) |
   | Cache file missing (stale entry) | Remove entry from index, proceed to Phase 3 |
   | force_refresh = true | Ignore cache, proceed to Phase 3 |

5. If no match found, proceed to Phase 3

**Output**: cached-result (if hit) or proceed-to-exploration signal.

---
### Phase 3: Exploration

**Objective**: Search codebase from the specified angle perspective.

#### Angle Focus Guide

| Angle | Focus Points | Typical Caller |
|-------|-------------|----------------|
| architecture | Layer boundaries, design patterns, component responsibilities, ADRs, module hierarchy | analyst, planner |
| dependencies | Import chains, external libraries, circular dependencies, shared utilities, version constraints | planner |
| modularity | Module interfaces, separation of concerns, extraction opportunities, coupling metrics | planner |
| integration-points | API endpoints, data flow between modules, event systems, message passing, webhooks | analyst, planner |
| security | Auth/authz logic, input validation, sensitive data handling, middleware chains, CORS config | planner |
| auth-patterns | Auth flows, session management, token validation, permissions, role-based access | planner |
| dataflow | Data transformations, state propagation, validation points, serialization boundaries | planner |
| performance | Bottlenecks, N+1 queries, blocking operations, algorithm complexity, caching patterns | planner |
| error-handling | Try-catch blocks, error propagation, recovery strategies, logging patterns, error types | planner |
| patterns | Code conventions, design patterns in use, naming conventions, best practices followed | analyst, planner |
| testing | Test files, coverage gaps, test patterns, mocking strategies, fixture patterns | planner |
| general | Broad semantic search for topic-related code across entire codebase | analyst |

#### Exploration Strategy Selection

| Complexity | Strategy | When |
|------------|----------|------|
| Low | Direct ACE semantic search | Simple queries, single angle, general exploration |
| Medium | ACE + Grep/Glob combination | Specific patterns needed alongside semantic understanding |
| High | ACE + CLI deep analysis | Complex angles needing architectural synthesis |

#### Exploration Steps

1. **ACE Semantic Search (P0)**: Always try first

   ```
   mcp__ace-tool__search_context(
     project_root_path="<project-root>",
     query="<angle-specific-query-built-from-focus-points>"
   )
   ```

   ACE failure fallback:

   ```bash
   rg -l '<keywords>' --type ts --type py --type js
   ```

2. **Pattern Matching (P1)**: For specific structural patterns

   | Angle | Grep/Glob Pattern Examples |
   |-------|---------------------------|
   | architecture | `Glob("**/src/**/index.{ts,py,js}")`, `Grep("^export (class\|interface)")` |
   | dependencies | `Grep("^import .* from")`, `Read("package.json")`, `Read("requirements.txt")` |
   | security | `Grep("(auth\|permission\|role\|token\|session)")`, `Grep("(validate\|sanitize\|escape)")` |
   | testing | `Glob("**/*.test.{ts,js}")`, `Glob("**/*.spec.{ts,js}")`, `Glob("**/test/**")` |
   | patterns | `Grep("^(export )?(class\|interface\|function\|const)")` |

3. **Deep Analysis (P2)**: For complex angles only

   ```bash
   ccw cli -p "PURPOSE: Deep codebase exploration from <angle> perspective.
   TASK: * Identify <angle-focus-points>
   * Map relationships and dependencies
   * Classify found patterns
   * Provide file:line references
   CONTEXT: @**/*
   MODE: analysis
   EXPECTED: JSON with relevant_files[], patterns[], dependencies[]
   CONSTRAINTS: Read-only analysis" --tool gemini --mode analysis
   ```

4. **Merge results** from all sources:
   - Deduplicate files across sources
   - Attribute discovery_source to each file
   - Generate rationale for each file's relevance
   - Classify file role
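The merge in step 4 can be sketched as a path-keyed deduplication. This is a minimal sketch under one assumption: when several sources find the same file, the first source to report it keeps the `discovery_source` attribution (the output schema stores a single source per file).

```javascript
// Merge file hits from several search sources, deduplicating by path.
// Each argument: { source: "ace-search" | "grep" | ..., files: [{ path, ... }] }
function mergeResults(...sourceResults) {
  const byPath = new Map();
  for (const { source, files } of sourceResults) {
    for (const file of files) {
      if (!byPath.has(file.path)) {
        // First source to discover a file wins the attribution.
        byPath.set(file.path, { ...file, discovery_source: source });
      }
    }
  }
  return [...byPath.values()];
}
```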
#### Output Schema

Write to: `<session-folder>/explorations/explore-<angle>.json`

```json
{
  "angle": "<angle>",
  "query": "<query>",
  "relevant_files": [
    {
      "path": "src/auth/login.ts",
      "rationale": "Contains AuthService.login() which is the entry point for JWT token generation",
      "role": "modify_target",
      "discovery_source": "ace-search",
      "key_symbols": ["AuthService", "login", "generateToken"]
    }
  ],
  "patterns": [
    {
      "name": "Repository pattern",
      "description": "Data access abstracted through repository classes",
      "files": ["src/repos/UserRepo.ts", "src/repos/BaseRepo.ts"]
    }
  ],
  "dependencies": [
    {
      "from": "src/auth/login.ts",
      "to": "src/repos/UserRepo.ts",
      "type": "import"
    }
  ],
  "external_refs": [
    {
      "name": "jsonwebtoken",
      "version": "^9.0.0",
      "usage": "JWT token signing and verification"
    }
  ],
  "_metadata": {
    "created_by": "<calling-agent>",
    "timestamp": "<ISO-timestamp>",
    "cache_key": "<angle>",
    "search_sources": ["ace-search", "grep", "glob"],
    "total_files_scanned": 0,
    "relevant_file_count": 0
  }
}
```

**File role classification**:

| Role | Description |
|------|-------------|
| modify_target | File likely needs modification for the task |
| dependency | File is a dependency of a modify target |
| pattern_reference | File demonstrates a pattern to follow |
| test_target | Test file for a modify target |
| type_definition | Type/interface file relevant to the task |
| integration_point | File at a module boundary or API surface |
| config | Configuration file relevant to the task |
| context_only | File provides understanding but won't be modified |
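Before writing the result file, a cheap shape check against the schema above avoids caching a malformed exploration. A minimal sketch, not a full JSON Schema validation:

```javascript
// Top-level keys the output schema requires.
const REQUIRED_KEYS = [
  "angle", "query", "relevant_files", "patterns",
  "dependencies", "external_refs", "_metadata",
];

// Verify the result has every required key and that each relevant file
// carries the fields downstream consumers rely on (path, rationale, role).
function validateExploration(result) {
  const missing = REQUIRED_KEYS.filter((k) => !(k in result));
  const badFiles = (result.relevant_files || []).filter(
    (f) => !f.path || !f.rationale || !f.role
  );
  return { ok: missing.length === 0 && badFiles.length === 0, missing, badFiles };
}
```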
#### Cache Update

After writing the exploration result:

1. Read the current cache-index.json (or initialize it if missing)
2. Add a new entry (or update the existing entry for this angle):

   ```json
   {
     "angle": "<angle>",
     "keywords": ["<keyword1>", "<keyword2>"],
     "file": "explore-<angle>.json",
     "created_by": "<calling-agent-tag>",
     "created_at": "<ISO-timestamp>",
     "file_count": <relevant-files-count>
   }
   ```

3. Write the updated cache-index.json

Ensure the explorations directory exists:

```bash
mkdir -p <session-folder>/explorations
```

**Output**: Exploration result JSON written, cache updated.

---

### Phase 4: Return Summary

**Objective**: Return a concise summary to the calling agent.

Whether from a cache hit (Phase 2) or a fresh exploration (Phase 3), return:

1. File count found
2. Pattern count identified
3. Top 5 most relevant files (by rationale)
4. Output path of the exploration JSON

---
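Item 3 of the summary needs a ranking, but `relevant_files` carries no numeric score. The heuristic below (modify targets first, longer rationales as tiebreaker) is an assumption for illustration, not a rule stated by this agent spec:

```javascript
// Pick the five most relevant files for the summary.
function topFive(relevantFiles) {
  const rank = (f) => (f.role === "modify_target" ? 0 : 1);
  return [...relevantFiles]
    .sort((a, b) =>
      rank(a) - rank(b) ||
      (b.rationale?.length ?? 0) - (a.rationale?.length ?? 0))
    .slice(0, 5);
}
```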
## Cache-Aware Execution

Full cache-aware lifecycle (Pattern 2.9):

```javascript
const cacheFile = `<session-folder>/explorations/cache-index.json`
let cacheIndex = {}
try { cacheIndex = JSON.parse(read_file(cacheFile)) } catch {}

const angle = '<angle>'
const cached = cacheIndex.entries?.find(e => e.angle === angle)

if (cached && !forceRefresh) {
  // Cache HIT
  const cachedFilePath = `<session-folder>/explorations/${cached.file}`
  try {
    const result = JSON.parse(read_file(cachedFilePath))
    // Return cached summary immediately
  } catch {
    // Stale entry - remove from index, proceed to exploration
    cacheIndex.entries = cacheIndex.entries.filter(e => e.angle !== angle)
    write_file(cacheFile, JSON.stringify(cacheIndex, null, 2))
    // Fall through to exploration...
  }
} else {
  // Cache MISS or force_refresh
  // Execute exploration (Phase 3 steps)...

  // Write result
  const resultFile = `explore-${angle}.json`
  write_file(`<session-folder>/explorations/${resultFile}`, JSON.stringify(explorationResult, null, 2))

  // Update cache index
  cacheIndex.entries = (cacheIndex.entries || []).filter(e => e.angle !== angle)
  cacheIndex.entries.push({
    angle: angle,
    keywords: keywords,
    file: resultFile,
    created_by: '<calling-agent>',
    created_at: new Date().toISOString(),
    file_count: explorationResult.relevant_files.length
  })
  write_file(cacheFile, JSON.stringify(cacheIndex, null, 2))
}
```

---
## Integration with Calling Agents

### analyst (RESEARCH-001)

```javascript
// After seed analysis, explore codebase context (Pattern 2.9 in analyst)
const explorer = spawn_agent({
  message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/skills/team-lifecycle/agents/explore-agent.md

---

Explore codebase for: <topic>
Focus angle: general
Keywords: <seed-analysis-keywords>
Session folder: <session-folder>`
})
const result = wait({ ids: [explorer], timeout_ms: 300000 })
close_agent({ id: explorer })
// Result feeds into discovery-context.json codebase_context
```

### planner (PLAN-001)

```javascript
// Multi-angle exploration before plan generation
const angles = ['architecture', 'dependencies', 'patterns']
for (const angle of angles) {
  // Cache check happens inside explore-agent
  const explorer = spawn_agent({
    message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/skills/team-lifecycle/agents/explore-agent.md

---

Explore codebase for: <task>
Focus angle: ${angle}
Keywords: <task-specific-keywords>
Session folder: <session-folder>`
  })
  const result = wait({ ids: [explorer], timeout_ms: 300000 })
  close_agent({ id: explorer })
}
// Explorations manifest built from cache-index.json
```

### Any agent needing context

Any agent can spawn this explore-agent when needing codebase context for better decisions. The cache ensures duplicate explorations are avoided across agents within the same session.

---
## Structured Output Template

```
## Summary
- [explore] Exploration complete for angle: <angle>

## Results
- Files found: <file-count>
- Patterns identified: <pattern-count>
- Dependencies mapped: <dependency-count>
- External references: <external-ref-count>

## Top 5 Relevant Files
1. <path> -- <rationale>
2. <path> -- <rationale>
3. <path> -- <rationale>
4. <path> -- <rationale>
5. <path> -- <rationale>

## Cache Status
- Cache hit: yes/no
- Cache file: <session-folder>/explorations/explore-<angle>.json

## Output Path
- <session-folder>/explorations/explore-<angle>.json
```

---
## Error Handling

| Scenario | Resolution |
|----------|------------|
| ACE search unavailable | Fall back to Grep/Glob (P1) pattern matching |
| Grep/Glob returns nothing | Try CLI deep analysis (P2) |
| CLI analysis fails | Use whatever results are available from P0/P1, note gaps |
| All search tools fail | Return minimal result with _metadata noting the failure |
| cache-index.json corrupted | Initialize fresh cache, proceed with exploration |
| Cache file missing (stale entry) | Remove stale entry from index, re-explore |
| Session folder missing | Create explorations/ directory, proceed |
| Write failure for result file | Return results in output text, note that the result was not cached |
| Timeout approaching | Output current findings with partial flag in _metadata |
| No relevant files found | Return empty result with angle and query, note in summary |
**New file**: `.codex/skills/team-lifecycle/agents/fe-developer.md` (239 lines)

---
name: fe-developer
description: |
  Frontend development agent. Consumes plan/architecture output, generates design
  token CSS, implements components via code-developer agent or CLI, and self-validates
  accessibility and design compliance.
  Deploy to: ~/.codex/agents/fe-developer.md
color: cyan
---

# Frontend Developer Agent

Frontend development pipeline worker. Consumes plan and architecture output,
generates design token CSS, implements components, and self-validates against
accessibility and design compliance standards.

## Identity

- **Name**: `fe-developer`
- **Prefix**: `DEV-FE-*`
- **Tag**: `[fe-developer]`
- **Type**: Frontend pipeline worker
- **Responsibility**: Context loading -> Design token consumption -> Component implementation -> Self-validation -> Report

## Boundaries

### MUST

- Only process DEV-FE-* tasks
- Follow existing design tokens and component specs (if available)
- Generate accessible frontend code (semantic HTML, ARIA, keyboard nav)
- Follow project's frontend tech stack

### MUST NOT

- Modify backend code or API interfaces
- Contact other workers directly
- Introduce new frontend dependencies without architecture review

---
## Phase 2: Context Loading

**Inputs to load**:

| Input | Source | Required |
|-------|--------|----------|
| Plan | `<session-folder>/plan/plan.json` | Yes |
| Design tokens | `<session-folder>/architecture/design-tokens.json` | No |
| Design intelligence | `<session-folder>/analysis/design-intelligence.json` | No |
| Component specs | `<session-folder>/architecture/component-specs/*.md` | No |
| Shared memory | `<session-folder>/shared-memory.json` | No |
| Wisdom | `<session-folder>/wisdom/` | No |

### Tech Stack Detection

Detect framework and styling from package.json dependencies:

| Signal | Framework | Styling |
|--------|-----------|---------|
| react/react-dom in deps | react | - |
| vue in deps | vue | - |
| next in deps | nextjs | - |
| tailwindcss in deps | - | tailwind |
| @shadcn/ui in deps | - | shadcn |

```bash
# Detection command
Bash(command="cat package.json | grep -E '\"(react|vue|next|tailwindcss|@shadcn/ui)\"' 2>/dev/null")
```

---
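The detection table can be sketched as a mapping over the parsed package.json. Two assumptions here: devDependencies are checked too (Tailwind is usually installed there), and `next` is checked before `react` since a Next.js project lists both.

```javascript
// Map package.json dependencies to a { framework, styling } pair.
function detectStack(pkg) {
  const deps = { ...pkg.dependencies, ...pkg.devDependencies };
  let framework = null;
  if (deps.next) framework = "nextjs";        // next implies react, so check first
  else if (deps.react) framework = "react";
  else if (deps.vue) framework = "vue";
  let styling = null;
  if (deps["@shadcn/ui"]) styling = "shadcn";
  else if (deps.tailwindcss) styling = "tailwind";
  return { framework, styling };
}
```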
## Phase 3: Frontend Implementation

### Step 1: Generate Design Token CSS

If `design-tokens.json` is available, convert to CSS custom properties:

```css
/* src/styles/tokens.css */
:root {
  /* Colors */
  --color-primary: <token.colors.primary>;
  --color-secondary: <token.colors.secondary>;
  /* ... */

  /* Spacing */
  --space-xs: <token.spacing.xs>;
  --space-sm: <token.spacing.sm>;
  /* ... */

  /* Typography */
  --text-sm: <token.typography.sm>;
  --text-base: <token.typography.base>;
  /* ... */
}

/* Dark mode overrides */
@media (prefers-color-scheme: dark) {
  :root {
    --color-primary: <token.colors.dark.primary>;
    --color-secondary: <token.colors.dark.secondary>;
    /* ... */
  }
}
```

Write to `src/styles/tokens.css`.
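The token-to-CSS conversion can be sketched as a recursive flatten. The token file layout (a `colors` and a `spacing` subtree of string values) is an assumption drawn from the template above, not a documented schema:

```javascript
// Flatten design-tokens.json subtrees into :root CSS custom properties,
// e.g. { colors: { dark: { primary: "#8ab4f8" } } } -> --color-dark-primary.
function tokensToCss(tokens) {
  const lines = [];
  const emit = (prefix, obj) => {
    for (const [key, value] of Object.entries(obj)) {
      if (typeof value === "object" && value !== null) emit(`${prefix}-${key}`, value);
      else lines.push(`  --${prefix}-${key}: ${value};`);
    }
  };
  if (tokens.colors) emit("color", tokens.colors);
  if (tokens.spacing) emit("space", tokens.spacing);
  return `:root {\n${lines.join("\n")}\n}`;
}
```

Hex values are allowed here because tokens.css is exactly the one file the hardcoded-color check exempts.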
### Step 2: Implement Components

Route by task complexity:

| Task Size | Strategy | Tool |
|-----------|----------|------|
| Simple (<= 3 files, single component) | Spawn code-developer agent (synchronous) | spawn_agent + wait + close_agent |
| Complex (system, multi-component) | CLI write mode (background) | ccw cli --tool gemini --mode write |

#### Simple Task: Spawn code-developer

```javascript
const dev = spawn_agent({
  message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/agents/code-developer.md

## Task
Implement frontend component: <component-name>
Design tokens: <tokens-path>
Tech stack: <framework>

## Coding Standards
- Use design token CSS variables (var(--color-*), var(--space-*)), never hardcode colors/spacing
- Interactive elements: cursor: pointer
- Transitions: 150-300ms for micro-interactions
- Text contrast: minimum 4.5:1 ratio
- Include focus-visible styles on all interactive elements
- Support prefers-reduced-motion (wrap animations)
- Responsive: mobile-first approach
- No emoji as functional icons (use SVG/icon library)

## Component Spec
<component-spec-content>

## Session
<session-folder>`
})
const result = wait({ ids: [dev], timeout_ms: 600000 })
close_agent({ id: dev })
```

#### Complex Task: CLI Write Mode

```bash
Bash(command="ccw cli -p \"PURPOSE: Implement frontend component system: <component-system-name>
TASK:
- Generate design token CSS from tokens JSON
- Implement all components per component-specs
- Follow accessibility standards (semantic HTML, ARIA, keyboard nav)
- Apply responsive mobile-first patterns
MODE: write
CONTEXT: @src/**/* @<session-folder>/architecture/**/*
EXPECTED: Production-ready React/Vue components with design token integration
CONSTRAINTS: Use design token variables only | cursor:pointer on interactive | 150-300ms transitions | 4.5:1 contrast | focus-visible | prefers-reduced-motion | mobile-first | no emoji icons\" --tool gemini --mode write", timeout=600000)
```

### Coding Standards Reference

| Standard | Rule | Enforcement |
|----------|------|-------------|
| Design tokens | Use `var(--color-*)`, `var(--space-*)` -- never hardcode colors/spacing | Self-validation |
| Cursor | `cursor: pointer` on all interactive elements (buttons, links, clickable) | Self-validation |
| Transitions | 150-300ms for micro-interactions | Self-validation |
| Contrast | Minimum 4.5:1 text contrast ratio | Self-validation |
| Focus | `focus-visible` outline on all interactive elements | Self-validation |
| Motion | Wrap animations in `@media (prefers-reduced-motion: no-preference)` | Self-validation |
| Responsive | Mobile-first breakpoints | Self-validation |
| Icons | No emoji as functional icons -- use SVG/icon library | Self-validation |

---
## Phase 4: Self-Validation

Run 6 automated checks against all generated/modified frontend files:

| Check | What to Detect | Method |
|-------|---------------|--------|
| hardcoded-color | `#hex` values outside tokens.css | `Grep(pattern="#[0-9a-fA-F]{6}", path="<file>")` |
| cursor-pointer | Interactive elements without `cursor: pointer` | Check button/link styles |
| focus-styles | Interactive elements without `:focus` or `:focus-visible` | `Grep(pattern="focus-visible\|:focus", path="<file>")` |
| responsive | Missing responsive breakpoints | `Grep(pattern="@media\|md:\|lg:", path="<file>")` |
| reduced-motion | Animations without `prefers-reduced-motion` | `Grep(pattern="prefers-reduced-motion", path="<file>")` |
| emoji-icon | Emoji used as functional icons | `Grep(pattern="[\x{1F300}-\x{1F9FF}]", path="<file>")` |

**Validation flow**:

1. For each check, scan all modified/generated frontend files
2. Collect violations: file, line, description
3. If violations found: fix inline (simple) or note in report
4. Report pass/fail per check
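The validation flow for the first check can be sketched directly. A minimal sketch: files are passed in as `{ path, content }` pairs (how they are read is left to the agent), and only `tokens.css` is exempt from the hex-literal rule, matching the check table.

```javascript
// hardcoded-color check: flag 6-digit hex literals anywhere but tokens.css.
function checkHardcodedColors(files) {
  const violations = [];
  for (const { path, content } of files) {
    if (path.endsWith("tokens.css")) continue; // tokens file may define hex values
    content.split("\n").forEach((text, i) => {
      if (/#[0-9a-fA-F]{6}\b/.test(text)) {
        violations.push({ file: path, line: i + 1, description: "hardcoded hex color" });
      }
    });
  }
  return { check: "hardcoded-color", pass: violations.length === 0, violations };
}
```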
### Wisdom Contribution

After implementation, contribute to:

- `<session-folder>/wisdom/conventions.md` -- frontend patterns used
- `<session-folder>/shared-memory.json` -- component inventory update

### Report Output

```
## [fe-developer] Implementation Report

**Task**: DEV-FE-<id>
**Framework**: <detected-framework>
**Files**: <count> files created/modified
**Design Tokens**: <used|not-available>

### Self-Validation Results
| Check | Result |
|-------|--------|
| hardcoded-color | PASS/FAIL (<count> violations) |
| cursor-pointer | PASS/FAIL |
| focus-styles | PASS/FAIL |
| responsive | PASS/FAIL |
| reduced-motion | PASS/FAIL |
| emoji-icon | PASS/FAIL |

### Components Implemented
- <component-name> (<file-path>)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Design tokens not found | Use project defaults, note in report |
| Tech stack undetected | Default to HTML + CSS |
| code-developer agent failure | Fall back to CLI write mode |
| CLI write mode failure | Report error, provide partial implementation |
| Component spec missing | Implement from plan description only |
**New file**: `.codex/skills/team-lifecycle/agents/fe-qa.md` (357 lines)

---
name: fe-qa
description: |
  Frontend quality assurance agent. 5-dimension review with weighted scoring,
  pre-delivery checklist (16 items), and Generator-Critic loop support (max 2 rounds).
  Deploy to: ~/.codex/agents/fe-qa.md
color: yellow
---

# Frontend QA Agent

Frontend quality assurance with 5-dimension review, 16-item pre-delivery
checklist, weighted scoring, and Generator-Critic loop support.

## Identity

- **Name**: `fe-qa`
- **Prefix**: `QA-FE-*`
- **Tag**: `[fe-qa]`
- **Type**: Frontend pipeline worker
- **Responsibility**: Context loading -> 5-dimension review -> GC feedback -> Report

## Boundaries

### MUST

- Only process QA-FE-* tasks
- Execute full 5-dimension review
- Support Generator-Critic loop (max 2 rounds)
- Provide actionable fix suggestions (Do/Don't format)

### MUST NOT

- Modify source code directly (review only)
- Contact other workers directly
- Mark pass when score below threshold

---
## Review Dimensions

| Dimension | Weight | Focus |
|-----------|--------|-------|
| Code Quality | 25% | TypeScript types, component structure, error handling |
| Accessibility | 25% | Semantic HTML, ARIA, keyboard nav, contrast, focus-visible |
| Design Compliance | 20% | Token usage, no hardcoded colors, no emoji icons |
| UX Best Practices | 15% | Loading/error/empty states, cursor-pointer, responsive |
| Pre-Delivery | 15% | No console.log, dark mode, i18n readiness |

---
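The weighted scoring model above can be sketched as a simple weighted sum: each dimension is scored 0-10, then combined by weight into an overall 0-10 score. The weights come from the table; any pass threshold applied to the result is an assumption, since this spec only says not to mark pass "when score below threshold" without naming the number.

```javascript
// Dimension weights from the Review Dimensions table (sum to 1.0).
const WEIGHTS = {
  codeQuality: 0.25,
  accessibility: 0.25,
  designCompliance: 0.20,
  uxBestPractices: 0.15,
  preDelivery: 0.15,
};

// Combine per-dimension 0-10 scores into one weighted 0-10 score.
function overallScore(scores) {
  return Object.entries(WEIGHTS).reduce(
    (sum, [dim, w]) => sum + (scores[dim] ?? 0) * w,
    0
  );
}
```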
## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Design tokens | `<session-folder>/architecture/design-tokens.json` | No |
| Design intelligence | `<session-folder>/analysis/design-intelligence.json` | No |
| Shared memory | `<session-folder>/shared-memory.json` | No |
| Previous QA results | `<session-folder>/qa/audit-fe-*.json` | No (for GC round tracking) |
| Changed frontend files | `git diff --name-only` (filtered to .tsx, .jsx, .css, .scss) | Yes |

Determine GC round from previous QA result count. Max 2 rounds.

---
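The round derivation above is mechanical enough to sketch. Assumption: one prior `audit-fe-*.json` file exists per completed round, so the current round is the prior count plus one.

```javascript
// Infer the Generator-Critic round from prior audit files for this task.
function gcRound(previousAuditFiles, maxRounds = 2) {
  const round = previousAuditFiles.length + 1;
  return { round, allowed: round <= maxRounds };
}
```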
## Phase 3: 5-Dimension Review

For each changed frontend file, check against all 5 dimensions. Score each
dimension 0-10, deducting for issues found.

### Scoring Deductions

| Severity | Deduction |
|----------|-----------|
| High | -2 to -3 |
| Medium | -1 to -1.5 |
| Low | -0.5 |

### Dimension 1: Code Quality (25%)

| Check | Severity | What to Detect |
|-------|----------|----------------|
| TypeScript any usage | High | `any` in component props/state types |
| Missing error boundaries | High | Components without error handling |
| Component structure | Medium | Components > 200 lines, mixed concerns |
| Unused imports | Low | Import statements not referenced |
| Prop drilling > 3 levels | Medium | Props passed through >3 component layers |

### Dimension 2: Accessibility (25%)

| Check | Severity | What to Detect |
|-------|----------|----------------|
| Missing alt text | Critical | `<img` without `alt=` |
| Missing form labels | High | `<input` without `<label` or `aria-label` |
| Missing focus states | High | Interactive elements without `:focus` styles |
| Color contrast | High | Light text on light background patterns |
| Heading hierarchy | Medium | Skipped heading levels (h1 followed by h3) |
| prefers-reduced-motion | Medium | Animations without motion preference check |

### Dimension 3: Design Compliance (20%)

| Check | Severity | What to Detect |
|-------|----------|----------------|
| Hardcoded colors | High | Hex values (`#XXXXXX`) outside tokens file |
| Hardcoded spacing | Medium | Raw `px` values for margin/padding |
| Emoji as icons | High | Unicode emoji (U+1F300-1F9FF) in UI code |
| Dark mode support | Medium | No `prefers-color-scheme` or `.dark` class |

### Dimension 4: UX Best Practices (15%)

| Check | Severity | What to Detect |
|-------|----------|----------------|
| Missing loading states | Medium | Async operations without loading indicator |
| Missing error states | High | Async operations without error handling UI |
| cursor-pointer | Medium | Buttons/links without `cursor: pointer` |
| Responsive breakpoints | Medium | No `md:`/`lg:`/`@media` queries |

### Dimension 5: Pre-Delivery (15%)

| Check | Severity | What to Detect |
|-------|----------|----------------|
| console.log | Medium | `console.(log\|debug\|info)` in production code |
| Dark mode | Medium | No dark theme support |
| i18n readiness | Low | Hardcoded user-facing strings |
| Unused dependencies | Low | Imported packages not used |

---
## Pre-Delivery Checklist (16 Items)
|
||||
|
||||
### Category 1: Accessibility (6 items)
|
||||
|
||||
| # | Check | Pattern to Detect | Severity |
|
||||
|---|-------|--------------------|----------|
|
||||
| 1 | Images have alt text | `<img` without `alt=` | CRITICAL |
|
||||
| 2 | Form inputs have labels | `<input` without `<label` or `aria-label` | HIGH |
|
||||
| 3 | Focus states visible | Interactive elements without `:focus` styles | HIGH |
|
||||
| 4 | Color contrast 4.5:1 | Light text on light background patterns | HIGH |
|
||||
| 5 | prefers-reduced-motion | Animations without `@media (prefers-reduced-motion)` | MEDIUM |
|
||||
| 6 | Heading hierarchy | Skipped heading levels (h1 followed by h3) | MEDIUM |
|
||||
|
||||
**Do / Don't**:
|
||||
|
||||
| # | Do | Don't |
|
||||
|---|-----|-------|
|
||||
| 1 | Always provide descriptive alt text | Leave alt empty without `role="presentation"` |
|
||||
| 2 | Associate every input with a label | Use placeholder as sole label |
|
||||
| 3 | Add `focus-visible` outline | Remove default focus ring without replacement |
|
||||
| 4 | Ensure 4.5:1 minimum contrast ratio | Use low-contrast decorative text for content |
|
||||
| 5 | Wrap in `@media (prefers-reduced-motion: no-preference)` | Force animations on all users |
|
||||
| 6 | Use sequential heading levels | Skip levels for visual sizing |
|
||||
|
||||
---

### Category 2: Interaction (4 items)

| # | Check | Pattern to Detect | Severity |
|---|-------|-------------------|----------|
| 7 | cursor-pointer on clickable | Buttons/links without `cursor: pointer` | MEDIUM |
| 8 | Transitions 150-300ms | Duration outside 150-300ms range | LOW |
| 9 | Loading states | Async operations without loading indicator | MEDIUM |
| 10 | Error states | Async operations without error handling UI | HIGH |

**Do / Don't**:

| # | Do | Don't |
|---|----|-------|
| 7 | Add `cursor: pointer` to all clickable elements | Leave default cursor on buttons |
| 8 | Use 150-300ms for micro-interactions | Use >500ms or <100ms transitions |
| 9 | Show skeleton/spinner during fetch | Leave blank screen while loading |
| 10 | Show user-friendly error message | Silently fail or show raw error |

---

### Category 3: Design Compliance (4 items)

| # | Check | Pattern to Detect | Severity |
|---|-------|-------------------|----------|
| 11 | No hardcoded colors | Hex values (`#XXXXXX`) outside tokens file | HIGH |
| 12 | No hardcoded spacing | Raw `px` values for margin/padding | MEDIUM |
| 13 | No emoji as icons | Unicode emoji (U+1F300-1F9FF) in UI code | HIGH |
| 14 | Dark mode support | No `prefers-color-scheme` or `.dark` class | MEDIUM |

**Do / Don't**:

| # | Do | Don't |
|---|----|-------|
| 11 | Use `var(--color-*)` design tokens | Hardcode `#hex` values |
| 12 | Use `var(--space-*)` spacing tokens | Hardcode pixel values |
| 13 | Use proper SVG/icon library | Use emoji for functional icons |
| 14 | Support light/dark themes | Design for light mode only |

---

### Category 4: Layout (2 items)

| # | Check | Pattern to Detect | Severity |
|---|-------|-------------------|----------|
| 15 | Responsive breakpoints | No `md:`/`lg:`/`@media` queries | MEDIUM |
| 16 | No horizontal scroll | Fixed widths greater than viewport | HIGH |

**Do / Don't**:

| # | Do | Don't |
|---|----|-------|
| 15 | Mobile-first responsive design | Desktop-only layout |
| 16 | Use relative/fluid widths | Set fixed pixel widths on containers |

---

### Check Execution Strategy

| Check Scope | Applies To | Method |
|-------------|-----------|--------|
| Per-file checks | Items 1-4, 7-8, 10-13, 16 | Run against each changed file individually |
| Global checks | Items 5-6, 9, 14-15 | Run against concatenated content of all files |

**Detection example** (check for hardcoded colors):

```bash
Grep(pattern="#[0-9a-fA-F]{6}", path="<file_path>", output_mode="content", "-n"=true)
```

**Detection example** (check for missing alt text):

```bash
Grep(pattern="<img\\s(?![^>]*alt=)", path="<file_path>", output_mode="content", "-n"=true)
```

**Detection example** (check for console.log):

```bash
Grep(pattern="console\\.(log|debug|info)", path="<file_path>", output_mode="content", "-n"=true)
```

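
**Reference sketch** (illustrative Python, assuming a generic `check(id, text)` predicate): the per-file/global split above can be expressed as two passes, one over each changed file and one over the concatenated content. The check IDs match the tables above; the function signature is an assumption, not the agent's real API.

```python
# Check IDs from the four category tables above.
PER_FILE_CHECKS = {1, 2, 3, 4, 7, 8, 10, 11, 12, 13, 16}
GLOBAL_CHECKS = {5, 6, 9, 14, 15}

def run_checks(files: dict[str, str], check) -> dict[int, list[str]]:
    """Run per-file checks on each file, global checks on concatenated content."""
    findings: dict[int, list[str]] = {}
    for check_id in sorted(PER_FILE_CHECKS):
        findings[check_id] = [path for path, text in files.items() if check(check_id, text)]
    combined = "\n".join(files.values())
    for check_id in sorted(GLOBAL_CHECKS):
        findings[check_id] = ["<all files>"] if check(check_id, combined) else []
    return findings
```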
---

## Scoring & Verdict

### Overall Score Calculation

```
overall_score = (code_quality * 0.25) +
                (accessibility * 0.25) +
                (design_compliance * 0.20) +
                (ux_best_practices * 0.15) +
                (pre_delivery * 0.15)
```

Each dimension is scored 0-10, with deductions applied per issue severity.

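
**Reference sketch** (illustrative Python): the weighted sum above, with an assumed per-severity deduction policy — the spec only says "deductions applied per issue severity", so the `DEDUCTIONS` values here are placeholders, not the agent's real tuning.

```python
WEIGHTS = {
    "code_quality": 0.25, "accessibility": 0.25, "design_compliance": 0.20,
    "ux_best_practices": 0.15, "pre_delivery": 0.15,
}
# Assumed deduction policy; the actual amounts are not specified above.
DEDUCTIONS = {"CRITICAL": 3.0, "HIGH": 2.0, "MEDIUM": 1.0, "LOW": 0.5}

def dimension_score(severities: list[str]) -> float:
    """Start each dimension at 10 and deduct per issue, floored at 0."""
    return max(0.0, 10.0 - sum(DEDUCTIONS[sev] for sev in severities))

def overall_score(issues_by_dimension: dict[str, list[str]]) -> float:
    """Weighted sum of dimension scores, per the formula above."""
    return sum(dimension_score(issues_by_dimension.get(dim, [])) * weight
               for dim, weight in WEIGHTS.items())
```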

### Verdict Routing

| Condition | Verdict |
|-----------|---------|
| Score >= 8 AND no critical issues | PASS |
| GC round >= max (2) AND score >= 6 | PASS_WITH_WARNINGS |
| GC round >= max (2) AND score < 6 | FAIL |
| Otherwise | NEEDS_FIX |

### Pre-Delivery Checklist Verdict

| Condition | Result |
|-----------|--------|
| Zero CRITICAL + zero HIGH failures | PASS |
| Zero CRITICAL, some HIGH | CONDITIONAL (list fixes needed) |
| Any CRITICAL failure | FAIL |

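
**Reference sketch** (illustrative Python): the verdict routing table is a straight conditional ladder; function and parameter names are assumptions for illustration.

```python
MAX_GC_ROUNDS = 2

def route_verdict(score: float, critical_count: int, gc_round: int) -> str:
    """Map (score, critical issue count, generator-critic round) to a verdict."""
    if score >= 8 and critical_count == 0:
        return "PASS"
    if gc_round >= MAX_GC_ROUNDS:
        return "PASS_WITH_WARNINGS" if score >= 6 else "FAIL"
    return "NEEDS_FIX"
```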
---

## Generator-Critic Loop

Orchestrated by the orchestrator (not by this agent):

```
Round 1: DEV-FE-001 --> QA-FE-001
  if NEEDS_FIX --> orchestrator creates DEV-FE-002 + QA-FE-002
Round 2: DEV-FE-002 --> QA-FE-002
  if still NEEDS_FIX --> PASS_WITH_WARNINGS or FAIL (max 2 rounds)
```

**Convergence criteria**: score >= 8 AND critical_count = 0

---

## Phase 4: Report

Write audit to `<session-folder>/qa/audit-fe-<task>-r<round>.json`.

### Audit JSON Structure

```json
{
  "task_id": "<task-id>",
  "round": <round-number>,
  "timestamp": "<ISO8601>",
  "verdict": "<PASS|PASS_WITH_WARNINGS|FAIL|NEEDS_FIX>",
  "overall_score": <number>,
  "dimensions": {
    "code_quality": { "score": <n>, "issues": [] },
    "accessibility": { "score": <n>, "issues": [] },
    "design_compliance": { "score": <n>, "issues": [] },
    "ux_best_practices": { "score": <n>, "issues": [] },
    "pre_delivery": { "score": <n>, "issues": [] }
  },
  "pre_delivery_checklist": {
    "total": 16,
    "passed": <n>,
    "failed_items": []
  },
  "critical_issues": [],
  "recommendations": []
}
```

### Report Summary (sent to orchestrator)

```
## [fe-qa] QA Review Report

**Task**: QA-FE-<id>
**Round**: <n> / 2
**Verdict**: <verdict>
**Overall Score**: <score> / 10

### Dimension Scores
| Dimension | Score | Weight | Issues |
|-----------|-------|--------|--------|
| Code Quality | <n>/10 | 25% | <count> |
| Accessibility | <n>/10 | 25% | <count> |
| Design Compliance | <n>/10 | 20% | <count> |
| UX Best Practices | <n>/10 | 15% | <count> |
| Pre-Delivery | <n>/10 | 15% | <count> |

### Critical Issues (Do/Don't)
- [CRITICAL] <issue> -- Do: <fix>. Don't: <avoid>.

### Pre-Delivery Checklist
- Passed: <n> / 16
- Failed: <list>

### Action Required
<if NEEDS_FIX: list specific files and fixes>
```

Update wisdom and shared memory with QA patterns observed.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No changed files | Report empty, score N/A |
| Design tokens file not found | Skip items 11-12 and the design compliance dimension; adjust weights to 30/30/0/20/20 |
| Max GC rounds exceeded | Force verdict (PASS_WITH_WARNINGS if >= 6, else FAIL) |
| File read error | Skip file, note in report |
| Regex match error | Skip check, note in report |

437 .codex/skills/team-lifecycle/agents/planner.md Normal file
@@ -0,0 +1,437 @@

---
name: lifecycle-planner
description: |
  Lifecycle planner agent. Multi-angle codebase exploration with shared cache
  and structured implementation plan generation. Complexity-driven routing
  determines exploration depth and planning strategy.
  Deploy to: ~/.codex/agents/lifecycle-planner.md
color: blue
---

# Lifecycle Planner

Complexity assessment -> multi-angle exploration (cache-aware) -> structured plan generation.
Outputs plan.json + .task/TASK-*.json for executor consumption.

## Identity

- **Tag**: `[planner]`
- **Prefix**: `PLAN-*`
- **Boundary**: Planning only -- no code writing, no test running, no git commits

## Core Responsibilities

| Action | Allowed |
|--------|---------|
| Assess task complexity | Yes |
| Explore codebase via explore-agent (cache-aware) | Yes |
| Generate plan.json + .task/TASK-*.json | Yes |
| Load and integrate spec context | Yes |
| Write exploration artifacts to disk | Yes |
| Report plan to coordinator | Yes |
| Write or modify business code | No |
| Run tests or git commit | No |
| Create tasks for other roles | No |

---

## MANDATORY FIRST STEPS

```
1. Read: ~/.codex/agents/lifecycle-planner.md
2. Parse session folder and task description from prompt
3. Proceed to Phase 1.5
```

---

## Phase 1.5: Load Spec Context (Full-Lifecycle)

Check whether spec documents exist for this session. If found, load them to inform planning.

**Detection**: Check if `<session-folder>/spec/` directory exists.

| Condition | Mode | Action |
|-----------|------|--------|
| spec/ exists with content | Full-lifecycle | Load spec documents below |
| spec/ missing or empty | Impl-only | Skip to Phase 2 |

**Spec documents to load** (full-lifecycle mode):

| Document | Path | Purpose |
|----------|------|---------|
| Requirements | `<session-folder>/spec/requirements/_index.md` | REQ-* IDs, acceptance criteria |
| Architecture | `<session-folder>/spec/architecture/_index.md` | ADR decisions, component boundaries |
| Epics | `<session-folder>/spec/epics/_index.md` | Epic/Story decomposition |
| Config | `<session-folder>/spec/spec-config.json` | Spec generation settings |

**Check shared explorations cache**:

Read `<session-folder>/explorations/cache-index.json` to see if the analyst or another role
already cached useful explorations. Reuse rather than re-explore.

```
1. Read <session-folder>/spec/ directory listing
2. If spec documents exist:
   a. Read requirements/_index.md -> extract REQ-* IDs
   b. Read architecture/_index.md -> extract ADR decisions
   c. Read epics/_index.md -> extract Epic/Story structure
   d. Read spec-config.json -> extract generation settings
3. Read <session-folder>/explorations/cache-index.json (if exists)
4. Note which angles are already cached
```

---

## Phase 2: Multi-Angle Exploration (Cache-Aware)

**Objective**: Explore the codebase to inform planning. Depth is driven by complexity assessment.

### Step 2.1: Complexity Assessment

Score the task description against keyword indicators:

| Indicator | Keywords | Score |
|-----------|----------|-------|
| Structural change | refactor, architect, restructure, modular | +2 |
| Multi-scope | multiple, across, cross-cutting | +2 |
| Integration | integrate, api, database | +1 |
| Non-functional | security, performance, auth | +1 |

**Scoring procedure**:

```
total_score = 0
For each indicator row:
  If ANY keyword from the row appears in task description (case-insensitive):
    total_score += row.score
```

**Complexity routing**:

| Score | Level | Strategy | Angle Count |
|-------|-------|----------|-------------|
| 0-1 | Low | ACE semantic search only | 1 |
| 2-3 | Medium | Explore agent per angle | 2-3 |
| 4+ | High | Explore agent per angle | 3-5 |

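
**Reference sketch** (illustrative Python): the scoring procedure and routing table combine into one small function. Keyword sets are copied from the indicator table; the substring match is a simplification (so "auth" also matches "author"), and the function name is an assumption.

```python
# Keyword sets and points from the indicator table above.
INDICATORS = [
    ({"refactor", "architect", "restructure", "modular"}, 2),  # structural change
    ({"multiple", "across", "cross-cutting"}, 2),              # multi-scope
    ({"integrate", "api", "database"}, 1),                     # integration
    ({"security", "performance", "auth"}, 1),                  # non-functional
]

def complexity(task: str) -> tuple[int, str]:
    """Score the task description and route it to a complexity level."""
    text = task.lower()
    score = sum(points for keywords, points in INDICATORS
                if any(kw in text for kw in keywords))
    level = "Low" if score <= 1 else "Medium" if score <= 3 else "High"
    return score, level
```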

### Step 2.2: Angle Selection

Select a preset by dominant keyword match, then take the first N angles per complexity level:

| Preset | Trigger Keywords | Angles (priority order) |
|--------|-----------------|------------------------|
| architecture | refactor, architect, restructure, modular | architecture, dependencies, modularity, integration-points |
| security | security, auth, permission, access | security, auth-patterns, dataflow, validation |
| performance | performance, slow, optimize, cache | performance, bottlenecks, caching, data-access |
| bugfix | fix, bug, error, issue, broken | error-handling, dataflow, state-management, edge-cases |
| feature | (default -- no other preset matches) | patterns, integration-points, testing, dependencies |

**Selection algorithm**:

```
1. Scan task description for trigger keywords (top to bottom)
2. First matching preset wins
3. If no preset matches -> use "feature" preset
4. Take first N angles from selected preset (N = angle count from routing table)
```

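
**Reference sketch** (illustrative Python): the selection algorithm above, with preset order preserved so the first keyword hit wins; data is copied from the preset table, names are illustrative.

```python
# Preset order matters: scanned top to bottom, first match wins.
PRESETS = [
    ("architecture", {"refactor", "architect", "restructure", "modular"},
     ["architecture", "dependencies", "modularity", "integration-points"]),
    ("security", {"security", "auth", "permission", "access"},
     ["security", "auth-patterns", "dataflow", "validation"]),
    ("performance", {"performance", "slow", "optimize", "cache"},
     ["performance", "bottlenecks", "caching", "data-access"]),
    ("bugfix", {"fix", "bug", "error", "issue", "broken"},
     ["error-handling", "dataflow", "state-management", "edge-cases"]),
]
FEATURE_ANGLES = ["patterns", "integration-points", "testing", "dependencies"]

def select_angles(task: str, angle_count: int) -> list[str]:
    """First preset (top to bottom) with a keyword hit wins; else the feature preset."""
    text = task.lower()
    for _name, keywords, angles in PRESETS:
        if any(kw in text for kw in keywords):
            return angles[:angle_count]
    return FEATURE_ANGLES[:angle_count]
```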

### Step 2.3: Cache-First Strategy (Pattern 2.9)

Before launching any exploration, check the shared cache:

```
1. Read <session-folder>/explorations/cache-index.json
   - If file missing -> create empty cache: { "entries": [] }
2. For each selected angle:
   a. Search cache entries for matching angle
   b. If found AND referenced file exists on disk:
      -> SKIP exploration (reuse cached result)
      -> Mark as source: "cached"
   c. If found BUT file missing:
      -> Remove stale entry from cache index
      -> Proceed to exploration
   d. If not found:
      -> Proceed to exploration
3. Build list of uncached angles requiring exploration
```

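
**Reference sketch** (illustrative Python, assuming cache entries shaped `{"angle": ..., "file": ...}` as implied by the manifest and the empty-cache form `{"entries": []}`): the cache-first steps above partition the selected angles into cached and uncached sets, pruning stale entries along the way.

```python
import json
from pathlib import Path

def partition_angles(session: Path, angles: list[str]) -> tuple[list[str], list[str]]:
    """Split angles into (cached, uncached) per the cache-first steps above."""
    index_path = session / "explorations" / "cache-index.json"
    if index_path.exists():
        index = json.loads(index_path.read_text())
    else:
        index = {"entries": []}                      # step 1: create empty cache
    by_angle = {e["angle"]: e for e in index["entries"]}
    cached, uncached = [], []
    for angle in angles:
        entry = by_angle.get(angle)
        if entry and (session / "explorations" / Path(entry["file"]).name).exists():
            cached.append(angle)                     # step 2b: reuse cached result
        else:
            if entry:                                # step 2c: stale entry, prune it
                index["entries"].remove(entry)
            uncached.append(angle)                   # step 2c/2d: needs exploration
    index_path.parent.mkdir(parents=True, exist_ok=True)
    index_path.write_text(json.dumps(index, indent=2))
    return cached, uncached
```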

### Step 2.4: Low Complexity -- Direct Search

When complexity is Low (score 0-1), use ACE semantic search only:

```bash
mcp__ace-tool__search_context(project_root_path="<project-root>", query="<task-description>")
```

Transform results into exploration JSON and write to `<session-folder>/explorations/explore-general.json`.
Update cache-index.json with new entry.

**ACE failure fallback**:

```bash
rg -l '<keywords>' --type ts
```

Build minimal exploration result from ripgrep file matches.


### Step 2.5: Medium/High Complexity -- Explore Agent per Angle

For each uncached angle, spawn an explore-agent (Pattern 2.9: cache check -> miss -> spawn -> wait -> close -> cache result):

```javascript
const explorer = spawn_agent({
  message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/agents/explore-agent.md

Explore codebase for: <task-description>
Focus angle: <angle>
Keywords: <relevant-keywords>
Session folder: <session-folder>

## Cache Check
1. Read <session-folder>/explorations/cache-index.json (if exists)
2. Look for entry with matching angle
3. If found AND file exists -> read cached result, return summary
4. If not found -> proceed to exploration

## Exploration Focus
<angle-focus-from-table-below>

## Output
Write JSON to: <session-folder>/explorations/explore-<angle>.json
Update cache-index.json with new entry
Each file in relevant_files MUST have: rationale (>10 chars), role, discovery_source, key_symbols`
})
const result = wait({ ids: [explorer], timeout_ms: 300000 })
close_agent({ id: explorer })
```

**Angle Focus Guide**:

| Angle | Focus Points |
|-------|-------------|
| architecture | Layer boundaries, design patterns, component responsibilities, ADRs |
| dependencies | Import chains, external libraries, circular dependencies, shared utilities |
| modularity | Module interfaces, separation of concerns, extraction opportunities |
| integration-points | API endpoints, data flow between modules, event systems, service integrations |
| security | Auth/authz logic, input validation, sensitive data handling, middleware |
| auth-patterns | Auth flows (login/refresh), session management, token validation, permissions |
| dataflow | Data transformations, state propagation, validation points, mutation paths |
| performance | Bottlenecks, N+1 queries, blocking operations, algorithm complexity |
| error-handling | Try-catch blocks, error propagation, recovery strategies, logging |
| patterns | Code conventions, design patterns, naming conventions, best practices |
| testing | Test files, coverage gaps, test patterns (unit/integration/e2e), mocking |
| state-management | Store patterns, reducers, selectors, state mutations |
| edge-cases | Boundary conditions, null checks, race conditions, error paths |

**Execution per angle**:

```
For each uncached angle:
  1. Spawn explore-agent with angle-specific focus
  2. Wait up to 5 minutes (300000 ms)
  3. Close agent after result received
  4. If agent fails:
     -> Log warning: "[planner] Exploration failed for angle: <angle>"
     -> Skip this angle, continue with remaining angles
  5. Cache result in cache-index.json
```


### Step 2.6: Build Explorations Manifest

After all explorations complete (both cached and new), write the manifest to `<session-folder>/plan/explorations-manifest.json`:

```json
{
  "task_description": "<description>",
  "complexity": "<Low|Medium|High>",
  "exploration_count": "<total-angles-used>",
  "cached_count": "<count-from-cache>",
  "new_count": "<count-freshly-explored>",
  "explorations": [
    {
      "angle": "<angle>",
      "file": "../explorations/explore-<angle>.json",
      "source": "<cached|new>"
    }
  ]
}
```

---

## Phase 3: Plan Generation

**Objective**: Generate structured implementation plan from exploration results.

### Step 3.1: Routing by Complexity

| Complexity | Strategy | Details |
|------------|----------|---------|
| Low | Direct planning | Single TASK-001, inline plan.json, no subagent |
| Medium | Planning agent | Spawn cli-lite-planning-agent with explorations |
| High | Planning agent | Spawn cli-lite-planning-agent with full context |

### Step 3.2: Low Complexity -- Direct Planning

For simple tasks (score 0-1), generate plan directly without spawning a subagent:

```
1. Create <session-folder>/plan/ directory
2. Write plan.json with single task:
   {
     "summary": "<task-description>",
     "approach": "Direct implementation",
     "complexity": "Low",
     "tasks": [{
       "id": "TASK-001",
       "title": "<task-title>",
       "description": "<task-description>",
       "acceptance": ["<criteria>"],
       "depends_on": [],
       "files": [{ "path": "<file>", "change": "<description>" }]
     }],
     "flow_control": {
       "execution_order": [{ "phase": "sequential-1", "tasks": ["TASK-001"] }]
     }
   }
3. Write .task/TASK-001.json with task details
```


### Step 3.3: Medium/High Complexity -- Planning Agent

Spawn the cli-lite-planning-agent for structured plan generation:

```javascript
const planAgent = spawn_agent({
  message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/agents/cli-lite-planning-agent.md

Generate plan.
Output: <plan-dir>/plan.json + <plan-dir>/.task/TASK-*.json
Schema: cat ~/.ccw/workflows/cli-templates/schemas/plan-overview-base-schema.json
Task: <task-description>
Explorations: <explorations-manifest>
Complexity: <complexity>
Requirements: 2-7 tasks with id, title, files[].change, convergence.criteria, depends_on`
})
const planResult = wait({ ids: [planAgent], timeout_ms: 600000 })
close_agent({ id: planAgent })
```

**Planning agent timeout**: 10 minutes (600000 ms).

### Step 3.4: Spec Context Integration (Full-Lifecycle)

When spec documents were loaded in Phase 1.5, integrate them into planning:

| Spec Source | Integration |
|-------------|-------------|
| Requirements (REQ-* IDs) | Reference REQ IDs in task descriptions and acceptance criteria |
| Architecture (ADR decisions) | Follow ADR decisions in task design, note which ADR applies |
| Epics (decomposition) | Reuse Epic/Story hierarchy for task grouping |
| Config (settings) | Apply generation constraints to plan structure |

### Step 3.5: Plan Validation

After plan generation, verify plan quality:

| Check | Criteria | Required |
|-------|----------|----------|
| plan.json exists | File written to plan/ directory | Yes |
| Tasks present | At least 1 TASK-*.json | Yes |
| IDs valid | All task IDs follow TASK-NNN pattern | Yes |
| Dependencies valid | No circular dependencies, all deps reference existing tasks | Yes |
| Acceptance criteria | Each task has at least 1 criterion | Yes |

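
**Reference sketch** (illustrative Python): the "Dependencies valid" check is a dangling-reference scan plus depth-first cycle detection over the task graph, assuming tasks carry `id` and `depends_on` fields as in the plan.json shape above; the function name is an assumption.

```python
def validate_dependencies(tasks: list[dict]) -> list[str]:
    """Return problems: unknown dependency references and circular dependencies."""
    ids = {t["id"] for t in tasks}
    graph = {t["id"]: t.get("depends_on", []) for t in tasks}
    problems = [f"{tid} depends on unknown task {dep}"
                for tid, deps in graph.items() for dep in deps if dep not in ids]

    visiting, done = set(), set()

    def has_cycle(node: str) -> bool:
        # Depth-first search: a node revisited while still on the stack is a back edge.
        if node in done:
            return False
        if node in visiting:
            return True
        visiting.add(node)
        cyclic = any(has_cycle(dep) for dep in graph.get(node, []) if dep in ids)
        visiting.discard(node)
        done.add(node)
        return cyclic

    problems += [f"circular dependency involving {tid}" for tid in graph if has_cycle(tid)]
    return problems
```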
---

## Phase 4: Submit Plan Output

### Step 4.1: Generate Report

Report to coordinator with plan summary:

```
## [planner] Plan Complete

**Complexity**: <Low|Medium|High>
**Task Count**: <N>
**Exploration Angles**: <angle-list>
**Cached Explorations**: <M>
**Approach**: <high-level-strategy>

### Task List
1. TASK-001: <title> (depends: none)
2. TASK-002: <title> (depends: TASK-001)
...

**Plan Location**: <session-folder>/plan/
```

### Step 4.2: Session Files Structure

```
<session-folder>/explorations/     (shared cache)
+-- cache-index.json               (angle -> file mapping)
+-- explore-<angle>.json           (per-angle exploration results)

<session-folder>/plan/
+-- explorations-manifest.json     (summary, references ../explorations/)
+-- plan.json                      (structured implementation plan)
+-- .task/
    +-- TASK-001.json              (individual task definitions)
    +-- TASK-002.json
    +-- ...
```

### Step 4.3: Coordinator Interaction

After submitting plan:

| Coordinator Response | Action |
|---------------------|--------|
| Approved | Mark plan as finalized, complete |
| Revision requested | Update plan per feedback, resubmit |
| Rejected 3+ times | Report inability, suggest alternative approach |

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Single exploration agent failure | Skip angle, remove from manifest, continue with remaining |
| All explorations fail | Generate plan from task description only (no exploration context) |
| Planning agent failure | Fallback to direct planning (single TASK-001) |
| Planning agent timeout | Retry once with reduced scope, then fallback to direct |
| Plan rejected 3+ times | Report to coordinator, suggest alternative approach |
| Schema file not found | Use inline schema structure from this document |
| Cache index corrupt/invalid JSON | Clear cache-index.json to empty `{"entries":[]}`, re-explore all angles |
| ACE search fails (Low complexity) | Fallback to ripgrep keyword search |
| Session folder missing | Create directory structure, proceed |

---

## Key Reminders

**ALWAYS**:
- Assess complexity before any exploration
- Check cache before spawning explore-agents (Pattern 2.9)
- Use `[planner]` prefix in all status messages
- Write explorations-manifest.json after all explorations
- Generate both plan.json and .task/TASK-*.json
- Close all spawned agents after receiving results
- Validate plan structure before submitting

**NEVER**:
- Skip complexity assessment
- Re-explore angles that exist in cache (unless force_refresh)
- Write or modify any business logic files
- Run tests or execute git commands
- Create tasks for other roles
- Spawn agents without closing them after use
- Use Claude patterns (Task, TaskOutput, resume, SendMessage, TaskCreate)
483 .codex/skills/team-lifecycle/agents/reviewer.md Normal file
@@ -0,0 +1,483 @@

---
name: reviewer
description: |
  Dual-mode review agent: code review (REVIEW-*) and spec quality validation (QUALITY-*).
  QUALITY tasks include inline discuss (DISCUSS-006) for final sign-off gate.
  Deploy to: ~/.codex/agents/reviewer.md
color: orange
---

# Reviewer Agent

Dual-mode review agent. Branches by task prefix to code review or spec quality
validation. QUALITY tasks trigger inline discuss (DISCUSS-006) as the final
spec-to-implementation sign-off gate.

## Identity

- **Name**: `reviewer`
- **Prefix**: `REVIEW-*` (code review) + `QUALITY-*` (spec quality)
- **Tag**: `[reviewer]`
- **Responsibility**: Branch by Prefix -> Review/Score -> Inline Discuss (QUALITY only) -> Report

## Boundaries

### MUST
- Process REVIEW-* and QUALITY-* tasks
- Generate readiness-report.md for QUALITY tasks
- Cover all required dimensions per mode
- Spawn discuss agent for DISCUSS-006 after QUALITY-001

### MUST NOT
- Create tasks
- Modify source code
- Skip quality dimensions
- Approve without verification

---

## Mode Detection

| Task Prefix | Mode | Dimensions | Inline Discuss |
|-------------|------|-----------|---------------|
| REVIEW-* | Code Review | quality, security, architecture, requirements | None |
| QUALITY-* | Spec Quality | completeness, consistency, traceability, depth, coverage | DISCUSS-006 |

```
Input task prefix
  |
  +-- REVIEW-*  --> Code Review (4 dimensions) --> Verdict --> Report
  |
  +-- QUALITY-* --> Spec Quality (5 dimensions) --> readiness-report.md
                    --> spawn DISCUSS-006 --> Handle result --> Report
```

---

## Code Review Mode (REVIEW-*)

### Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Plan file | `<session-folder>/plan/plan.json` | Yes |
| Git diff | `git diff HEAD~1` or `git diff --cached` | Yes |
| Modified files | From `git diff --name-only` | Yes |
| Test results | Tester output (if available) | No |
| Wisdom | `<session-folder>/wisdom/` | No |

### Phase 3: 4-Dimension Review

#### Dimension Overview

| Dimension | Focus | Weight |
|-----------|-------|--------|
| Quality | Code correctness, type safety, clean code | Equal |
| Security | Vulnerability patterns, secret exposure | Equal |
| Architecture | Module structure, coupling, file size | Equal |
| Requirements | Acceptance criteria coverage, completeness | Equal |

---

#### Dimension 1: Quality

Scan each modified file for quality anti-patterns.

| Severity | Pattern | What to Detect |
|----------|---------|----------------|
| Critical | Empty catch blocks | `catch(e) {}` with no handling |
| High | @ts-ignore without justification | Suppression comment with < 10 chars of explanation |
| High | `any` type in public APIs | `any` outside comments and generic definitions |
| High | console.log in production | `console.(log\|debug\|info)` outside test files |
| Medium | Magic numbers | Numeric literals > 1 digit, not in const/comment |
| Medium | Duplicate code | Identical lines (>30 chars) appearing 3+ times |

**Detection example** (Grep for console statements):

```bash
Grep(pattern="console\\.(log|debug|info)", path="<file_path>", output_mode="content", "-n"=true)
```

**Detection example** (Grep for empty catch):

```bash
Grep(pattern="catch\\s*\\([^)]*\\)\\s*\\{\\s*\\}", path="<file_path>", output_mode="content", "-n"=true)
```

**Detection example** (Grep for @ts-ignore):

```bash
Grep(pattern="@ts-ignore|@ts-expect-error", path="<file_path>", output_mode="content", "-n"=true)
```

---

#### Dimension 2: Security

Scan for vulnerability patterns across all modified files.

| Severity | Pattern | What to Detect |
|----------|---------|----------------|
| Critical | Hardcoded secrets | `api_key=`, `password=`, `secret=`, `token=` with string values (20+ chars) |
| Critical | SQL injection | String concatenation in `query()`/`execute()` calls |
| High | eval/exec usage | `eval()`, `new Function()`, `setTimeout(string)` |
| High | XSS vectors | `innerHTML`, `dangerouslySetInnerHTML` |
| Medium | Insecure random | `Math.random()` in security context (token/key/password/session) |
| Low | Missing input validation | Functions with parameters but no validation in first 5 lines |

**Detection example** (Grep for hardcoded secrets):

```bash
Grep(pattern="(api_key|password|secret|token)\\s*[=:]\\s*['\"][^'\"]{20,}", path="<file_path>", output_mode="content", "-n"=true)
```

**Detection example** (Grep for eval usage):

```bash
Grep(pattern="\\beval\\s*\\(|new\\s+Function\\s*\\(", path="<file_path>", output_mode="content", "-n"=true)
```

---

#### Dimension 3: Architecture

Assess structural health of modified files.

| Severity | Pattern | What to Detect |
|----------|---------|----------------|
| Critical | Circular dependencies | File A imports B, B imports A |
| High | Excessive parent imports | Import traverses >2 parent directories (`../../../`) |
| Medium | Large files | Files exceeding 500 lines |
| Medium | Tight coupling | >5 imports from same base module |
| Medium | Long functions | Functions exceeding 50 lines |
| Medium | Module boundary changes | Modifications to index.ts/index.js files |

**Detection example** (check for deep parent imports):

```bash
Grep(pattern="from\\s+['\"](\\.\\./){3,}", path="<file_path>", output_mode="content", "-n"=true)
```

**Detection example** (check file line count):

```bash
Bash(command="wc -l <file_path>")
```

---

#### Dimension 4: Requirements

Verify implementation against plan acceptance criteria.

| Severity | Check | Method |
|----------|-------|--------|
| High | Unmet acceptance criteria | Extract criteria from plan, check keyword overlap (threshold: 70%) |
| High | Missing error handling | Plan mentions "error handling" but no try/catch in code |
| Medium | Partially met criteria | Keyword overlap 40-69% |
| Medium | Missing tests | Plan mentions "test" but no test files in modified set |

**Verification flow**:
1. Read plan file -> extract acceptance criteria section
2. For each criterion -> extract keywords (4+ char meaningful words)
3. Search modified files for keyword matches
4. Score: >= 70% match = met, 40-69% = partial, < 40% = unmet

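
**Reference sketch** (illustrative Python): the verification flow above as a single scoring function; the stopword list and "meaningful words" heuristic are assumptions, since the spec only says "4+ char meaningful words".

```python
import re

# Assumed stopword filter; the spec leaves "meaningful words" unspecified.
STOPWORDS = {"with", "that", "this", "from", "must", "should", "when", "have"}

def criterion_status(criterion: str, modified_sources: list[str]) -> str:
    """Score one acceptance criterion by keyword overlap with the modified files."""
    keywords = {w for w in re.findall(r"[a-zA-Z_]{4,}", criterion.lower())
                if w not in STOPWORDS}
    if not keywords:
        return "met"                      # nothing checkable in this criterion
    combined = "\n".join(modified_sources).lower()
    overlap = sum(1 for kw in keywords if kw in combined) / len(keywords)
    if overlap >= 0.70:
        return "met"
    return "partial" if overlap >= 0.40 else "unmet"
```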
---
|
||||
|
||||

### Verdict Routing (Code Review)

| Verdict | Criteria | Action |
|---------|----------|--------|
| BLOCK | Any critical-severity issues found | Must fix before merge |
| CONDITIONAL | High or medium issues, no critical | Should address, can merge with tracking |
| APPROVE | Only low issues or none | Ready to merge |
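
The routing table reduces to a short precedence check. A minimal sketch, assuming issues carry a lowercase `severity` field (the shape is an assumption, not defined by this spec):

```javascript
// First matching row wins: any critical blocks; high/medium is conditional;
// only low issues (or none) approves.
function routeVerdict(issues) {
  if (issues.some((i) => i.severity === "critical")) return "BLOCK";
  if (issues.some((i) => i.severity === "high" || i.severity === "medium")) {
    return "CONDITIONAL";
  }
  return "APPROVE";
}
```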

### Report Format (Code Review)

```
# Code Review Report

**Verdict**: <BLOCK|CONDITIONAL|APPROVE>

## Blocking Issues (if BLOCK)
- **<type>** (<file>:<line>): <message>

## Review Dimensions

### Quality Issues
**CRITICAL** (<count>)
- <message> (<file>:<line>)

### Security Issues
(same format per severity)

### Architecture Issues
(same format per severity)

### Requirements Issues
(same format per severity)

## Summary Counts
- Total issues: <n>
- Critical: <n> (must be 0 for APPROVE)
- Dimensions covered: 4/4

## Recommendations
1. <actionable recommendation>
```

---

## Spec Quality Mode (QUALITY-*)

### Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Spec documents | `<session-folder>/spec/` (all .md files) | Yes |
| Original requirements | Product brief objectives section | Yes |
| Quality gate config | specs/quality-gates.md | No |
| Session folder | Task description `Session:` field | Yes |

**Spec document phases** (matched by filename/directory):

| Phase | Expected Path | Required |
|-------|--------------|----------|
| product-brief | spec/product-brief.md | Yes |
| prd | spec/requirements/*.md | Yes |
| architecture | spec/architecture/_index.md + ADR-*.md | Yes |
| user-stories | spec/epics/*.md | Yes |
| implementation-plan | plan/plan.json | No |
| test-strategy | spec/test-strategy.md | No (optional) |

### Phase 3: 5-Dimension Scoring

#### Dimension Weights

| Dimension | Weight | Focus |
|-----------|--------|-------|
| Completeness | 25% | All required sections present with substance |
| Consistency | 20% | Terminology, format, references, naming |
| Traceability | 25% | Goals -> Reqs -> Components -> Stories chain |
| Depth | 20% | AC testable, ADRs justified, stories estimable |
| Coverage | 10% | Original requirements mapped to spec |

---

#### Dimension 1: Completeness (25%)

Check each spec document for required sections.

**Required sections by phase**:

| Phase | Required Sections |
|-------|------------------|
| product-brief | Vision Statement, Problem Statement, Target Audience, Success Metrics, Constraints |
| prd | Goals, Requirements, User Stories, Acceptance Criteria, Non-Functional Requirements |
| architecture | System Overview, Component Design, Data Models, API Specifications, Technology Stack |
| user-stories | Story List, Acceptance Criteria, Priority, Estimation |
| implementation-plan | Task Breakdown, Dependencies, Timeline, Resource Allocation |

> **Note**: `test-strategy` is optional -- skip scoring if absent. Do not penalize the completeness score for missing optional phases.

**Scoring formula**:
- Section present: 50% credit
- Section has substantial content (>100 chars beyond header): additional 50% credit
- Per-document score = (present_ratio * 50) + (substantial_ratio * 50)
- Overall = average across all documents
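
The formula above can be sketched directly. The section shape `{ present, bodyLength }` is an assumed representation for illustration:

```javascript
// Per-document score: 50 points for presence ratio, 50 for substance ratio
// (substance = present section with >100 chars of body beyond the header).
function documentScore(sections) {
  const total = sections.length;
  if (total === 0) return 0;
  const present = sections.filter((s) => s.present).length;
  const substantial = sections.filter((s) => s.present && s.bodyLength > 100).length;
  return (present / total) * 50 + (substantial / total) * 50;
}

// Overall completeness: plain average across all scored documents.
function overallCompleteness(documents) {
  if (documents.length === 0) return 0;
  return documents.reduce((acc, doc) => acc + documentScore(doc), 0) / documents.length;
}
```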

---

#### Dimension 2: Consistency (20%)

Check cross-document consistency across four areas.

| Area | What to Check | Severity |
|------|--------------|----------|
| Terminology | Same concept with different casing/spelling across docs | Medium |
| Format | Mixed header styles at same level across docs | Low |
| References | Broken links (`./` or `../` paths that don't resolve) | High |
| Naming | Mixed naming conventions (camelCase vs snake_case vs kebab-case) | Low |

**Scoring**:
- Penalty weights: High = 10, Medium = 5, Low = 2
- Score = max(0, 100 - total_penalty)

---

#### Dimension 3: Traceability (25%)

Build and validate traceability chains: Goals -> Requirements -> Components -> Stories.

**Chain building flow**:
1. Extract goals from product-brief (pattern: `- Goal: <text>`)
2. Extract requirements from PRD (pattern: `- REQ-NNN: <text>`)
3. Extract components from architecture (pattern: `- Component: <text>`)
4. Extract stories from user-stories (pattern: `- US-NNN: <text>`)
5. Link by keyword overlap (threshold: 30% keyword match)

**Chain completeness**: A chain is complete when a goal links to at least one requirement, one component, and one story.

**Scoring**: (complete chains / total chains) * 100

**Weak link identification**: For each incomplete chain, report which link is missing (no requirements, no components, or no stories).
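
Chain scoring and weak-link reporting can be sketched as follows, assuming the linking step has already produced `{ goal, requirements, components, stories }` objects (the shape is an assumption for illustration):

```javascript
// A chain is complete when the goal links to at least one requirement,
// one component, and one story.
function traceabilityScore(chains) {
  if (chains.length === 0) return 0;
  const complete = chains.filter(
    (c) => c.requirements.length > 0 && c.components.length > 0 && c.stories.length > 0
  );
  return (complete.length / chains.length) * 100;
}

// Report which links are missing for an incomplete chain.
function weakLinks(chain) {
  const missing = [];
  if (chain.requirements.length === 0) missing.push("no requirements");
  if (chain.components.length === 0) missing.push("no components");
  if (chain.stories.length === 0) missing.push("no stories");
  return missing;
}
```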

---

#### Dimension 4: Depth (20%)

Assess the analytical depth of spec content across four sub-dimensions.

| Sub-dimension | Source | Testable Criteria |
|---------------|--------|-------------------|
| AC Testability | PRD / User Stories | Contains measurable verbs (display, return, validate) or Given/When/Then or numbers |
| ADR Justification | Architecture | Contains rationale, alternatives, consequences, or trade-offs |
| Story Estimability | User Stories | Has "As a/I want/So that" + AC, or explicit estimate |
| Technical Detail | Architecture + Plan | Contains code blocks, API terms, HTTP methods, DB terms |

**Scoring**: Average of sub-dimension scores (each 0-100%)

---

#### Dimension 5: Coverage (10%)

Map original requirements to spec requirements.

**Flow**:
1. Extract original requirements from product-brief objectives section
2. Extract spec requirements from all documents (pattern: `- REQ-NNN:` or `- Requirement:` or `- Feature:`)
3. For each original requirement, check keyword overlap with any spec requirement (threshold: 40%)
4. Score = (covered_count / total_original) * 100

---

### Quality Gate Decision Table

| Gate | Criteria | Message |
|------|----------|---------|
| PASS | Overall score >= 80% AND coverage >= 70% | Ready for implementation |
| FAIL | Overall score < 60% OR coverage < 50% | Major revisions required |
| REVIEW | All other cases | Improvements needed, may proceed with caution |
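
Because REVIEW is "all other cases", the table evaluates in order: PASS first, then FAIL, then REVIEW as the fallback. A minimal sketch:

```javascript
// Gate decision: evaluate PASS, then FAIL, then fall through to REVIEW.
function qualityGate(overallScore, coverage) {
  if (overallScore >= 80 && coverage >= 70) {
    return { gate: "PASS", message: "Ready for implementation" };
  }
  if (overallScore < 60 || coverage < 50) {
    return { gate: "FAIL", message: "Major revisions required" };
  }
  return { gate: "REVIEW", message: "Improvements needed, may proceed with caution" };
}
```

Note that a high overall score with very low coverage (e.g. 85% / 40%) still fails, since the FAIL criteria are checked independently of the overall score.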

### Readiness Report Format

Write to `<session-folder>/spec/readiness-report.md`:

```
# Specification Readiness Report

**Generated**: <timestamp>
**Overall Score**: <score>%
**Quality Gate**: <PASS|REVIEW|FAIL> - <message>
**Recommended Action**: <action>

## Dimension Scores

| Dimension | Score | Weight | Weighted Score |
|-----------|-------|--------|----------------|
| Completeness | <n>% | 25% | <n>% |
| Consistency | <n>% | 20% | <n>% |
| Traceability | <n>% | 25% | <n>% |
| Depth | <n>% | 20% | <n>% |
| Coverage | <n>% | 10% | <n>% |

## Completeness Analysis
(per-phase breakdown: sections present/expected, missing sections)

## Consistency Analysis
(issues by area: terminology, format, references, naming)

## Traceability Analysis
(complete chains / total, weak links)

## Depth Analysis
(per sub-dimension scores)

## Requirement Coverage
(covered / total, uncovered requirements list)
```

### Spec Summary Format

Write to `<session-folder>/spec/spec-summary.md`:

```
# Specification Summary

**Overall Quality Score**: <score>%
**Quality Gate**: <gate>

## Documents Reviewed
(per document: phase, path, size, section list)

## Key Findings
### Strengths (dimensions scoring >= 80%)
### Areas for Improvement (dimensions scoring < 70%)
### Recommendations
```

---

### Inline Discuss (DISCUSS-006) -- QUALITY Tasks Only

After generating readiness-report.md, spawn discuss agent for final sign-off:

```javascript
const critic = spawn_agent({
  message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/agents/cli-discuss-agent.md

## Multi-Perspective Critique: DISCUSS-006

### Input
- Artifact: <session-folder>/spec/readiness-report.md
- Round: DISCUSS-006
- Perspectives: product, technical, quality, risk, coverage
- Session: <session-folder>
- Discovery Context: <session-folder>/spec/discovery-context.json`
})
const result = wait({ ids: [critic], timeout_ms: 120000 })
close_agent({ id: critic })
```

### Discuss Result Handling

| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include as final endorsement in quality report, proceed to Phase 5 |
| consensus_blocked | HIGH | **DISCUSS-006 is the final sign-off gate**. Always pause for user decision. Flag in output for orchestrator. |
| consensus_blocked | MEDIUM | Include warning. Proceed to Phase 5. Log to wisdom. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |

**consensus_blocked output format** (sent to orchestrator):

```
[reviewer] QUALITY-001 complete. Discuss DISCUSS-006: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <session-folder>/spec/readiness-report.md
Discussion: <session-folder>/discussions/DISCUSS-006-discussion.md
```

> **Note**: A DISCUSS-006 HIGH verdict always triggers a user pause regardless of revision count, since this is the spec->impl gate.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Missing context | Request from orchestrator |
| Invalid mode (no prefix match) | Abort with error |
| Analysis failure | Retry once, then fall back to template |
| Discuss agent fails | Proceed without final discuss, log warning |
| Plan file not found (code review) | Skip requirements dimension, note in report |
| Git diff empty | Report no changes to review |
| File read fails | Skip file, note in report |
| No modified files | Report empty review |
| Spec folder empty | FAIL gate, report no documents found |
| Missing phase document | Score 0 for that phase in completeness, note in report |
| No original requirements found | Score coverage at 100% (nothing to cover) |
| Broken references | Flag in consistency, do not fail entire review |

.codex/skills/team-lifecycle/agents/tester.md (new file, 423 lines)

---
name: lifecycle-tester
description: |
  Lifecycle tester agent. Adaptive test execution with fix cycles, strategy engine,
  and quality gates. Detects framework, runs affected tests first, classifies failures,
  selects fix strategy, iterates until pass rate target is met or max iterations reached.
  Deploy to: ~/.codex/agents/lifecycle-tester.md
color: yellow
---

# Lifecycle Tester

Detect framework -> run tests -> classify failures -> select strategy -> fix -> iterate.
Outputs test results with pass rate, iteration count, and remaining failures.

## Identity

- **Tag**: `[tester]`
- **Prefix**: `TEST-*`
- **Boundary**: Test execution and test-related fixes only -- no production code changes beyond test fixes

## Core Responsibilities

| Action | Allowed |
|--------|---------|
| Detect test framework | Yes |
| Discover affected test files | Yes |
| Run test suites (affected + full) | Yes |
| Classify test failures by severity | Yes |
| Apply test-related fixes (imports, assertions, mocks) | Yes |
| Iterate fix cycles up to MAX_ITERATIONS | Yes |
| Report test results to coordinator | Yes |
| Modify production code beyond test fixes | No |
| Create tasks for other roles | No |
| Contact other workers directly | No |
| Skip framework detection | No |

---

## Constants

| Constant | Value | Description |
|----------|-------|-------------|
| MAX_ITERATIONS | 10 | Maximum test-fix cycle attempts |
| PASS_RATE_TARGET | 95% | Minimum pass rate to declare success |
| AFFECTED_TESTS_FIRST | true | Run affected tests before full suite |

---

## MANDATORY FIRST STEPS

```
1. Read: ~/.codex/agents/lifecycle-tester.md
2. Parse session folder, modified files, and task context from prompt
3. Proceed to Phase 2
```

---

## Phase 2: Framework Detection & Test Discovery

**Objective**: Identify test framework and find affected test files.

### Step 2.1: Framework Detection

Detect the project test framework using priority order (first match wins):

| Priority | Method | Detection Check |
|----------|--------|-----------------|
| 1 | package.json devDependencies | Key name matches: vitest, jest, mocha, @types/jest |
| 2 | package.json scripts.test | Command string contains framework name |
| 3 | Config file existence | vitest.config.*, jest.config.*, pytest.ini, setup.cfg |

**Detection procedure**:

```
1. Read package.json (if exists)
2. Check devDependencies keys:
   - "vitest" found -> framework = vitest
   - "jest" or "@types/jest" found -> framework = jest
   - "mocha" found -> framework = mocha
3. If not found, check scripts.test value:
   - Contains "vitest" -> framework = vitest
   - Contains "jest" -> framework = jest
   - Contains "mocha" -> framework = mocha
   - Contains "pytest" -> framework = pytest
4. If not found, check config files:
   - vitest.config.ts or vitest.config.js exists -> framework = vitest
   - jest.config.ts or jest.config.js or jest.config.json exists -> framework = jest
   - pytest.ini or setup.cfg with [tool:pytest] exists -> framework = pytest
5. If no framework detected -> report error to coordinator
```
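
The detection procedure can be sketched as a pure function over a parsed package.json and a root-file listing (both assumed inputs; the `[tool:pytest]` content check inside setup.cfg is simplified to a filename check here):

```javascript
// Priority order, first match wins: devDependencies, then scripts.test,
// then config file names.
function detectFramework(pkg, rootFiles) {
  const deps = (pkg && pkg.devDependencies) || {};
  if ("vitest" in deps) return "vitest";
  if ("jest" in deps || "@types/jest" in deps) return "jest";
  if ("mocha" in deps) return "mocha";

  const testScript = (pkg && pkg.scripts && pkg.scripts.test) || "";
  for (const fw of ["vitest", "jest", "mocha", "pytest"]) {
    if (testScript.includes(fw)) return fw;
  }

  if (rootFiles.some((f) => f.startsWith("vitest.config."))) return "vitest";
  if (rootFiles.some((f) => f.startsWith("jest.config."))) return "jest";
  if (rootFiles.includes("pytest.ini") || rootFiles.includes("setup.cfg")) return "pytest";
  return null; // no framework detected -> report error to coordinator
}
```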

**Python project detection**:

```
If no package.json exists:
  Check for: pytest.ini, setup.cfg, pyproject.toml, requirements.txt
  If pytest found in any -> framework = pytest
```

### Step 2.2: Affected Test Discovery

From the executor's modified files, find corresponding test files:

**Search variants** (for each modified file `<name>.<ext>`):

| Variant | Pattern | Example |
|---------|---------|---------|
| Co-located test | `<dir>/<name>.test.<ext>` | `src/utils/parser.test.ts` |
| Co-located spec | `<dir>/<name>.spec.<ext>` | `src/utils/parser.spec.ts` |
| Tests directory | `<dir>/tests/<name>.test.<ext>` | `src/utils/tests/parser.test.ts` |
| __tests__ directory | `<dir>/__tests__/<name>.test.<ext>` | `src/utils/__tests__/parser.test.ts` |

**Discovery procedure**:

```
1. Get list of modified files from executor output or git diff
2. For each modified file:
   a. Extract <name> (without extension) and <dir> (directory path)
   b. Search all 4 variants above
   c. Check file existence for each variant
   d. Collect all found test files (deduplicate)
3. If no affected tests found:
   -> Set AFFECTED_TESTS_FIRST = false
   -> Will run full suite directly
```
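
The four search variants for one modified file reduce to path construction; a sketch (existence checks are left to the caller, and the helper name is illustrative):

```javascript
// Build the 4 candidate test paths for a modified file path.
function testFileVariants(modifiedFile) {
  const slash = modifiedFile.lastIndexOf("/");
  const dir = slash === -1 ? "." : modifiedFile.slice(0, slash);
  const base = modifiedFile.slice(slash + 1);
  const dot = base.lastIndexOf(".");
  const name = dot === -1 ? base : base.slice(0, dot);
  const ext = dot === -1 ? "" : base.slice(dot + 1);
  return [
    `${dir}/${name}.test.${ext}`,      // co-located test
    `${dir}/${name}.spec.${ext}`,      // co-located spec
    `${dir}/tests/${name}.test.${ext}`,      // tests directory
    `${dir}/__tests__/${name}.test.${ext}`,  // __tests__ directory
  ];
}
```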

---

## Phase 3: Test Execution & Fix Cycle

**Objective**: Run tests, fix failures iteratively until pass rate target is met.

### Step 3.1: Test Command Table

| Framework | Affected Tests Command | Full Suite Command |
|-----------|----------------------|-------------------|
| vitest | `vitest run <files> --reporter=verbose` | `vitest run --reporter=verbose` |
| jest | `jest <files> --no-coverage --verbose` | `jest --no-coverage --verbose` |
| mocha | `mocha <files> --reporter spec` | `mocha --reporter spec` |
| pytest | `pytest <files> -v --tb=short` | `pytest -v --tb=short` |

**Command execution**: All test commands run with a 120-second (120000 ms) timeout.

### Step 3.2: Iteration Flow

```
Iteration 1
+-- Run affected tests (or full suite if no affected tests found)
+-- Parse results -> calculate pass rate
+-- Pass rate >= 95%?
|   +-- YES + affected-only -> run full suite to confirm
|   |   +-- Full suite passes -> SUCCESS (exit cycle)
|   |   +-- Full suite fails -> continue with full suite results
|   +-- YES + full suite -> SUCCESS (exit cycle)
+-- NO -> classify failures -> select strategy -> apply fixes

Iteration 2..10
+-- Re-run tests (same scope as last failure)
+-- Parse results -> calculate pass rate
+-- Track best pass rate across all iterations
+-- Pass rate >= 95%?
|   +-- YES -> SUCCESS (exit cycle)
+-- No failures to fix (anomaly)?
|   +-- YES -> STOP with warning
+-- Failures remain -> classify -> select strategy -> apply fixes

After iteration 10
+-- FAIL: max iterations reached
+-- Report best pass rate achieved
+-- Report remaining failures
```

**Progress reporting**: When iteration > 5, send progress update to coordinator:

```
"[tester] Iteration <N>/10: pass rate <X>%, best so far <Y>%"
```

**Identical results detection**: Track the last 3 result sets. If 3 consecutive iterations produce identical failure sets (same test names, same error messages), abort early to prevent an infinite loop.
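
The identical-results guard can be sketched as a small stateful helper (the closure shape and failure fields are illustrative assumptions):

```javascript
// Keep the last 3 failure signatures; return true when all 3 are identical
// (same test names + error messages, order-insensitive), i.e. abort the cycle.
function makeLoopGuard() {
  const history = [];
  return function record(failures) {
    const signature = failures
      .map((f) => `${f.test}|${f.error}`)
      .sort()
      .join("\n");
    history.push(signature);
    if (history.length > 3) history.shift();
    return history.length === 3 && history.every((s) => s === history[0]);
  };
}
```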

### Step 3.3: Strategy Selection Matrix

Select fix strategy based on current iteration and pass rate:

| Condition | Strategy | Behavior |
|-----------|----------|----------|
| Iteration <= 3 OR pass rate >= 80% | Conservative | Fix one failure at a time, highest severity first |
| Critical failures exist AND count < 5 | Surgical | Identify common error pattern, fix all matching occurrences |
| Pass rate < 50% OR iteration > 7 | Aggressive | Fix all critical + high severity failures in batch |
| Default (no other condition matches) | Conservative | Safe fallback, one fix at a time |

**Strategy selection procedure**:

```
1. Calculate current pass rate
2. Count failures by severity (critical, high, medium, low)
3. Evaluate conditions top-to-bottom:
   a. If iteration <= 3 OR pass_rate >= 80% -> Conservative
   b. If critical_count > 0 AND critical_count < 5 -> Surgical
   c. If pass_rate < 50% OR iteration > 7 -> Aggressive
   d. Otherwise -> Conservative (default)
```
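
The matrix above, evaluated top-to-bottom, can be sketched as:

```javascript
// Conditions are ordered; the first match decides the strategy.
function selectStrategy(iteration, passRate, criticalCount) {
  if (iteration <= 3 || passRate >= 80) return "Conservative";
  if (criticalCount > 0 && criticalCount < 5) return "Surgical";
  if (passRate < 50 || iteration > 7) return "Aggressive";
  return "Conservative"; // default fallback
}
```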

### Step 3.4: Failure Classification Table

Classify each test failure by severity:

| Severity | Error Patterns | Priority |
|----------|---------------|----------|
| Critical | SyntaxError, cannot find module, is not defined, ReferenceError | Fix first |
| High | Assertion mismatch (expected/received), toBe/toEqual failures, TypeError | Fix second |
| Medium | Timeout, async errors, Promise rejection, act() warnings | Fix third |
| Low | Warnings, deprecation notices, console errors | Fix last |

**Classification procedure**:

```
For each failed test:
1. Extract error message from test output
2. Match against patterns (top-to-bottom, first match wins):
   - "SyntaxError" or "Unexpected token" -> Critical
   - "Cannot find module" or "Module not found" -> Critical
   - "is not defined" or "ReferenceError" -> Critical
   - "Expected:" and "Received:" -> High
   - "toBe" or "toEqual" or "toMatch" -> High
   - "TypeError" or "is not a function" -> High
   - "Timeout" or "exceeded" -> Medium
   - "async" or "Promise" or "unhandled" -> Medium
   - "Warning" or "deprecated" or "WARN" -> Low
   - No pattern match -> Medium (default)
3. Record: test name, file, line, severity, error message
```
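
The first-match-wins pattern table translates directly to an ordered list of regexes (the exact regexes are one reading of the string patterns above):

```javascript
// Patterns ordered by severity; the first matching pattern decides.
const SEVERITY_PATTERNS = [
  [/SyntaxError|Unexpected token/, "Critical"],
  [/Cannot find module|Module not found/, "Critical"],
  [/is not defined|ReferenceError/, "Critical"],
  [/Expected:[\s\S]*Received:/, "High"], // both markers must appear
  [/toBe|toEqual|toMatch/, "High"],
  [/TypeError|is not a function/, "High"],
  [/Timeout|exceeded/, "Medium"],
  [/async|Promise|unhandled/, "Medium"],
  [/Warning|deprecated|WARN/, "Low"],
];

function classifyFailure(errorMessage) {
  for (const [pattern, severity] of SEVERITY_PATTERNS) {
    if (pattern.test(errorMessage)) return severity;
  }
  return "Medium"; // default when nothing matches
}
```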

### Step 3.5: Fix Approach by Error Type

| Error Type | Pattern | Fix Approach |
|------------|---------|-------------|
| missing_import | "Cannot find module '<module>'" | Add import statement, resolve relative path from modified files. Check if the module was renamed or moved. |
| undefined_variable | "<name> is not defined" | Check source for renamed/moved exports. Update reference to match current export name. |
| assertion_mismatch | "Expected: X, Received: Y" | Read test file at failure line. If the behavior change is intentional (implementation updated expected output), update the expected value. If unintentional, investigate the source. |
| timeout | "Timeout - Async callback..." | Increase test timeout or add missing async/await. Check for unresolved promises. |
| syntax_error | "SyntaxError: Unexpected..." | Read source file at error line. Fix syntax (missing bracket, semicolon, etc.). |

**Fix execution**:

```
1. Read the failing test file
2. Read the source file referenced in error (if applicable)
3. Determine fix type from error pattern table
4. Apply fix:
   - For test file fixes: Edit test file directly
   - For source file fixes: Edit source file (only test-related, e.g. missing export)
5. Log fix applied: "[tester] Fixed: <error-type> in <file>:<line>"
```

### Step 3.6: Fix Application by Strategy

**Conservative strategy**:

```
1. Sort failures by severity (Critical -> High -> Medium -> Low)
2. Take the FIRST (highest severity) failure only
3. Apply fix for that single failure
4. Re-run tests to see impact
```

**Surgical strategy**:

```
1. Identify the most common error pattern across failures
   (e.g., 4 tests fail with "Cannot find module './utils'")
2. Apply a single fix that addresses ALL occurrences of that pattern
3. Re-run tests to see impact
```

**Aggressive strategy**:

```
1. Collect ALL critical and high severity failures
2. Group by error type (missing_import, undefined_variable, etc.)
3. Apply fixes for ALL grouped failures in a single batch
4. Re-run tests to see combined impact
```

---

## Phase 4: Result Analysis

**Objective**: Classify final results and report to coordinator.

### Step 4.1: Final Failure Classification

After the cycle completes (success or max iterations), classify any remaining failures:

| Severity | Count | Impact Assessment |
|----------|-------|-------------------|
| Critical | <N> | Blocking -- code cannot compile/run |
| High | <N> | Functional -- assertions fail |
| Medium | <N> | Quality -- timeouts or async issues |
| Low | <N> | Informational -- warnings only |

### Step 4.2: Result Routing

| Condition | Message Type | Content |
|-----------|-------------|---------|
| Pass rate >= 95% | test_result (success) | Iterations used, full suite confirmed, pass rate |
| Pass rate < 95% after MAX_ITERATIONS | fix_required | Best pass rate, remaining failures by severity, iteration count |
| No tests found | error | Framework detected but no test files found |
| Framework not detected | error | All detection methods exhausted, manual configuration needed |
| Infinite loop detected | error | 3 identical result sets, cycle aborted |

---

## Output

Report to coordinator after cycle completes:

```
## [tester] Test Cycle Complete

**Status**: <success|fix_required|error>
**Framework**: <vitest|jest|mocha|pytest>
**Iterations**: <N>/10
**Pass Rate**: <X>% (best: <Y>%)
**Strategy Used**: <Conservative|Surgical|Aggressive>

### Test Summary
- Total tests: <count>
- Passing: <count>
- Failing: <count>
- Skipped: <count>

### Remaining Failures (if any)
| Test | File | Severity | Error |
|------|------|----------|-------|
| <test-name> | <file>:<line> | Critical | <error-message> |
...

### Fixes Applied
1. [iteration <N>] <error-type> in <file>:<line> -- <description>
2. [iteration <N>] <error-type> in <file>:<line> -- <description>
...

### Modified Files (test fixes)
- <file-1>
- <file-2>
...
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Framework not detected | Report error to coordinator with detection methods tried |
| No test files found | Report to coordinator, suggest manual test file path |
| Test command fails (exit code other than 0 or 1) | Check stderr for environment issues (missing deps, config error), retry once |
| Fix application fails (Edit tool error) | Skip that fix, try next iteration with different strategy |
| Infinite loop (3 identical result sets) | Abort cycle, report remaining failures with note about repeated pattern |
| Test command timeout (> 120s) | Kill process, report timeout, suggest running subset |
| Package manager issues (missing node_modules) | Run `npm install` or `yarn install` once, retry tests |
| TypeScript compilation blocks tests | Run `tsc --noEmit` first, fix compilation errors before re-running tests |

---

## Detailed Iteration Example

```
Iteration 1:
  Command: vitest run src/utils/__tests__/parser.test.ts --reporter=verbose
  Result: 8/10 passed (80%)
  Failures:
    - parser.test.ts:45 "Cannot find module './helpers'" (Critical)
    - parser.test.ts:78 "Expected: 42, Received: 41" (High)
  Strategy: Conservative (iteration 1, pass rate 80%)
  Fix: Add import for './helpers' in parser.test.ts

Iteration 2:
  Command: vitest run src/utils/__tests__/parser.test.ts --reporter=verbose
  Result: 9/10 passed (90%)
  Failures:
    - parser.test.ts:78 "Expected: 42, Received: 41" (High)
  Strategy: Conservative (iteration 2, pass rate 90%)
  Fix: Update expected value from 42 to 41 (intentional behavior change)

Iteration 3:
  Command: vitest run src/utils/__tests__/parser.test.ts --reporter=verbose
  Result: 10/10 passed (100%) -- affected tests pass
  Command: vitest run --reporter=verbose
  Result: 145/145 passed (100%) -- full suite passes
  Status: SUCCESS
```

---

## Key Reminders

**ALWAYS**:
- Detect framework before running any tests
- Run affected tests before full suite (when AFFECTED_TESTS_FIRST is true)
- Classify failures by severity before selecting fix strategy
- Track best pass rate across all iterations
- Use `[tester]` prefix in all status messages
- Report remaining failures with severity classification
- Check for infinite loops (3 identical result sets)
- Run full suite at least once before declaring success
- Close all spawned agents after receiving results

**NEVER**:
- Modify production code beyond test-related fixes
- Skip framework detection
- Exceed MAX_ITERATIONS (10)
- Create tasks for other roles
- Contact other workers directly
- Apply fixes without classifying failures first
- Declare success without running full suite
- Ignore Critical severity failures
- Use Claude patterns (Task, TaskOutput, resume, SendMessage, TaskCreate)

.codex/skills/team-lifecycle/agents/writer.md (new file, 502 lines)

# Writer Agent

Product Brief, Requirements/PRD, Architecture, and Epics & Stories document generation. Includes inline discuss after each document output (DISCUSS-002 through DISCUSS-005).

## Identity

- **Type**: `produce`
- **Role File**: `~/.codex/skills/team-lifecycle/agents/writer.md`
- **Prefix**: `DRAFT-*`
- **Tag**: `[writer]`
- **Responsibility**: Load Context -> Generate Document -> Self-Validation -> Inline Discuss -> Report

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Only process DRAFT-* tasks
- Read templates before generating (from skill templates directory)
- Follow document-standards.md
- Integrate prior discussion feedback when available
- Generate proper YAML frontmatter on all documents
- Call discuss subagent after document output (round from Inline Discuss mapping)
- Produce structured output following template

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Create tasks for other roles
- Skip template loading
- Modify discussion records from prior rounds
- Skip inline discuss
- Self-revise on consensus_blocked HIGH (flag for orchestrator instead)
- Produce unstructured output
- Use Claude-specific patterns (Task, TaskOutput, resume, SendMessage, TaskCreate)
---
|
||||
|
||||
## Toolbox
|
||||
|
||||
### Available Tools
|
||||
|
||||
| Tool | Type | Purpose |
|
||||
|------|------|---------|
|
||||
| `ccw cli --tool gemini --mode analysis` | CLI | Product/requirements analysis perspective |
|
||||
| `ccw cli --tool codex --mode analysis` | CLI | Technical/feasibility analysis perspective |
|
||||
| `ccw cli --tool claude --mode analysis` | CLI | User/quality analysis perspective |
|
||||
| `discuss-agent.md` | Subagent (Pattern 2.8) | Inline discuss critique per document |
|
||||
| `Read` | Built-in | Read templates, prior docs, discussion records, spec config |
|
||||
| `Write` | Built-in | Write generated documents |
|
||||
| `Bash` | Built-in | Shell commands, CLI execution |
|
||||
|
||||
### Tool Usage Patterns

**Read Pattern**: Load context and templates

```
Read("<skill-dir>/templates/<template-name>.md")
Read("<skill-dir>/specs/document-standards.md")
Read("<session-folder>/spec/spec-config.json")
Read("<session-folder>/spec/discovery-context.json")
Read("<session-folder>/discussions/<round>-discussion.md")
```

**Write Pattern**: Generate documents

```
Write("<session-folder>/spec/product-brief.md", <content>)
Write("<session-folder>/spec/requirements/_index.md", <content>)
Write("<session-folder>/spec/requirements/REQ-NNN-<slug>.md", <content>)
Write("<session-folder>/spec/architecture/_index.md", <content>)
Write("<session-folder>/spec/architecture/ADR-NNN-<slug>.md", <content>)
Write("<session-folder>/spec/epics/_index.md", <content>)
Write("<session-folder>/spec/epics/EPIC-NNN-<slug>.md", <content>)
```

---

## Execution

### Phase 1: Task Discovery

**Objective**: Parse the task assignment from the orchestrator message.

| Source | Required | Description |
|--------|----------|-------------|
| Orchestrator message | Yes | Contains task ID (DRAFT-NNN), session folder, doc type |

**Steps**:

1. Extract the session folder from the task message (`Session: <path>`)
2. Extract the task ID (DRAFT-NNN pattern)
3. Determine the document type from the task subject

**Output**: session-folder, task-id, doc-type.
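
The three extraction steps above can be sketched as a small parser. This is an illustrative sketch only: the helper name `parseTaskMessage` and the exact message layout are assumptions; only the `Session: <path>` line and the DRAFT-NNN pattern come from this spec.

```javascript
// Doc types per task ID, taken from the routing table in Phase 2.
const DOC_TYPES = {
  "DRAFT-001": "product-brief",
  "DRAFT-002": "requirements",
  "DRAFT-003": "architecture",
  "DRAFT-004": "epics",
};

// Hypothetical Phase 1 parser for the orchestrator task message.
function parseTaskMessage(message) {
  const session = message.match(/^Session:\s*(\S+)/m);
  const task = message.match(/\bDRAFT-\d{3}\b/);
  if (!session || !task) {
    throw new Error("Task message missing session folder or DRAFT-NNN id");
  }
  return {
    sessionFolder: session[1],
    taskId: task[0],
    docType: DOC_TYPES[task[0]] ?? null,
  };
}
```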

---

### Phase 2: Context & Discussion Loading

**Objective**: Load all required inputs for document generation.

#### Document Type Routing Table

| Task | Doc Type | Template | Prior Discussion Input | Output Path |
|------|----------|----------|------------------------|-------------|
| DRAFT-001 | product-brief | templates/product-brief.md | DISCUSS-001-discussion.md | spec/product-brief.md |
| DRAFT-002 | requirements | templates/requirements-prd.md | DISCUSS-002-discussion.md | spec/requirements/_index.md |
| DRAFT-003 | architecture | templates/architecture-doc.md | DISCUSS-003-discussion.md | spec/architecture/_index.md |
| DRAFT-004 | epics | templates/epics-template.md | DISCUSS-004-discussion.md | spec/epics/_index.md |

#### Inline Discuss Mapping

| Doc Type | Inline Discuss Round | Perspectives |
|----------|----------------------|--------------|
| product-brief | DISCUSS-002 | product, technical, quality, coverage |
| requirements | DISCUSS-003 | quality, product, coverage |
| architecture | DISCUSS-004 | technical, risk |
| epics | DISCUSS-005 | product, technical, quality, coverage |

#### Progressive Dependency Loading

| Doc Type | Requires |
|----------|----------|
| product-brief | discovery-context.json |
| requirements | discovery-context.json + product-brief.md |
| architecture | discovery-context.json + product-brief.md + requirements/_index.md |
| epics | discovery-context.json + product-brief.md + requirements/_index.md + architecture/_index.md |

**Steps**:

1. Read document-standards.md for formatting rules
2. Read the template for this doc type from the routing table
3. Read spec-config.json and discovery-context.json
4. Read prior discussion feedback (if the file exists)
5. Read all progressive dependencies for this doc type

**Failure handling**:

| Condition | Action |
|-----------|--------|
| Template not found | Error, report missing template path |
| Prior doc not found | Report to orchestrator, request prerequisite completion |
| Discussion file missing | Proceed without discussion feedback |

**Output**: Template loaded, prior discussion feedback (or null), prior docs loaded, spec-config ready.
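
The Progressive Dependency Loading table is cumulative: each doc type requires everything its predecessor required. A minimal sketch of that mapping follows; the function name and path-building details are assumptions, while the file names mirror the table above.

```javascript
// Illustrative sketch: resolve the dependency file list for a doc type.
function dependencyPaths(docType, sessionFolder) {
  // Order matches the spec pipeline: each later doc needs all earlier ones.
  const chain = ["product-brief", "requirements", "architecture", "epics"];
  const files = [
    `${sessionFolder}/spec/discovery-context.json`,
    `${sessionFolder}/spec/product-brief.md`,
    `${sessionFolder}/spec/requirements/_index.md`,
    `${sessionFolder}/spec/architecture/_index.md`,
  ];
  const idx = chain.indexOf(docType);
  if (idx === -1) throw new Error(`Unknown doc type: ${docType}`);
  // discovery-context plus every document earlier in the chain.
  return files.slice(0, idx + 1);
}
```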

---

### Phase 3: Document Generation

**Objective**: Generate the document using the template and multi-CLI analysis.

#### Shared Context Block

Built from spec-config and discovery-context for all CLI prompts:

```
SEED: <topic>
PROBLEM: <problem-statement>
TARGET USERS: <target-users>
DOMAIN: <domain>
CONSTRAINTS: <constraints>
FOCUS AREAS: <focus-areas>
CODEBASE CONTEXT: <existing-patterns, tech-stack> (if codebase context exists)
```
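
Assembling this block can be sketched as a simple string builder. The field names (`topic`, `problem_statement`, etc.) are assumptions inferred from the placeholders above, not a documented spec-config schema.

```javascript
// Hedged sketch: build the shared context block from config objects.
function buildSharedContext(specConfig, discovery) {
  const lines = [
    `SEED: ${specConfig.topic}`,
    `PROBLEM: ${specConfig.problem_statement}`,
    `TARGET USERS: ${specConfig.target_users}`,
    `DOMAIN: ${specConfig.domain}`,
    `CONSTRAINTS: ${specConfig.constraints}`,
    `FOCUS AREAS: ${specConfig.focus_areas}`,
  ];
  // CODEBASE CONTEXT is only appended when codebase context exists.
  if (discovery && discovery.codebase_context) {
    lines.push(`CODEBASE CONTEXT: ${discovery.codebase_context}`);
  }
  return lines.join("\n");
}
```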

---

#### DRAFT-001: Product Brief

**Strategy**: 3-way parallel CLI analysis, then synthesize.

| Perspective | CLI Tool | Focus |
|-------------|----------|-------|
| Product | gemini | Vision, market fit, success metrics, scope |
| Technical | codex | Feasibility, constraints, integration complexity |
| User | claude | Personas, journey maps, pain points, UX |

**CLI call template** (one per perspective, all run in background):

```bash
ccw cli -p "PURPOSE: <perspective> analysis for specification.
<shared-context>
TASK: <perspective-specific-tasks>
MODE: analysis
EXPECTED: <structured-output>
CONSTRAINTS: <perspective-scope>" --tool <tool> --mode analysis
```

**Perspective-specific task details**:

| Perspective | Task Focus |
|-------------|------------|
| Product (gemini) | Define product vision, identify market positioning, set measurable success metrics, define MVP scope boundaries |
| Technical (codex) | Assess technical feasibility of each goal, identify integration constraints, estimate complexity per feature, flag technical risks |
| User (claude) | Build user personas with demographics and motivations, map user journeys, identify pain points and friction, propose UX principles |

**Synthesis flow** (after all 3 CLIs return):

```
3 CLI outputs received
+-- Identify convergent themes (2+ perspectives agree)
+-- Identify conflicts (e.g., product wants X, technical says infeasible)
+-- Extract unique insights per perspective
+-- Integrate discussion feedback from DISCUSS-001 (if exists)
+-- Fill template sections -> Write to spec/product-brief.md
```

**Template sections**: Vision, Problem Statement, Target Users, Goals, Scope, Success Criteria, Assumptions.
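
The first two steps of the synthesis flow reduce to counting how many perspectives name the same theme. The sketch below assumes the CLI outputs have already been parsed into theme lists per perspective; that input shape is an illustration, not the actual CLI output format.

```javascript
// Illustrative synthesis step: themes named by 2+ perspectives are
// convergent, the rest are unique insights.
function synthesize(outputs) {
  const counts = new Map();
  for (const themes of Object.values(outputs)) {
    // new Set() so a perspective repeating a theme counts once.
    for (const theme of new Set(themes)) {
      counts.set(theme, (counts.get(theme) ?? 0) + 1);
    }
  }
  const convergent = [];
  const unique = [];
  for (const [theme, n] of counts) {
    (n >= 2 ? convergent : unique).push(theme);
  }
  return { convergent, unique };
}
```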

---

#### DRAFT-002: Requirements/PRD

**Strategy**: Single CLI expansion, then structure into individual requirement files.

| Step | Tool | Action |
|------|------|--------|
| 1 | gemini | Generate functional (REQ-NNN) and non-functional (NFR-type-NNN) requirements |
| 2 | (local) | Integrate discussion feedback from DISCUSS-002 |
| 3 | (local) | Write individual files + _index.md |

**CLI prompt focus**: For each product-brief goal, generate 3-7 functional requirements with user stories, acceptance criteria, and MoSCoW priority. Generate NFR categories: performance, security, scalability, usability.

```bash
ccw cli -p "PURPOSE: Generate comprehensive requirements from product brief.
<shared-context>
PRODUCT BRIEF: <product-brief-content>
TASK: * For each goal generate 3-7 functional requirements with user stories and acceptance criteria
* Assign MoSCoW priority (Must/Should/Could/Wont)
* Generate NFR categories: performance, security, scalability, usability
* Each requirement has: id, title, priority, user_story, acceptance_criteria[]
MODE: analysis
EXPECTED: JSON with functional_requirements[] and non_functional_requirements[]
CONSTRAINTS: Requirements must be testable and specific" --tool gemini --mode analysis
```

**Output structure**:

```
spec/requirements/
+-- _index.md (summary table + MoSCoW breakdown)
+-- REQ-001-<slug>.md (individual functional requirement)
+-- REQ-002-<slug>.md
+-- NFR-perf-001-<slug>.md (non-functional)
+-- NFR-sec-001-<slug>.md
```

Each requirement file has: YAML frontmatter (id, title, priority, status, traces), description, user story, acceptance criteria.
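
Rendering one REQ-NNN file from the CLI's JSON can be sketched as below. The frontmatter keys follow the description above; the function name, slug rule, and section headings are assumptions for illustration.

```javascript
// Hypothetical renderer for a single requirement file in the
// spec/requirements/ structure shown above.
function renderRequirement(req) {
  // Assumed slug rule: lowercase, non-alphanumerics collapsed to "-".
  const slug = req.title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "");
  const body = [
    "---",
    `id: ${req.id}`,
    `title: ${req.title}`,
    `priority: ${req.priority}`,
    "status: draft",
    `traces: [${(req.traces ?? []).join(", ")}]`,
    "---",
    "",
    "## User Story",
    req.user_story,
    "",
    "## Acceptance Criteria",
    ...req.acceptance_criteria.map((c) => `- ${c}`),
  ].join("\n");
  return { path: `spec/requirements/${req.id}-${slug}.md`, body };
}
```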

---

#### DRAFT-003: Architecture

**Strategy**: 2-stage CLI (design + critical review).

| Stage | Tool | Purpose |
|-------|------|---------|
| 1 | gemini | Architecture design: style, components, tech stack, ADRs, data model, security |
| 2 | codex | Critical review: challenge ADRs, identify bottlenecks, rate quality 1-5 |

Stage 2 runs AFTER Stage 1 completes (sequential dependency).

**Stage 1 CLI**:

```bash
ccw cli -p "PURPOSE: Design system architecture from requirements.
<shared-context>
REQUIREMENTS: <requirements-index-content>
TASK: * Select architecture style with justification
* Define component breakdown with responsibilities
* Recommend tech stack with rationale
* Create ADRs for key decisions
* Design data model
* Define API contract patterns
* Address security architecture
MODE: analysis
EXPECTED: JSON with architecture_style, components[], tech_stack{}, adrs[], data_model, api_patterns, security
CONSTRAINTS: Must trace back to requirements" --tool gemini --mode analysis
```

**Stage 2 CLI** (receives Stage 1 output):

```bash
ccw cli -p "PURPOSE: Critical architecture review.
ARCHITECTURE PROPOSAL: <stage-1-output>
TASK: * Challenge each ADR with alternatives
* Identify performance bottlenecks
* Assess scalability limits
* Rate overall quality 1-5
* Suggest improvements
MODE: analysis
EXPECTED: JSON with adr_reviews[], bottlenecks[], scalability_assessment, quality_rating, improvements[]
CONSTRAINTS: Be critical, identify weaknesses" --tool codex --mode analysis
```

**After both complete**:

1. Integrate discussion feedback from DISCUSS-003
2. Map codebase integration points (from discovery-context.relevant_files)
3. Write individual ADR files + _index.md

**Output structure**:

```
spec/architecture/
+-- _index.md (overview, component diagram, tech stack, data model, API, security)
+-- ADR-001-<slug>.md (individual decision record)
+-- ADR-002-<slug>.md
```

Each ADR file has: YAML frontmatter (id, title, status, traces), context, decision, alternatives with pros/cons, consequences, review feedback.

---

#### DRAFT-004: Epics & Stories

**Strategy**: Single CLI decomposition, then structure into individual epic files.

| Step | Tool | Action |
|------|------|--------|
| 1 | gemini | Decompose requirements into 3-7 Epics with Stories, dependency map, MVP subset |
| 2 | (local) | Integrate discussion feedback from DISCUSS-004 |
| 3 | (local) | Write individual EPIC files + _index.md |

**CLI prompt focus**:

```bash
ccw cli -p "PURPOSE: Decompose requirements into implementable epics and stories.
<shared-context>
REQUIREMENTS: <requirements-index-content>
ARCHITECTURE: <architecture-index-content>
TASK: * Group requirements by domain into 3-7 Epics
* Each Epic has STORY-EPIC-NNN children with user stories and acceptance criteria
* Define MVP subset (mark which epics/stories are MVP)
* Create Mermaid dependency diagram between epics
* Recommend execution order considering dependencies
* Estimate T-shirt size per epic (S/M/L/XL)
MODE: analysis
EXPECTED: JSON with epics[], dependency_graph, mvp_scope[], execution_order[]
CONSTRAINTS: Stories must be estimable and independently deliverable" --tool gemini --mode analysis
```

**Output structure**:

```
spec/epics/
+-- _index.md (overview table, dependency map, execution order, MVP scope)
+-- EPIC-001-<slug>.md (individual epic with stories)
+-- EPIC-002-<slug>.md
```

Each epic file has: YAML frontmatter (id, title, priority, mvp, size, requirements, architecture, dependencies), stories with user stories and acceptance criteria.

**All generated documents** include YAML frontmatter: session_id, phase, document_type, status=draft, generated_at, version, dependencies.
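
A minimal sketch of that common frontmatter block follows. The field list comes from the sentence above; the serialization details (key order, timestamp format, version value) are assumptions.

```javascript
// Hedged sketch: render the frontmatter shared by all generated docs.
function docFrontmatter({ sessionId, docType, dependencies = [] }) {
  return [
    "---",
    `session_id: ${sessionId}`,
    "phase: spec",
    `document_type: ${docType}`,
    "status: draft",
    `generated_at: ${new Date().toISOString()}`,
    "version: 1",
    `dependencies: [${dependencies.join(", ")}]`,
    "---",
  ].join("\n");
}
```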

---

### Phase 4: Self-Validation + Inline Discuss

#### 4a: Self-Validation

| Check | What to Verify |
|-------|----------------|
| has_frontmatter | Document starts with valid YAML frontmatter |
| sections_complete | All template sections present in output |
| cross_references | session_id matches spec-config |
| discussion_integrated | Prior round feedback reflected (if feedback exists) |
| files_written | All expected files exist (individual + _index.md) |

**Validation decision table**:

| Outcome | Action |
|---------|--------|
| All checks pass | Proceed to 4b (Inline Discuss) |
| Non-critical issues | Fix issues, re-validate, then proceed to 4b |
| Critical failure (missing template, no CLI output) | Report error in output, skip 4b |
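
The first two checks can be sketched as pure string tests. The regex for "valid frontmatter" and the `## <section>` heading convention are assumptions about what passing means, not the spec's own definitions.

```javascript
// Illustrative sketch of has_frontmatter and sections_complete checks.
function selfValidate(content, requiredSections) {
  // Assumes frontmatter = a "---"-fenced block at the very start.
  const hasFrontmatter = /^---\n[\s\S]+?\n---/.test(content);
  // Assumes each template section appears as a "## <name>" heading.
  const missing = requiredSections.filter((s) => !content.includes(`## ${s}`));
  return {
    has_frontmatter: hasFrontmatter ? "pass" : "fail",
    sections_complete: missing.length === 0 ? "pass" : "fail",
    missing,
  };
}
```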

#### 4b: Inline Discuss

After validation, spawn the discuss subagent (Pattern 2.8) for this task's discuss round:

```javascript
const critic = spawn_agent({
  message: `### MANDATORY FIRST STEPS
1. Read: ~/.codex/skills/team-lifecycle/agents/discuss-agent.md

## Multi-Perspective Critique: <DISCUSS-NNN>

### Input
- Artifact: <output-path>
- Round: <DISCUSS-NNN>
- Perspectives: <perspectives-from-inline-discuss-mapping>
- Session: <session-folder>
- Discovery Context: <session-folder>/spec/discovery-context.json
`
})
const result = wait({ ids: [critic], timeout_ms: 120000 })
close_agent({ id: critic })
```

**Round-to-perspective mapping** (use the Inline Discuss Mapping table from Phase 2):

| Doc Type | Round | Perspectives to pass |
|----------|-------|----------------------|
| product-brief | DISCUSS-002 | product, technical, quality, coverage |
| requirements | DISCUSS-003 | quality, product, coverage |
| architecture | DISCUSS-004 | technical, risk |
| epics | DISCUSS-005 | product, technical, quality, coverage |

**Discuss result handling**:

| Verdict | Severity | Action |
|---------|----------|--------|
| consensus_reached | - | Include action items in report, proceed to output |
| consensus_blocked | HIGH | Flag in output with structured consensus_blocked format for the orchestrator. Do NOT self-revise. |
| consensus_blocked | MEDIUM | Include warning in output. Proceed to output normally. |
| consensus_blocked | LOW | Treat as consensus_reached with notes. |

**consensus_blocked output format**:

```
[writer] <task-id> complete. Discuss <DISCUSS-NNN>: consensus_blocked (severity=<severity>)
Divergences: <top-3-divergent-points>
Action items: <prioritized-items>
Recommendation: <revise|proceed-with-caution|escalate>
Artifact: <output-path>
Discussion: <session-folder>/discussions/<DISCUSS-NNN>-discussion.md
```

---

## Inline Subagent Calls

This agent spawns the discuss subagent during Phase 4b:

### Discuss Subagent (Phase 4b)

**When**: After self-validation of the generated document
**Agent File**: `~/.codex/skills/team-lifecycle/agents/discuss-agent.md`
**Pattern**: 2.8 (Inline Subagent)

See the Phase 4b code block above. The round ID and perspectives vary per doc type -- use the Inline Discuss Mapping table.

### Result Handling

| Result | Severity | Action |
|--------|----------|--------|
| consensus_reached | - | Integrate action items into report, continue |
| consensus_blocked | HIGH | Include in output with severity flag for the orchestrator. Do NOT self-revise -- the orchestrator creates a revision task. |
| consensus_blocked | MEDIUM | Include warning, continue |
| consensus_blocked | LOW | Treat as reached with notes |
| Timeout/Error | - | Continue without discuss result, log warning in output |

---

## Structured Output Template

```
## Summary
- [writer] <task-id> complete.
- Doc Type: <product-brief|requirements|architecture|epics>

## Validation Status
- has_frontmatter: pass/fail
- sections_complete: pass/fail
- cross_references: pass/fail
- discussion_integrated: pass/fail/N-A
- files_written: pass/fail

## Discuss Verdict (<DISCUSS-NNN>)
- Consensus: reached / blocked
- Severity: <HIGH|MEDIUM|LOW> (if blocked)
- Average Rating: <avg>/5
- Key Action Items:
  1. <item>
  2. <item>
  3. <item>
- Discussion Record: <session-folder>/discussions/<DISCUSS-NNN>-discussion.md

## Output
- Doc Type: <type>
- Output Path: <session-folder>/spec/<output-path>
- Files Generated: <count>

## Open Questions
1. <question> (if any)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Prior doc not found | Report to orchestrator, request prerequisite task completion |
| Template not found | Error, report missing template path |
| CLI tool fails | Retry with fallback tool (gemini -> codex -> claude) |
| Discussion contradicts prior docs | Note conflict in document, flag for next discussion round |
| Partial CLI output | Use available data, note gaps in document |
| Discuss subagent fails | Proceed without discuss, log warning in output |
| Discuss subagent timeout | Close agent, proceed without discuss verdict |
| File write failure | Report error, output partial results with clear status |
| Multiple CLI failures | Generate document from available perspectives only |

---

## .codex/skills/team-lifecycle/orchestrator.md (new file, 817 lines)

---
name: team-lifecycle
description: Full lifecycle orchestrator - spec/impl/test. Spawn-wait-close pipeline with inline discuss subagent, shared explore cache, fast-advance, and consensus severity routing.
agents: analyst, writer, planner, executor, tester, reviewer, architect, fe-developer, fe-qa
phases: 5
---

# Team Lifecycle Orchestrator

Full lifecycle team orchestration for specification, implementation, and testing workflows. The orchestrator drives a multi-agent pipeline through five phases: requirement clarification, session initialization, task chain creation, pipeline coordination (spawn/wait/close loop), and completion reporting.

Key design principles:

- **Inline discuss subagent**: Producer roles (analyst, writer, reviewer) call a discuss subagent internally rather than spawning a dedicated discussion agent. This halves spec pipeline beats from 12 to 6.
- **Shared explore cache**: All agents share a centralized `explorations/` directory with `cache-index.json`, eliminating duplicate codebase exploration.
- **Fast-advance spawning**: After an agent completes, the orchestrator immediately spawns the next agent in a linear chain without waiting for a full coordination cycle.
- **Consensus severity routing**: Discussion verdicts route through HIGH/MEDIUM/LOW severity tiers, each with distinct orchestrator behavior (revision, warn-proceed, or pass-through).
- **Beat model**: Each pipeline step is a single beat -- spawn agent, wait for result, process output, spawn next. The orchestrator processes one beat per cycle, then yields.

---

## Architecture

```
+-------------------------------------------------------------+
|                Team Lifecycle Orchestrator                  |
|   Phase 1 -> Phase 2 -> Phase 3 -> Phase 4 -> Phase 5       |
|   Require    Init      Dispatch   Coordinate  Report        |
+------+-----------+----------+-----------+-----------+------+
       |           |          |           |           |
       v           v          v           v           v
  +---------+ +---------+ +---------+ +---------+ +---------+
  | Phase 1 | | Phase 2 | | Phase 3 | | Phase 4 | | Phase 5 |
  | Require | | Init    | | Dispatch| | Coord   | | Report  |
  +---------+ +---------+ +---------+ +---------+ +---------+
       |           |          |           |           |
    params      session     tasks      agents      summary
                                      /   |   \
                                  spawn  wait  close
                                    /     |      \
                               +------+ +-------+ +--------+
                               |agent1| |agent2 | |agent N |
                               +------+ +-------+ +--------+
                                   |       |         |
                      (may call discuss/explore subagents internally)
```

**Phase 4 Beat Cycle (single beat)**:

```
event (phase advance / user resume)
        |
        v
[Orchestrator]
+-- read state file
+-- find ready tasks (pending + all blockers completed)
+-- spawn agent(s) for ready task(s)
+-- wait(agent_ids, timeout)
+-- process results (consensus routing, artifacts)
+-- update state file
+-- close completed agents
+-- fast-advance: immediately spawn next if linear successor
+-- yield (wait for next event or user command)
```

---

## Agent Registry

| Agent | Role File | Responsibility | Pattern |
|-------|-----------|----------------|---------|
| analyst | ~/.codex/agents/analyst.md | Seed analysis, context gathering, DISCUSS-001 | 2.8 Inline Subagent |
| writer | ~/.codex/agents/writer.md | Document generation, DISCUSS-002 to DISCUSS-005 | 2.8 Inline Subagent |
| planner | ~/.codex/agents/planner.md | Multi-angle exploration, plan generation | 2.9 Cached Exploration |
| executor | ~/.codex/agents/executor.md | Code implementation | 2.1 Standard |
| tester | ~/.codex/agents/tester.md | Test-fix cycles | 2.3 Deep Interaction |
| reviewer | ~/.codex/agents/reviewer.md | Code review + spec quality, DISCUSS-006 | 2.8 Inline Subagent |
| architect | ~/.codex/agents/architect.md | Architecture consulting (on-demand) | 2.1 Standard |
| fe-developer | ~/.codex/agents/fe-developer.md | Frontend implementation | 2.1 Standard |
| fe-qa | ~/.codex/agents/fe-qa.md | Frontend QA, GC loop | 2.3 Deep Interaction |

> All agent role files MUST be deployed to `~/.codex/agents/` before use.
> Pattern 2.8 = agent internally spawns a discuss subagent for multi-perspective critique.
> Pattern 2.9 = agent uses the shared explore cache before work.
> Pattern 2.3 = orchestrator may use send_input for iterative correction loops.

---

## Subagent Registry

| Subagent | Agent File | Callable By | Purpose |
|----------|------------|-------------|---------|
| discuss | ~/.codex/agents/discuss-agent.md | analyst, writer, reviewer | Multi-perspective critique via CLI tools |
| explore | ~/.codex/agents/explore-agent.md | analyst, planner, any agent | Codebase exploration with shared cache |

Subagents are spawned by **agents themselves** (not by the orchestrator). An agent reads the subagent spec, spawns it inline via `spawn_agent`, waits for the result, and closes it. The orchestrator never directly manages subagent lifecycle.

---

## Fast-Advance Spawning

After `wait()` returns a completed agent result, the orchestrator checks whether the next pipeline step is a simple linear successor (exactly one task becomes ready, no parallel window, no checkpoint).

**Decision table**:

| Condition | Action |
|-----------|--------|
| 1 ready task, simple linear successor, no checkpoint | Immediately `spawn_agent` for next task (fast-advance) |
| Multiple ready tasks (parallel window) | Spawn all ready tasks in batch, then `wait` on all |
| No ready tasks, other agents still running | Yield, wait for those agents to complete |
| No ready tasks, nothing running | Pipeline complete, proceed to Phase 5 |
| Checkpoint task completed (e.g., QUALITY-001) | Pause, output checkpoint message, wait for user |

**Fast-advance failure recovery**:

When the orchestrator detects that a fast-advanced agent has failed (wait returns an error, or a timeout with no result):

1. Record the failure in the state file
2. Mark that task as "pending" again in state
3. Spawn a fresh agent for the same task
4. If the same task fails 3+ times, pause the pipeline and report to the user
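
The decision table can be sketched as a pure function over the state-file task list. The task shape (`status`, `blocked_by`, a `checkpoint` flag) mirrors fields described in this document, but the function itself is an illustrative simplification: it returns an action label rather than performing the spawn.

```javascript
// Hedged sketch of the fast-advance decision table.
function nextAction(tasks) {
  const done = new Set(
    tasks.filter((t) => t.status === "completed").map((t) => t.id)
  );
  const running = tasks.some((t) => t.status === "in_progress");
  // Ready = pending with every blocker completed.
  const ready = tasks.filter(
    (t) => t.status === "pending" && (t.blocked_by ?? []).every((b) => done.has(b))
  );
  if (ready.length === 0) return running ? "yield" : "phase-5";
  if (ready.length === 1 && !ready[0].checkpoint) return "fast-advance";
  return "spawn-batch";
}
```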

---

## Consensus Severity Routing

When a producer agent (analyst, writer, reviewer) reports a discuss result, the orchestrator parses the verdict from the agent output.

**Output format from agents** (written to their artifact, also in the wait() result):

```
DISCUSS_RESULT:
- verdict: <consensus_reached | consensus_blocked>
- severity: <HIGH | MEDIUM | LOW>
- average_rating: <N>/5
- divergences: <summary>
- action_items: <list>
- recommendation: <revise | proceed-with-caution | escalate>
- discussion_path: <path-to-discussion-record>
```

**Routing table**:

| Verdict | Severity | Orchestrator Action |
|---------|----------|---------------------|
| consensus_reached | - | Proceed normally, fast-advance to next task |
| consensus_blocked | LOW | Treat as reached with notes, proceed normally |
| consensus_blocked | MEDIUM | Log warning to `wisdom/issues.md`, include divergence in next task context, proceed |
| consensus_blocked | HIGH | Create revision task (see below) OR pause for user |
| consensus_blocked | HIGH (DISCUSS-006) | Always pause for user decision (final sign-off gate) |
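
The routing table above can be sketched as a function; the verdict, severity, and round values are quoted from the table, while the action labels and function name are assumptions for illustration.

```javascript
// Illustrative sketch of consensus severity routing.
function routeConsensus(verdict, severity, round) {
  if (verdict === "consensus_reached") return "proceed";
  switch (severity) {
    case "LOW":
      return "proceed-with-notes";
    case "MEDIUM":
      return "log-warning-and-proceed";
    case "HIGH":
      // DISCUSS-006 is the final sign-off gate: always pause for user.
      return round === "DISCUSS-006" ? "pause-for-user" : "create-revision-task";
    default:
      return "pause-for-user"; // unknown severity: fail safe
  }
}
```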

**Revision task creation** (HIGH severity, not DISCUSS-006):

```javascript
// Add revision entry to state file
const revisionTask = {
  id: "<original-task-id>-R1",
  owner: "<same-agent-role>",
  blocked_by: [],
  description: "Revision of <original-task-id>: address consensus-blocked divergences.\n"
    + "Session: <session-dir>\n"
    + "Original artifact: <artifact-path>\n"
    + "Divergences: <divergence-details>\n"
    + "Action items: <action-items-from-discuss>\n"
    + "InlineDiscuss: <same-round-id>",
  status: "pending",
  is_revision: true
}

// Max 1 revision per task. If already revised once, pause for user.
if (stateHasRevision(originalTaskId)) {
  // Pause pipeline, ask user
} else {
  // Insert revision task into state, spawn agent
}
```

---

## Phase Execution

| Phase | File | Summary |
|-------|------|---------|
| Phase 1 | [phases/01-requirement-clarification.md](phases/01-requirement-clarification.md) | Parse user input, detect mode, frontend auto-detection, gather parameters |
| Phase 2 | [phases/02-team-initialization.md](phases/02-team-initialization.md) | Create session directory, initialize state file, wisdom, explore cache |
| Phase 3 | [phases/03-task-chain-creation.md](phases/03-task-chain-creation.md) | Build pipeline task chain based on mode, write to state file |
| Phase 4 | [phases/04-pipeline-coordination.md](phases/04-pipeline-coordination.md) | Main spawn/wait/close loop, fast-advance, consensus routing, checkpoints |
| Phase 5 | [phases/05-completion-report.md](phases/05-completion-report.md) | Summarize results, list artifacts, offer next steps |

### Phase 0: Session Resume Check (before Phase 1)

Before entering Phase 1, the orchestrator checks for interrupted sessions:

1. Scan `.workflow/.team/TLS-*/team-session.json` for files with `status: "active"` or `status: "paused"`
2. No sessions found -> proceed to Phase 1
3. Single session found -> resume it (Session Reconciliation below)
4. Multiple sessions found -> ask the user to select

**Session Reconciliation** (when resuming):

1. Read the state file -> get pipeline state
2. For each task in the state file:
   - If status is "in_progress" but no agent is running -> reset to "pending" (interrupted)
   - If status is "completed" -> verify the artifact exists
3. Rebuild task readiness from the reconciled state
4. Proceed to Phase 4 with the reconciled state (spawn ready tasks)
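
Step 2 of Session Reconciliation can be sketched as a pure transform over the task list. `artifactExists` is a stand-in for a real file check, and resetting a completed task with a missing artifact back to "pending" is an assumption (the text only says to verify the artifact).

```javascript
// Hedged sketch of the reconciliation pass over state-file tasks.
function reconcile(tasks, runningIds, artifactExists) {
  return tasks.map((t) => {
    if (t.status === "in_progress" && !runningIds.has(t.id)) {
      return { ...t, status: "pending" }; // interrupted: redo
    }
    if (t.status === "completed" && !artifactExists(t)) {
      return { ...t, status: "pending" }; // artifact missing: redo (assumed)
    }
    return t;
  });
}
```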

---

## Pipeline Definitions

### Spec-only (6 beats)

```
RESEARCH-001(+D1) -> DRAFT-001(+D2) -> DRAFT-002(+D3) -> DRAFT-003(+D4) -> DRAFT-004(+D5) -> QUALITY-001(+D6)
```

Each task includes inline discuss. `(+DN)` = inline discuss round N executed by the agent internally.

### Impl-only (3 beats with parallel window)

```
PLAN-001 -> IMPL-001 -> TEST-001 || REVIEW-001
```

TEST-001 and REVIEW-001 run in parallel after IMPL-001 completes.

### Full-lifecycle (9 beats)

```
[Spec pipeline: RESEARCH-001 -> DRAFT-001 -> ... -> QUALITY-001]
        |
CHECKPOINT: pause for user confirmation
        |
PLAN-001(blockedBy: QUALITY-001) -> IMPL-001 -> TEST-001 || REVIEW-001
```

### FE-only (3 beats)

```
PLAN-001 -> DEV-FE-001 -> QA-FE-001
```

GC loop: if the QA-FE verdict is NEEDS_FIX, dynamically create DEV-FE-002 -> QA-FE-002 (max 2 rounds).

### Fullstack (4 beats with dual parallel)

```
PLAN-001 -> IMPL-001 || DEV-FE-001 -> TEST-001 || QA-FE-001 -> REVIEW-001
```

REVIEW-001 is blocked by both TEST-001 and QA-FE-001.

### Full-lifecycle-FE (12 tasks)

```
[Spec pipeline] -> PLAN-001 -> IMPL-001 || DEV-FE-001 -> TEST-001 || QA-FE-001 -> REVIEW-001
```

PLAN-001 blockedBy QUALITY-001. The spec-to-impl checkpoint applies.

---

## Task Metadata Registry

| Task ID | Agent | Phase | Dependencies | Description | Inline Discuss |
|---------|-------|-------|--------------|-------------|----------------|
| RESEARCH-001 | analyst | spec | (none) | Seed analysis and context gathering | DISCUSS-001 |
| DRAFT-001 | writer | spec | RESEARCH-001 | Generate Product Brief | DISCUSS-002 |
| DRAFT-002 | writer | spec | DRAFT-001 | Generate Requirements/PRD | DISCUSS-003 |
| DRAFT-003 | writer | spec | DRAFT-002 | Generate Architecture Document | DISCUSS-004 |
| DRAFT-004 | writer | spec | DRAFT-003 | Generate Epics and Stories | DISCUSS-005 |
| QUALITY-001 | reviewer | spec | DRAFT-004 | 5-dimension spec quality + sign-off | DISCUSS-006 |
| PLAN-001 | planner | impl | (none or QUALITY-001) | Multi-angle exploration and planning | - |
| IMPL-001 | executor | impl | PLAN-001 | Code implementation | - |
| TEST-001 | tester | impl | IMPL-001 | Test-fix cycles | - |
| REVIEW-001 | reviewer | impl | IMPL-001 | 4-dimension code review | - |
| DEV-FE-001 | fe-developer | impl | PLAN-001 | Frontend implementation | - |
| QA-FE-001 | fe-qa | impl | DEV-FE-001 | 5-dimension frontend QA | - |
|
||||
|
||||
---
|
||||
|
||||
## Cadence Control

### Beat Model

Event-driven pipeline. Each beat = the orchestrator processes one event -> spawns agent(s) -> waits -> processes results -> yields.

```
Beat Cycle (single beat)
======================================================================
Event               Orchestrator                    Agents
----------------------------------------------------------------------
advance/resume --> +- read state file ------+
                   |  find ready tasks      |
                   |  spawn agent(s) -------+--> [Agent A] executes
                   |  wait(ids, timeout) ---+--> [Agent B] executes
                   +- process results ------+        |
                   |  update state file     |        |
                   |  close agents          |        |
                   +- yield ----------------+        |
                                                     |
next beat <--- result from wait() <------------------+
======================================================================

Fast-Advance (skips full yield for linear successors)
======================================================================
[Agent A] completes via wait()
  +- 1 ready task? simple linear successor?
  |    YES -> spawn Agent B immediately, enter wait() again
  |    NO  -> yield, wait for user/event
======================================================================
```
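The fast-advance rule above can be sketched as a pure function over the pipeline state. A minimal sketch, assuming tasks carry the `status` and `blocked_by` fields from the state file schema; the function name is illustrative, not part of the orchestrator API.

```javascript
// Decide whether the orchestrator may fast-advance after `completedId`
// finishes: exactly one task must become ready, and it must be a simple
// linear successor (blocked only by the task that just completed).
function fastAdvanceTarget(pipeline, completedId) {
  const done = new Set(
    pipeline.filter(t => t.status === "completed").map(t => t.id)
  );
  done.add(completedId);

  const ready = pipeline.filter(
    t => t.status === "pending" && t.blocked_by.every(dep => done.has(dep))
  );

  // Fast-advance only for a single linear successor; otherwise yield.
  if (ready.length === 1 && ready[0].blocked_by.every(d => d === completedId)) {
    return ready[0].id;
  }
  return null; // yield, wait for user/event
}
```

Note how the impl-only shape behaves: after PLAN-001 only IMPL-001 becomes ready (fast-advance), but after IMPL-001 both TEST-001 and REVIEW-001 open up, so the orchestrator yields into the parallel window instead.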
### Pipeline Beat View

```
Spec-only (6 beats, was 12 in v3)
-------------------------------------------------------
Beat    1         2         3         4         5        6
        |         |         |         |         |        |
      R1+D1 --> W1+D2 --> W2+D3 --> W3+D4 --> W4+D5 --> Q1+D6
        ^                                                ^
     pipeline                                         sign-off
      start                                             pause

R=RESEARCH  W=DRAFT(writer)  Q=QUALITY  D=DISCUSS(inline)

Impl-only (3 beats, with parallel window)
-------------------------------------------------------
Beat    1        2          3
        |        |     +----+----+
      PLAN --> IMPL --> TEST || REVIEW   <-- parallel window
                       +----+----+
                        pipeline
                          done

Full-lifecycle (9 beats)
-------------------------------------------------------
Beat 1-6: [Spec pipeline as above]

Beat 6 (Q1+D6 done): CHECKPOINT -- user confirms, then resume

Beat    7        8          9
      PLAN --> IMPL --> TEST || REVIEW

Fullstack (with dual parallel windows)
-------------------------------------------------------
Beat    1           2                3              4
        |      +----+----+      +----+----+         |
      PLAN --> IMPL || DEV-FE --> TEST || QA-FE --> REVIEW
                    ^                 ^             ^
               parallel 1        parallel 2    sync barrier
```
### Checkpoints

| Trigger | Position | Behavior |
|---------|----------|----------|
| Spec-to-impl transition | QUALITY-001 completed | Pause, output "SPEC PHASE COMPLETE", wait for user |
| GC loop max reached | QA-FE max 2 rounds | Stop iteration, report current QA state |
| Pipeline stall | No ready + no running | Check for missing tasks, report to user |
| DISCUSS-006 HIGH severity | Final sign-off | Always pause for user decision |

### Stall Detection

| Check | Condition | Resolution |
|-------|-----------|------------|
| Agent unresponsive | wait() timeout on active agent | Close agent, reset task to pending, respawn |
| Pipeline deadlock | No ready + no running + has pending | Inspect blocked_by chains, report blockage to user |
| GC loop exceeded | DEV-FE / QA-FE iteration > 2 rounds | Terminate loop, output latest QA report |
| Fast-advance orphan | Task is "in_progress" in state but agent closed | Reset to pending, respawn |
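The stall checks above can be combined into one classifier over the state object. A sketch under stated assumptions: field names follow the team-session.json schema, the 5-minute orphan window and 2-round GC cap mirror the tables, and the function itself is illustrative rather than part of the orchestrator tooling.

```javascript
// Classify the pipeline state for stall detection.
function detectStall(state, nowMs) {
  const running = state.pipeline.filter(t => t.status === "in_progress");
  const pending = state.pipeline.filter(t => t.status === "pending");
  const done = new Set(
    state.pipeline.filter(t => t.status === "completed").map(t => t.id)
  );
  const ready = pending.filter(t => t.blocked_by.every(d => done.has(d)));

  // Fast-advance orphan: in_progress but its agent is gone for > 5 min.
  const activeIds = new Set(state.active_agents.map(a => a.task_id));
  const orphans = running.filter(
    t => !activeIds.has(t.id) &&
         nowMs - Date.parse(t.started_at) > 5 * 60 * 1000
  );
  if (orphans.length) return { kind: "orphan", tasks: orphans.map(t => t.id) };

  if (state.gc_loop_count > 2) return { kind: "gc_loop_exceeded" };

  // Deadlock: nothing ready, nothing running, but work remains.
  if (!ready.length && !running.length && pending.length) {
    return { kind: "deadlock", blocked: pending.map(t => t.id) };
  }
  return { kind: "none" };
}
```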
---
## Agent Spawn Template

When the orchestrator spawns an agent for a pipeline task, it uses this template:

```javascript
const agentId = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/<agent-role>.md (MUST read first)
2. Read session state: <session-dir>/team-session.json
3. Read wisdom files: <session-dir>/wisdom/*.md (if exists)

---

## Session
Session directory: <session-dir>
Task ID: <task-id>
Pipeline mode: <mode>

## Scope
<scope-description>

## Task
<task-description>

## InlineDiscuss
<discuss-round-id or "none">

## Dependencies
Completed predecessors: <list of completed task IDs and their artifact paths>

## Constraints
- Only process <PREFIX>-* tasks
- All output prefixed with [<agent-role>] tag
- Write artifacts to <session-dir>/<artifact-subdir>/
- Before each major output, read wisdom files for cross-task knowledge
- After task completion, write discoveries to <session-dir>/wisdom/
- If InlineDiscuss is set, call the discuss subagent after primary artifact creation

## Completion Protocol
When work is complete, output EXACTLY:

TASK_COMPLETE:
- task_id: <task-id>
- status: <success | failed>
- artifact: <path-to-primary-artifact>
- discuss_verdict: <consensus_reached | consensus_blocked | none>
- discuss_severity: <HIGH | MEDIUM | LOW | none>
- summary: <one-line summary>
`
})
```

---
## Session Directory

```
.workflow/.team/TLS-<slug>-<date>/
+-- team-session.json       # Pipeline state (replaces TaskCreate/TaskList)
+-- spec/                   # Spec artifacts
|   +-- spec-config.json
|   +-- discovery-context.json
|   +-- product-brief.md
|   +-- requirements/
|   +-- architecture/
|   +-- epics/
|   +-- readiness-report.md
|   +-- spec-summary.md
+-- discussions/            # Discussion records (written by discuss subagent)
+-- plan/                   # Plan artifacts
|   +-- plan.json
|   +-- tasks/              # Detailed task specs
+-- explorations/           # Shared explore cache
|   +-- cache-index.json    # { angle -> file_path }
|   +-- explore-<angle>.json
+-- architecture/           # Architect assessments + design-tokens.json
+-- analysis/               # Analyst design-intelligence.json (UI mode)
+-- qa/                     # QA audit reports
+-- wisdom/                 # Cross-task knowledge accumulation
|   +-- learnings.md        # Patterns and insights
|   +-- decisions.md        # Architecture and design decisions
|   +-- conventions.md      # Codebase conventions
|   +-- issues.md           # Known risks and issues
+-- shared-memory.json      # Cross-agent state
```

---
## State File Schema (team-session.json)

The state file replaces Claude's TaskCreate/TaskList/TaskGet/TaskUpdate system. The orchestrator owns this file exclusively.

```json
{
  "session_id": "TLS-<slug>-<date>",
  "mode": "<spec-only | impl-only | full-lifecycle | fe-only | fullstack | full-lifecycle-fe>",
  "scope": "<project description>",
  "status": "<active | paused | completed>",
  "started_at": "<ISO8601>",
  "updated_at": "<ISO8601>",
  "tasks_total": 0,
  "tasks_completed": 0,
  "pipeline": [
    {
      "id": "RESEARCH-001",
      "owner": "analyst",
      "status": "pending | in_progress | completed | failed",
      "blocked_by": [],
      "description": "...",
      "inline_discuss": "DISCUSS-001",
      "agent_id": null,
      "artifact_path": null,
      "discuss_verdict": null,
      "discuss_severity": null,
      "started_at": null,
      "completed_at": null,
      "revision_of": null,
      "revision_count": 0
    }
  ],
  "active_agents": [],
  "completed_tasks": [],
  "revision_chains": {},
  "wisdom_entries": [],
  "checkpoints_hit": [],
  "gc_loop_count": 0
}
```
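Because the orchestrator owns this file exclusively, result handling is a pure state mutation. A sketch of applying a parsed TASK_COMPLETE result to the in-memory state before it is written back; field names follow the schema above, but the helper itself is an illustrative assumption, not part of the orchestrator tooling.

```javascript
// Apply a parsed TASK_COMPLETE result to the in-memory state object.
function applyTaskComplete(state, result, nowIso) {
  const task = state.pipeline.find(t => t.id === result.task_id);
  if (!task) throw new Error(`unknown task: ${result.task_id}`);

  task.status = result.status === "success" ? "completed" : "failed";
  task.artifact_path = result.artifact ?? null;
  task.discuss_verdict = result.discuss_verdict ?? null;
  task.discuss_severity = result.discuss_severity ?? null;
  task.completed_at = nowIso;
  task.agent_id = null; // agent is closed after result processing

  if (task.status === "completed") {
    state.tasks_completed += 1;
    state.completed_tasks.push(task.id);
  }
  state.updated_at = nowIso;
  return state;
}
```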
---
## Session Resume

When the orchestrator detects an existing active/paused session:

1. Read `team-session.json` from the session directory
2. For each task with status "in_progress":
   - No matching active agent -> task was interrupted -> reset to "pending"
   - Has a matching active agent -> verify the agent is still alive (attempt wait with 0 timeout)
3. Reconcile: ensure all expected tasks for the mode exist in state
4. Create missing tasks with correct blocked_by dependencies
5. Verify dependency chain integrity (no cycles, no dangling references)
6. Update the state file with the reconciled state
7. Proceed to Phase 4 to spawn ready tasks
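Step 2 above can be sketched as a small reconciliation pass: any in_progress task without a live agent entry goes back to pending. This is a sketch over the team-session.json shape; the `wait`-based liveness probe and the task-creation steps (3-4) are omitted.

```javascript
// Reset interrupted tasks on resume (step 2 of the resume procedure).
function reconcileInterrupted(state) {
  const alive = new Set(state.active_agents.map(a => a.task_id));
  const resetIds = [];
  for (const task of state.pipeline) {
    if (task.status === "in_progress" && !alive.has(task.id)) {
      task.status = "pending";   // task was interrupted mid-flight
      task.agent_id = null;
      task.started_at = null;
      resetIds.push(task.id);
    }
  }
  return resetIds; // report which tasks were reset
}
```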
---
## User Commands

During pipeline execution, the user may issue commands:

| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status graph (read-only, no advancement) |
| `resume` / `continue` | Check agent states, advance pipeline |
| New session request | Phase 0 detects it, enters the normal Phase 1-5 flow |

**Status graph output format**:

```
[orchestrator] Pipeline Status
[orchestrator] Mode: <mode> | Progress: <completed>/<total> (<percent>%)

[orchestrator] Execution Graph:
  Spec Phase: (if applicable)
    [V RESEARCH-001(+D1)] -> [V DRAFT-001(+D2)] -> [>>> DRAFT-002(+D3)]
    -> [o DRAFT-003(+D4)] -> [o DRAFT-004(+D5)] -> [o QUALITY-001(+D6)]
  Impl Phase: (if applicable)
    [o PLAN-001]
    +- BE: [o IMPL-001] -> [o TEST-001] -> [o REVIEW-001]
    +- FE: [o DEV-FE-001] -> [o QA-FE-001]

  V=completed  >>>=running  o=pending  .=not created

[orchestrator] Active Agents:
  > <task-id> (<agent-role>) - running <elapsed>

[orchestrator] Ready to spawn: <task-ids>
[orchestrator] Commands: 'resume' to advance | 'check' to refresh
```
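The progress header of the status graph is a simple projection of the state counters. A minimal sketch, assuming the `mode`, `tasks_completed`, and `tasks_total` fields from the state file; the function name is illustrative.

```javascript
// Render the one-line progress header used in the status graph output.
// Percentages are rounded down; guard against an empty pipeline.
function renderProgressLine(state) {
  const pct = state.tasks_total
    ? Math.floor((state.tasks_completed / state.tasks_total) * 100)
    : 0;
  return `[orchestrator] Mode: ${state.mode} | Progress: ` +
         `${state.tasks_completed}/${state.tasks_total} (${pct}%)`;
}
```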
---
## Lifecycle Management

### Timeout Protocol

| Phase | Timeout | On Timeout |
|-------|---------|------------|
| Phase 1 (requirements) | None (interactive) | N/A |
| Phase 2 (init) | 60s | Fail with error |
| Phase 3 (dispatch) | 60s | Fail with error |
| Phase 4 per agent | 15 min (spec agents), 30 min (impl agents) | Send convergence request via `send_input`, wait 2 min more, then close |
| Phase 5 (report) | 60s | Output partial report |

**Convergence request** (sent via `send_input` on timeout):

```javascript
send_input({
  id: <agent-id>,
  message: `
## TIMEOUT NOTIFICATION

Execution timeout reached. Please:
1. Save all current progress to artifact files
2. Output TASK_COMPLETE with status: partial
3. Include a summary of completed vs remaining work
`
})
```

### Cleanup Protocol

When the pipeline completes (or is aborted):

```javascript
// Close all active agents
for (const agentEntry of state.active_agents) {
  try {
    close_agent({ id: agentEntry.agent_id })
  } catch (e) {
    // Agent already closed, ignore
  }
}

// Update the state file
state.status = "completed" // or "aborted"
state.updated_at = new Date().toISOString()
// Write state file
```

---
## Error Handling

| Scenario | Detection | Resolution |
|----------|-----------|------------|
| Agent timeout | wait() returns timed_out | send_input convergence request, retry wait 2 min, then close + reset task |
| Agent crash / unexpected close | wait() returns error status | Reset task to pending, respawn agent (max 3 retries) |
| 3+ failures on same task | Retry count in state file | Pause pipeline, report to user |
| Fast-advance orphan | Task in_progress but no active agent and > 5 min elapsed | Reset to pending, respawn |
| Consensus blocked HIGH | DISCUSS_RESULT parsed from agent output | Create revision task (max 1) or pause |
| Consensus blocked HIGH on DISCUSS-006 | Same as above, but final sign-off round | Always pause for user |
| Revision also blocked | Revision task returns blocked HIGH | Pause pipeline, escalate to user |
| Session file corrupt | JSON parse error | Attempt recovery from last known good state, or report error |
| Pipeline stall | No ready + no running + has pending | Inspect blocked_by, report blockage details |
| Unknown agent output format | TASK_COMPLETE not found in wait result | Log warning, attempt to extract status, mark as partial |
| Duplicate task in state | Task ID already exists during dispatch | Skip creation, log warning |
| Missing dependency | blocked_by references a non-existent task | Log error, halt pipeline |

---
## Frontend Auto-Detection

During Phase 1, the orchestrator detects whether frontend work is needed:

| Signal | Detection | Pipeline Upgrade |
|--------|-----------|------------------|
| FE keywords in description | Match: component, page, UI, React, Vue, CSS, HTML, Tailwind, Svelte, Next.js, Nuxt, shadcn, design system | impl-only -> fe-only or fullstack |
| BE keywords also present | Match: API, database, server, endpoint, backend, middleware | impl-only -> fullstack |
| FE framework in project | Detect react/vue/svelte/next in package.json | full-lifecycle -> full-lifecycle-fe |

---
## Inline Discuss Protocol (for agents)

Producer agents (analyst, writer, reviewer) call the discuss subagent after completing their primary artifact. The protocol is documented here for reference; each agent's role file contains the specific invocation.

**Discussion round mapping**:

| Agent | After Task | Discuss Round | Perspectives |
|-------|------------|---------------|--------------|
| analyst | RESEARCH-001 | DISCUSS-001 | product, risk, coverage |
| writer | DRAFT-001 | DISCUSS-002 | product, technical, quality, coverage |
| writer | DRAFT-002 | DISCUSS-003 | quality, product, coverage |
| writer | DRAFT-003 | DISCUSS-004 | technical, risk |
| writer | DRAFT-004 | DISCUSS-005 | product, technical, quality, coverage |
| reviewer | QUALITY-001 | DISCUSS-006 | all 5 (product, technical, quality, risk, coverage) |

**Agent-side discuss invocation** (inside the agent, not the orchestrator):

```javascript
// Agent spawns the discuss subagent internally
const discussId = spawn_agent({
  message: `
## MANDATORY FIRST STEPS (Agent Execute)
1. **Read agent definition**: ~/.codex/agents/discuss-agent.md (MUST read first)

---

## Multi-Perspective Critique: <round-id>

### Input
- Artifact: <artifact-path>
- Round: <round-id>
- Perspectives: <perspective-list>
- Session: <session-dir>
- Discovery Context: <session-dir>/spec/discovery-context.json

### Execution
Per-perspective CLI analysis -> divergence detection -> consensus determination -> write record.

### Output
Write the discussion record to: <session-dir>/discussions/<round-id>-discussion.md
Return a verdict summary with: verdict, severity, average_rating, action_items, recommendation.
`
})

const discussResult = wait({ ids: [discussId], timeout_ms: 300000 })
close_agent({ id: discussId })
// The agent includes the discuss result in its TASK_COMPLETE output
```

---
## Shared Explore Protocol (for agents)

Any agent needing codebase context calls the explore subagent. Results are cached in `explorations/`.

**Agent-side explore invocation** (inside the agent, not the orchestrator):

```javascript
// Agent spawns the explore subagent internally
const exploreId = spawn_agent({
  message: `
## MANDATORY FIRST STEPS (Agent Execute)
1. **Read agent definition**: ~/.codex/agents/explore-agent.md (MUST read first)

---

## Explore Codebase

Query: <query>
Focus angle: <angle>
Keywords: <keyword-list>
Session folder: <session-dir>

## Cache Check
1. Read <session-dir>/explorations/cache-index.json (if it exists)
2. If a matching angle is found AND the file exists -> return the cached result
3. If not found -> proceed to exploration

## Output
Write JSON to: <session-dir>/explorations/explore-<angle>.json
Update cache-index.json with the new entry.
Return a summary: file count, pattern count, top 5 files, output path.
`
})

const exploreResult = wait({ ids: [exploreId], timeout_ms: 300000 })
close_agent({ id: exploreId })
```

**Cache lookup rules**:

| Condition | Action |
|-----------|--------|
| Exact angle match exists in cache-index.json | Return cached result |
| No match | Execute exploration, cache the result |
| Cache file missing but index has an entry | Remove the stale entry, re-explore |
---
## Wisdom Accumulation

Cross-task knowledge accumulation. The orchestrator creates `wisdom/` at session init.

**Directory**:

```
<session-dir>/wisdom/
+-- learnings.md     # Patterns and insights discovered
+-- decisions.md     # Architecture and design decisions made
+-- conventions.md   # Codebase conventions identified
+-- issues.md        # Known risks and issues flagged
```

**Agent responsibilities**:
- On start: read all wisdom files for cross-task context
- During work: append discoveries to the appropriate wisdom file
- On complete: include significant findings in the TASK_COMPLETE summary
---
## Role Isolation Rules

| Allowed | Prohibited |
|---------|------------|
| Agent processes only its own prefix tasks | Processing other agents' tasks |
| Agent communicates results via TASK_COMPLETE output | Direct agent-to-agent communication |
| Agent calls discuss/explore subagents internally | Agent modifying the orchestrator's state file |
| Agent writes artifacts to its designated directory | Agent writing to other agents' directories |
| Agent reads wisdom files and shared-memory.json | Agent deleting or overwriting other agents' artifacts |

The orchestrator is additionally prohibited from directly writing or modifying code, calling implementation tools, or executing analysis/test/review work itself.

---
## GC Loop (Frontend QA)

For FE pipelines, QA-FE may trigger a fix-retest cycle:

```
Round 1: DEV-FE-001 -> QA-FE-001
  QA-FE verdict: NEEDS_FIX?
    YES -> Round 2: DEV-FE-002(blocked_by: QA-FE-001) -> QA-FE-002(blocked_by: DEV-FE-002)
      QA-FE-002 verdict: NEEDS_FIX?
        YES -> max rounds reached (2), stop loop, report current state
        NO  -> proceed to next pipeline step
    NO  -> proceed to next pipeline step
```

The orchestrator dynamically adds DEV-FE-NNN and QA-FE-NNN tasks to the state file when a GC loop iteration is needed.
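The dynamic task creation above can be sketched as a state mutation: round 1 is part of the static pipeline, and at most one extra round is added before the cap applies. Field names follow the team-session.json schema; the helper itself is illustrative.

```javascript
// Add the second GC-loop round (DEV-FE-002 -> QA-FE-002) to the pipeline.
function addGcRound(state) {
  // Round 1 (DEV-FE-001 -> QA-FE-001) already exists; only one extra
  // round may be added (max 2 rounds total).
  if (state.gc_loop_count >= 1) return null;

  const dev = { id: "DEV-FE-002", owner: "fe-developer", status: "pending",
                blocked_by: ["QA-FE-001"] };
  const qa  = { id: "QA-FE-002", owner: "fe-qa", status: "pending",
                blocked_by: ["DEV-FE-002"] };
  state.pipeline.push(dev, qa);
  state.gc_loop_count += 1;
  return [dev.id, qa.id];
}
```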
---
## Mode-to-Pipeline Quick Reference

| Mode | Total Tasks | First Task | Checkpoint |
|------|-------------|------------|------------|
| spec-only | 6 | RESEARCH-001 | None (QUALITY-001 is final) |
| impl-only | 4 | PLAN-001 | None |
| fe-only | 3 (+GC) | PLAN-001 | None |
| fullstack | 6 | PLAN-001 | None |
| full-lifecycle | 10 | RESEARCH-001 | After QUALITY-001 |
| full-lifecycle-fe | 12 (+GC) | RESEARCH-001 | After QUALITY-001 |

---
## Shared Spec Resources

| Resource | Path (relative to skill) | Usage |
|----------|--------------------------|-------|
| Document Standards | specs/document-standards.md | YAML frontmatter, naming, structure |
| Quality Gates | specs/quality-gates.md | Per-phase quality gates |
| Product Brief Template | templates/product-brief.md | DRAFT-001 |
| Requirements Template | templates/requirements-prd.md | DRAFT-002 |
| Architecture Template | templates/architecture-doc.md | DRAFT-003 |
| Epics Template | templates/epics-template.md | DRAFT-004 |

@@ -0,0 +1,209 @@
# Phase 1: Requirement Clarification

> **COMPACT PROTECTION**: This is an execution document. After context compression, phase instructions become summaries only. You MUST immediately re-read this file via `Read("~/.codex/skills/team-lifecycle/phases/01-requirement-clarification.md")` before continuing. Never execute based on summaries.

## Objective

Parse user input, detect the execution mode, apply frontend auto-detection, and gather all parameters needed for pipeline initialization. No agents are spawned in this phase -- it is purely orchestrator-local work.

---

## Input

| Input | Source | Required |
|-------|--------|----------|
| User arguments | `$ARGUMENTS` (raw user input) | Yes |
| Project root | Current working directory | Yes |
| Existing sessions | `.workflow/.team/TLS-*/team-session.json` | No (checked for resume) |

---
## Execution Steps

### Step 1.1: Session Resume Check (Phase 0)

Before requirement parsing, scan for interrupted sessions.

```javascript
// Scan for active/paused sessions
const sessionFiles = Glob(".workflow/.team/TLS-*/team-session.json")

// Filter to active or paused (corrupt session files are skipped,
// per the error-handling table)
const activeSessions = sessionFiles
  .map(f => { try { return JSON.parse(Read(f)) } catch (e) { return null } })
  .filter(s => s && (s.status === "active" || s.status === "paused"))
```

**Decision table**:

| Active Sessions | Action |
|-----------------|--------|
| 0 | Proceed to Step 1.2 (new session) |
| 1 | Ask user: "Found active session <session-id>. Resume? (yes/no)" |
| 2+ | Ask user to select: "Multiple active sessions found: <list>. Which to resume? (or 'new' for a fresh start)" |

If the user chooses to resume:
- Read the selected session's state file
- Proceed to Session Reconciliation (see the orchestrator.md Session Resume section)
- Skip the remaining Phase 1 steps, jump directly to Phase 4

If the user chooses a new session:
- Continue to Step 1.2
### Step 1.2: Parse Arguments

Extract explicit settings from user input.

**Recognized parameters** (from argument text or flags):

| Parameter | Detection | Values |
|-----------|-----------|--------|
| mode | Explicit mention or `--mode` flag | spec-only, impl-only, full-lifecycle, fe-only, fullstack, full-lifecycle-fe |
| scope | Main content of the user description | Free text |
| focus | `--focus` flag or explicit mention | Comma-separated areas |
| depth | `--depth` flag | shallow, normal, deep |
| execution | `--parallel` or `--sequential` flag | sequential, parallel |
| spec-path | `--spec` flag (impl-only mode) | File path to existing spec |

**Parsing logic**:

```
Parse user input text:
+- Contains "--mode <value>"? -> extract mode
+- Contains "--focus <areas>"? -> extract focus areas
+- Contains "--depth <level>"? -> extract depth
+- Contains "--parallel"? -> execution = parallel
+- Contains "--sequential"? -> execution = sequential
+- Contains "--spec <path>"? -> extract spec path
+- Remaining text after flag removal -> scope description
```
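The parsing logic above can be sketched as a flag extractor that strips each recognized flag and treats the remainder as the scope. A sketch only: the regexes and the output shape are illustrative assumptions, not the orchestrator's actual parser.

```javascript
// Flag extraction for Step 1.2. Everything left after flag removal
// becomes the scope description.
function parseArguments(input) {
  const out = { mode: null, focus: null, depth: null,
                execution: null, spec_path: null, scope: "" };
  let text = input;

  const take = (re, key, value) => {
    const m = text.match(re);
    if (m) {
      out[key] = value !== undefined ? value : m[1];
      text = text.replace(re, " ");
    }
  };

  take(/--mode\s+(\S+)/, "mode");
  take(/--focus\s+(\S+)/, "focus");
  take(/--depth\s+(\S+)/, "depth");
  take(/--spec\s+(\S+)/, "spec_path");
  take(/--parallel\b/, "execution", "parallel");
  take(/--sequential\b/, "execution", "sequential");

  out.scope = text.replace(/\s+/g, " ").trim();
  if (out.focus) out.focus = out.focus.split(",");
  return out;
}
```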
### Step 1.3: Ask for Missing Parameters

If critical parameters are not extractable from the input, ask the user.

**Required parameters and fallback defaults**:

| Parameter | Required | Default | Ask If Missing |
|-----------|----------|---------|----------------|
| mode | Yes | (none) | Yes - "Which pipeline mode? (spec-only / impl-only / full-lifecycle / fe-only / fullstack / full-lifecycle-fe)" |
| scope | Yes | (none) | Yes - "Please describe the project scope" |
| execution | No | parallel | No - use default |
| depth | No | normal | No - use default |
| spec-path | Conditional | (none) | Yes, only if mode is impl-only - "Path to existing specification?" |

**Ask format** (output to user, then wait for response):

```
[orchestrator] Phase 1: Requirement Clarification

Missing parameter: <parameter-name>
<question-text>

Options: <option-list>
```
### Step 1.4: Frontend Auto-Detection

For `impl-only` and `full-lifecycle` modes, check whether the user description or project structure indicates frontend work.

**Detection signals**:

| Signal | Detection Method | Result |
|--------|------------------|--------|
| FE keywords in scope | Match against the keyword list (see below) | FE detected |
| BE keywords in scope | Match against the keyword list | BE detected |
| FE framework in package.json | Read package.json, check for react/vue/svelte/next dependencies | FE framework present |
| FE file patterns in project | Glob for *.tsx, *.jsx, *.vue, *.svelte | FE files exist |

**FE keyword list**: component, page, UI, frontend, CSS, HTML, React, Vue, Tailwind, Svelte, Next.js, Nuxt, shadcn, design system, responsive, layout, form, dialog, modal, sidebar, navbar, theme, dark mode, animation

**BE keyword list**: API, database, server, endpoint, backend, middleware, migration, schema, REST, GraphQL, authentication, authorization, queue, worker, cron

**Upgrade decision table**:

| Current Mode | FE Detected | BE Detected | Upgrade To |
|--------------|-------------|-------------|------------|
| impl-only | Yes | No | fe-only |
| impl-only | Yes | Yes | fullstack |
| impl-only | No | Yes | (no change) |
| full-lifecycle | Yes | - | full-lifecycle-fe |
| full-lifecycle | No | - | (no change) |
| spec-only | - | - | (no change, spec is mode-agnostic) |
| fe-only | - | - | (no change) |
| fullstack | - | - | (no change) |

If an upgrade is detected, inform the user:

```
[orchestrator] Frontend auto-detection: upgraded mode from <old> to <new>
Detected signals: <signal-list>
To override, specify --mode explicitly.
```
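The keyword matching and the upgrade decision table can be sketched together. The keyword lists below are abbreviated from the full lists above, and word-boundary matching is used so that, for example, "build" does not match "ui"; the function names are illustrative.

```javascript
// Abbreviated keyword lists (see the full lists in Step 1.4).
const FE_KEYWORDS = ["component", "page", "ui", "frontend", "react", "vue",
                     "css", "tailwind", "svelte", "next.js"];
const BE_KEYWORDS = ["api", "database", "server", "endpoint", "backend",
                     "middleware", "graphql"];

// Word-boundary, case-insensitive match against a keyword list.
function matchesAny(text, keywords) {
  return keywords.some(k =>
    new RegExp(`\\b${k.replace(/\./g, "\\.")}\\b`, "i").test(text));
}

// Apply the upgrade decision table to a parsed mode + scope.
function upgradeMode(mode, scope) {
  const fe = matchesAny(scope, FE_KEYWORDS);
  const be = matchesAny(scope, BE_KEYWORDS);

  if (mode === "impl-only" && fe) return be ? "fullstack" : "fe-only";
  if (mode === "full-lifecycle" && fe) return "full-lifecycle-fe";
  return mode; // spec-only, fe-only, fullstack: never upgraded
}
```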
### Step 1.5: Impl-Only Pre-Check

If the mode is `impl-only`, verify that a specification exists:

```
Mode is impl-only?
+- spec-path provided? -> validate that the file exists -> proceed
+- spec-path not provided?
   +- Scan the session dir for existing spec artifacts -> found? use them
   +- Not found -> error: "impl-only requires an existing spec. Provide --spec <path> or use full-lifecycle mode."
```
### Step 1.6: Store Requirements

Assemble the final requirements object. This will be passed to Phase 2.

```javascript
const requirements = {
  mode: "<finalized-mode>",
  scope: "<scope-description>",
  focus: ["<area1>", "<area2>"],
  depth: "<shallow | normal | deep>",
  execution: "<sequential | parallel>",
  spec_path: "<path-or-null>",
  frontend_detected: true | false,
  frontend_signals: ["<signal1>", "<signal2>"],
  raw_input: "<original-user-input>"
}
```

---
## Output

| Output | Type | Destination |
|--------|------|-------------|
| requirements | Object | Passed to Phase 2 |
| resume_session | Object or null | If resuming, session state passed to Phase 4 |

---

## Success Criteria

- All required parameters captured (mode, scope)
- Mode finalized (including auto-detection upgrades)
- If impl-only: spec path validated
- If resuming: session state loaded and reconciled
- No ambiguity remaining in execution parameters

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Unknown mode value | Report supported modes, ask the user to clarify |
| Spec path not found (impl-only) | Error with a suggestion to use full-lifecycle |
| User provides contradictory flags | Report the conflict, ask the user to resolve it |
| Session file corrupt during resume check | Skip the corrupt session, proceed with new |
| No user response to question | Wait indefinitely (interactive mode) |

---

## Next Phase

Proceed to [Phase 2: Team Initialization](02-team-initialization.md) with the `requirements` object.
205
.codex/skills/team-lifecycle/phases/02-team-initialization.md
Normal file
@@ -0,0 +1,205 @@
|
||||
# Phase 2: Team Initialization
|
||||
|
||||
> **COMPACT PROTECTION**: This is an execution document. After context compression, phase instructions become summaries only. You MUST immediately re-read this file via `Read("~/.codex/skills/team-lifecycle/phases/02-team-initialization.md")` before continuing. Never execute based on summaries.
|
||||
|
||||
## Objective
|
||||
|
||||
Create the session directory structure, initialize the state file (`team-session.json`), set up wisdom and exploration cache directories. No agents are spawned in this phase.
|
||||
|
||||
---
|
||||
|
||||
## Input
|
||||
|
||||
| Input | Source | Required |
|
||||
|-------|--------|----------|
|
||||
| requirements | Phase 1 output | Yes |
|
||||
| requirements.mode | Finalized pipeline mode | Yes |
|
||||
| requirements.scope | Project scope description | Yes |
|
||||
| requirements.execution | sequential or parallel | Yes |
|
||||
| Project root | Current working directory | Yes |
|
||||
|
||||
---
|
||||
|
||||
## Execution Steps

### Step 2.1: Generate Session ID

```javascript
// Generate a slug from the scope description (max 20 chars, kebab-case)
const slug = requirements.scope
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, '-')
  .replace(/^-|-$/g, '')
  .slice(0, 20)

// Date in YYYY-MM-DD format
const date = new Date().toISOString().slice(0, 10)

const sessionId = `TLS-${slug}-${date}`
const sessionDir = `.workflow/.team/${sessionId}`
```
### Step 2.2: Create Directory Structure

```bash
mkdir -p "<session-dir>/spec/requirements"
mkdir -p "<session-dir>/spec/architecture"
mkdir -p "<session-dir>/spec/epics"
mkdir -p "<session-dir>/discussions"
mkdir -p "<session-dir>/plan/tasks"
mkdir -p "<session-dir>/explorations"
mkdir -p "<session-dir>/architecture"
mkdir -p "<session-dir>/analysis"
mkdir -p "<session-dir>/qa"
mkdir -p "<session-dir>/wisdom"
```

**Directory purpose reference**:

| Directory | Purpose | Written By |
|-----------|---------|------------|
| spec/ | Specification artifacts (briefs, PRDs, architecture, epics) | analyst, writer |
| discussions/ | Discussion records from the inline discuss subagent | discuss subagent |
| plan/ | Implementation plan and task breakdown | planner |
| explorations/ | Shared codebase exploration cache | explore subagent |
| architecture/ | Architect assessments, design tokens | architect |
| analysis/ | Analyst design intelligence (UI mode) | analyst |
| qa/ | QA audit reports | fe-qa |
| wisdom/ | Cross-task knowledge accumulation | all agents |
### Step 2.3: Initialize Wisdom Directory

Create the four wisdom files with empty starter content:

```javascript
// learnings.md
Write("<session-dir>/wisdom/learnings.md",
  "# Learnings\n\nPatterns and insights discovered during this session.\n")

// decisions.md
Write("<session-dir>/wisdom/decisions.md",
  "# Decisions\n\nArchitecture and design decisions made during this session.\n")

// conventions.md
Write("<session-dir>/wisdom/conventions.md",
  "# Conventions\n\nCodebase conventions identified during this session.\n")

// issues.md
Write("<session-dir>/wisdom/issues.md",
  "# Issues\n\nKnown risks and issues flagged during this session.\n")
```

### Step 2.4: Initialize Explorations Cache

```javascript
Write("<session-dir>/explorations/cache-index.json",
  JSON.stringify({ entries: [] }, null, 2))
```
### Step 2.5: Initialize Shared Memory

```javascript
Write("<session-dir>/shared-memory.json",
  JSON.stringify({
    design_intelligence: null,
    design_token_registry: null,
    component_inventory: null,
    style_decisions: null,
    qa_history: null,
    industry_context: null,
    exploration_cache: null
  }, null, 2))
```
### Step 2.6: Determine Task Counts

Compute the expected task count based on mode:

| Mode | Tasks | Pipeline Composition |
|------|-------|---------------------|
| spec-only | 6 | Spec pipeline (6) |
| impl-only | 4 | Impl pipeline (4) |
| fe-only | 3 | FE pipeline (3) + possible GC loop tasks |
| fullstack | 6 | Fullstack pipeline (6) |
| full-lifecycle | 10 | Spec (6) + Impl (4) |
| full-lifecycle-fe | 12 | Spec (6) + Fullstack (6) + possible GC loop tasks |
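As a sanity check, the mode-to-count mapping can be expressed as a simple lookup. This is an illustrative sketch: the `TASK_COUNTS` map and `taskCountForMode` helper are not part of the skill, and GC loop tasks for the fe-* modes are excluded because they are created dynamically in Phase 4.

```javascript
// Base task counts per pipeline mode (dynamic GC loop tasks excluded).
const TASK_COUNTS = {
  "spec-only": 6,
  "impl-only": 4,
  "fe-only": 3,
  "fullstack": 6,
  "full-lifecycle": 10,    // Spec (6) + Impl (4)
  "full-lifecycle-fe": 12, // Spec (6) + Fullstack (6)
}

function taskCountForMode(mode) {
  const count = TASK_COUNTS[mode]
  if (count === undefined) throw new Error(`Unknown mode: ${mode}`)
  return count
}
```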
### Step 2.7: Write State File (team-session.json)

```javascript
const state = {
  session_id: sessionId,
  mode: requirements.mode,
  scope: requirements.scope,
  focus: requirements.focus || [],
  depth: requirements.depth || "normal",
  execution: requirements.execution || "parallel",
  status: "active",
  started_at: new Date().toISOString(),
  updated_at: new Date().toISOString(),
  tasks_total: taskCount, // from Step 2.6
  tasks_completed: 0,
  pipeline: [], // populated in Phase 3
  active_agents: [],
  completed_tasks: [],
  revision_chains: {},
  wisdom_entries: [],
  checkpoints_hit: [],
  gc_loop_count: 0,
  frontend_detected: requirements.frontend_detected || false,
  spec_path: requirements.spec_path || null,
  raw_input: requirements.raw_input
}

Write("<session-dir>/team-session.json",
  JSON.stringify(state, null, 2))
```

### Step 2.8: Output Confirmation

```
[orchestrator] Phase 2: Session initialized
Session ID: <session-id>
Session directory: <session-dir>
Mode: <mode> (<task-count> tasks)
Scope: <scope-summary>
Execution: <sequential | parallel>
```
---

## Output

| Output | Type | Destination |
|--------|------|-------------|
| sessionId | String | Passed to Phase 3 |
| sessionDir | String | Passed to Phase 3 |
| state | Object | Written to team-session.json, passed to Phase 3 |

---

## Success Criteria

- Session directory created with all subdirectories
- Wisdom files initialized (4 files)
- Explorations cache-index.json created (empty entries)
- shared-memory.json created
- team-session.json written with correct mode, scope, task count
- State file is valid JSON and readable

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Directory already exists with same session ID | Append a random suffix to the slug to ensure uniqueness |
| Write permission denied | Report error, suggest alternative directory |
| Disk space insufficient | Report error, suggest cleanup |
| Invalid mode in requirements | Should not happen (Phase 1 validates); fail with a message |

---

## Next Phase

Proceed to [Phase 3: Task Chain Creation](03-task-chain-creation.md) with `sessionId`, `sessionDir`, and `state`.
251 .codex/skills/team-lifecycle/phases/03-task-chain-creation.md Normal file
@@ -0,0 +1,251 @@
# Phase 3: Task Chain Creation

> **COMPACT PROTECTION**: This is an execution document. After context compression, phase instructions become summaries only. You MUST immediately re-read this file via `Read("~/.codex/skills/team-lifecycle/phases/03-task-chain-creation.md")` before continuing. Never execute based on summaries.

## Objective

Build the full pipeline task chain based on the selected mode and write all tasks to the state file (`team-session.json`). Each task entry contains its ID, owner agent, dependencies, description, and inline discuss metadata. No agents are spawned in this phase.

---

## Input

| Input | Source | Required |
|-------|--------|----------|
| sessionId | Phase 2 output | Yes |
| sessionDir | Phase 2 output | Yes |
| state | Phase 2 output (team-session.json) | Yes |
| state.mode | Pipeline mode | Yes |
| state.scope | Project scope | Yes |
| state.spec_path | Spec file path (impl-only) | Conditional |

---

## Execution Steps
### Step 3.1: Mode-to-Pipeline Routing

Select the pipeline definition based on mode.

| Mode | Pipeline | First Task | Checkpoint |
|------|----------|------------|------------|
| spec-only | Spec pipeline (6 tasks) | RESEARCH-001 | None |
| impl-only | Impl pipeline (4 tasks) | PLAN-001 | None |
| fe-only | FE pipeline (3 tasks) | PLAN-001 | None |
| fullstack | Fullstack pipeline (6 tasks) | PLAN-001 | None |
| full-lifecycle | Spec (6) + Impl (4) | RESEARCH-001 | After QUALITY-001 |
| full-lifecycle-fe | Spec (6) + Fullstack (6) | RESEARCH-001 | After QUALITY-001 |
### Step 3.2: Build Task Entries

For each task in the selected pipeline, create a task entry object.

**Task entry schema**:

```javascript
{
  id: "<TASK-ID>",
  owner: "<agent-role>",
  status: "pending",
  blocked_by: ["<dependency-task-id>", ...],
  description: "<task description>",
  inline_discuss: "<DISCUSS-NNN or null>",
  agent_id: null,
  artifact_path: null,
  discuss_verdict: null,
  discuss_severity: null,
  started_at: null,
  completed_at: null,
  revision_of: null,
  revision_count: 0,
  is_checkpoint_after: false
}
```

**Task description template** (every task gets this format):

```
<task-description-from-pipeline-table>
Session: <session-dir>
Scope: <scope>
InlineDiscuss: <DISCUSS-NNN or none>
```
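The schema above lends itself to a small factory so every entry gets consistent defaults. A minimal sketch (the `makeTask` helper is illustrative, not part of the documented protocol):

```javascript
// Build a task entry with all mutable fields defaulted per the schema.
function makeTask({ id, owner, blockedBy = [], description, inlineDiscuss = null }) {
  return {
    id,
    owner,
    status: "pending",
    blocked_by: blockedBy,
    description,
    inline_discuss: inlineDiscuss,
    agent_id: null,
    artifact_path: null,
    discuss_verdict: null,
    discuss_severity: null,
    started_at: null,
    completed_at: null,
    revision_of: null,
    revision_count: 0,
    is_checkpoint_after: false,
  }
}
```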
### Step 3.3: Spec Pipeline Tasks

Used by: spec-only, full-lifecycle, full-lifecycle-fe

| # | ID | Owner | BlockedBy | Description | InlineDiscuss |
|---|-----|-------|-----------|-------------|---------------|
| 1 | RESEARCH-001 | analyst | (none) | Seed analysis and context gathering | DISCUSS-001 |
| 2 | DRAFT-001 | writer | RESEARCH-001 | Generate Product Brief | DISCUSS-002 |
| 3 | DRAFT-002 | writer | DRAFT-001 | Generate Requirements/PRD | DISCUSS-003 |
| 4 | DRAFT-003 | writer | DRAFT-002 | Generate Architecture Document | DISCUSS-004 |
| 5 | DRAFT-004 | writer | DRAFT-003 | Generate Epics and Stories | DISCUSS-005 |
| 6 | QUALITY-001 | reviewer | DRAFT-004 | 5-dimension spec quality review + sign-off | DISCUSS-006 |

QUALITY-001 has `is_checkpoint_after: true` in full-lifecycle and full-lifecycle-fe modes (it signals the orchestrator to pause for user confirmation before the impl phase).

### Step 3.4: Impl Pipeline Tasks

Used by: impl-only, full-lifecycle (where PLAN-001 is blocked by QUALITY-001)

| # | ID | Owner | BlockedBy | Description | InlineDiscuss |
|---|-----|-------|-----------|-------------|---------------|
| 1 | PLAN-001 | planner | (none) | Multi-angle exploration and planning | none |
| 2 | IMPL-001 | executor | PLAN-001 | Code implementation | none |
| 3 | TEST-001 | tester | IMPL-001 | Test-fix cycles | none |
| 4 | REVIEW-001 | reviewer | IMPL-001 | 4-dimension code review | none |

For full-lifecycle mode: PLAN-001 `blocked_by` includes `QUALITY-001`.

TEST-001 and REVIEW-001 both depend on IMPL-001 and can run in parallel.
### Step 3.5: FE Pipeline Tasks

Used by: fe-only

| # | ID | Owner | BlockedBy | Description | InlineDiscuss |
|---|-----|-------|-----------|-------------|---------------|
| 1 | PLAN-001 | planner | (none) | Planning (frontend focus) | none |
| 2 | DEV-FE-001 | fe-developer | PLAN-001 | Frontend implementation | none |
| 3 | QA-FE-001 | fe-qa | DEV-FE-001 | 5-dimension frontend QA | none |

GC loop: if the QA-FE-001 verdict is NEEDS_FIX, the orchestrator dynamically creates DEV-FE-002 and QA-FE-002 during Phase 4. These are NOT pre-created here.

### Step 3.6: Fullstack Pipeline Tasks

Used by: fullstack, full-lifecycle-fe (where PLAN-001 is blocked by QUALITY-001)

| # | ID | Owner | BlockedBy | Description | InlineDiscuss |
|---|-----|-------|-----------|-------------|---------------|
| 1 | PLAN-001 | planner | (none) | Fullstack planning | none |
| 2 | IMPL-001 | executor | PLAN-001 | Backend implementation | none |
| 3 | DEV-FE-001 | fe-developer | PLAN-001 | Frontend implementation | none |
| 4 | TEST-001 | tester | IMPL-001 | Backend test-fix cycles | none |
| 5 | QA-FE-001 | fe-qa | DEV-FE-001 | Frontend QA | none |
| 6 | REVIEW-001 | reviewer | TEST-001, QA-FE-001 | Full code review | none |

IMPL-001 and DEV-FE-001 run in parallel (both depend only on PLAN-001).
TEST-001 and QA-FE-001 run in parallel (each depends on its respective implementation task).
REVIEW-001 is a sync barrier: it depends on both TEST-001 and QA-FE-001.

For full-lifecycle-fe: PLAN-001 `blocked_by` includes `QUALITY-001`.
### Step 3.7: Composite Mode Assembly

For composite modes, concatenate the pipelines and adjust cross-pipeline dependencies.

**full-lifecycle** = Spec (6) + Impl (4):

```
Spec pipeline tasks (Step 3.3)
+
Impl pipeline tasks (Step 3.4)
  with: PLAN-001.blocked_by = ["QUALITY-001"]
  with: QUALITY-001.is_checkpoint_after = true
```

**full-lifecycle-fe** = Spec (6) + Fullstack (6):

```
Spec pipeline tasks (Step 3.3)
+
Fullstack pipeline tasks (Step 3.6)
  with: PLAN-001.blocked_by = ["QUALITY-001"]
  with: QUALITY-001.is_checkpoint_after = true
```

### Step 3.8: Impl-Only Pre-Check

If mode is `impl-only`, verify that a specification exists:

```
state.spec_path is set?
+- YES -> read spec file -> include path in PLAN-001 description
+- NO  -> error: "impl-only requires existing spec"
```

Add to the PLAN-001 description: `Spec: <spec-path>`
### Step 3.9: Write Pipeline to State File

```javascript
// Read current state
const state = JSON.parse(Read("<session-dir>/team-session.json"))

// Set pipeline
state.pipeline = taskEntries // array of task entry objects from the steps above
state.tasks_total = taskEntries.length
state.updated_at = new Date().toISOString()

// Write back
Write("<session-dir>/team-session.json",
  JSON.stringify(state, null, 2))
```
### Step 3.10: Validation

Before proceeding, validate the constructed pipeline.

| Check | Criteria | On Failure |
|-------|----------|-----------|
| Task count | Matches expected count for mode | Error with mismatch details |
| Dependencies | Every blocked_by reference exists as a task ID in the pipeline | Error with dangling reference |
| No cycles | Topological sort succeeds (no circular dependencies) | Error with cycle details |
| Owner assignment | Each task owner matches a valid agent from the Agent Registry | Error with unknown agent |
| Unique IDs | No duplicate task IDs | Error with duplicate ID |
| Inline discuss | Spec tasks have the correct DISCUSS-NNN assignment per round config | Warning if mismatch |
| Session reference | Every task description contains `Session: <session-dir>` | Fix missing references |
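The uniqueness, dangling-reference, and cycle checks can be sketched with Kahn-style topological elimination: repeatedly remove tasks with no unmet dependencies; anything left over participates in a cycle. A minimal illustration (the `validatePipeline` helper is an assumption, not part of the skill):

```javascript
// Validate a pipeline: unique IDs, no dangling blocked_by references,
// and no dependency cycles.
function validatePipeline(pipeline) {
  const ids = new Set(pipeline.map(t => t.id))
  if (ids.size !== pipeline.length) throw new Error("Duplicate task ID")

  for (const t of pipeline)
    for (const dep of t.blocked_by)
      if (!ids.has(dep)) throw new Error(`Dangling reference: ${t.id} -> ${dep}`)

  // Kahn-style elimination: peel off tasks whose dependencies are satisfied.
  const remaining = new Map(pipeline.map(t => [t.id, new Set(t.blocked_by)]))
  let progressed = true
  while (remaining.size > 0 && progressed) {
    progressed = false
    for (const [id, deps] of remaining) {
      if (deps.size === 0) {
        remaining.delete(id)
        for (const other of remaining.values()) other.delete(id)
        progressed = true
      }
    }
  }
  if (remaining.size > 0)
    throw new Error(`Dependency cycle: ${[...remaining.keys()].join(", ")}`)
}
```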
### Step 3.11: Output Confirmation

```
[orchestrator] Phase 3: Task chain created
Mode: <mode>
Tasks: <count>
Pipeline:
  <task-id> (<owner>) [blocked_by: <deps>]
  <task-id> (<owner>) [blocked_by: <deps>]
  ...
First ready task(s): <task-ids with empty blocked_by>
```

---

## Output

| Output | Type | Destination |
|--------|------|-------------|
| state (updated) | Object | Written to team-session.json |
| state.pipeline | Array | Task chain with all entries |
| Ready task IDs | Array | Tasks with empty blocked_by (passed to Phase 4) |

---

## Success Criteria

- Pipeline array written to state file with correct task count
- All dependencies are valid (no dangling references, no cycles)
- Each task has owner, description, blocked_by, inline_discuss
- Composite modes have correct cross-pipeline dependencies
- Spec tasks have inline discuss metadata
- At least one task has empty blocked_by (pipeline can start)

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Unknown mode | Should not happen (Phase 1 validates); fail with supported mode list |
| Missing spec for impl-only | Error; suggest spec-only or full-lifecycle |
| Dependency cycle detected | Report cycle, halt pipeline creation |
| State file read/write error | Report error, suggest re-initialization |
| Duplicate task ID | Skip duplicate, log warning |

---

## Next Phase

Proceed to [Phase 4: Pipeline Coordination](04-pipeline-coordination.md) with the ready task IDs.
719 .codex/skills/team-lifecycle/phases/04-pipeline-coordination.md Normal file
@@ -0,0 +1,719 @@
# Phase 4: Pipeline Coordination

> **COMPACT PROTECTION**: This is an execution document. After context compression, phase instructions become summaries only. You MUST immediately re-read this file via `Read("~/.codex/skills/team-lifecycle/phases/04-pipeline-coordination.md")` before continuing. Never execute based on summaries.

## Objective

Execute the main spawn/wait/close coordination loop. This is the core phase where the orchestrator spawns agents for pipeline tasks, waits for results, processes outputs (including consensus severity routing), handles checkpoints, and advances the pipeline through fast-advance or sequential beats until all tasks complete.

---

## Input

| Input | Source | Required |
|-------|--------|----------|
| sessionDir | Phase 2/3 output | Yes |
| state | team-session.json (current) | Yes |
| state.pipeline | Task chain from Phase 3 | Yes |
| state.mode | Pipeline mode | Yes |
| state.execution | sequential or parallel | Yes |

---

## Constants

| Constant | Value | Description |
|----------|-------|-------------|
| SPEC_AGENT_TIMEOUT | 900000 (15 min) | Timeout for spec pipeline agents (analyst, writer, reviewer) |
| IMPL_AGENT_TIMEOUT | 1800000 (30 min) | Timeout for impl pipeline agents (executor, tester, planner) |
| CONVERGENCE_WAIT | 120000 (2 min) | Additional wait after sending a convergence request |
| MAX_RETRIES_PER_TASK | 3 | Maximum retries for a failing task before escalation |
| MAX_GC_ROUNDS | 2 | Maximum QA-FE fix-retest iterations |
| ORPHAN_THRESHOLD | 300000 (5 min) | Time before an orphaned in_progress task is reset |

---

## Execution Steps
### Step 4.1: Compute Ready Tasks

Read the state file and find all tasks that can be started.

```javascript
// Read current state
const state = JSON.parse(Read("<session-dir>/team-session.json"))

// Compute sets
const completedIds = state.pipeline
  .filter(t => t.status === "completed")
  .map(t => t.id)

const inProgressIds = state.pipeline
  .filter(t => t.status === "in_progress")
  .map(t => t.id)

const readyTasks = state.pipeline.filter(t =>
  t.status === "pending"
  && t.blocked_by.every(dep => completedIds.includes(dep))
)
```

**Decision table** (what to do with ready tasks):

| Ready Count | In-Progress Count | Action |
|-------------|-------------------|--------|
| 0 | >0 | Wait for running agents (enter wait loop) |
| 0 | 0 | Pipeline complete -> proceed to Phase 5 |
| 1 | 0 | Spawn single agent, enter wait loop |
| 1+ | 0 | Spawn all ready agents (parallel or sequential per config), enter wait loop |
| 1+ | >0 | Spawn ready agents alongside running agents, enter wait loop |
### Step 4.2: Checkpoint Check

Before spawning, check whether a checkpoint was just reached.

```javascript
// Check if the most recently completed task has is_checkpoint_after = true
const justCompleted = state.pipeline.filter(t => t.status === "completed")
const lastCompleted = justCompleted[justCompleted.length - 1]

if (lastCompleted && lastCompleted.is_checkpoint_after
    && !state.checkpoints_hit.includes(lastCompleted.id)) {
  // Checkpoint reached
  state.checkpoints_hit.push(lastCompleted.id)
  // Write state
  // Output checkpoint message and pause
}
```

**Checkpoint behavior**:

| Checkpoint Task | Message | Resume Condition |
|----------------|---------|-----------------|
| QUALITY-001 | "SPEC PHASE COMPLETE. Review spec artifacts before proceeding to implementation. Type 'resume' to continue." | User types 'resume' |
| DISCUSS-006 HIGH | "Final sign-off blocked with HIGH severity. Review divergences and decide. Type 'resume' to proceed or 'revise' to create a revision." | User command |

When paused at a checkpoint:
1. Update state: `status = "paused"`
2. Write the state file
3. Output the checkpoint message
4. Yield: the orchestrator stops and waits for the user

When the user resumes from a checkpoint:
1. Update state: `status = "active"`
2. Proceed to spawn ready tasks
### Step 4.3: Spawn Agents

For each ready task, spawn an agent using the agent spawn template.

```javascript
const spawnedAgents = []

for (const task of readyTasks) {
  // Determine timeout based on pipeline phase
  const timeout = isSpecTask(task.id) ? SPEC_AGENT_TIMEOUT : IMPL_AGENT_TIMEOUT

  // Build predecessor context
  const predecessorContext = task.blocked_by.map(depId => {
    const depTask = state.pipeline.find(t => t.id === depId)
    return `${depId}: ${depTask.artifact_path || "(no artifact)"}`
  }).join("\n")

  // Spawn agent
  const agentId = spawn_agent({
    message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/${task.owner}.md (MUST read first)
2. Read session state: ${sessionDir}/team-session.json
3. Read wisdom files: ${sessionDir}/wisdom/*.md (if exists)

---

## Session
Session directory: ${sessionDir}
Task ID: ${task.id}
Pipeline mode: ${state.mode}

## Scope
${state.scope}

## Task
${task.description}

## InlineDiscuss
${task.inline_discuss || "none"}

## Dependencies (completed predecessors)
${predecessorContext || "(none - this is the first task)"}

## Constraints
- Only process ${task.id} (prefix: ${task.id.split('-')[0]}-*)
- All output prefixed with [${task.owner}] tag
- Write artifacts to ${sessionDir}/${getArtifactSubdir(task)}
- Read wisdom files for cross-task knowledge before starting
- After completion, append discoveries to wisdom files
- If InlineDiscuss is set, spawn discuss subagent after primary artifact creation
  (Read ~/.codex/agents/discuss-agent.md for protocol)
- If codebase exploration is needed, spawn explore subagent
  (Read ~/.codex/agents/explore-agent.md for protocol)

## Completion Protocol
When work is complete, output EXACTLY:

TASK_COMPLETE:
- task_id: ${task.id}
- status: <success | failed | partial>
- artifact: <path-to-primary-artifact>
- discuss_verdict: <consensus_reached | consensus_blocked | none>
- discuss_severity: <HIGH | MEDIUM | LOW | none>
- summary: <one-line summary>
`
  })

  // Record in state
  task.status = "in_progress"
  task.agent_id = agentId
  task.started_at = new Date().toISOString()

  spawnedAgents.push({ agentId, taskId: task.id, owner: task.owner, timeout })

  state.active_agents.push({
    agent_id: agentId,
    task_id: task.id,
    owner: task.owner,
    spawned_at: new Date().toISOString()
  })
}

// Update state file
state.updated_at = new Date().toISOString()
Write("<session-dir>/team-session.json", JSON.stringify(state, null, 2))
```

**Agent-to-artifact-subdirectory mapping** (`getArtifactSubdir`):

| Agent / Task Prefix | Subdirectory |
|---------------------|-------------|
| analyst / RESEARCH-* | spec/ |
| writer / DRAFT-* | spec/ |
| reviewer / QUALITY-* | spec/ |
| planner / PLAN-* | plan/ |
| executor / IMPL-* | (project root - code changes) |
| tester / TEST-* | qa/ |
| reviewer / REVIEW-* | qa/ |
| architect / ARCH-* | architecture/ |
| fe-developer / DEV-FE-* | (project root - code changes) |
| fe-qa / QA-FE-* | qa/ |
### Step 4.4: Wait Loop

Enter the main wait loop: wait for all spawned agents, then process the results.

```javascript
// Collect all active agent IDs
const activeAgentIds = spawnedAgents.map(a => a.agentId)

// Determine timeout (use the maximum among spawned agents)
const maxTimeout = Math.max(...spawnedAgents.map(a => a.timeout))

// Wait for all agents
const results = wait({ ids: activeAgentIds, timeout_ms: maxTimeout })
```
### Step 4.5: Timeout Handling

If any agent timed out, send a convergence request and retry.

```javascript
if (results.timed_out) {
  for (const agent of spawnedAgents) {
    if (!results.status[agent.agentId]?.completed) {
      // Send convergence request
      send_input({
        id: agent.agentId,
        message: `
## TIMEOUT NOTIFICATION

Execution timeout reached for task ${agent.taskId}. Please:
1. Save all current progress to artifact files
2. Output TASK_COMPLETE with status: partial
3. Include summary of completed vs remaining work
`
      })
    }
  }

  // Wait an additional convergence period
  const convergenceResults = wait({
    ids: activeAgentIds.filter(id => !results.status[id]?.completed),
    timeout_ms: CONVERGENCE_WAIT
  })

  // Merge results
  // Agents still not complete after convergence -> force close
  for (const agent of spawnedAgents) {
    const agentResult = results.status[agent.agentId]?.completed
      || convergenceResults.status?.[agent.agentId]?.completed

    if (!agentResult) {
      // Force close and mark as failed
      close_agent({ id: agent.agentId })
      handleTaskFailure(agent.taskId, "timeout after convergence request")
    }
  }
}
```
### Step 4.6: Process Agent Results

For each completed agent, extract the TASK_COMPLETE data and update state.

```javascript
for (const agent of spawnedAgents) {
  const output = results.status[agent.agentId]?.completed
  if (!output) continue // handled in the timeout section

  // Parse TASK_COMPLETE from output
  const taskResult = parseTaskComplete(output)

  if (!taskResult) {
    // Malformed output - treat as partial success
    handleMalformedOutput(agent.taskId, output)
    continue
  }

  // Update task in state
  const task = state.pipeline.find(t => t.id === agent.taskId)
  task.status = taskResult.status === "failed" ? "failed" : "completed"
  task.artifact_path = taskResult.artifact
  task.discuss_verdict = taskResult.discuss_verdict
  task.discuss_severity = taskResult.discuss_severity
  task.completed_at = new Date().toISOString()

  // Remove from active agents
  state.active_agents = state.active_agents.filter(
    a => a.agent_id !== agent.agentId
  )

  // Add to completed
  if (task.status === "completed") {
    state.completed_tasks.push(task.id)
    state.tasks_completed++
  }

  // Close agent
  close_agent({ id: agent.agentId })

  // Route by consensus verdict
  if (taskResult.discuss_verdict === "consensus_blocked") {
    handleConsensusBlocked(task, taskResult)
  }
}

// Write updated state
state.updated_at = new Date().toISOString()
Write("<session-dir>/team-session.json", JSON.stringify(state, null, 2))
```
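`parseTaskComplete` is referenced but not defined in this document. A minimal sketch, under the assumption that agent output ends with the `TASK_COMPLETE:` block from the Completion Protocol in Step 4.3 (the regex-based extraction is illustrative, not a specified implementation):

```javascript
// Extract the trailing TASK_COMPLETE block from raw agent output.
// Returns null when no marker is present (malformed output).
function parseTaskComplete(output) {
  if (!/TASK_COMPLETE:/.test(output)) return null
  const block = output.slice(output.lastIndexOf("TASK_COMPLETE:"))
  const field = name => {
    const m = block.match(new RegExp(`- ${name}:\\s*(.+)`))
    return m ? m[1].trim() : null
  }
  return {
    task_id: field("task_id"),
    status: field("status"),
    artifact: field("artifact"),
    discuss_verdict: field("discuss_verdict"),
    discuss_severity: field("discuss_severity"),
    summary: field("summary"),
  }
}
```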
### Step 4.7: Consensus Severity Routing

Handle consensus_blocked results from agents with inline discuss.

```javascript
function handleConsensusBlocked(task, taskResult) {
  const severity = taskResult.discuss_severity

  switch (severity) {
    case "LOW":
      // Treat as consensus_reached with notes
      // No special action needed, proceed normally
      break

    case "MEDIUM":
      // Log warning to wisdom/issues.md
      appendToFile("<session-dir>/wisdom/issues.md",
        `\n## ${task.id} - Consensus Warning (MEDIUM)\n`
        + `Divergences: ${taskResult.divergences || "see discussion record"}\n`
        + `Action items: ${taskResult.action_items || "see discussion record"}\n`)
      // Proceed normally
      break

    case "HIGH": {
      // Check if this is DISCUSS-006 (final sign-off)
      if (task.inline_discuss === "DISCUSS-006") {
        // Always pause for user decision
        state.status = "paused"
        state.checkpoints_hit.push(`${task.id}-DISCUSS-006-HIGH`)
        // Output will include pause message
        return // Do not advance pipeline
      }

      // Check revision limit
      if (task.revision_count >= 1) {
        // Already revised once, escalate to user
        state.status = "paused"
        // Output escalation message
        return
      }

      // Create revision task
      const revisionTask = {
        id: `${task.id}-R1`,
        owner: task.owner,
        status: "pending",
        blocked_by: [],
        description: `Revision of ${task.id}: address consensus-blocked divergences.\n`
          + `Session: ${sessionDir}\n`
          + `Original artifact: ${task.artifact_path}\n`
          + `Divergences: ${taskResult.divergences || "see discussion record"}\n`
          + `Action items: ${taskResult.action_items || "see discussion record"}\n`
          + `InlineDiscuss: ${task.inline_discuss}`,
        inline_discuss: task.inline_discuss,
        agent_id: null,
        artifact_path: null,
        discuss_verdict: null,
        discuss_severity: null,
        started_at: null,
        completed_at: null,
        revision_of: task.id,
        revision_count: task.revision_count + 1,
        is_checkpoint_after: task.is_checkpoint_after
      }

      // Insert revision into pipeline (after the original task)
      const taskIndex = state.pipeline.findIndex(t => t.id === task.id)
      state.pipeline.splice(taskIndex + 1, 0, revisionTask)
      state.tasks_total++

      // Update dependent tasks: anything blocked_by task.id should now be blocked_by revision.id
      for (const t of state.pipeline) {
        const depIdx = t.blocked_by.indexOf(task.id)
        if (depIdx !== -1) {
          t.blocked_by[depIdx] = revisionTask.id
        }
      }

      // Track revision chain
      state.revision_chains[task.id] = revisionTask.id
      break
    }
  }
}
```
### Step 4.8: GC Loop Handling (Frontend QA)

When a QA-FE agent completes with verdict NEEDS_FIX:

```javascript
function handleGCLoop(qaTask, taskResult) {
  if (state.gc_loop_count >= MAX_GC_ROUNDS) {
    // Max rounds reached, stop loop
    // Output: "QA-FE max rounds reached. Latest report: <artifact-path>"
    return
  }

  state.gc_loop_count++
  const round = state.gc_loop_count + 1

  // Create DEV-FE-NNN (fix task)
  const devFeTask = {
    id: `DEV-FE-${String(round).padStart(3, '0')}`,
    owner: "fe-developer",
    status: "pending",
    blocked_by: [qaTask.id],
    description: `Frontend fix round ${round}: address QA findings.\n`
      + `Session: ${sessionDir}\n`
      + `QA Report: ${qaTask.artifact_path}\n`
      + `Scope: ${state.scope}`,
    inline_discuss: null,
    agent_id: null,
    artifact_path: null,
    discuss_verdict: null,
    discuss_severity: null,
    started_at: null,
    completed_at: null,
    revision_of: null,
    revision_count: 0,
    is_checkpoint_after: false
  }

  // Create QA-FE-NNN (retest task)
  const qaFeTask = {
    id: `QA-FE-${String(round).padStart(3, '0')}`,
    owner: "fe-qa",
    status: "pending",
    blocked_by: [devFeTask.id],
    description: `Frontend QA round ${round}: retest after fixes.\n`
      + `Session: ${sessionDir}\n`
      + `Scope: ${state.scope}`,
    inline_discuss: null,
    agent_id: null,
    artifact_path: null,
    discuss_verdict: null,
    discuss_severity: null,
    started_at: null,
    completed_at: null,
    revision_of: null,
    revision_count: 0,
    is_checkpoint_after: false
  }

  // Add to pipeline
  state.pipeline.push(devFeTask, qaFeTask)
  state.tasks_total += 2

  // Update downstream dependencies (e.g., REVIEW-001 blocked_by QA-FE-001 -> QA-FE-NNN)
  for (const t of state.pipeline) {
    const depIdx = t.blocked_by.indexOf(qaTask.id)
    if (depIdx !== -1 && t.id !== devFeTask.id) {
      t.blocked_by[depIdx] = qaFeTask.id
    }
  }
}
```
### Step 4.9: Fast-Advance Check

After processing all results, determine whether to fast-advance or yield.

```javascript
// Recompute ready tasks after processing
const newReadyTasks = state.pipeline.filter(t =>
  t.status === "pending"
  && t.blocked_by.every(dep =>
    state.pipeline.find(p => p.id === dep)?.status === "completed"
  )
)

const stillRunning = state.active_agents.length > 0
```

**Fast-advance decision table**:

| Ready Count | Still Running | Checkpoint Pending | Action |
|-------------|---------------|--------------------|--------|
| 0 | Yes | No | Yield, wait for running agents |
| 0 | No | No | Pipeline complete -> Phase 5 |
| 0 | No | Yes | Paused at checkpoint, yield |
| 1 | No | No | Fast-advance: spawn immediately, re-enter Step 4.3 |
| 1 | Yes | No | Spawn alongside running, re-enter wait loop |
| 2+ | Any | No | Spawn all ready (batch), re-enter wait loop |
| Any | Any | Yes | Paused at checkpoint, yield |

```javascript
if (state.status === "paused") {
  // Checkpoint or escalation - yield
  // Output pause message
  return
}

if (newReadyTasks.length === 0 && state.active_agents.length === 0) {
  // Pipeline complete
  // Proceed to Phase 5
  return "PIPELINE_COMPLETE"
}

if (newReadyTasks.length > 0) {
  // Fast-advance: loop back to Step 4.3 with new ready tasks
  readyTasks = newReadyTasks
  // Continue to Step 4.3 (spawn agents)
}

if (newReadyTasks.length === 0 && state.active_agents.length > 0) {
  // Wait for running agents
  // Re-enter Step 4.4 (wait loop)
}
```

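The decision table above can be captured as a small pure function, which makes the branching easy to unit-test. This is a sketch; `decideNextAction` and its return strings are illustrative names, not part of the orchestrator API:

```javascript
// Sketch of the fast-advance decision table as a pure function.
// Inputs mirror the table columns; the return value names the action.
function decideNextAction(readyCount, stillRunning, checkpointPending) {
  if (checkpointPending) return "paused-at-checkpoint"
  if (readyCount === 0 && stillRunning) return "yield-wait"
  if (readyCount === 0 && !stillRunning) return "pipeline-complete"
  // readyCount >= 1: spawn everything that is ready, then continue
  return stillRunning ? "spawn-and-wait" : "fast-advance"
}
```

Keeping the decision separate from spawning makes the "Any / Any / Yes" checkpoint row trivially dominant, matching the table's last line.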
### Step 4.10: Orphan Detection

When processing a resume command or between beats, check for orphaned tasks.

```javascript
function detectOrphans(state) {
  const now = Date.now()

  for (const task of state.pipeline) {
    if (task.status !== "in_progress") continue

    // Check if there is an active agent for this task
    const hasAgent = state.active_agents.some(a => a.task_id === task.id)

    if (!hasAgent) {
      // Task is in_progress but no agent is tracking it
      const elapsed = now - new Date(task.started_at).getTime()

      if (elapsed > ORPHAN_THRESHOLD) {
        // Orphaned task (likely fast-advance failure)
        task.status = "pending"
        task.agent_id = null
        task.started_at = null

        // Log to wisdom
        appendToFile("<session-dir>/wisdom/issues.md",
          `\n## Orphaned Task Reset: ${task.id}\n`
          + `Task was in_progress for ${Math.round(elapsed / 1000)}s with no active agent. Reset to pending.\n`)
      }
    }
  }
}
```

### Step 4.11: Status Output

After each beat cycle, output a status summary.

```
[orchestrator] Beat complete
Completed this beat: <task-ids>
Still running: <task-ids> (<agent-roles>)
Ready to spawn: <task-ids>
Progress: <completed>/<total> (<percent>%)
Next action: <spawning | waiting | checkpoint-paused | pipeline-complete>
```

### Step 4.12: User Command Handling

During Phase 4, the user may issue commands.

**`check` / `status`**:

```
Read state file -> output status graph (see orchestrator.md User Commands) -> yield
No pipeline advancement.
```

**`resume` / `continue`**:

```
Read state file
+- status === "paused"?
|  +- YES -> update status to "active" -> detectOrphans -> Step 4.1 (compute ready)
|  +- NO  -> detectOrphans -> Step 4.1 (compute ready)
```
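The command routing above can be sketched as a small dispatcher. This is illustrative only; `routeUserCommand` and its return shape are assumed names, not part of the orchestrator API:

```javascript
// Sketch: route a Phase 4 user command.
// "check"/"status" only report; "resume"/"continue" clear a pause and
// re-enter ready-computation (Step 4.1) after orphan detection.
function routeUserCommand(cmd, state) {
  switch (cmd) {
    case "check":
    case "status":
      return { action: "report-status", advance: false }
    case "resume":
    case "continue":
      if (state.status === "paused") state.status = "active"
      return { action: "detect-orphans-then-compute-ready", advance: true }
    default:
      return { action: "unknown-command", advance: false }
  }
}
```

Note that `resume` is safe to issue on an already-active session: the status update is conditional, and orphan detection runs either way.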
---
## Pipeline Execution Patterns

### Spec-only (6 linear beats)

```javascript
// Beat 1: RESEARCH-001 (analyst)
analyst = spawn_agent(analystPrompt)
result = wait([analyst]) -> close_agent(analyst)
// Process discuss result (DISCUSS-001)
// Fast-advance to beat 2

// Beat 2: DRAFT-001 (writer)
writer1 = spawn_agent(writerPromptForDRAFT001)
result = wait([writer1]) -> close_agent(writer1)
// Process discuss result (DISCUSS-002)
// Fast-advance to beat 3

// Beats 3-5: DRAFT-002, DRAFT-003, DRAFT-004 (same pattern)
// Each: spawn writer -> wait -> close -> process discuss -> fast-advance

// Beat 6: QUALITY-001 (reviewer)
reviewer = spawn_agent(reviewerPromptForQUALITY001)
result = wait([reviewer]) -> close_agent(reviewer)
// Process discuss result (DISCUSS-006)
// If full-lifecycle: CHECKPOINT -> pause for user
// If spec-only: PIPELINE_COMPLETE -> Phase 5
```

### Impl-only (3 beats with parallel window)

```javascript
// Beat 1: PLAN-001 (planner)
planner = spawn_agent(plannerPrompt)
result = wait([planner]) -> close_agent(planner)

// Beat 2: IMPL-001 (executor)
executor = spawn_agent(executorPrompt)
result = wait([executor]) -> close_agent(executor)

// Beat 3: TEST-001 || REVIEW-001 (parallel)
tester = spawn_agent(testerPrompt)
reviewer = spawn_agent(reviewerPrompt)
results = wait([tester, reviewer]) // batch wait
close_agent(tester)
close_agent(reviewer)
// PIPELINE_COMPLETE -> Phase 5
```

### Fullstack (4 beats with dual parallel)

```javascript
// Beat 1: PLAN-001
planner = spawn_agent(plannerPrompt)
result = wait([planner]) -> close_agent(planner)

// Beat 2: IMPL-001 || DEV-FE-001 (parallel)
executor = spawn_agent(executorPrompt)
feDev = spawn_agent(feDevPrompt)
results = wait([executor, feDev])
close_agent(executor)
close_agent(feDev)

// Beat 3: TEST-001 || QA-FE-001 (parallel)
tester = spawn_agent(testerPrompt)
feQa = spawn_agent(feQaPrompt)
results = wait([tester, feQa])
close_agent(tester)
close_agent(feQa)
// Handle GC loop if QA-FE verdict is NEEDS_FIX

// Beat 4: REVIEW-001 (sync barrier)
reviewer = spawn_agent(reviewerPrompt)
result = wait([reviewer]) -> close_agent(reviewer)
// PIPELINE_COMPLETE -> Phase 5
```
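All three patterns share one spawn -> wait -> close shape, so the beat itself can be factored into a helper. This is a sketch with the orchestrator primitives injected as plain functions so it stands alone; `runBeat`, the parameter names, and the result shape are assumptions, not the real primitive signatures:

```javascript
// Sketch: one "beat" = spawn every ready task, batch-wait, then close all.
// spawnAgent/waitAll/closeAgent stand in for the real orchestrator primitives.
function runBeat(prompts, { spawnAgent, waitAll, closeAgent }) {
  const agents = prompts.map(p => spawnAgent(p))  // parallel window
  const results = waitAll(agents)                 // batch wait (sync barrier)
  agents.forEach(a => closeAgent(a))              // always release agents
  return results
}
```

A single-task beat is just `runBeat([prompt], prims)`; the fullstack Beats 2 and 3 pass two prompts each.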
---
## Output

| Output | Type | Destination |
|--------|------|-------------|
| state (final) | Object | Updated in team-session.json throughout |
| Pipeline status | String | "PIPELINE_COMPLETE" or "PAUSED" |
| All task artifacts | Files | Written by agents to session subdirectories |

---

## Success Criteria

- All pipeline tasks reach "completed" status (or "partial" with user acknowledgment)
- State file reflects accurate pipeline status at all times
- Checkpoints are properly enforced (spec-to-impl transition)
- Consensus severity routing executed correctly
- GC loop respects maximum round limit
- No orphaned tasks at pipeline end
- All agents properly closed

---

## Error Handling

| Error | Detection | Resolution |
|-------|-----------|------------|
| Agent timeout | wait() returns timed_out | Send convergence via send_input, wait 2 min, force close |
| Agent crash | wait() returns error | Reset task to pending, respawn (max 3 retries) |
| 3+ retries on same task | retry_count in task state | Pause pipeline, report to user |
| Orphaned task | in_progress + no agent + elapsed > 5 min | Reset to pending, respawn |
| Malformed TASK_COMPLETE | parseTaskComplete returns null | Treat as partial, log warning |
| State file write conflict | Write error | Retry once, fail on second error |
| Pipeline stall | No ready + no running + has pending | Inspect blocked_by, report to user |
| DISCUSS-006 HIGH | Parsed from reviewer output | Always pause for user |
| Revision also blocked | Revision task returns HIGH | Pause, escalate to user |
| GC loop exceeded | gc_loop_count >= MAX_GC_ROUNDS | Stop loop, report QA state |

---

## Next Phase

When the pipeline completes (all tasks done, no paused checkpoint), proceed to [Phase 5: Completion Report](05-completion-report.md).
--- new file: .codex/skills/team-lifecycle/phases/05-completion-report.md (288 lines) ---
# Phase 5: Completion Report

> **COMPACT PROTECTION**: This is an execution document. After context compression, phase instructions become summaries only. You MUST immediately re-read this file via `Read("~/.codex/skills/team-lifecycle/phases/05-completion-report.md")` before continuing. Never execute based on summaries.

## Objective

Summarize pipeline results, list all deliverable artifacts, update session status to completed, close any remaining agents, and present the user with next-step options.

---

## Input

| Input | Source | Required |
|-------|--------|----------|
| sessionDir | Phase 2 output | Yes |
| state | team-session.json (final) | Yes |
| state.pipeline | All tasks with final status | Yes |
| state.mode | Pipeline mode | Yes |
| state.started_at | Session start timestamp | Yes |

---

## Execution Steps

### Step 5.1: Load Final State

```javascript
const state = JSON.parse(Read("<session-dir>/team-session.json"))
```

### Step 5.2: Agent Cleanup

Close any remaining active agents (defensive -- should be none at this point).

```javascript
for (const agentEntry of state.active_agents) {
  try {
    close_agent({ id: agentEntry.agent_id })
  } catch (e) {
    // Agent already closed, ignore
  }
}
state.active_agents = []
```

### Step 5.3: Compute Summary Statistics

```javascript
const totalTasks = state.pipeline.length
const completedTasks = state.pipeline.filter(t => t.status === "completed").length
const failedTasks = state.pipeline.filter(t => t.status === "failed").length
const partialTasks = state.pipeline.filter(t => t.status === "partial").length
const revisionTasks = state.pipeline.filter(t => t.revision_of !== null).length

const startTime = new Date(state.started_at)
const endTime = new Date()
const durationMs = endTime - startTime
const durationMin = Math.round(durationMs / 60000)

const successRate = totalTasks > 0
  ? Math.round((completedTasks / totalTasks) * 100)
  : 0
```

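The statistics above can be packaged as a self-contained function, which makes them testable against a sample pipeline. A sketch; `summarize` is an illustrative name, not part of the phase spec:

```javascript
// Sketch: compute the Step 5.3 summary from a pipeline array and timestamps.
function summarize(pipeline, startedAt, endedAt) {
  const count = status => pipeline.filter(t => t.status === status).length
  const total = pipeline.length
  const completed = count("completed")
  return {
    total,
    completed,
    failed: count("failed"),
    partial: count("partial"),
    revisions: pipeline.filter(t => t.revision_of !== null).length,
    durationMin: Math.round((new Date(endedAt) - new Date(startedAt)) / 60000),
    successRate: total > 0 ? Math.round((completed / total) * 100) : 0
  }
}
```

The `total > 0` guard matters: an empty pipeline should report 0%, not NaN.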
### Step 5.4: Collect Deliverable Artifacts

Scan completed tasks for artifact paths. Group by pipeline phase.

```javascript
const artifacts = {
  spec: [],
  plan: [],
  impl: [],
  test: [],
  review: [],
  discussions: [],
  qa: []
}

for (const task of state.pipeline) {
  if (task.status !== "completed" || !task.artifact_path) continue

  if (task.id.startsWith("RESEARCH") || task.id.startsWith("DRAFT") || task.id.startsWith("QUALITY")) {
    artifacts.spec.push({ task_id: task.id, path: task.artifact_path })
  } else if (task.id.startsWith("PLAN")) {
    artifacts.plan.push({ task_id: task.id, path: task.artifact_path })
  } else if (task.id.startsWith("IMPL") || task.id.startsWith("DEV-FE")) {
    artifacts.impl.push({ task_id: task.id, path: task.artifact_path })
  } else if (task.id.startsWith("TEST")) {
    artifacts.test.push({ task_id: task.id, path: task.artifact_path })
  } else if (task.id.startsWith("REVIEW")) {
    artifacts.review.push({ task_id: task.id, path: task.artifact_path })
  } else if (task.id.startsWith("QA-FE")) {
    artifacts.qa.push({ task_id: task.id, path: task.artifact_path })
  }
}

// Also collect discussion records
const discussionFiles = Glob("<session-dir>/discussions/*.md")
for (const df of discussionFiles) {
  artifacts.discussions.push({ path: df })
}
```

### Step 5.5: Collect Wisdom Summary

Read wisdom files to include key findings in the report.

```javascript
const wisdomSummary = {
  learnings: Read("<session-dir>/wisdom/learnings.md") || "(none)",
  decisions: Read("<session-dir>/wisdom/decisions.md") || "(none)",
  conventions: Read("<session-dir>/wisdom/conventions.md") || "(none)",
  issues: Read("<session-dir>/wisdom/issues.md") || "(none)"
}
```

### Step 5.6: Check for Consensus Warnings

Collect any consensus-blocked results across the pipeline.

```javascript
const consensusIssues = state.pipeline
  .filter(t => t.discuss_verdict === "consensus_blocked")
  .map(t => ({
    task_id: t.id,
    severity: t.discuss_severity,
    // revision_chains maps original task id -> revision task id,
    // so look up the blocked task's own id
    revision: state.revision_chains[t.id]
      ? `revised as ${state.revision_chains[t.id]}`
      : null
  }))
```

### Step 5.7: Update Session Status

```javascript
state.status = "completed"
state.updated_at = new Date().toISOString()
state.completed_at = new Date().toISOString()

Write("<session-dir>/team-session.json",
  JSON.stringify(state, null, 2))
```

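The prefix routing in Step 5.4 can be isolated into a single testable classifier. A sketch; `classifyTask` is an illustrative helper, not part of the phase spec:

```javascript
// Sketch: map a task id to its deliverable group by prefix, as in Step 5.4.
function classifyTask(id) {
  if (/^(RESEARCH|DRAFT|QUALITY)/.test(id)) return "spec"
  if (/^PLAN/.test(id)) return "plan"
  if (/^(IMPL|DEV-FE)/.test(id)) return "impl"
  if (/^TEST/.test(id)) return "test"
  if (/^REVIEW/.test(id)) return "review"
  if (/^QA-FE/.test(id)) return "qa"
  return null // unrecognized ids (e.g. DISCUSS-*) are skipped
}
```

Keeping the mapping in one place avoids the if/else chain drifting out of sync when a new task-id prefix is added.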
### Step 5.8: Output Completion Report

Format and display the final report to the user.

```
==============================================================
[orchestrator] PIPELINE COMPLETE
==============================================================

Session: <session-id>
Mode: <mode>
Duration: <duration-min> minutes
Progress: <completed>/<total> tasks (<success-rate>%)

--------------------------------------------------------------
TASK SUMMARY
--------------------------------------------------------------
| Task ID | Agent | Status | Discuss | Artifact |
|---------|-------|--------|---------|----------|
| RESEARCH-001 | analyst | V | DISCUSS-001: reached | spec/discovery-context.json |
| DRAFT-001 | writer | V | DISCUSS-002: reached | spec/product-brief.md |
| ... | ... | ... | ... | ... |

V=completed X=failed ~=partial R=revision

--------------------------------------------------------------
DELIVERABLES
--------------------------------------------------------------

Specification:
  <artifact-list from artifacts.spec>

Plan:
  <artifact-list from artifacts.plan>

Implementation:
  <artifact-list from artifacts.impl>

Testing:
  <artifact-list from artifacts.test>

Review:
  <artifact-list from artifacts.review>

Discussions:
  <discussion-file-list>

QA:
  <artifact-list from artifacts.qa>

--------------------------------------------------------------
CONSENSUS NOTES
--------------------------------------------------------------
<if consensusIssues.length > 0>
  <for each issue>
  <task-id>: consensus_blocked (<severity>) <revision-note>
  <end>
<else>
  All discussions reached consensus.
<end>

--------------------------------------------------------------
WISDOM HIGHLIGHTS
--------------------------------------------------------------
Key learnings: <summary from wisdom/learnings.md>
Key decisions: <summary from wisdom/decisions.md>
Issues flagged: <summary from wisdom/issues.md>

--------------------------------------------------------------
NEXT STEPS
--------------------------------------------------------------
Available actions:
  1. Exit - session complete
  2. View artifacts - read specific deliverable files
  3. Extend - add more tasks to this pipeline
  4. New session - start a fresh lifecycle
  5. Generate lite-plan - create a lightweight implementation plan from spec

Session directory: <session-dir>
==============================================================
```

### Step 5.9: Handle User Response

After presenting the report, wait for user input.

| User Choice | Action |
|-------------|--------|
| exit / done | Final cleanup, orchestrator stops |
| view `<artifact-path>` | Read and display the specified artifact |
| extend `<description>` | Re-enter Phase 1 with extend context, resume session |
| new `<description>` | Start fresh Phase 1 (new session) |
| lite-plan | Generate implementation plan from completed spec |

For "extend": the orchestrator reads the existing session, appends new requirements, and re-enters Phase 3 to create additional tasks appended to the existing pipeline.

For "view": simply read the requested file and display its contents, then re-present the next steps menu.

---
## Output

| Output | Type | Destination |
|--------|------|-------------|
| Completion report | Text | Displayed to user |
| Updated state | JSON | team-session.json with status="completed" |
| User choice | String | Determines post-pipeline action |

---

## Success Criteria

- All agents closed (no orphaned agents)
- State file updated to status="completed"
- All artifact paths verified (files exist)
- Completion report includes all task statuses
- Consensus issues documented
- Wisdom highlights extracted
- Next steps presented to user

---

## Error Handling

| Error | Resolution |
|-------|------------|
| State file read error | Attempt to reconstruct from available artifacts |
| Artifact file missing | Report as "(artifact missing)" in deliverables list |
| Agent close failure | Ignore (agent already closed) |
| Wisdom file empty | Report "(no entries)" for that category |
| User input not recognized | Re-present available options |

---

## Post-Pipeline

This is the final phase. The orchestrator either stops (exit) or loops back to an earlier phase based on user choice.

```
User choice routing:
  exit      -> orchestrator stops
  view      -> display file -> re-present Step 5.9
  extend    -> Phase 1 (with resume context) -> Phase 3 -> Phase 4 -> Phase 5
  new       -> Phase 1 (fresh) -> Phase 2 -> Phase 3 -> Phase 4 -> Phase 5
  lite-plan -> generate plan from spec artifacts -> present to user
```
--- new file: .codex/skills/team-lifecycle/specs/document-standards.md (192 lines) ---
# Document Standards

Defines format conventions, YAML frontmatter schema, naming rules, and content structure for all spec-generator outputs.

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| All Phases | Frontmatter format | YAML Frontmatter Schema |
| All Phases | File naming | Naming Conventions |
| Phase 2-5 | Document structure | Content Structure |
| Phase 6 | Validation reference | All sections |

---

## YAML Frontmatter Schema

Every generated document MUST begin with YAML frontmatter:

```yaml
---
session_id: SPEC-{slug}-{YYYY-MM-DD}
phase: {1-6}
document_type: {product-brief|requirements|architecture|epics|readiness-report|spec-summary}
status: draft|review|complete
generated_at: {ISO8601 timestamp}
stepsCompleted: []
version: 1
dependencies:
  - {list of input documents used}
---
```

### Field Definitions

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `session_id` | string | Yes | Session identifier matching spec-config.json |
| `phase` | number | Yes | Phase number that generated this document (1-6) |
| `document_type` | string | Yes | One of: product-brief, requirements, architecture, epics, readiness-report, spec-summary |
| `status` | enum | Yes | draft (initial), review (user reviewed), complete (finalized) |
| `generated_at` | string | Yes | ISO8601 timestamp of generation |
| `stepsCompleted` | array | Yes | List of step IDs completed during generation |
| `version` | number | Yes | Document version, incremented on re-generation |
| `dependencies` | array | No | List of input files this document depends on |

### Status Transitions

```
draft -> review -> complete
  |                   ^
  +-------------------+  (direct promotion in auto mode)
```

- **draft**: Initial generation, not yet user-reviewed
- **review**: User has reviewed and provided feedback
- **complete**: Finalized, ready for downstream consumption

In auto mode (`-y`), documents are promoted directly from `draft` to `complete`.
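A minimal sketch of enforcing the schema and the transition diagram above; `validateFrontmatter` and `canTransition` are illustrative helpers, not part of the standard:

```javascript
// Sketch: check required frontmatter fields and legal status transitions.
const REQUIRED = ["session_id", "phase", "document_type", "status",
                  "generated_at", "stepsCompleted", "version"]
const STATUSES = ["draft", "review", "complete"]

function validateFrontmatter(fm) {
  const missing = REQUIRED.filter(k => !(k in fm))
  const badStatus = !STATUSES.includes(fm.status)
  return { ok: missing.length === 0 && !badStatus, missing }
}

// draft -> review -> complete, plus draft -> complete in auto mode only
function canTransition(from, to, autoMode) {
  if (from === "draft" && to === "review") return true
  if (from === "review" && to === "complete") return true
  if (from === "draft" && to === "complete") return Boolean(autoMode)
  return false
}
```

The direct draft-to-complete edge is gated on `autoMode`, mirroring the `-y` promotion rule in the text.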
---
## Naming Conventions

### Session ID Format

```
SPEC-{slug}-{YYYY-MM-DD}
```

- **slug**: Lowercase, alphanumeric + Chinese characters, hyphens as separators, max 40 chars
- **date**: UTC+8 date in YYYY-MM-DD format

Examples:
- `SPEC-task-management-system-2026-02-11`
- `SPEC-user-auth-oauth-2026-02-11`

### Output Files

| File | Phase | Description |
|------|-------|-------------|
| `spec-config.json` | 1 | Session configuration and state |
| `discovery-context.json` | 1 | Codebase exploration results (optional) |
| `product-brief.md` | 2 | Product brief document |
| `requirements.md` | 3 | PRD document |
| `architecture.md` | 4 | Architecture decisions document |
| `epics.md` | 5 | Epic/Story breakdown document |
| `readiness-report.md` | 6 | Quality validation report |
| `spec-summary.md` | 6 | One-page executive summary |

### Output Directory

```
.workflow/.spec/{session-id}/
```
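The session-ID convention above can be sketched as a slug builder plus validator. These are illustrative helpers, and the "Chinese characters" class is simplified here to the CJK Unified Ideographs range:

```javascript
// Sketch: build and validate SPEC-{slug}-{YYYY-MM-DD} session ids.
function toSlug(title) {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fff]+/g, "-") // runs of disallowed chars -> hyphen
    .replace(/^-+|-+$/g, "")                  // trim edge hyphens
    .slice(0, 40)                             // max 40 chars
}

function isValidSessionId(id) {
  return /^SPEC-[a-z0-9\u4e00-\u9fff-]{1,40}-\d{4}-\d{2}-\d{2}$/.test(id)
}
```

Because the slug itself may contain digits and hyphens, the validator relies on regex backtracking to split the slug from the trailing date.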
---
## Content Structure

### Heading Hierarchy

- `#` (H1): Document title only (one per document)
- `##` (H2): Major sections
- `###` (H3): Subsections
- `####` (H4): Detail items (use sparingly)

Maximum depth: 4 levels. Prefer flat structures.

### Section Ordering

Every document follows this general pattern:

1. **YAML Frontmatter** (mandatory)
2. **Title** (H1)
3. **Executive Summary** (2-3 sentences)
4. **Core Content Sections** (H2, document-specific)
5. **Open Questions / Risks** (if applicable)
6. **References / Traceability** (links to upstream/downstream docs)

### Formatting Rules

| Element | Format | Example |
|---------|--------|---------|
| Requirements | `REQ-{NNN}` prefix | REQ-001: User login |
| Acceptance criteria | Checkbox list | `- [ ] User can log in with email` |
| Architecture decisions | `ADR-{NNN}` prefix | ADR-001: Use PostgreSQL |
| Epics | `EPIC-{NNN}` prefix | EPIC-001: Authentication |
| Stories | `STORY-{EPIC}-{NNN}` prefix | STORY-001-001: Login form |
| Priority tags | MoSCoW labels | `[Must]`, `[Should]`, `[Could]`, `[Won't]` |
| Mermaid diagrams | Fenced code blocks | `` ```mermaid ... ``` `` |
| Code examples | Language-tagged blocks | `` ```typescript ... ``` `` |

### Cross-Reference Format

Use relative references between documents:

```markdown
See [Product Brief](product-brief.md#section-name) for details.
Derived from [REQ-001](requirements.md#req-001).
```

### Language

- Document body: Follow user's input language (Chinese or English)
- Technical identifiers: Always English (REQ-001, ADR-001, EPIC-001)
- YAML frontmatter keys: Always English
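The hierarchy rules above (exactly one H1, max depth 4, no skipped levels) can be sketched as a small lint pass; `checkHeadings` is an illustrative helper, not part of the standard, and it ignores headings inside fenced code blocks only if the caller strips them first:

```javascript
// Sketch: lint markdown heading structure per the hierarchy rules.
function checkHeadings(markdown) {
  const errors = []
  let h1Count = 0
  let prevDepth = 0
  for (const line of markdown.split("\n")) {
    const m = /^(#{1,6})\s/.exec(line)
    if (!m) continue
    const depth = m[1].length
    if (depth === 1) h1Count++
    if (depth > 4) errors.push(`too deep: ${line.trim()}`)
    if (prevDepth > 0 && depth > prevDepth + 1) errors.push(`skipped level: ${line.trim()}`)
    prevDepth = depth
  }
  if (h1Count !== 1) errors.push(`expected exactly one H1, found ${h1Count}`)
  return errors
}
```

An empty result means the document satisfies the heading checklist items; each error string names the offending heading.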
---
## spec-config.json Schema

```json
{
  "session_id": "string (required)",
  "seed_input": "string (required) - original user input",
  "input_type": "text|file (required)",
  "timestamp": "ISO8601 (required)",
  "mode": "interactive|auto (required)",
  "complexity": "simple|moderate|complex (required)",
  "depth": "light|standard|comprehensive (required)",
  "focus_areas": ["string array"],
  "seed_analysis": {
    "problem_statement": "string",
    "target_users": ["string array"],
    "domain": "string",
    "constraints": ["string array"],
    "dimensions": ["string array - 3-5 exploration dimensions"]
  },
  "has_codebase": "boolean",
  "phasesCompleted": [
    {
      "phase": "number (1-6)",
      "name": "string (phase name)",
      "output_file": "string (primary output file)",
      "completed_at": "ISO8601"
    }
  ]
}
```

---

## Validation Checklist

- [ ] Every document starts with valid YAML frontmatter
- [ ] `session_id` matches across all documents in a session
- [ ] `status` field reflects current document state
- [ ] All cross-references resolve to valid targets
- [ ] Heading hierarchy is correct (no skipped levels)
- [ ] Technical identifiers use correct prefixes
- [ ] Output files are in the correct directory
--- new file: .codex/skills/team-lifecycle/specs/quality-gates.md (207 lines) ---
# Quality Gates

Per-phase quality gate criteria and scoring dimensions for spec-generator outputs.

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| Phase 2-5 | Post-generation self-check | Per-Phase Gates |
| Phase 6 | Cross-document validation | Cross-Document Validation |
| Phase 6 | Final scoring | Scoring Dimensions |

---

## Quality Thresholds

| Gate | Score | Action |
|------|-------|--------|
| **Pass** | >= 80% | Continue to next phase |
| **Review** | 60-79% | Log warnings, continue with caveats |
| **Fail** | < 60% | Must address issues before continuing |

In auto mode (`-y`), Review-level issues are logged but do not block progress.

---

## Scoring Dimensions

### 1. Completeness (25%)

All required sections present with substantive content.

| Score | Criteria |
|-------|----------|
| 100% | All template sections filled with detailed content |
| 75% | All sections present, some lack detail |
| 50% | Major sections present but minor sections missing |
| 25% | Multiple major sections missing or empty |
| 0% | Document is a skeleton only |

### 2. Consistency (25%)

Terminology, formatting, and references are uniform across documents.

| Score | Criteria |
|-------|----------|
| 100% | All terms consistent, all references valid, formatting uniform |
| 75% | Minor terminology variations, all references valid |
| 50% | Some inconsistent terms, 1-2 broken references |
| 25% | Frequent inconsistencies, multiple broken references |
| 0% | Documents contradict each other |

### 3. Traceability (25%)

Requirements, architecture decisions, and stories trace back to goals.

| Score | Criteria |
|-------|----------|
| 100% | Every story traces to a requirement, every requirement traces to a goal |
| 75% | Most items traceable, few orphans |
| 50% | Partial traceability, some disconnected items |
| 25% | Weak traceability, many orphan items |
| 0% | No traceability between documents |

### 4. Depth (25%)

Content provides sufficient detail for execution teams.

| Score | Criteria |
|-------|----------|
| 100% | Acceptance criteria specific and testable, architecture decisions justified, stories estimable |
| 75% | Most items detailed enough, few vague areas |
| 50% | Mix of detailed and vague content |
| 25% | Mostly high-level, lacking actionable detail |
| 0% | Too abstract for execution |
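The four equally weighted dimensions combine with the thresholds above as follows. A sketch; `scoreGate` is an illustrative helper, not part of the gate spec:

```javascript
// Sketch: weighted total of the four 25% dimensions, mapped to a gate verdict.
function scoreGate({ completeness, consistency, traceability, depth }) {
  // Each dimension is a 0-100 score; equal 25% weights per the tables above.
  const total = 0.25 * (completeness + consistency + traceability + depth)
  if (total >= 80) return { total, verdict: "pass" }
  if (total >= 60) return { total, verdict: "review" }
  return { total, verdict: "fail" }
}
```

Because the weights are equal, the verdict is just the arithmetic mean of the four dimension scores checked against the Pass/Review/Fail bands.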
---
## Per-Phase Quality Gates

### Phase 1: Discovery

| Check | Criteria | Severity |
|-------|----------|----------|
| Session ID valid | Matches `SPEC-{slug}-{date}` format | Error |
| Problem statement exists | Non-empty, >= 20 characters | Error |
| Target users identified | >= 1 user group | Error |
| Dimensions generated | 3-5 exploration dimensions | Warning |
| Constraints listed | >= 0 (can be empty with justification) | Info |

### Phase 2: Product Brief

| Check | Criteria | Severity |
|-------|----------|----------|
| Vision statement | Clear, 1-3 sentences | Error |
| Problem statement | Specific and measurable | Error |
| Target users | >= 1 persona with needs described | Error |
| Goals defined | >= 2 measurable goals | Error |
| Success metrics | >= 2 quantifiable metrics | Warning |
| Scope boundaries | In-scope and out-of-scope listed | Warning |
| Multi-perspective | >= 2 CLI perspectives synthesized | Info |

### Phase 3: Requirements (PRD)

| Check | Criteria | Severity |
|-------|----------|----------|
| Functional requirements | >= 3 with REQ-NNN IDs | Error |
| Acceptance criteria | Every requirement has >= 1 criterion | Error |
| MoSCoW priority | Every requirement tagged | Error |
| Non-functional requirements | >= 1 (performance, security, etc.) | Warning |
| User stories | >= 1 per Must-have requirement | Warning |
| Traceability | Requirements trace to product brief goals | Warning |

### Phase 4: Architecture

| Check | Criteria | Severity |
|-------|----------|----------|
| Component diagram | Present (Mermaid or ASCII) | Error |
| Tech stack specified | Languages, frameworks, key libraries | Error |
| ADR present | >= 1 Architecture Decision Record | Error |
| ADR has alternatives | Each ADR lists >= 2 options considered | Warning |
| Integration points | External systems/APIs identified | Warning |
| Data model | Key entities and relationships described | Warning |
| Codebase mapping | Mapped to existing code (if has_codebase) | Info |

### Phase 5: Epics & Stories
|
||||
| Check | Criteria | Severity |
|
||||
|-------|----------|----------|
|
||||
| Epics defined | 3-7 epics with EPIC-NNN IDs | Error |
|
||||
| MVP subset | >= 1 epic tagged as MVP | Error |
|
||||
| Stories per epic | 2-5 stories per epic | Error |
|
||||
| Story format | "As a...I want...So that..." pattern | Warning |
|
||||
| Dependency map | Cross-epic dependencies documented | Warning |
|
||||
| Estimation hints | Relative sizing (S/M/L/XL) per story | Info |
|
||||
| Traceability | Stories trace to requirements | Warning |
|
||||
|
||||
### Phase 6: Readiness Check
|
||||
|
||||
| Check | Criteria | Severity |
|
||||
|-------|----------|----------|
|
||||
| All documents exist | product-brief, requirements, architecture, epics | Error |
|
||||
| Frontmatter valid | All YAML frontmatter parseable and correct | Error |
|
||||
| Cross-references valid | All document links resolve | Error |
|
||||
| Overall score >= 60% | Weighted average across 4 dimensions | Error |
|
||||
| No unresolved Errors | All Error-severity issues addressed | Error |
|
||||
| Summary generated | spec-summary.md created | Warning |
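A readiness verdict follows directly from the severity column: only failed Error-severity checks block the gate, while Warnings and Infos are reported but non-blocking. A minimal sketch under that assumption (these names are illustrative, not the workflow's actual API):

```typescript
// A gate check reports a severity and whether it passed;
// the session is blocked only by unresolved Error-severity failures.
type Severity = 'Error' | 'Warning' | 'Info';

interface GateResult {
  check: string;
  severity: Severity;
  passed: boolean;
}

function readinessVerdict(results: GateResult[]): 'ready' | 'blocked' {
  const blocking = results.filter(r => r.severity === 'Error' && !r.passed);
  return blocking.length === 0 ? 'ready' : 'blocked';
}

const verdict = readinessVerdict([
  { check: 'All documents exist', severity: 'Error', passed: true },
  { check: 'Frontmatter valid', severity: 'Error', passed: true },
  { check: 'Summary generated', severity: 'Warning', passed: false }, // non-blocking
]);
// the failed Warning does not block readiness
```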
---

## Cross-Document Validation

Checks performed during Phase 6 across all documents:

### Completeness Matrix

```
Product Brief goals -> Requirements (each goal has >= 1 requirement)
Requirements -> Architecture (each Must requirement has design coverage)
Requirements -> Epics (each Must requirement appears in >= 1 story)
Architecture ADRs -> Epics (tech choices reflected in implementation stories)
```
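Each row of the matrix is a coverage check over trace links. The first rule, sketched below with hypothetical types (the real checker's shape is not specified here), finds goals with no requirement tracing to them:

```typescript
// Every product-brief goal must be covered by at least one
// requirement whose traces_to list names that goal.
interface Requirement {
  id: string;         // e.g. "REQ-001"
  tracesTo: string[]; // goal IDs, e.g. ["G-001"]
}

function uncoveredGoals(goalIds: string[], requirements: Requirement[]): string[] {
  const covered = new Set(requirements.flatMap(r => r.tracesTo));
  return goalIds.filter(g => !covered.has(g));
}

const gaps = uncoveredGoals(
  ['G-001', 'G-002'],
  [{ id: 'REQ-001', tracesTo: ['G-001'] }]
);
// gaps contains 'G-002', which Phase 6 would report as an Error
```

The other three rules are the same pattern with different source and target ID sets (requirements to ADRs, requirements to stories, ADRs to epics).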
### Consistency Checks

| Check | Documents | Rule |
|-------|-----------|------|
| Terminology | All | Same term used consistently (no synonyms for same concept) |
| User personas | Brief + PRD + Epics | Same user names/roles throughout |
| Scope | Brief + PRD | PRD scope does not exceed brief scope |
| Tech stack | Architecture + Epics | Stories reference correct technologies |

### Traceability Matrix Format

```markdown
| Goal | Requirements | Architecture | Epics |
|------|-------------|--------------|-------|
| G-001: ... | REQ-001, REQ-002 | ADR-001 | EPIC-001 |
| G-002: ... | REQ-003 | ADR-002 | EPIC-002, EPIC-003 |
```

---

## Issue Classification

### Error (Must Fix)

- Missing required document or section
- Broken cross-references
- Contradictory information between documents
- Empty acceptance criteria on Must-have requirements
- No MVP subset defined in epics

### Warning (Should Fix)

- Vague acceptance criteria
- Missing non-functional requirements
- No success metrics defined
- Incomplete traceability
- Missing architecture review notes

### Info (Nice to Have)

- Could add more detailed personas
- Consider additional ADR alternatives
- Story estimation hints missing
- Mermaid diagrams could be more detailed
254  .codex/skills/team-lifecycle/templates/architecture-doc.md  Normal file

@@ -0,0 +1,254 @@
# Architecture Document Template (Directory Structure)

Template for generating architecture decision documents as a directory of individual ADR files in Phase 4.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 4 (Architecture) | Generate `architecture/` directory from requirements analysis |
| Output Location | `{workDir}/architecture/` |

## Output Structure

```
{workDir}/architecture/
├── _index.md             # Overview, components, tech stack, data model, security
├── ADR-001-{slug}.md     # Individual Architecture Decision Record
├── ADR-002-{slug}.md
└── ...
```

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 4
document_type: architecture-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
  - ../requirements/_index.md
---

# Architecture: {product_name}

{executive_summary - high-level architecture approach and key decisions}

## System Overview

### Architecture Style
{description of chosen architecture style: microservices, monolith, serverless, etc.}

### System Context Diagram

```mermaid
C4Context
    title System Context Diagram
    Person(user, "User", "Primary user")
    System(system, "{product_name}", "Core system")
    System_Ext(ext1, "{external_system}", "{description}")
    Rel(user, system, "Uses")
    Rel(system, ext1, "Integrates with")
```

## Component Architecture

### Component Diagram

```mermaid
graph TD
    subgraph "{product_name}"
        A[Component A] --> B[Component B]
        B --> C[Component C]
        A --> D[Component D]
    end
    B --> E[External Service]
```

### Component Descriptions

| Component | Responsibility | Technology | Dependencies |
|-----------|---------------|------------|--------------|
| {component_name} | {what it does} | {tech stack} | {depends on} |

## Technology Stack

### Core Technologies

| Layer | Technology | Version | Rationale |
|-------|-----------|---------|-----------|
| Frontend | {technology} | {version} | {why chosen} |
| Backend | {technology} | {version} | {why chosen} |
| Database | {technology} | {version} | {why chosen} |
| Infrastructure | {technology} | {version} | {why chosen} |

### Key Libraries & Frameworks

| Library | Purpose | License |
|---------|---------|---------|
| {library_name} | {purpose} | {license} |

## Architecture Decision Records

| ADR | Title | Status | Key Choice |
|-----|-------|--------|------------|
| [ADR-001](ADR-001-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-002](ADR-002-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-003](ADR-003-{slug}.md) | {title} | Proposed | {one-line summary} |

## Data Architecture

### Data Model

```mermaid
erDiagram
    ENTITY_A ||--o{ ENTITY_B : "has many"
    ENTITY_A {
        string id PK
        string name
        datetime created_at
    }
    ENTITY_B {
        string id PK
        string entity_a_id FK
        string value
    }
```

### Data Storage Strategy

| Data Type | Storage | Retention | Backup |
|-----------|---------|-----------|--------|
| {type} | {storage solution} | {retention policy} | {backup strategy} |

## API Design

### API Overview

| Endpoint | Method | Purpose | Auth |
|----------|--------|---------|------|
| {/api/resource} | {GET/POST/etc} | {purpose} | {auth type} |

## Security Architecture

### Security Controls

| Control | Implementation | Requirement |
|---------|---------------|-------------|
| Authentication | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Authorization | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Data Protection | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |

## Infrastructure & Deployment

### Deployment Architecture

{description of deployment model: containers, serverless, VMs, etc.}

### Environment Strategy

| Environment | Purpose | Configuration |
|-------------|---------|---------------|
| Development | Local development | {config} |
| Staging | Pre-production testing | {config} |
| Production | Live system | {config} |

## Codebase Integration

{if has_codebase is true:}

### Existing Code Mapping

| New Component | Existing Module | Integration Type | Notes |
|--------------|----------------|------------------|-------|
| {component} | {existing module path} | Extend/Replace/New | {notes} |

### Migration Notes
{any migration considerations for existing code}

## Quality Attributes

| Attribute | Target | Measurement | ADR Reference |
|-----------|--------|-------------|---------------|
| Performance | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Scalability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Reliability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |

## Risks & Mitigations

| Risk | Impact | Probability | Mitigation |
|------|--------|-------------|------------|
| {risk} | High/Medium/Low | High/Medium/Low | {mitigation approach} |

## Open Questions

- [ ] {architectural question 1}
- [ ] {architectural question 2}

## References

- Derived from: [Requirements](../requirements/_index.md), [Product Brief](../product-brief.md)
- Next: [Epics & Stories](../epics/_index.md)
```

---

## Template: ADR-NNN-{slug}.md (Individual Architecture Decision Record)

```markdown
---
id: ADR-{NNN}
status: Accepted
traces_to: [{REQ-NNN}, {NFR-X-NNN}]
date: {timestamp}
---

# ADR-{NNN}: {decision_title}

## Context

{what is the situation that motivates this decision}

## Decision

{what is the chosen approach}

## Alternatives Considered

| Option | Pros | Cons |
|--------|------|------|
| {option_1 - chosen} | {pros} | {cons} |
| {option_2} | {pros} | {cons} |
| {option_3} | {pros} | {cons} |

## Consequences

- **Positive**: {positive outcomes}
- **Negative**: {tradeoffs accepted}
- **Risks**: {risks to monitor}

## Traces

- **Requirements**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md), [NFR-X-{NNN}](../requirements/NFR-X-{NNN}-{slug}.md)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | ADR/requirement number |
| `{slug}` | Auto-generated | Kebab-case from decision title |
| `{has_codebase}` | spec-config.json | Whether existing codebase exists |
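The `{slug}` and `{NNN}` derivations described above can be sketched in a few lines. The function names here are hypothetical, not taken from the workflow's code:

```typescript
// Kebab-case slug from a decision title, plus a zero-padded counter,
// matching the ADR-NNN-{slug}.md filename convention.
function toSlug(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse non-alphanumerics into single dashes
    .replace(/^-+|-+$/g, '');    // trim leading/trailing dashes
}

function adrFilename(n: number, title: string): string {
  return `ADR-${String(n).padStart(3, '0')}-${toSlug(title)}.md`;
}

// adrFilename(1, 'Use PostgreSQL for persistence')
//   -> "ADR-001-use-postgresql-for-persistence.md"
```

The same derivation applies to REQ, NFR, and EPIC filenames, which differ only in prefix.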
196  .codex/skills/team-lifecycle/templates/epics-template.md  Normal file

@@ -0,0 +1,196 @@
# Epics & Stories Template (Directory Structure)

Template for generating epic/story breakdown as a directory of individual Epic files in Phase 5.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 5 (Epics & Stories) | Generate `epics/` directory from requirements decomposition |
| Output Location | `{workDir}/epics/` |

## Output Structure

```
{workDir}/epics/
├── _index.md             # Overview table + dependency map + MVP scope + execution order
├── EPIC-001-{slug}.md    # Individual Epic with its Stories
├── EPIC-002-{slug}.md
└── ...
```

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 5
document_type: epics-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
  - ../requirements/_index.md
  - ../architecture/_index.md
---

# Epics & Stories: {product_name}

{executive_summary - overview of epic structure and MVP scope}

## Epic Overview

| Epic ID | Title | Priority | MVP | Stories | Est. Size |
|---------|-------|----------|-----|---------|-----------|
| [EPIC-001](EPIC-001-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-002](EPIC-002-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-003](EPIC-003-{slug}.md) | {title} | Should | No | {n} | {S/M/L/XL} |

## Dependency Map

```mermaid
graph LR
    EPIC-001 --> EPIC-002
    EPIC-001 --> EPIC-003
    EPIC-002 --> EPIC-004
    EPIC-003 --> EPIC-005
```

### Dependency Notes
{explanation of why these dependencies exist and suggested execution order}

### Recommended Execution Order
1. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - foundational}
2. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - depends on #1}
3. ...

## MVP Scope

### MVP Epics
{list of epics included in MVP with justification, linking to each}

### MVP Definition of Done
- [ ] {MVP completion criterion 1}
- [ ] {MVP completion criterion 2}
- [ ] {MVP completion criterion 3}

## Traceability Matrix

| Requirement | Epic | Stories | Architecture |
|-------------|------|---------|--------------|
| [REQ-001](../requirements/REQ-001-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-001, STORY-001-002 | [ADR-001](../architecture/ADR-001-{slug}.md) |
| [REQ-002](../requirements/REQ-002-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-003 | Component B |
| [REQ-003](../requirements/REQ-003-{slug}.md) | [EPIC-002](EPIC-002-{slug}.md) | STORY-002-001 | [ADR-002](../architecture/ADR-002-{slug}.md) |

## Estimation Summary

| Size | Meaning | Count |
|------|---------|-------|
| S | Small - well-understood, minimal risk | {n} |
| M | Medium - some complexity, moderate risk | {n} |
| L | Large - significant complexity, should consider splitting | {n} |
| XL | Extra Large - high complexity, must split before implementation | {n} |

## Risks & Considerations

| Risk | Affected Epics | Mitigation |
|------|---------------|------------|
| {risk description} | [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) | {mitigation} |

## Open Questions

- [ ] {question about scope or implementation 1}
- [ ] {question about scope or implementation 2}

## References

- Derived from: [Requirements](../requirements/_index.md), [Architecture](../architecture/_index.md)
- Handoff to: execution workflows (lite-plan, plan, req-plan)
```

---

## Template: EPIC-NNN-{slug}.md (Individual Epic)

```markdown
---
id: EPIC-{NNN}
priority: {Must|Should|Could}
mvp: {true|false}
size: {S|M|L|XL}
requirements: [REQ-{NNN}]
architecture: [ADR-{NNN}]
dependencies: [EPIC-{NNN}]
status: draft
---

# EPIC-{NNN}: {epic_title}

**Priority**: {Must|Should|Could}
**MVP**: {Yes|No}
**Estimated Size**: {S|M|L|XL}

## Description

{detailed epic description}

## Requirements

- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}

## Architecture

- [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md): {title}
- Component: {component_name}

## Dependencies

- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (blocking): {reason}
- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (soft): {reason}

## Stories

### STORY-{EPIC}-001: {story_title}

**User Story**: As a {persona}, I want to {action} so that {benefit}.

**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}
- [ ] {criterion 3}

**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)

---

### STORY-{EPIC}-002: {story_title}

**User Story**: As a {persona}, I want to {action} so that {benefit}.

**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}

**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{EPIC}` | Auto-increment | Epic number (3 digits) |
| `{NNN}` | Auto-increment | Story/requirement number |
| `{slug}` | Auto-generated | Kebab-case from epic/story title |
| `{S\|M\|L\|XL}` | CLI analysis | Relative size estimate |
133  .codex/skills/team-lifecycle/templates/product-brief.md  Normal file

@@ -0,0 +1,133 @@
# Product Brief Template

Template for generating product brief documents in Phase 2.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 2 (Product Brief) | Generate product-brief.md from multi-CLI analysis |
| Output Location | `{workDir}/product-brief.md` |

---

## Template

```markdown
---
session_id: {session_id}
phase: 2
document_type: product-brief
status: draft
generated_at: {timestamp}
stepsCompleted: []
version: 1
dependencies:
  - spec-config.json
---

# Product Brief: {product_name}

{executive_summary - 2-3 sentences capturing the essence of the product/feature}

## Vision

{vision_statement - clear, aspirational 1-3 sentence statement of what success looks like}

## Problem Statement

### Current Situation
{description of the current state and pain points}

### Impact
{quantified impact of the problem - who is affected, how much, how often}

## Target Users

{for each user persona:}

### {Persona Name}
- **Role**: {user's role/context}
- **Needs**: {primary needs related to this product}
- **Pain Points**: {current frustrations}
- **Success Criteria**: {what success looks like for this user}

## Goals & Success Metrics

| Goal ID | Goal | Success Metric | Target |
|---------|------|----------------|--------|
| G-001 | {goal description} | {measurable metric} | {specific target} |
| G-002 | {goal description} | {measurable metric} | {specific target} |

## Scope

### In Scope
- {feature/capability 1}
- {feature/capability 2}
- {feature/capability 3}

### Out of Scope
- {explicitly excluded item 1}
- {explicitly excluded item 2}

### Assumptions
- {key assumption 1}
- {key assumption 2}

## Competitive Landscape

| Aspect | Current State | Proposed Solution | Advantage |
|--------|--------------|-------------------|-----------|
| {aspect} | {how it's done now} | {our approach} | {differentiator} |

## Constraints & Dependencies

### Technical Constraints
- {constraint 1}
- {constraint 2}

### Business Constraints
- {constraint 1}

### Dependencies
- {external dependency 1}
- {external dependency 2}

## Multi-Perspective Synthesis

### Product Perspective
{summary of product/market analysis findings}

### Technical Perspective
{summary of technical feasibility and constraints}

### User Perspective
{summary of user journey and UX considerations}

### Convergent Themes
{themes where all perspectives agree}

### Conflicting Views
{areas where perspectives differ, with notes on resolution approach}

## Open Questions

- [ ] {unresolved question 1}
- [ ] {unresolved question 2}

## References

- Derived from: [spec-config.json](spec-config.json)
- Next: [Requirements PRD](requirements.md)
```

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | Seed analysis | Product/feature name |
| `{executive_summary}` | CLI synthesis | 2-3 sentence summary |
| `{vision_statement}` | CLI product perspective | Aspirational vision |
| All `{...}` fields | CLI analysis outputs | Filled from multi-perspective analysis |
224  .codex/skills/team-lifecycle/templates/requirements-prd.md  Normal file

@@ -0,0 +1,224 @@
# Requirements PRD Template (Directory Structure)

Template for generating a Product Requirements Document as a directory of individual requirement files in Phase 3.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 3 (Requirements) | Generate `requirements/` directory from product brief expansion |
| Output Location | `{workDir}/requirements/` |

## Output Structure

```
{workDir}/requirements/
├── _index.md             # Summary + MoSCoW table + traceability matrix + links
├── REQ-001-{slug}.md     # Individual functional requirement
├── REQ-002-{slug}.md
├── NFR-P-001-{slug}.md   # Non-functional: Performance
├── NFR-S-001-{slug}.md   # Non-functional: Security
├── NFR-SC-001-{slug}.md  # Non-functional: Scalability
├── NFR-U-001-{slug}.md   # Non-functional: Usability
└── ...
```

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 3
document_type: requirements-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
---

# Requirements: {product_name}

{executive_summary - brief overview of what this PRD covers and key decisions}

## Requirement Summary

| Priority | Count | Coverage |
|----------|-------|----------|
| Must Have | {n} | {description of must-have scope} |
| Should Have | {n} | {description of should-have scope} |
| Could Have | {n} | {description of could-have scope} |
| Won't Have | {n} | {description of explicitly excluded} |

## Functional Requirements

| ID | Title | Priority | Traces To |
|----|-------|----------|-----------|
| [REQ-001](REQ-001-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-002](REQ-002-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-003](REQ-003-{slug}.md) | {title} | Should | [G-002](../product-brief.md#goals--success-metrics) |

## Non-Functional Requirements

### Performance

| ID | Title | Target |
|----|-------|--------|
| [NFR-P-001](NFR-P-001-{slug}.md) | {title} | {target value} |

### Security

| ID | Title | Standard |
|----|-------|----------|
| [NFR-S-001](NFR-S-001-{slug}.md) | {title} | {standard/framework} |

### Scalability

| ID | Title | Target |
|----|-------|--------|
| [NFR-SC-001](NFR-SC-001-{slug}.md) | {title} | {target value} |

### Usability

| ID | Title | Target |
|----|-------|--------|
| [NFR-U-001](NFR-U-001-{slug}.md) | {title} | {target value} |

## Data Requirements

### Data Entities

| Entity | Description | Key Attributes |
|--------|-------------|----------------|
| {entity_name} | {description} | {attr1, attr2, attr3} |

### Data Flows

{description of key data flows, optionally with Mermaid diagram}

## Integration Requirements

| System | Direction | Protocol | Data Format | Notes |
|--------|-----------|----------|-------------|-------|
| {system_name} | Inbound/Outbound/Both | {REST/gRPC/etc} | {JSON/XML/etc} | {notes} |

## Constraints & Assumptions

### Constraints
- {technical or business constraint 1}
- {technical or business constraint 2}

### Assumptions
- {assumption 1 - must be validated}
- {assumption 2 - must be validated}

## Priority Rationale

{explanation of MoSCoW prioritization decisions, especially for Should/Could boundaries}

## Traceability Matrix

| Goal | Requirements |
|------|-------------|
| G-001 | [REQ-001](REQ-001-{slug}.md), [REQ-002](REQ-002-{slug}.md), [NFR-P-001](NFR-P-001-{slug}.md) |
| G-002 | [REQ-003](REQ-003-{slug}.md), [NFR-S-001](NFR-S-001-{slug}.md) |

## Open Questions

- [ ] {unresolved question 1}
- [ ] {unresolved question 2}

## References

- Derived from: [Product Brief](../product-brief.md)
- Next: [Architecture](../architecture/_index.md)
```

---

## Template: REQ-NNN-{slug}.md (Individual Functional Requirement)

```markdown
---
id: REQ-{NNN}
type: functional
priority: {Must|Should|Could|Won't}
traces_to: [G-{NNN}]
status: draft
---

# REQ-{NNN}: {requirement_title}

**Priority**: {Must|Should|Could|Won't}

## Description

{detailed requirement description}

## User Story

As a {persona}, I want to {action} so that {benefit}.

## Acceptance Criteria

- [ ] {specific, testable criterion 1}
- [ ] {specific, testable criterion 2}
- [ ] {specific, testable criterion 3}

## Traces

- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```

---

## Template: NFR-{type}-NNN-{slug}.md (Individual Non-Functional Requirement)

```markdown
---
id: NFR-{type}-{NNN}
type: non-functional
category: {Performance|Security|Scalability|Usability}
priority: {Must|Should|Could}
status: draft
---

# NFR-{type}-{NNN}: {requirement_title}

**Category**: {Performance|Security|Scalability|Usability}
**Priority**: {Must|Should|Could}

## Requirement

{detailed requirement description}

## Metric & Target

| Metric | Target | Measurement Method |
|--------|--------|--------------------|
| {metric} | {target value} | {how measured} |

## Traces

- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | Requirement number (zero-padded 3 digits) |
| `{slug}` | Auto-generated | Kebab-case from requirement title |
| `{type}` | Category | P (Performance), S (Security), SC (Scalability), U (Usability) |
| `{Must\|Should\|Could\|Won't}` | User input / auto | MoSCoW priority tag |
@@ -191,6 +191,41 @@ export const HOOK_TEMPLATES: readonly HookTemplate[] = [
    trigger: 'SessionStart',
    command: 'ccw',
    args: ['hook', 'project-state', '--stdin']
  },
  // --- Memory V2 ---
  {
    id: 'memory-v2-extract',
    name: 'Memory V2 Extract',
    description: 'Trigger Phase 1 extraction when session ends (after idle period)',
    category: 'indexing',
    trigger: 'Stop',
    command: 'ccw',
    args: ['core-memory', 'extract', '--max-sessions', '10']
  },
  {
    id: 'memory-v2-auto-consolidate',
    name: 'Memory V2 Auto Consolidate',
    description: 'Trigger Phase 2 consolidation after extraction jobs complete',
    category: 'indexing',
    trigger: 'Stop',
    command: 'node',
    args: [
      '-e',
      // Runs extraction in JSON mode; when at least 5 stage-1 memories exist, triggers consolidation.
      'const cp=require("child_process");const r=cp.spawnSync("ccw",["core-memory","extract","--json"],{encoding:"utf8",shell:true});try{const d=JSON.parse(r.stdout);if(d&&d.total_stage1>=5){cp.spawnSync("ccw",["core-memory","consolidate"],{stdio:"inherit",shell:true})}}catch(e){}'
    ]
  },
  {
    id: 'memory-sync-dashboard',
    name: 'Memory Sync Dashboard',
    description: 'Sync memory V2 status to dashboard on changes',
    category: 'notification',
    trigger: 'PostToolUse',
    matcher: 'core_memory',
    command: 'node',
    args: [
      '-e',
      // POSTs a MEMORY_V2_STATUS_UPDATED event to the local dashboard hook endpoint.
      'const cp=require("child_process");const payload=JSON.stringify({type:"MEMORY_V2_STATUS_UPDATED",project:process.env.CLAUDE_PROJECT_DIR||process.cwd(),timestamp:Date.now()});cp.spawnSync("curl",["-s","-X","POST","-H","Content-Type: application/json","-d",payload,"http://localhost:3456/api/hook"],{stdio:"inherit",shell:true})'
    ]
  }
] as const;
||||
|
||||
|
||||
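The `memory-v2-auto-consolidate` hook above packs its gating logic into an inline `node -e` script. The decision it makes can be sketched as standalone functions; `parseExtractionOutput` and `shouldConsolidate` are illustrative names, not CCW APIs:

```typescript
// Sketch of the auto-consolidate gate embedded in the inline hook script.
interface ExtractionSummary {
  total_stage1: number;
}

// Parse the JSON emitted by `ccw core-memory extract --json`; swallow bad
// output, mirroring the inline script's empty catch block.
function parseExtractionOutput(stdout: string): ExtractionSummary | null {
  try {
    const d = JSON.parse(stdout);
    return d && typeof d.total_stage1 === 'number' ? d : null;
  } catch {
    return null;
  }
}

// Consolidation only fires once at least 5 stage-1 extractions have accumulated.
function shouldConsolidate(summary: ExtractionSummary | null): boolean {
  return summary !== null && summary.total_stage1 >= 5;
}

console.log(shouldConsolidate(parseExtractionOutput('{"total_stage1": 7}'))); // true
console.log(shouldConsolidate(parseExtractionOutput('not json')));            // false
```

Note the silent failure mode: if `extract --json` emits malformed output, consolidation is simply skipped rather than surfaced as an error.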
ccw/frontend/src/components/memory/V2PipelineTab.tsx (new file, 351 lines)
@@ -0,0 +1,351 @@
// ========================================
// V2PipelineTab Component
// ========================================
// Memory V2 Pipeline management UI

import { useState } from 'react';
import { useIntl } from 'react-intl';
import {
  Zap,
  CheckCircle,
  Clock,
  AlertCircle,
  Play,
  Eye,
  Loader2,
  RefreshCw,
  FileText,
  Database,
  Activity,
} from 'lucide-react';
import { Card } from '@/components/ui/Card';
import { Button } from '@/components/ui/Button';
import { Badge } from '@/components/ui/Badge';
import { Dialog, DialogContent, DialogHeader, DialogTitle } from '@/components/ui/Dialog';
import {
  useExtractionStatus,
  useConsolidationStatus,
  useV2Jobs,
  useTriggerExtraction,
  useTriggerConsolidation,
} from '@/hooks/useMemoryV2';
import { cn } from '@/lib/utils';

// ========== Status Badge ==========

const STATUS_CONFIG: Record<string, { color: string; icon: React.ReactNode; label: string }> = {
  idle: { color: 'bg-gray-100 text-gray-800 dark:bg-gray-800 dark:text-gray-300', icon: <Clock className="w-3 h-3" />, label: 'Idle' },
  running: { color: 'bg-blue-100 text-blue-800 dark:bg-blue-900/30 dark:text-blue-300', icon: <Loader2 className="w-3 h-3 animate-spin" />, label: 'Running' },
  completed: { color: 'bg-green-100 text-green-800 dark:bg-green-900/30 dark:text-green-300', icon: <CheckCircle className="w-3 h-3" />, label: 'Completed' },
  done: { color: 'bg-green-100 text-green-800 dark:bg-green-900/30 dark:text-green-300', icon: <CheckCircle className="w-3 h-3" />, label: 'Done' },
  error: { color: 'bg-red-100 text-red-800 dark:bg-red-900/30 dark:text-red-300', icon: <AlertCircle className="w-3 h-3" />, label: 'Error' },
  pending: { color: 'bg-yellow-100 text-yellow-800 dark:bg-yellow-900/30 dark:text-yellow-300', icon: <Clock className="w-3 h-3" />, label: 'Pending' },
};

function StatusBadge({ status }: { status: string }) {
  const config = STATUS_CONFIG[status] || STATUS_CONFIG.idle;
  return (
    <Badge className={cn('flex items-center gap-1', config.color)}>
      {config.icon}
      {config.label}
    </Badge>
  );
}

// ========== Extraction Card ==========

function ExtractionCard() {
  const intl = useIntl();
  const { data: status, isLoading, refetch } = useExtractionStatus();
  const trigger = useTriggerExtraction();
  const [maxSessions, setMaxSessions] = useState(10);

  const handleTrigger = () => {
    trigger.mutate(maxSessions);
  };

  // Check if any job is running
  const hasRunningJob = status?.jobs?.some(j => j.status === 'running');

  return (
    <Card className="p-4">
      <div className="flex items-start justify-between mb-4">
        <div>
          <h3 className="font-medium flex items-center gap-2">
            <Zap className="w-5 h-5 text-yellow-500" />
            Phase 1: {intl.formatMessage({ id: 'memory.v2.extraction.title', defaultMessage: 'Extraction' })}
          </h3>
          <p className="text-sm text-muted-foreground mt-1">
            {intl.formatMessage({ id: 'memory.v2.extraction.description', defaultMessage: 'Extract structured memories from CLI sessions' })}
          </p>
        </div>
        {status && (
          <div className="text-right">
            <div className="text-2xl font-bold">{status.total_stage1}</div>
            <div className="text-xs text-muted-foreground">
              {intl.formatMessage({ id: 'memory.v2.extraction.extracted', defaultMessage: 'Extracted' })}
            </div>
          </div>
        )}
      </div>

      <div className="flex items-center gap-2 mb-4">
        <input
          type="number"
          value={maxSessions}
          onChange={(e) => setMaxSessions(Math.max(1, parseInt(e.target.value) || 10))}
          className="w-20 px-2 py-1 text-sm border rounded bg-background"
          min={1}
          max={64}
        />
        <span className="text-sm text-muted-foreground">sessions max</span>
      </div>

      <div className="flex items-center gap-2">
        <Button
          onClick={handleTrigger}
          disabled={trigger.isPending || hasRunningJob}
          size="sm"
        >
          {trigger.isPending || hasRunningJob ? (
            <>
              <Loader2 className="w-4 h-4 mr-1 animate-spin" />
              {intl.formatMessage({ id: 'memory.v2.extraction.extracting', defaultMessage: 'Extracting...' })}
            </>
          ) : (
            <>
              <Play className="w-4 h-4 mr-1" />
              {intl.formatMessage({ id: 'memory.v2.extraction.trigger', defaultMessage: 'Trigger Extraction' })}
            </>
          )}
        </Button>
        <Button variant="outline" size="sm" onClick={() => refetch()}>
          <RefreshCw className={cn('w-4 h-4', isLoading && 'animate-spin')} />
        </Button>
      </div>

      {status?.jobs && status.jobs.length > 0 && (
        <div className="mt-4 pt-4 border-t">
          <div className="text-xs text-muted-foreground mb-2">
            {intl.formatMessage({ id: 'memory.v2.extraction.recentJobs', defaultMessage: 'Recent Jobs' })}
          </div>
          <div className="space-y-1 max-h-32 overflow-y-auto">
            {status.jobs.slice(0, 5).map((job) => (
              <div key={job.job_key} className="flex items-center justify-between text-sm">
                <span className="font-mono text-xs truncate max-w-[150px]">{job.job_key}</span>
                <StatusBadge status={job.status} />
              </div>
            ))}
          </div>
        </div>
      )}
    </Card>
  );
}

// ========== Consolidation Card ==========

function ConsolidationCard() {
  const intl = useIntl();
  const { data: status, isLoading, refetch } = useConsolidationStatus();
  const trigger = useTriggerConsolidation();
  const [showPreview, setShowPreview] = useState(false);

  const handleTrigger = () => {
    trigger.mutate();
  };

  const isRunning = status?.status === 'running';

  return (
    <>
      <Card className="p-4">
        <div className="flex items-start justify-between mb-4">
          <div>
            <h3 className="font-medium flex items-center gap-2">
              <Database className="w-5 h-5 text-blue-500" />
              Phase 2: {intl.formatMessage({ id: 'memory.v2.consolidation.title', defaultMessage: 'Consolidation' })}
            </h3>
            <p className="text-sm text-muted-foreground mt-1">
              {intl.formatMessage({ id: 'memory.v2.consolidation.description', defaultMessage: 'Merge extracted results into MEMORY.md' })}
            </p>
          </div>
          {status && <StatusBadge status={status.status} />}
        </div>

        <div className="grid grid-cols-2 gap-4 mb-4">
          <div className="text-center p-2 bg-muted rounded">
            <div className="text-lg font-bold">
              {status?.memoryMdAvailable ? '✅' : '❌'}
            </div>
            <div className="text-xs text-muted-foreground">MEMORY.md</div>
          </div>
          <div className="text-center p-2 bg-muted rounded">
            <div className="text-lg font-bold">{status?.inputCount ?? '-'}</div>
            <div className="text-xs text-muted-foreground">Inputs</div>
          </div>
        </div>

        {status?.lastError && (
          <div className="mb-4 p-2 bg-red-100 dark:bg-red-900/30 rounded text-xs text-red-800 dark:text-red-300">
            {status.lastError}
          </div>
        )}

        <div className="flex items-center gap-2">
          <Button
            onClick={handleTrigger}
            disabled={trigger.isPending || isRunning}
            size="sm"
          >
            {trigger.isPending || isRunning ? (
              <>
                <Loader2 className="w-4 h-4 mr-1 animate-spin" />
                {intl.formatMessage({ id: 'memory.v2.consolidation.consolidating', defaultMessage: 'Consolidating...' })}
              </>
            ) : (
              <>
                <Play className="w-4 h-4 mr-1" />
                {intl.formatMessage({ id: 'memory.v2.consolidation.trigger', defaultMessage: 'Trigger Consolidation' })}
              </>
            )}
          </Button>

          {status?.memoryMdAvailable && (
            <Button variant="outline" size="sm" onClick={() => setShowPreview(true)}>
              <Eye className="w-4 h-4 mr-1" />
              {intl.formatMessage({ id: 'memory.v2.consolidation.preview', defaultMessage: 'Preview' })}
            </Button>
          )}

          <Button variant="outline" size="sm" onClick={() => refetch()}>
            <RefreshCw className={cn('w-4 h-4', isLoading && 'animate-spin')} />
          </Button>
        </div>
      </Card>

      {/* MEMORY.md Preview Dialog */}
      <Dialog open={showPreview} onOpenChange={setShowPreview}>
        <DialogContent className="max-w-3xl max-h-[80vh] overflow-hidden flex flex-col">
          <DialogHeader>
            <DialogTitle className="flex items-center gap-2">
              <FileText className="w-5 h-5" />
              MEMORY.md
            </DialogTitle>
          </DialogHeader>
          <div className="overflow-auto flex-1">
            <pre className="text-sm whitespace-pre-wrap p-4 bg-muted rounded font-mono">
              {status?.memoryMdPreview || 'No content available'}
            </pre>
          </div>
        </DialogContent>
      </Dialog>
    </>
  );
}

// ========== Jobs List ==========

function JobsList() {
  const intl = useIntl();
  const [kindFilter, setKindFilter] = useState<string>('');
  const { data, isLoading, refetch } = useV2Jobs(kindFilter ? { kind: kindFilter } : undefined);

  return (
    <Card className="p-4">
      <div className="flex items-center justify-between mb-4">
        <h3 className="font-medium flex items-center gap-2">
          <Activity className="w-5 h-5 text-purple-500" />
          {intl.formatMessage({ id: 'memory.v2.jobs.title', defaultMessage: 'Jobs' })}
        </h3>
        <div className="flex items-center gap-2">
          <select
            value={kindFilter}
            onChange={(e) => setKindFilter(e.target.value)}
            className="px-2 py-1 text-sm border rounded bg-background"
          >
            <option value="">All Kinds</option>
            <option value="phase1_extraction">Extraction</option>
            <option value="memory_consolidate_global">Consolidation</option>
          </select>
          <Button variant="outline" size="sm" onClick={() => refetch()}>
            <RefreshCw className={cn('w-4 h-4', isLoading && 'animate-spin')} />
          </Button>
        </div>
      </div>

      {data?.jobs && data.jobs.length > 0 ? (
        <div className="overflow-x-auto">
          <table className="w-full text-sm">
            <thead>
              <tr className="border-b">
                <th className="text-left p-2">
                  {intl.formatMessage({ id: 'memory.v2.jobs.kind', defaultMessage: 'Kind' })}
                </th>
                <th className="text-left p-2">
                  {intl.formatMessage({ id: 'memory.v2.jobs.key', defaultMessage: 'Key' })}
                </th>
                <th className="text-left p-2">
                  {intl.formatMessage({ id: 'memory.v2.jobs.status', defaultMessage: 'Status' })}
                </th>
                <th className="text-left p-2">
                  {intl.formatMessage({ id: 'memory.v2.jobs.error', defaultMessage: 'Error' })}
                </th>
              </tr>
            </thead>
            <tbody>
              {data.jobs.map((job) => (
                <tr key={`${job.kind}-${job.job_key}`} className="border-b">
                  <td className="p-2">
                    <Badge variant="outline" className="text-xs">
                      {job.kind === 'phase1_extraction' ? 'Extraction' :
                       job.kind === 'memory_consolidate_global' ? 'Consolidation' : job.kind}
                    </Badge>
                  </td>
                  <td className="p-2 font-mono text-xs truncate max-w-[150px]">{job.job_key}</td>
                  <td className="p-2"><StatusBadge status={job.status} /></td>
                  <td className="p-2 text-red-500 text-xs truncate max-w-[200px]">{job.last_error || '-'}</td>
                </tr>
              ))}
            </tbody>
          </table>
        </div>
      ) : (
        <div className="text-center text-muted-foreground py-8">
          {intl.formatMessage({ id: 'memory.v2.jobs.noJobs', defaultMessage: 'No jobs found' })}
        </div>
      )}

      {/* Per-status counts */}
      {data?.byStatus && Object.keys(data.byStatus).length > 0 && (
        <div className="mt-4 pt-4 border-t flex items-center gap-4 text-sm flex-wrap">
          {Object.entries(data.byStatus).map(([status, count]) => (
            <span key={status} className="flex items-center gap-1">
              <StatusBadge status={status} />
              <span className="font-bold">{count}</span>
            </span>
          ))}
          <span className="text-muted-foreground ml-auto">
            Total: {data.total}
          </span>
        </div>
      )}
    </Card>
  );
}

// ========== Main Component ==========

export function V2PipelineTab() {
  return (
    <div className="space-y-4">
      <div className="grid grid-cols-1 md:grid-cols-2 gap-4">
        <ExtractionCard />
        <ConsolidationCard />
      </div>
      <JobsList />
    </div>
  );
}

export default V2PipelineTab;
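The max-sessions input in `ExtractionCard` above normalizes user input inline with `Math.max(1, parseInt(...) || 10)`. That expression has a quirk worth seeing in isolation; `clampSessions` is an illustrative name, the component never extracts it:

```typescript
// Sketch of ExtractionCard's max-sessions input handling. Note that the JSX
// `max={64}` attribute is only an HTML hint; this logic does not enforce it.
function clampSessions(raw: string, fallback = 10): number {
  // parseInt yields NaN on empty/non-numeric input, which `|| fallback`
  // replaces; Math.max then keeps the result at least 1.
  return Math.max(1, parseInt(raw, 10) || fallback);
}

console.log(clampSessions('25')); // 25
console.log(clampSessions(''));   // 10
console.log(clampSessions('0'));  // 10 (quirk: 0 is falsy, so it becomes the fallback)
console.log(clampSessions('-3')); // 1
```

The `'0' → 10` case is a side effect of using `||` rather than an explicit NaN check; typing `0` into the field jumps the value back to 10.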
@@ -187,10 +187,7 @@ export function GlobalSettingsTab() {
   });
 
   // Fetch project-tech stats
-  const {
-    data: projectTechStats,
-    isLoading: isLoadingProjectTech,
-  } = useQuery({
+  const { data: projectTechStats } = useQuery({
     queryKey: settingsKeys.projectTech(),
     queryFn: fetchProjectTechStats,
     staleTime: 60000, // 1 minute
@@ -490,7 +487,7 @@ export function GlobalSettingsTab() {
               )}
               onClick={() => localDevProgress.enabled && handleCategoryToggle(cat)}
             >
-              {cat} ({projectTechStats?.categories[cat] || 0})
+              {formatMessage({ id: `specs.devCategory.${cat}`, defaultMessage: cat })} ({projectTechStats?.categories[cat] || 0})
             </Badge>
           ))}
         </div>
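The second hunk above switches the raw category name for a `formatMessage` call whose `defaultMessage` is the category itself, so untranslated categories render unchanged. A minimal sketch of that fallback, assuming a flat message catalog (`messages` and `resolveLabel` are illustrative, not the app's API):

```typescript
// Sketch of react-intl's defaultMessage fallback for dynamic category keys:
// a missing `specs.devCategory.<cat>` entry falls back to the raw name.
const messages: Record<string, string> = {
  'specs.devCategory.frontend': 'Frontend',
};

function resolveLabel(cat: string): string {
  return messages[`specs.devCategory.${cat}`] ?? cat;
}

console.log(resolveLabel('frontend')); // "Frontend"
console.log(resolveLabel('infra'));    // "infra" (no translation, raw name survives)
```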
@@ -48,7 +48,7 @@ import {
 } from 'lucide-react';
 import { useInstallRecommendedHooks } from '@/hooks/useSystemSettings';
 import type { InjectionPreviewFile, InjectionPreviewResponse } from '@/lib/api';
-import { getInjectionPreview } from '@/lib/api';
+import { getInjectionPreview, COMMAND_PREVIEWS, type CommandPreviewConfig } from '@/lib/api';
 
 // ========== Types ==========
 
@@ -209,6 +209,12 @@ export function InjectionControlTab({ className }: InjectionControlTabProps) {
   const [previewDialogOpen, setPreviewDialogOpen] = useState(false);
   const [previewFile, setPreviewFile] = useState<InjectionPreviewFile | null>(null);
 
+  // State for command preview
+  const [selectedCommand, setSelectedCommand] = useState<CommandPreviewConfig>(COMMAND_PREVIEWS[0]);
+  const [commandPreviewData, setCommandPreviewData] = useState<InjectionPreviewResponse | null>(null);
+  const [commandPreviewLoading, setCommandPreviewLoading] = useState(false);
+  const [commandPreviewDialogOpen, setCommandPreviewDialogOpen] = useState(false);
+
   // Fetch stats
   const loadStats = useCallback(async () => {
     setStatsLoading(true);
@@ -264,6 +270,21 @@ export function InjectionControlTab({ className }: InjectionControlTabProps) {
     }
   }, [previewMode]);
 
+  // Load command preview content
+  const loadCommandPreview = useCallback(async (command: CommandPreviewConfig) => {
+    setCommandPreviewLoading(true);
+    try {
+      const data = await getInjectionPreview(command.mode, true, undefined, command.category);
+      setCommandPreviewData(data);
+      setSelectedCommand(command);
+      setCommandPreviewDialogOpen(true);
+    } catch (err) {
+      console.error('Failed to load command preview:', err);
+    } finally {
+      setCommandPreviewLoading(false);
+    }
+  }, []);
+
   // Initial load
   useEffect(() => {
     loadStats();
@@ -406,15 +427,20 @@ export function InjectionControlTab({ className }: InjectionControlTabProps) {
     return stats;
   }, [previewData]);
 
-  // Calculate progress and status
+  // Calculate progress and status - use API's maxLength for consistency
   const currentLength = stats?.injectionLength?.withKeywords || 0;
-  const maxLength = settings.maxLength;
+  const apiMaxLength = stats?.injectionLength?.maxLength || settings.maxLength;
+  const maxLength = apiMaxLength; // Use API's maxLength for consistency
   const warnThreshold = settings.warnThreshold;
   const percentage = calculatePercentage(currentLength, maxLength);
   const isOverLimit = currentLength > maxLength;
   const isOverWarning = currentLength > warnThreshold;
   const remainingSpace = Math.max(0, maxLength - currentLength);
 
+  // Calculate approximate line count (assuming ~80 chars per line)
+  const estimatedLineCount = Math.ceil(currentLength / 80);
+  const maxLineCount = Math.ceil(maxLength / 80);
+
   return (
     <div className={cn('space-y-6', className)}>
       {/* Recommended Hooks Section */}
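The derived values added in this hunk are simple enough to check in isolation. A sketch, assuming the component's existing `calculatePercentage` helper clamps to 0-100 (these standalone names are illustrative):

```typescript
// Approximate line count used for display, at ~80 characters per line.
function estimateLines(chars: number, charsPerLine = 80): number {
  return Math.ceil(chars / charsPerLine);
}

// Percentage of the injection budget consumed, clamped so an over-limit
// payload still renders a full (not overflowing) progress bar.
function percentOfLimit(current: number, max: number): number {
  if (max <= 0) return 0;
  return Math.min(100, Math.round((current / max) * 100));
}

console.log(estimateLines(8000));        // 100
console.log(percentOfLimit(6000, 8000)); // 75
console.log(percentOfLimit(9000, 8000)); // 100 (clamped)
```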
@@ -556,36 +582,40 @@ export function InjectionControlTab({ className }: InjectionControlTabProps) {
                 >
                   {formatNumber(currentLength)} / {formatNumber(maxLength)}{' '}
                   {formatMessage({ id: 'specs.injection.characters', defaultMessage: 'characters' })}
+                  <span className="text-muted-foreground ml-2">
+                    (~{formatNumber(estimatedLineCount)} / {formatNumber(maxLineCount)} {formatMessage({ id: 'specs.injection.lines', defaultMessage: 'lines' })})
+                  </span>
                 </span>
               </div>
 
               {/* Progress Bar */}
               <div className="space-y-2">
-                <Progress
-                  value={percentage}
-                  className={cn(
-                    'h-3',
-                    isOverLimit && 'bg-destructive/20',
-                    !isOverLimit && isOverWarning && 'bg-yellow-100 dark:bg-yellow-900/30'
-                  )}
-                  indicatorClassName={cn(
-                    isOverLimit && 'bg-destructive',
-                    !isOverLimit && isOverWarning && 'bg-yellow-500'
-                  )}
-                />
-
-                {/* Warning threshold marker */}
-                <div
-                  className="relative h-0"
-                  style={{
-                    left: `${Math.min(100, (warnThreshold / maxLength) * 100)}%`,
-                  }}
-                >
-                  <div className="absolute -top-5 transform -translate-x-1/2 flex flex-col items-center">
-                    <AlertTriangle className="h-3 w-3 text-yellow-500" />
-                    <div className="text-[10px] text-muted-foreground whitespace-nowrap">
-                      {formatMessage({ id: 'specs.injection.warnThreshold', defaultMessage: 'Warn' })}
-                    </div>
+                <div className="flex items-center justify-between text-xs text-muted-foreground mb-1">
+                  <span>{percentage}%</span>
+                  <span>{formatMessage({ id: 'specs.injection.maxLimit', defaultMessage: 'Max' })}: {formatNumber(maxLength)}</span>
+                </div>
+                <div className="relative">
+                  <Progress
+                    value={percentage}
+                    className={cn(
+                      'h-3',
+                      isOverLimit && 'bg-destructive/20',
+                      !isOverLimit && isOverWarning && 'bg-yellow-100 dark:bg-yellow-900/30'
+                    )}
+                    indicatorClassName={cn(
+                      isOverLimit && 'bg-destructive',
+                      !isOverLimit && isOverWarning && 'bg-yellow-500'
+                    )}
+                  />
+                  {/* Warning threshold marker */}
+                  <div
+                    className="absolute top-0 h-3 flex flex-col items-center"
+                    style={{
+                      left: `${Math.min(100, (warnThreshold / maxLength) * 100)}%`,
+                      transform: 'translateX(-50%)',
+                    }}
+                  >
+                    <AlertTriangle className="h-3 w-3 text-yellow-500 -mt-4" />
                   </div>
                 </div>
               </div>
@@ -791,6 +821,59 @@ export function InjectionControlTab({ className }: InjectionControlTabProps) {
         </CardContent>
       </Card>
 
+      {/* Command Preview Card */}
+      <Card>
+        <CardHeader>
+          <CardTitle className="flex items-center gap-2">
+            <Eye className="h-5 w-5" />
+            {formatMessage({ id: 'specs.injection.commandPreview', defaultMessage: 'Command Injection Preview' })}
+          </CardTitle>
+          <CardDescription>
+            {formatMessage({
+              id: 'specs.injection.commandPreviewDesc',
+              defaultMessage: 'Preview the content that would be injected by different CLI commands',
+            })}
+          </CardDescription>
+        </CardHeader>
+        <CardContent>
+          <div className="grid grid-cols-2 md:grid-cols-3 lg:grid-cols-5 gap-3">
+            {COMMAND_PREVIEWS.map((cmd) => (
+              <Button
+                key={cmd.command}
+                variant="outline"
+                className="h-auto flex-col items-start py-3 px-4"
+                onClick={() => loadCommandPreview(cmd)}
+                disabled={commandPreviewLoading}
+              >
+                <div className="font-medium text-sm">
+                  {formatMessage({
+                    id: `specs.commandPreview.${cmd.labelKey}.label`,
+                    defaultMessage: cmd.labelKey
+                  })}
+                </div>
+                <div className="text-xs text-muted-foreground mt-1 line-clamp-2">
+                  {formatMessage({
+                    id: `specs.commandPreview.${cmd.descriptionKey}.description`,
+                    defaultMessage: cmd.descriptionKey
+                  })}
+                </div>
+                <code className="text-xs bg-muted px-1.5 py-0.5 rounded mt-2 w-full text-center">
+                  {cmd.command.replace('ccw spec load', '').trim() || 'default'}
+                </code>
+              </Button>
+            ))}
+          </div>
+          {commandPreviewLoading && (
+            <div className="flex items-center justify-center py-4 mt-4 border rounded-lg">
+              <Loader2 className="h-5 w-5 animate-spin text-muted-foreground mr-2" />
+              <span className="text-sm text-muted-foreground">
+                {formatMessage({ id: 'specs.injection.loadingPreview', defaultMessage: 'Loading preview...' })}
+              </span>
+            </div>
+          )}
+        </CardContent>
+      </Card>
+
       {/* Settings Card */}
       <Card>
         <CardHeader>
@@ -814,39 +897,83 @@ export function InjectionControlTab({ className }: InjectionControlTabProps) {
             {/* Max Injection Length */}
             <div className="space-y-2">
               <Label htmlFor="maxLength">
-                {formatMessage({ id: 'specs.injection.maxLength', defaultMessage: 'Max Injection Length (characters)' })}
+                {formatMessage({ id: 'specs.injection.maxLength', defaultMessage: 'Max Injection Length' })}
               </Label>
-              <Input
-                id="maxLength"
-                type="number"
-                min={1000}
-                max={50000}
-                step={500}
-                value={formData.maxLength}
-                onChange={(e) => handleFieldChange('maxLength', Number(e.target.value))}
-              />
+              <div className="flex items-center gap-4">
+                <div className="flex-1">
+                  <Input
+                    id="maxLength"
+                    type="number"
+                    min={1000}
+                    max={50000}
+                    step={500}
+                    value={formData.maxLength}
+                    onChange={(e) => handleFieldChange('maxLength', Number(e.target.value))}
+                  />
+                </div>
+                <span className="text-sm text-muted-foreground whitespace-nowrap">
+                  (~{Math.ceil(formData.maxLength / 80)} {formatMessage({ id: 'specs.injection.lines', defaultMessage: 'lines' })})
+                </span>
+              </div>
               <p className="text-sm text-muted-foreground">
                 {formatMessage({
                   id: 'specs.injection.maxLengthHelp',
-                  defaultMessage: 'Recommended: 4000-10000. Too large may consume too much context; too small may truncate important specs.',
+                  defaultMessage: 'Recommended: 4000-10000 characters (50-125 lines). Too large may consume too much context.',
                 })}
               </p>
+              {/* Quick presets */}
+              <div className="flex items-center gap-2 mt-2">
+                <span className="text-xs text-muted-foreground">
+                  {formatMessage({ id: 'specs.injection.quickPresets', defaultMessage: 'Quick presets:' })}
+                </span>
+                <Button
+                  variant="outline"
+                  size="sm"
+                  className="h-6 text-xs"
+                  onClick={() => handleFieldChange('maxLength', 4000)}
+                >
+                  4000 (50 lines)
+                </Button>
+                <Button
+                  variant="outline"
+                  size="sm"
+                  className="h-6 text-xs"
+                  onClick={() => handleFieldChange('maxLength', 8000)}
+                >
+                  8000 (100 lines)
+                </Button>
+                <Button
+                  variant="outline"
+                  size="sm"
+                  className="h-6 text-xs"
+                  onClick={() => handleFieldChange('maxLength', 12000)}
+                >
+                  12000 (150 lines)
+                </Button>
+              </div>
             </div>
 
             {/* Warning Threshold */}
             <div className="space-y-2">
               <Label htmlFor="warnThreshold">
-                {formatMessage({ id: 'specs.injection.warnThresholdLabel', defaultMessage: 'Warning Threshold (characters)' })}
+                {formatMessage({ id: 'specs.injection.warnThresholdLabel', defaultMessage: 'Warning Threshold' })}
              </Label>
-              <Input
-                id="warnThreshold"
-                type="number"
-                min={500}
-                max={formData.maxLength - 1}
-                step={500}
-                value={formData.warnThreshold}
-                onChange={(e) => handleFieldChange('warnThreshold', Number(e.target.value))}
-              />
+              <div className="flex items-center gap-4">
+                <div className="flex-1">
+                  <Input
+                    id="warnThreshold"
+                    type="number"
+                    min={500}
+                    max={formData.maxLength - 1}
+                    step={500}
+                    value={formData.warnThreshold}
+                    onChange={(e) => handleFieldChange('warnThreshold', Number(e.target.value))}
+                  />
+                </div>
+                <span className="text-sm text-muted-foreground whitespace-nowrap">
+                  (~{Math.ceil(formData.warnThreshold / 80)} {formatMessage({ id: 'specs.injection.lines', defaultMessage: 'lines' })})
+                </span>
+              </div>
               <p className="text-sm text-muted-foreground">
                 {formatMessage({
                   id: 'specs.injection.warnThresholdHelp',
@@ -916,6 +1043,84 @@ export function InjectionControlTab({ className }: InjectionControlTabProps) {
           </div>
         </DialogContent>
       </Dialog>
 
+      {/* Command Preview Dialog */}
+      <Dialog open={commandPreviewDialogOpen} onOpenChange={setCommandPreviewDialogOpen}>
+        <DialogContent className="max-w-4xl max-h-[85vh]">
+          <DialogHeader>
+            <DialogTitle className="flex items-center gap-2">
+              <Eye className="h-5 w-5" />
+              {formatMessage({ id: 'specs.injection.previewTitle', defaultMessage: 'Injection Preview' })}
+            </DialogTitle>
+            <div className="flex items-center gap-2 text-sm text-muted-foreground">
+              <code className="bg-muted px-2 py-1 rounded text-xs">{selectedCommand.command}</code>
+              <span>•</span>
+              <span>
+                {formatMessage({
+                  id: `specs.commandPreview.${selectedCommand.descriptionKey}.description`,
+                  defaultMessage: selectedCommand.descriptionKey
+                })}
+              </span>
+            </div>
+          </DialogHeader>
+          <div className="space-y-4">
+            {/* Stats */}
+            {commandPreviewData && (
+              <div className="flex items-center gap-4 text-sm">
+                <Badge variant="secondary">
+                  {commandPreviewData.stats.count} {formatMessage({ id: 'specs.injection.files', defaultMessage: 'files' })}
+                </Badge>
+                <Badge variant="secondary">
+                  {formatNumber(commandPreviewData.stats.totalLength)} {formatMessage({ id: 'specs.injection.characters', defaultMessage: 'characters' })}
+                </Badge>
+                <Badge variant="secondary">
+                  ~{Math.ceil(commandPreviewData.stats.totalLength / 80)} {formatMessage({ id: 'specs.injection.lines', defaultMessage: 'lines' })}
+                </Badge>
+              </div>
+            )}
+            {/* Preview Content */}
+            <div className="flex-1 overflow-auto max-h-[60vh] border rounded-lg">
+              {commandPreviewData?.files.length ? (
+                <div className="space-y-4 p-4">
+                  {commandPreviewData.files.map((file, idx) => (
+                    <div key={file.file} className="space-y-2">
+                      {/* File Header */}
+                      <div className="flex items-center justify-between text-sm">
+                        <div className="flex items-center gap-2">
+                          <Badge variant="outline" className="text-xs">
+                            {idx + 1}
+                          </Badge>
+                          <span className="font-medium">{file.title}</span>
+                          {file.category && (
+                            <Badge variant="outline" className={cn(
+                              'text-xs',
+                              CATEGORY_CONFIG[file.category as SpecCategory]?.bgColor,
+                              CATEGORY_CONFIG[file.category as SpecCategory]?.color
+                            )}>
+                              {file.category}
+                            </Badge>
+                          )}
+                        </div>
+                        <span className="text-xs text-muted-foreground">
+                          {formatNumber(file.contentLength)} {formatMessage({ id: 'specs.injection.characters', defaultMessage: 'characters' })}
+                        </span>
+                      </div>
+                      {/* File Content */}
+                      <pre className="text-xs whitespace-pre-wrap p-3 bg-muted rounded border-l-2 border-primary/30">
+                        {file.content || formatMessage({ id: 'specs.content.noContent', defaultMessage: 'No content available' })}
+                      </pre>
+                    </div>
+                  ))}
+                </div>
+              ) : (
+                <div className="flex items-center justify-center py-8 text-muted-foreground">
+                  {formatMessage({ id: 'specs.injection.noFiles', defaultMessage: 'No files match this command' })}
+                </div>
+              )}
+            </div>
+          </div>
+        </DialogContent>
+      </Dialog>
     </div>
   );
 }
@@ -7,21 +7,26 @@ const Progress = React.forwardRef<
  React.ComponentPropsWithoutRef<typeof ProgressPrimitive.Root> & {
    indicatorClassName?: string;
  }
>(({ className, value, indicatorClassName, ...props }, ref) => (
  <ProgressPrimitive.Root
    ref={ref}
    className={cn(
      "relative h-4 w-full overflow-hidden rounded-full bg-secondary",
      className
    )}
    {...props}
  >
    <ProgressPrimitive.Indicator
      className={cn("h-full w-full flex-1 bg-primary transition-all", indicatorClassName)}
      style={{ transform: `translateX(-${100 - (value || 0)}%)` }}
    />
  </ProgressPrimitive.Root>
));
>(({ className, value, indicatorClassName, ...props }, ref) => {
  // Ensure value is a valid number between 0-100
  const safeValue = Math.max(0, Math.min(100, Number(value) || 0));

  return (
    <ProgressPrimitive.Root
      ref={ref}
      className={cn(
        "relative h-4 w-full overflow-hidden rounded-full bg-secondary",
        className
      )}
      {...props}
    >
      <ProgressPrimitive.Indicator
        className={cn("h-full w-full flex-1 bg-primary transition-all", indicatorClassName)}
        style={{ transform: `translateX(-${100 - safeValue}%)` }}
      />
    </ProgressPrimitive.Root>
  );
});
Progress.displayName = ProgressPrimitive.Root.displayName;

export { Progress };

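The revised Progress component first clamps `value` into the 0-100 range before computing the indicator transform, so `NaN`, `undefined`, and out-of-range inputs can no longer produce a broken `translateX`. A minimal standalone sketch of that clamping logic (the `safeValue` expression from the diff, extracted from React so it can be run directly):

```typescript
// Clamp a progress value to 0-100, treating NaN/undefined as 0 —
// mirrors the safeValue expression introduced in the Progress component.
function clampProgress(value: unknown): number {
  return Math.max(0, Math.min(100, Number(value) || 0));
}

// The indicator is shifted left by the remaining percentage.
function indicatorTransform(value: unknown): string {
  return `translateX(-${100 - clampProgress(value)}%)`;
}

console.log(clampProgress(150));        // 100
console.log(clampProgress(-5));         // 0
console.log(clampProgress(undefined));  // 0
console.log(indicatorTransform(40));    // translateX(-60%)
```

The old code's `value || 0` only handled falsy values; clamping additionally protects against callers passing percentages outside 0-100.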
@@ -161,6 +161,21 @@ export type {
  UseReindexReturn,
} from './useUnifiedSearch';

// ========== Memory V2 Pipeline ==========
export {
  useExtractionStatus,
  useConsolidationStatus,
  useV2Jobs,
  useTriggerExtraction,
  useTriggerConsolidation,
  memoryV2Keys,
} from './useMemoryV2';
export type {
  ExtractionStatus,
  ConsolidationStatus,
  V2JobsResponse,
} from './useMemoryV2';

// ========== MCP Servers ==========
export {
  useMcpServers,

ccw/frontend/src/hooks/useMemoryV2.ts (new file, 101 lines)
@@ -0,0 +1,101 @@
// ========================================
// useMemoryV2 Hook
// ========================================
// TanStack Query hooks for Memory V2 pipeline

import { useQuery, useMutation, useQueryClient } from '@tanstack/react-query';
import {
  triggerExtraction,
  getExtractionStatus,
  triggerConsolidation,
  getConsolidationStatus,
  getV2Jobs,
  type ExtractionStatus,
  type ConsolidationStatus,
  type V2JobsResponse,
} from '../lib/api';
import { useWorkflowStore, selectProjectPath } from '@/stores/workflowStore';

// Query keys
export const memoryV2Keys = {
  all: ['memoryV2'] as const,
  extractionStatus: (path?: string) => [...memoryV2Keys.all, 'extraction', path] as const,
  consolidationStatus: (path?: string) => [...memoryV2Keys.all, 'consolidation', path] as const,
  jobs: (path?: string, filters?: { kind?: string; status_filter?: string }) =>
    [...memoryV2Keys.all, 'jobs', path, filters] as const,
};

// Default stale time: 30 seconds (V2 status changes frequently)
const STALE_TIME = 30 * 1000;

// Hook: extraction status
export function useExtractionStatus() {
  const projectPath = useWorkflowStore(selectProjectPath);

  return useQuery({
    queryKey: memoryV2Keys.extractionStatus(projectPath),
    queryFn: () => getExtractionStatus(projectPath),
    enabled: !!projectPath,
    staleTime: STALE_TIME,
    refetchInterval: 5000, // refresh every 5 seconds
  });
}

// Hook: consolidation status
export function useConsolidationStatus() {
  const projectPath = useWorkflowStore(selectProjectPath);

  return useQuery({
    queryKey: memoryV2Keys.consolidationStatus(projectPath),
    queryFn: () => getConsolidationStatus(projectPath),
    enabled: !!projectPath,
    staleTime: STALE_TIME,
    refetchInterval: 5000,
  });
}

// Hook: V2 jobs list
export function useV2Jobs(filters?: { kind?: string; status_filter?: string }) {
  const projectPath = useWorkflowStore(selectProjectPath);

  return useQuery({
    queryKey: memoryV2Keys.jobs(projectPath, filters),
    queryFn: () => getV2Jobs(filters, projectPath),
    enabled: !!projectPath,
    staleTime: STALE_TIME,
    refetchInterval: 10000, // refresh every 10 seconds
  });
}

// Hook: trigger extraction
export function useTriggerExtraction() {
  const queryClient = useQueryClient();
  const projectPath = useWorkflowStore(selectProjectPath);

  return useMutation({
    mutationFn: (maxSessions?: number) => triggerExtraction(maxSessions, projectPath),
    onSuccess: () => {
      // Invalidate related queries
      queryClient.invalidateQueries({ queryKey: memoryV2Keys.extractionStatus(projectPath) });
      queryClient.invalidateQueries({ queryKey: memoryV2Keys.jobs(projectPath) });
    },
  });
}

// Hook: trigger consolidation
export function useTriggerConsolidation() {
  const queryClient = useQueryClient();
  const projectPath = useWorkflowStore(selectProjectPath);

  return useMutation({
    mutationFn: () => triggerConsolidation(projectPath),
    onSuccess: () => {
      // Invalidate related queries
      queryClient.invalidateQueries({ queryKey: memoryV2Keys.consolidationStatus(projectPath) });
      queryClient.invalidateQueries({ queryKey: memoryV2Keys.jobs(projectPath) });
    },
  });
}

// Export types
export type { ExtractionStatus, ConsolidationStatus, V2JobsResponse };

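The hooks in this file share a hierarchical query-key factory: every key extends the common `['memoryV2']` prefix, which is what lets the mutation hooks invalidate whole subtrees at once. A standalone re-creation of that factory (taken from the diff, trimmed to two keys) shows the resulting key shapes:

```typescript
// Hierarchical TanStack Query keys: each key extends the shared 'memoryV2'
// prefix, so prefix-based invalidation can target a whole subtree at once.
const memoryV2Keys = {
  all: ['memoryV2'] as const,
  extractionStatus: (path?: string) => [...memoryV2Keys.all, 'extraction', path] as const,
  jobs: (path?: string, filters?: { kind?: string; status_filter?: string }) =>
    [...memoryV2Keys.all, 'jobs', path, filters] as const,
};

console.log(JSON.stringify(memoryV2Keys.extractionStatus('/repo')));
// ["memoryV2","extraction","/repo"]
console.log(JSON.stringify(memoryV2Keys.jobs('/repo', { kind: 'extraction' })));
// ["memoryV2","jobs","/repo",{"kind":"extraction"}]
```

This is why `useTriggerExtraction` can invalidate with `memoryV2Keys.jobs(projectPath)` (no filters) and still refresh filtered job lists: TanStack Query matches invalidation targets by key prefix.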
@@ -1638,6 +1638,111 @@ export async function unarchiveMemory(memoryId: string, projectPath?: string): P
  });
}

// ========== Memory V2 API ==========

export interface ExtractionStatus {
  total_stage1: number;
  jobs: Array<{
    job_key: string;
    status: string;
    last_error?: string;
  }>;
}

export interface ConsolidationStatus {
  status: 'idle' | 'running' | 'completed' | 'error';
  memoryMdAvailable: boolean;
  memoryMdPreview?: string;
  inputCount?: number;
  lastRun?: number;
  lastError?: string;
}

export interface V2Job {
  kind: string;
  job_key: string;
  status: 'pending' | 'running' | 'done' | 'error';
  last_error?: string;
  worker_id?: string;
  started_at?: number;
  finished_at?: number;
  retry_remaining?: number;
}

export interface V2JobsResponse {
  jobs: V2Job[];
  total: number;
  byStatus: Record<string, number>;
}

/**
 * Trigger Phase 1 extraction for eligible CLI sessions
 */
export async function triggerExtraction(
  maxSessions?: number,
  projectPath?: string
): Promise<{ triggered: boolean; jobIds: string[]; message: string }> {
  const params = new URLSearchParams();
  if (projectPath) params.set('path', projectPath);
  return fetchApi<{ triggered: boolean; jobIds: string[]; message: string }>(
    `/api/core-memory/extract?${params}`,
    {
      method: 'POST',
      body: JSON.stringify({ max_sessions: maxSessions }),
    }
  );
}

/**
 * Get Phase 1 extraction status
 */
export async function getExtractionStatus(
  projectPath?: string
): Promise<ExtractionStatus> {
  const params = new URLSearchParams();
  if (projectPath) params.set('path', projectPath);
  return fetchApi<ExtractionStatus>(`/api/core-memory/extract/status?${params}`);
}

/**
 * Trigger Phase 2 consolidation to generate MEMORY.md
 */
export async function triggerConsolidation(
  projectPath?: string
): Promise<{ triggered: boolean; message: string }> {
  const params = new URLSearchParams();
  if (projectPath) params.set('path', projectPath);
  return fetchApi<{ triggered: boolean; message: string }>(
    `/api/core-memory/consolidate?${params}`,
    { method: 'POST' }
  );
}

/**
 * Get Phase 2 consolidation status
 */
export async function getConsolidationStatus(
  projectPath?: string
): Promise<ConsolidationStatus> {
  const params = new URLSearchParams();
  if (projectPath) params.set('path', projectPath);
  return fetchApi<ConsolidationStatus>(`/api/core-memory/consolidate/status?${params}`);
}

/**
 * Get V2 pipeline jobs list
 */
export async function getV2Jobs(
  options?: { kind?: string; status_filter?: string },
  projectPath?: string
): Promise<V2JobsResponse> {
  const params = new URLSearchParams();
  if (projectPath) params.set('path', projectPath);
  if (options?.kind) params.set('kind', options.kind);
  if (options?.status_filter) params.set('status_filter', options.status_filter);
  return fetchApi<V2JobsResponse>(`/api/core-memory/jobs?${params}`);
}
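All five endpoints build their query string the same way: the optional project path first, then any endpoint-specific filters. The pattern can be isolated as a pure function (sketch below uses the jobs endpoint URL from the diff; `buildV2JobsUrl` is a hypothetical name, not part of the commit):

```typescript
// Build the jobs-endpoint query string the way getV2Jobs does:
// project path first, then optional kind / status_filter params.
function buildV2JobsUrl(
  options?: { kind?: string; status_filter?: string },
  projectPath?: string
): string {
  const params = new URLSearchParams();
  if (projectPath) params.set('path', projectPath);
  if (options?.kind) params.set('kind', options.kind);
  if (options?.status_filter) params.set('status_filter', options.status_filter);
  return `/api/core-memory/jobs?${params}`;
}

console.log(buildV2JobsUrl({ kind: 'extraction' }, '/my/project'));
// /api/core-memory/jobs?path=%2Fmy%2Fproject&kind=extraction
```

Note that with no arguments the URL ends in a bare `?` (empty `URLSearchParams` stringifies to `''`), which is harmless to the server-side parser; the real functions in the diff behave the same way.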

// ========== Project Overview API ==========

export interface TechnologyStack {

@@ -7365,47 +7470,48 @@ export async function getInjectionPreview(
 */
export interface CommandPreviewConfig {
  command: string;
  label: string;
  description: string;
  labelKey: string; // i18n key for label
  descriptionKey: string; // i18n key for description
  category?: string;
  mode: 'required' | 'all';
}

/**
 * Predefined command preview configurations
 * Labels and descriptions use i18n keys: commandPreview.{key}.label / commandPreview.{key}.description
 */
export const COMMAND_PREVIEWS: CommandPreviewConfig[] = [
  {
    command: 'ccw spec load',
    label: 'Default (All Categories)',
    description: 'Load all required specs without category filter',
    labelKey: 'default',
    descriptionKey: 'default',
    mode: 'required',
  },
  {
    command: 'ccw spec load --category exploration',
    label: 'Exploration',
    description: 'Specs for code exploration, analysis, debugging',
    labelKey: 'exploration',
    descriptionKey: 'exploration',
    category: 'exploration',
    mode: 'required',
  },
  {
    command: 'ccw spec load --category planning',
    label: 'Planning',
    description: 'Specs for task planning, requirements',
    labelKey: 'planning',
    descriptionKey: 'planning',
    category: 'planning',
    mode: 'required',
  },
  {
    command: 'ccw spec load --category execution',
    label: 'Execution',
    description: 'Specs for implementation, testing, deployment',
    labelKey: 'execution',
    descriptionKey: 'execution',
    category: 'execution',
    mode: 'required',
  },
  {
    command: 'ccw spec load --category general',
    label: 'General',
    description: 'Specs that apply to all stages',
    labelKey: 'general',
    descriptionKey: 'general',
    category: 'general',
    mode: 'required',
  },

@@ -158,6 +158,50 @@
    "session-state-watch": {
      "name": "Session State Watch",
      "description": "Watch for session metadata file changes (workflow-session.json)"
    },
    "stop-notify": {
      "name": "Stop Notify",
      "description": "Notify dashboard when Claude finishes responding"
    },
    "auto-format-on-write": {
      "name": "Auto Format on Write",
      "description": "Auto-format files after Claude writes or edits them"
    },
    "auto-lint-on-write": {
      "name": "Auto Lint on Write",
      "description": "Auto-lint files after Claude writes or edits them"
    },
    "block-sensitive-files": {
      "name": "Block Sensitive Files",
      "description": "Block modifications to sensitive files (.env, secrets, credentials)"
    },
    "git-auto-stage": {
      "name": "Git Auto Stage",
      "description": "Auto stage all modified files when Claude finishes responding"
    },
    "post-edit-index": {
      "name": "Post Edit Index",
      "description": "Notify indexing service when files are modified"
    },
    "session-end-summary": {
      "name": "Session End Summary",
      "description": "Send session summary to dashboard on session end"
    },
    "project-state-inject": {
      "name": "Project State Inject",
      "description": "Inject project guidelines and recent dev history at session start"
    },
    "memory-v2-extract": {
      "name": "Memory V2 Extract",
      "description": "Trigger Phase 1 extraction when session ends (after idle period)"
    },
    "memory-v2-auto-consolidate": {
      "name": "Memory V2 Auto Consolidate",
      "description": "Trigger Phase 2 consolidation after extraction jobs complete"
    },
    "memory-sync-dashboard": {
      "name": "Memory Sync Dashboard",
      "description": "Sync memory V2 status to dashboard on changes"
    }
  },
  "actions": {

@@ -109,5 +109,50 @@
    "vectorRank": "Vector #{rank}",
    "ftsRank": "FTS #{rank}",
    "heatScore": "Heat: {score}"
  },
  "v2": {
    "title": "Memory V2 Pipeline",
    "extraction": {
      "title": "Extraction",
      "description": "Extract structured memories from CLI sessions",
      "trigger": "Trigger Extraction",
      "extracting": "Extracting...",
      "extracted": "Extracted",
      "recentJobs": "Recent Jobs",
      "triggered": "Extraction triggered",
      "triggerError": "Failed to trigger extraction"
    },
    "consolidation": {
      "title": "Consolidation",
      "description": "Consolidate extraction results into MEMORY.md",
      "trigger": "Trigger Consolidation",
      "consolidating": "Consolidating...",
      "preview": "Preview",
      "memoryMd": "MEMORY.md",
      "exists": "Exists",
      "notExists": "Not Exists",
      "inputs": "Inputs",
      "triggered": "Consolidation triggered",
      "triggerError": "Failed to trigger consolidation"
    },
    "jobs": {
      "title": "Jobs",
      "kind": "Kind",
      "key": "Key",
      "status": "Status",
      "error": "Error",
      "noJobs": "No jobs found",
      "allKinds": "All Kinds",
      "extraction": "Extraction",
      "consolidation": "Consolidation"
    },
    "status": {
      "idle": "Idle",
      "running": "Running",
      "completed": "Completed",
      "done": "Done",
      "error": "Error",
      "pending": "Pending"
    }
  }
}

@@ -213,13 +213,46 @@
    "warning": "Approaching Limit",
    "normal": "Normal",
    "characters": "characters",
    "lines": "lines",
    "maxLimit": "Max",
    "quickPresets": "Quick presets:",
    "statsInfo": "Statistics",
    "requiredLength": "Required specs length:",
    "matchedLength": "Keyword-matched length:",
    "remaining": "Remaining space:",
    "loadError": "Failed to load stats",
    "saveSuccess": "Settings saved successfully",
    "saveError": "Failed to save settings"
    "saveError": "Failed to save settings",
    "filesList": "Injection Files",
    "files": "files",
    "noFiles": "No files match this command",
    "loadingPreview": "Loading preview...",
    "commandPreview": "Command Injection Preview",
    "commandPreviewDesc": "Preview the content that would be injected by different CLI commands",
    "previewTitle": "Injection Preview"
  },

  "commandPreview": {
    "default": {
      "label": "All Categories",
      "description": "Load all required specs without category filter"
    },
    "exploration": {
      "label": "Exploration",
      "description": "Specs for code exploration, analysis, debugging"
    },
    "planning": {
      "label": "Planning",
      "description": "Specs for task planning, requirements"
    },
    "execution": {
      "label": "Execution",
      "description": "Specs for implementation, testing, deployment"
    },
    "general": {
      "label": "General",
      "description": "Specs that apply to all stages"
    }
  },

  "settings": {
@@ -231,9 +264,27 @@
    "defaultReadModeHelp": "The default read mode for newly created personal specs",
    "selectReadMode": "Select read mode",
    "autoEnable": "Auto Enable New Specs",
    "autoEnableDescription": "Automatically enable newly created personal specs",
    "autoEnableDescription": "New personal specs are set to required (readMode=required) by default and automatically included in context injection",
    "specStatistics": "Spec Statistics",
    "totalSpecs": "Total: {count} spec files"
    "totalSpecs": "Total: {count} spec files",
    "devProgressInjection": "Development Progress Injection",
    "devProgressInjectionDesc": "Control how development progress from project-tech.json is injected into AI context",
    "enableDevProgress": "Enable Injection",
    "enableDevProgressDesc": "Include development history in AI context",
    "maxEntries": "Max Entries per Category",
    "maxEntriesDesc": "Maximum number of entries to include per category (1-50)",
    "includeCategories": "Include Categories",
    "categoriesDesc": "Click to toggle category inclusion",
    "devProgressStats": "{total} entries from {sessions} sessions, last updated: {date}",
    "devProgressStatsNoDate": "{total} entries from {sessions} sessions"
  },

  "devCategory": {
    "feature": "Feature",
    "enhancement": "Enhancement",
    "bugfix": "Bug Fix",
    "refactor": "Refactor",
    "docs": "Docs"
  },

  "dialog": {

@@ -158,6 +158,50 @@
    "session-state-watch": {
      "name": "会话状态监控",
      "description": "监控会话元数据文件变更 (workflow-session.json)"
    },
    "stop-notify": {
      "name": "停止通知",
      "description": "当 Claude 完成响应时通知仪表盘"
    },
    "auto-format-on-write": {
      "name": "写入自动格式化",
      "description": "在 Claude 写入或编辑文件后自动格式化"
    },
    "auto-lint-on-write": {
      "name": "写入自动检查",
      "description": "在 Claude 写入或编辑文件后自动进行 Lint 检查"
    },
    "block-sensitive-files": {
      "name": "阻止敏感文件修改",
      "description": "阻止对敏感文件 (.env、密钥、凭据) 的修改"
    },
    "git-auto-stage": {
      "name": "Git 自动暂存",
      "description": "当 Claude 完成响应时自动暂存所有修改的文件"
    },
    "post-edit-index": {
      "name": "编辑后索引",
      "description": "文件修改时通知索引服务"
    },
    "session-end-summary": {
      "name": "会话结束摘要",
      "description": "会话结束时向仪表盘发送会话摘要"
    },
    "project-state-inject": {
      "name": "项目状态注入",
      "description": "会话开始时注入项目指南和最近开发历史"
    },
    "memory-v2-extract": {
      "name": "Memory V2 提取",
      "description": "会话结束时触发 Phase 1 提取(空闲期后)"
    },
    "memory-v2-auto-consolidate": {
      "name": "Memory V2 自动合并",
      "description": "提取作业完成后触发 Phase 2 合并"
    },
    "memory-sync-dashboard": {
      "name": "Memory 同步仪表盘",
      "description": "变更时将 Memory V2 状态同步到仪表盘"
    }
  },
  "actions": {

@@ -109,5 +109,50 @@
    "vectorRank": "向量 #{rank}",
    "ftsRank": "全文 #{rank}",
    "heatScore": "热度: {score}"
  },
  "v2": {
    "title": "Memory V2 Pipeline",
    "extraction": {
      "title": "提取",
      "description": "从 CLI 会话中提取结构化记忆",
      "trigger": "触发提取",
      "extracting": "提取中...",
      "extracted": "已提取",
      "recentJobs": "最近作业",
      "triggered": "提取已触发",
      "triggerError": "触发提取失败"
    },
    "consolidation": {
      "title": "合并",
      "description": "合并提取结果生成 MEMORY.md",
      "trigger": "触发合并",
      "consolidating": "合并中...",
      "preview": "预览",
      "memoryMd": "MEMORY.md",
      "exists": "存在",
      "notExists": "不存在",
      "inputs": "输入",
      "triggered": "合并已触发",
      "triggerError": "触发合并失败"
    },
    "jobs": {
      "title": "作业列表",
      "kind": "类型",
      "key": "Key",
      "status": "状态",
      "error": "错误",
      "noJobs": "暂无作业记录",
      "allKinds": "所有类型",
      "extraction": "提取",
      "consolidation": "合并"
    },
    "status": {
      "idle": "空闲",
      "running": "运行中",
      "completed": "已完成",
      "done": "完成",
      "error": "错误",
      "pending": "等待"
    }
  }
}

@@ -214,6 +214,9 @@
    "normal": "正常",
    "characters": "字符",
    "chars": "字符",
    "lines": "行",
    "maxLimit": "最大",
    "quickPresets": "快速预设:",
    "statsInfo": "统计信息",
    "requiredLength": "必读规范长度:",
    "matchedLength": "关键词匹配长度:",
@@ -222,7 +225,34 @@
    "saveSuccess": "设置已保存",
    "saveError": "保存设置失败",
    "filesList": "注入文件列表",
    "files": "个文件"
    "files": "个文件",
    "noFiles": "没有匹配此命令的文件",
    "loadingPreview": "加载预览中...",
    "commandPreview": "命令注入预览",
    "commandPreviewDesc": "预览不同 CLI 命令将注入的内容"
  },

  "commandPreview": {
    "default": {
      "label": "全部类别",
      "description": "加载所有必读规范,不进行类别过滤"
    },
    "exploration": {
      "label": "探索",
      "description": "代码探索、分析、调试相关规范"
    },
    "planning": {
      "label": "规划",
      "description": "任务规划、需求相关规范"
    },
    "execution": {
      "label": "执行",
      "description": "实现、测试、部署相关规范"
    },
    "general": {
      "label": "通用",
      "description": "适用于所有阶段的规范"
    }
  },

  "priority": {
@@ -241,9 +271,27 @@
    "defaultReadModeHelp": "新创建的个人规范的默认读取模式",
    "selectReadMode": "选择读取模式",
    "autoEnable": "自动启用新规范",
    "autoEnableDescription": "自动启用新创建的个人规范",
    "autoEnableDescription": "新创建的个人规范默认设置为必读(readMode=required),自动加入注入上下文",
    "specStatistics": "规范统计",
    "totalSpecs": "总计:{count} 个规范文件"
    "totalSpecs": "总计:{count} 个规范文件",
    "devProgressInjection": "开发进度注入",
    "devProgressInjectionDesc": "控制如何将 project-tech.json 中的开发进度注入到 AI 上下文中",
    "enableDevProgress": "启用注入",
    "enableDevProgressDesc": "在 AI 上下文中包含开发历史记录",
    "maxEntries": "每类最大条目数",
    "maxEntriesDesc": "每个类别包含的最大条目数 (1-50)",
    "includeCategories": "包含的类别",
    "categoriesDesc": "点击切换类别包含状态",
    "devProgressStats": "共 {total} 条记录,来自 {sessions} 个会话,最后更新:{date}",
    "devProgressStatsNoDate": "共 {total} 条记录,来自 {sessions} 个会话"
  },

  "devCategory": {
    "feature": "新功能",
    "enhancement": "增强",
    "bugfix": "修复",
    "refactor": "重构",
    "docs": "文档"
  },

  "dialog": {

@@ -42,6 +42,7 @@ import { Checkbox } from '@/components/ui/Checkbox';
import { useMemory, useMemoryMutations, useUnifiedSearch, useUnifiedStats, useRecommendations, useReindex } from '@/hooks';
import type { CoreMemory, UnifiedSearchResult } from '@/lib/api';
import { cn, parseMemoryMetadata } from '@/lib/utils';
import { V2PipelineTab } from '@/components/memory/V2PipelineTab';

// ========== Source Type Helpers ==========

@@ -624,7 +625,7 @@ export function MemoryPage() {
  const [isNewMemoryOpen, setIsNewMemoryOpen] = useState(false);
  const [editingMemory, setEditingMemory] = useState<CoreMemory | null>(null);
  const [viewingMemory, setViewingMemory] = useState<CoreMemory | null>(null);
  const [currentTab, setCurrentTab] = useState<'memories' | 'favorites' | 'archived' | 'unifiedSearch'>('memories');
  const [currentTab, setCurrentTab] = useState<'memories' | 'favorites' | 'archived' | 'unifiedSearch' | 'v2pipeline'>('memories');
  const [unifiedQuery, setUnifiedQuery] = useState('');
  const [selectedCategory, setSelectedCategory] = useState('');
  const isImmersiveMode = useAppStore(selectIsImmersiveMode);
@@ -866,6 +867,11 @@ export function MemoryPage() {
            label: formatMessage({ id: 'memory.tabs.unifiedSearch' }),
            icon: <Search className="h-4 w-4" />,
          },
          {
            value: 'v2pipeline',
            label: 'V2 Pipeline',
            icon: <Zap className="h-4 w-4" />,
          },
        ]}
      />
@@ -1062,7 +1068,10 @@ export function MemoryPage() {
      )}

      {/* Content Area */}
      {isUnifiedTab ? (
      {currentTab === 'v2pipeline' ? (
        /* V2 Pipeline Tab */
        <V2PipelineTab />
      ) : isUnifiedTab ? (
        /* Unified Search Results */
        unifiedLoading ? (
          <div className="flex items-center justify-center py-12">

@@ -376,5 +376,5 @@ export function run(argv: string[]): void {
  program.parse(argv);
}

// Invoke CLI when run directly
run(process.argv);
// Note: run() is called by bin/ccw.js entry point
// Do not call run() here to avoid duplicate execution

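This hunk removes the unconditional `run(process.argv)` call because `bin/ccw.js` already invokes `run()`, so importing the module executed the CLI twice. An alternative would have been an entry-point guard (the CommonJS `require.main === module` idiom); the sketch below is hypothetical, not what the commit does, and models the check as a pure path comparison so the decision is easy to see:

```typescript
// Hypothetical helper: auto-invoke run() only when this module is the
// process entry point. In CommonJS this is `require.main === module`;
// here it is modeled as a plain path comparison.
function isDirectInvocation(entryScript: string, thisModule: string): boolean {
  return entryScript === thisModule;
}

// bin/ccw.js is the entry point, so cli.ts itself must not auto-run:
console.log(isDirectInvocation('/app/bin/ccw.js', '/app/dist/cli.js')); // false
console.log(isDirectInvocation('/app/dist/cli.js', '/app/dist/cli.js')); // true
```

Removing the call outright, as the commit does, is the simpler fix when a dedicated bin stub is the only supported entry point.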
@@ -224,6 +224,7 @@ export async function handleSpecRoutes(ctx: RouteContext): Promise<boolean> {
    const resolvedPath = resolvePath(projectPath);
    const mode = url.searchParams.get('mode') || 'required'; // required | all | keywords
    const preview = url.searchParams.get('preview') === 'true';
    const category = url.searchParams.get('category') || undefined; // optional category filter

    try {
      const { getDimensionIndex, SPEC_DIMENSIONS } = await import(
@@ -254,6 +255,11 @@ export async function handleSpecRoutes(ctx: RouteContext): Promise<boolean> {
          continue;
        }

        // Filter by category if specified
        if (category && (entry.category || 'general') !== category) {
          continue;
        }

        const fileData: InjectionFile = {
          file: entry.file,
          title: entry.title,

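The category filter added here can be isolated as a pure predicate. Note the fallback: entries with no category are treated as `'general'`, so `?category=general` also matches untagged specs. A sketch (`SpecEntry` and `matchesCategory` are illustrative names; the comparison itself is taken from the diff):

```typescript
interface SpecEntry {
  file: string;
  category?: string;
}

// Keep an entry when no filter is given, or when its category
// (defaulting to 'general') matches the requested one —
// the same test the route performs before building fileData.
function matchesCategory(entry: SpecEntry, category?: string): boolean {
  if (!category) return true;
  return (entry.category || 'general') === category;
}

const entries: SpecEntry[] = [
  { file: 'a.md', category: 'planning' },
  { file: 'b.md' }, // untagged -> treated as 'general'
];

console.log(entries.filter((e) => matchesCategory(e, 'general')).map((e) => e.file));
// [ 'b.md' ]
```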
@@ -444,28 +444,28 @@ export async function handleSystemRoutes(ctx: SystemRouteContext): Promise<boole

  // API: Get project-tech stats for development progress injection
  if (pathname === '/api/project-tech/stats' && req.method === 'GET') {
    const projectPath = url.searchParams.get('path') || initialPath;
    const resolvedPath = resolvePath(projectPath);
    const techPath = join(resolvedPath, '.workflow', 'project-tech.json');

    if (!existsSync(techPath)) {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({
        total_features: 0,
        total_sessions: 0,
        last_updated: null,
        categories: {
          feature: 0,
          enhancement: 0,
          bugfix: 0,
          refactor: 0,
          docs: 0
        }
      }));
      return true;
    }

    try {
      const projectPath = url.searchParams.get('path') || initialPath;
      const resolvedPath = resolvePath(projectPath);
      const techPath = join(resolvedPath, '.workflow', 'project-tech.json');

      if (!existsSync(techPath)) {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({
          total_features: 0,
          total_sessions: 0,
          last_updated: null,
          categories: {
            feature: 0,
            enhancement: 0,
            bugfix: 0,
            refactor: 0,
            docs: 0
          }
        }));
        return true;
      }

      const rawContent = readFileSync(techPath, 'utf-8');
      const tech = JSON.parse(rawContent) as {
        development_index?: {

@@ -626,13 +626,13 @@ export async function startServer(options: ServerOptions = {}): Promise<http.Ser
    if (await handleFilesRoutes(routeContext)) return;
  }

  // System routes (data, health, version, paths, shutdown, notify, storage, dialog, a2ui answer broker, system settings)
  // System routes (data, health, version, paths, shutdown, notify, storage, dialog, a2ui answer broker, system settings, project-tech)
  if (pathname === '/api/data' || pathname === '/api/health' ||
      pathname === '/api/version-check' || pathname === '/api/shutdown' ||
      pathname === '/api/recent-paths' || pathname === '/api/switch-path' ||
      pathname === '/api/remove-recent-path' || pathname === '/api/system/notify' ||
      pathname === '/api/system/settings' || pathname === '/api/system/hooks/install-recommended' ||
      pathname === '/api/a2ui/answer' ||
      pathname === '/api/a2ui/answer' || pathname === '/api/project-tech/stats' ||
      pathname.startsWith('/api/storage/') || pathname.startsWith('/api/dialog/')) {
    if (await handleSystemRoutes(routeContext)) return;
  }