Add unit tests for various components and stores in the terminal dashboard

- Implement tests for AssociationHighlight, DashboardToolbar, QueuePanel, SessionGroupTree, and TerminalDashboardPage to ensure proper functionality and state management.
- Create tests for cliSessionStore, issueQueueIntegrationStore, queueExecutionStore, queueSchedulerStore, sessionManagerStore, and terminalGridStore to validate state resets and workspace scoping.
- Mock necessary dependencies and state management hooks to isolate tests and ensure accurate behavior.
Commit 62d8aa3623 (parent 9aa07e8d01) by catlog22, 2026-03-08 21:38:20 +08:00
157 changed files with 36544 additions and 71 deletions

---
name: team-ultra-analyze
description: Deep collaborative analysis pipeline. Multi-perspective exploration, deep analysis, user-driven discussion loops, and cross-perspective synthesis. Supports Quick, Standard, and Deep pipeline modes.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] [--mode quick|standard|deep] \"analysis topic\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---
# Team Ultra Analyze
## Auto Mode
When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.
## Usage
```bash
$team-ultra-analyze "Analyze authentication module architecture and security"
$team-ultra-analyze -c 4 --mode deep "Deep analysis of payment processing pipeline"
$team-ultra-analyze -y --mode quick "Quick overview of API endpoint structure"
$team-ultra-analyze --continue "uan-auth-analysis-20260308"
```
**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--mode`: Pipeline mode override (quick|standard|deep)
- `--continue`: Resume existing session
**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
---
## Overview
Deep collaborative analysis with multi-perspective exploration, deep analysis, user-driven discussion loops, and cross-perspective synthesis. Each perspective gets its own explorer and analyst, working in parallel. Discussion rounds allow the user to steer analysis depth and direction.
**Execution Model**: Hybrid — CSV wave pipeline (primary) + individual agent spawn (secondary for discussion feedback loop)
```
┌─────────────────────────────────────────────────────────────────────────┐
│ TEAM ULTRA ANALYZE WORKFLOW │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Phase 0: Pre-Wave Interactive │
│ ├─ Topic parsing + dimension detection │
│ ├─ Pipeline mode selection (quick/standard/deep) │
│ ├─ Perspective assignment │
│ └─ Output: refined requirements for decomposition │
│ │
│ Phase 1: Requirement → CSV + Classification │
│ ├─ Parse topic into exploration + analysis + discussion + synthesis │
│ ├─ Assign roles: explorer, analyst, discussant, synthesizer │
│ ├─ Classify tasks: csv-wave | interactive (exec_mode) │
│ ├─ Compute dependency waves (topological sort → depth grouping) │
│ ├─ Generate tasks.csv with wave + exec_mode columns │
│ └─ User validates task breakdown (skip if -y) │
│ │
│ Phase 2: Wave Execution Engine (Extended) │
│ ├─ For each wave (1..N): │
│ │ ├─ Build wave CSV (filter csv-wave tasks for this wave) │
│ │ ├─ Inject previous findings into prev_context column │
│ │ ├─ spawn_agents_on_csv(wave CSV) │
│ │ ├─ Execute post-wave interactive tasks (if any) │
│ │ ├─ Merge all results into master tasks.csv │
│ │ └─ Check: any failed? → skip dependents │
│ └─ discoveries.ndjson shared across all modes (append-only) │
│ │
│ Phase 3: Post-Wave Interactive (Discussion Loop) │
│ ├─ After discussant completes: user feedback gate │
│ ├─ User chooses: continue deeper | adjust direction | done │
│ ├─ Creates dynamic tasks (DISCUSS-N, ANALYZE-fix-N) as needed │
│ └─ Max discussion rounds: quick=0, standard=1, deep=5 │
│ │
│ Phase 4: Results Aggregation │
│ ├─ Export final results.csv │
│ ├─ Generate context.md with all findings │
│ ├─ Display summary: completed/failed/skipped per wave │
│ └─ Offer: view results | export | archive │
│ │
└─────────────────────────────────────────────────────────────────────────┘
```
---
## Task Classification Rules
Each task is classified by `exec_mode`:
| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, user feedback, direction control |
**Classification Decision**:
| Task Property | Classification |
|---------------|---------------|
| Codebase exploration (single perspective) | `csv-wave` |
| Parallel exploration (multiple perspectives) | `csv-wave` (parallel in same wave) |
| Deep analysis (single perspective) | `csv-wave` |
| Parallel analysis (multiple perspectives) | `csv-wave` (parallel in same wave) |
| Direction-fix analysis (adjusted focus) | `csv-wave` |
| Discussion processing (aggregate results) | `csv-wave` |
| Final synthesis (cross-perspective integration) | `csv-wave` |
| Discussion feedback gate (user interaction) | `interactive` |
| Topic clarification (Phase 0) | `interactive` |
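The decision table reduces to a small predicate. A minimal sketch, keyed on the interactive agent names from the registry below (an assumption for illustration; the spec classifies by task property):

```javascript
// Hypothetical helper: every work role runs as csv-wave; only the two
// user-in-the-loop agents run as interactive.
function classifyExecMode(role) {
  const interactiveRoles = new Set(['discussion-feedback', 'topic-analyzer'])
  return interactiveRoles.has(role) ? 'interactive' : 'csv-wave'
}
```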
---
## CSV Schema
### tasks.csv (Master State)
```csv
id,title,description,role,perspective,dimensions,discussion_round,discussion_type,deps,context_from,exec_mode,wave,status,findings,error
"EXPLORE-001","Explore from technical perspective","Search codebase from technical perspective. Collect files, patterns, findings.","explorer","technical","architecture;implementation","0","","","","csv-wave","1","pending","",""
"ANALYZE-001","Deep analysis from technical perspective","Analyze exploration results from technical perspective. Generate insights with confidence levels.","analyst","technical","architecture;implementation","0","","EXPLORE-001","EXPLORE-001","csv-wave","2","pending","",""
"DISCUSS-001","Initial discussion round","Aggregate all analysis results. Identify convergent themes, conflicts, top discussion points.","discussant","","","1","initial","ANALYZE-001;ANALYZE-002","ANALYZE-001;ANALYZE-002","csv-wave","3","pending","",""
```
**Columns**:
| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description |
| `role` | Input | Worker role: explorer, analyst, discussant, synthesizer |
| `perspective` | Input | Analysis perspective: technical, architectural, business, domain_expert |
| `dimensions` | Input | Analysis dimensions (semicolon-separated): architecture, implementation, performance, security, concept, comparison, decision |
| `discussion_round` | Input | Discussion round number (0 = N/A, 1+ = round number) |
| `discussion_type` | Input | Discussion type: initial, deepen, direction-adjusted, specific-questions |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` → `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `error` | Output | Error message if failed (empty if success) |
### Per-Wave CSV (Temporary)
Each wave generates a temporary `wave-{N}.csv` with extra `prev_context` column (csv-wave tasks only).
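The implementation phases below call `parseCsv` without defining it. One possible shape, handling the quoting used in the examples above (fields double-quoted, embedded quotes doubled) and returning one object per data row:

```javascript
// Minimal CSV parser sketch (assumed helper, not part of the spec).
function parseCsv(text) {
  const rows = []
  let row = [], field = '', inQuotes = false
  for (let i = 0; i < text.length; i++) {
    const c = text[i]
    if (inQuotes) {
      if (c === '"') {
        if (text[i + 1] === '"') { field += '"'; i++ }  // doubled quote → literal "
        else inQuotes = false
      } else field += c
    } else if (c === '"') inQuotes = true
    else if (c === ',') { row.push(field); field = '' }
    else if (c === '\n') { row.push(field); rows.push(row); row = []; field = '' }
    else if (c !== '\r') field += c
  }
  if (field !== '' || row.length) { row.push(field); rows.push(row) }
  const header = rows[0]
  return rows.slice(1).map(r => Object.fromEntries(header.map((h, j) => [h, r[j] ?? ''])))
}
```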
---
## Agent Registry (Interactive Agents)
| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| discussion-feedback | agents/discussion-feedback.md | 2.3 (wait-respond) | Collect user feedback after discussion round, create dynamic tasks | post-wave (after discussant wave) |
| topic-analyzer | agents/topic-analyzer.md | 2.3 (wait-respond) | Parse topic, detect dimensions, select pipeline mode and perspectives | standalone (Phase 0) |
> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.
---
## Output Artifacts
| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
---
## Session Structure
```
.workflow/.csv-wave/{session-id}/
├── tasks.csv # Master state (all tasks, both modes)
├── results.csv # Final results export
├── discoveries.ndjson # Shared discovery board (all agents)
├── context.md # Human-readable report
├── wave-{N}.csv # Temporary per-wave input (csv-wave only)
└── interactive/ # Interactive task artifacts
└── {id}-result.json # Per-task results
```
---
## Implementation
### Session Initialization
```javascript
// UTC+8 wall-clock timestamp (ISO format; the trailing 'Z' is kept as-is by convention)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3
const modeMatch = $ARGUMENTS.match(/--mode\s+(quick|standard|deep)/)
const explicitMode = modeMatch ? modeMatch[1] : null
// Clean requirement text (remove flags)
const topic = $ARGUMENTS
.replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+|--mode\s+\w+/g, '')
.trim()
const slug = topic.toLowerCase()
.replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
.substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
let sessionId = `uan-${slug}-${dateStr}`
let sessionFolder = `.workflow/.csv-wave/${sessionId}`
// Continue mode: use the session id passed as the argument, else fall back to the most recent session
if (continueMode) {
  if (/^uan-/.test(topic)) {
    sessionId = topic
    sessionFolder = `.workflow/.csv-wave/${sessionId}`
  } else {
    const existing = Bash(`ls -dt .workflow/.csv-wave/uan-*/ 2>/dev/null | head -1`).trim()
    if (existing) {
      sessionFolder = existing.replace(/\/+$/, '')
      sessionId = sessionFolder.split('/').pop()
    }
  }
}
Bash(`mkdir -p ${sessionFolder}/interactive`)
```
---
### Phase 0: Pre-Wave Interactive
**Objective**: Parse topic, detect analysis dimensions, select pipeline mode, and assign perspectives.
**Execution**:
```javascript
const analyzer = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-ultra-analyze/agents/topic-analyzer.md (MUST read first)
2. Read: .workflow/project-tech.json (if exists)
---
Goal: Analyze topic and recommend pipeline configuration
Topic: ${topic}
Explicit Mode: ${explicitMode || 'auto-detect'}
### Task
1. Detect analysis dimensions from topic keywords:
- architecture, implementation, performance, security, concept, comparison, decision
2. Select perspectives based on dimensions:
- technical, architectural, business, domain_expert
3. Determine pipeline mode (if not explicitly set):
- Complexity 1-3 → quick, 4-6 → standard, 7+ → deep
4. Return structured configuration
`
})
const analyzerResult = wait({ ids: [analyzer], timeout_ms: 120000 })
if (analyzerResult.timed_out) {
send_input({ id: analyzer, message: "Please finalize and output current findings." })
wait({ ids: [analyzer], timeout_ms: 60000 })
}
close_agent({ id: analyzer })
// Parse result: pipeline_mode, perspectives[], dimensions[], depth
Write(`${sessionFolder}/interactive/topic-analyzer-result.json`, JSON.stringify({
task_id: "topic-analysis",
status: "completed",
pipeline_mode: parsedMode,
perspectives: parsedPerspectives,
dimensions: parsedDimensions,
depth: parsedDepth,
timestamp: getUtc8ISOString()
}))
```
If not AUTO_YES, present user with configuration for confirmation:
```javascript
if (!AUTO_YES) {
const answer = AskUserQuestion({
questions: [{
question: `Topic: "${topic}"\nPipeline: ${pipeline_mode}\nPerspectives: ${perspectives.join(', ')}\nDimensions: ${dimensions.join(', ')}\n\nApprove?`,
header: "Analysis Configuration",
multiSelect: false,
options: [
{ label: "Approve", description: `Use ${pipeline_mode} mode with ${perspectives.length} perspectives` },
{ label: "Quick", description: "1 explorer → 1 analyst → synthesizer (fast)" },
{ label: "Standard", description: "N explorers → N analysts → discussion → synthesizer" },
{ label: "Deep", description: "N explorers → N analysts → discussion loop (up to 5 rounds) → synthesizer" }
]
}]
})
}
```
**Success Criteria**:
- Refined requirements available for Phase 1 decomposition
- Interactive agents closed, results stored
---
### Phase 1: Requirement → CSV + Classification
**Objective**: Build tasks.csv from selected pipeline mode and perspectives.
**Decomposition Rules**:
| Pipeline | Tasks | Wave Structure |
|----------|-------|---------------|
| quick | EXPLORE-001 → ANALYZE-001 → SYNTH-001 | 3 waves, serial, depth=1 |
| standard | EXPLORE-001..N → ANALYZE-001..N → DISCUSS-001 → SYNTH-001 | 4 wave groups, parallel explore+analyze |
| deep | EXPLORE-001..N → ANALYZE-001..N → DISCUSS-001 (→ dynamic tasks) → SYNTH-001 | 3+ waves, SYNTH created after discussion loop |
Where N = number of selected perspectives.
**Classification Rules**:
All work tasks (exploration, analysis, discussion processing, synthesis) are `csv-wave`. The discussion feedback gate (user interaction after discussant completes) is `interactive`.
**Pipeline Task Definitions**:
#### Quick Pipeline (3 csv-wave tasks)
| Task ID | Role | Wave | Deps | Perspective | Description |
|---------|------|------|------|-------------|-------------|
| EXPLORE-001 | explorer | 1 | (none) | general | Explore codebase structure for analysis topic |
| ANALYZE-001 | analyst | 2 | EXPLORE-001 | technical | Deep analysis from technical perspective |
| SYNTH-001 | synthesizer | 3 | ANALYZE-001 | (all) | Integrate analysis into final conclusions |
#### Standard Pipeline (2N+2 csv-wave tasks + 1 interactive gate, parallel waves)
| Task ID | Role | Wave | Deps | Perspective | Description |
|---------|------|------|------|-------------|-------------|
| EXPLORE-001..N | explorer | 1 | (none) | per-perspective | Parallel codebase exploration, one per perspective |
| ANALYZE-001..N | analyst | 2 | EXPLORE-N | per-perspective | Parallel deep analysis, one per perspective |
| DISCUSS-001 | discussant | 3 | all ANALYZE-* | (all) | Aggregate analyses, identify themes and conflicts |
| FEEDBACK-001 | (interactive) | 4 | DISCUSS-001 | - | User feedback: done → create SYNTH, continue → more discussion |
| SYNTH-001 | synthesizer | 5 | FEEDBACK-001 | (all) | Cross-perspective integration and conclusions |
#### Deep Pipeline (2N+1 initial csv-wave tasks + dynamic)
Same as Standard, but SYNTH-001 is omitted initially. Created dynamically after the discussion loop (up to 5 rounds) completes. Additional dynamic tasks:
- `DISCUSS-N` — subsequent discussion round
- `ANALYZE-fix-N` — supplementary analysis with adjusted focus
- `SYNTH-001` — created after final discussion round
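The standard-pipeline table above can be generated mechanically. A sketch, assuming a `buildStandardTasks(perspectives)` helper (hypothetical name) that fills the tasks.csv schema defaults:

```javascript
// Expand the selected perspectives into the standard pipeline's task rows.
function buildStandardTasks(perspectives) {
  const pad = n => String(n).padStart(3, '0')
  const base = { dimensions: '', discussion_round: '0', discussion_type: '',
                 status: 'pending', findings: '', error: '' }
  const explores = perspectives.map((p, i) => ({ ...base,
    id: `EXPLORE-${pad(i + 1)}`, role: 'explorer', perspective: p,
    deps: '', context_from: '', exec_mode: 'csv-wave', wave: '1' }))
  const analyzes = perspectives.map((p, i) => ({ ...base,
    id: `ANALYZE-${pad(i + 1)}`, role: 'analyst', perspective: p,
    deps: `EXPLORE-${pad(i + 1)}`, context_from: `EXPLORE-${pad(i + 1)}`,
    exec_mode: 'csv-wave', wave: '2' }))
  const allAnalyze = analyzes.map(t => t.id).join(';')
  return [...explores, ...analyzes,
    { ...base, id: 'DISCUSS-001', role: 'discussant', perspective: '',
      discussion_round: '1', discussion_type: 'initial',
      deps: allAnalyze, context_from: allAnalyze, exec_mode: 'csv-wave', wave: '3' },
    { ...base, id: 'FEEDBACK-001', role: '', perspective: '',
      deps: 'DISCUSS-001', context_from: '', exec_mode: 'interactive', wave: '4' },
    { ...base, id: 'SYNTH-001', role: 'synthesizer', perspective: '',
      deps: 'FEEDBACK-001', context_from: 'all', exec_mode: 'csv-wave', wave: '5' }]
}
```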
**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).
**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)
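The wave computation above can be sketched as Kahn's algorithm with depth tracking; `deps` uses the schema's semicolon-separated format, and an incomplete ordering signals a cycle:

```javascript
// Assign each task a 1-based wave = its BFS depth in the dependency DAG.
function computeWaves(tasks) {
  const ids = new Set(tasks.map(t => t.id))
  const depsOf = t => t.deps.split(';').filter(d => ids.has(d))
  const indeg = new Map(tasks.map(t => [t.id, depsOf(t).length]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) for (const d of depsOf(t)) dependents.get(d).push(t.id)
  const wave = new Map()
  let frontier = tasks.filter(t => indeg.get(t.id) === 0).map(t => t.id)
  for (let depth = 1; frontier.length > 0; depth++) {
    const next = []
    for (const id of frontier) {
      wave.set(id, depth)
      for (const dep of dependents.get(id)) {
        indeg.set(dep, indeg.get(dep) - 1)
        if (indeg.get(dep) === 0) next.push(dep)
      }
    }
    frontier = next
  }
  // Any task never reached still has unmet deps → circular dependency.
  if (wave.size !== tasks.length) throw new Error('Circular dependency detected')
  return wave
}
```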
---
### Phase 2: Wave Execution Engine (Extended)
**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.
```javascript
const failedIds = new Set()
const skippedIds = new Set()
let discussionRound = 0
const MAX_DISCUSSION_ROUNDS = pipeline_mode === 'deep' ? 5 : pipeline_mode === 'standard' ? 1 : 0
// maxWave grows when the feedback gate creates dynamic tasks in later waves
let maxWave = Math.max(...parseCsv(Read(`${sessionFolder}/tasks.csv`)).map(t => parseInt(t.wave)))
for (let wave = 1; wave <= maxWave; wave++) {
console.log(`\n## Wave ${wave}/${maxWave}\n`)
// 1. Read current master CSV
const masterCsv = parseCsv(Read(`${sessionFolder}/tasks.csv`))
// 2. Separate csv-wave and interactive tasks for this wave
const waveTasks = masterCsv.filter(row => parseInt(row.wave) === wave)
const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')
// 3. Skip tasks whose deps failed
const executableCsvTasks = []
for (const task of csvTasks) {
const deps = task.deps.split(';').filter(Boolean)
if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
skippedIds.add(task.id)
updateMasterCsvRow(sessionFolder, task.id, {
status: 'skipped', error: 'Dependency failed or skipped'
})
continue
}
executableCsvTasks.push(task)
}
// 4. Build prev_context for each csv-wave task
for (const task of executableCsvTasks) {
const contextIds = task.context_from.split(';').filter(Boolean)
const prevFindings = contextIds
.map(id => {
const prevRow = masterCsv.find(r => r.id === id)
if (prevRow && prevRow.status === 'completed' && prevRow.findings) {
return `[Task ${id}: ${prevRow.title}] ${prevRow.findings}`
}
return null
})
.filter(Boolean)
.join('\n')
task.prev_context = prevFindings || 'No previous context available'
}
// 5. Write wave CSV and execute csv-wave tasks
if (executableCsvTasks.length > 0) {
const waveHeader = 'id,title,description,role,perspective,dimensions,discussion_round,discussion_type,deps,context_from,exec_mode,wave,prev_context'
const waveRows = executableCsvTasks.map(t =>
[t.id, t.title, t.description, t.role, t.perspective, t.dimensions,
t.discussion_round, t.discussion_type, t.deps, t.context_from, t.exec_mode, t.wave, t.prev_context]
.map(cell => `"${String(cell).replace(/"/g, '""')}"`)
.join(',')
)
Write(`${sessionFolder}/wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))
const waveResult = spawn_agents_on_csv({
csv_path: `${sessionFolder}/wave-${wave}.csv`,
id_column: "id",
instruction: buildAnalysisInstruction(sessionFolder, wave),
max_concurrency: maxConcurrency,
max_runtime_seconds: 600,
output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
output_schema: {
type: "object",
properties: {
id: { type: "string" },
status: { type: "string", enum: ["completed", "failed"] },
findings: { type: "string" },
error: { type: "string" }
},
required: ["id", "status", "findings"]
}
})
// Merge results into master CSV
const waveResults = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
for (const result of waveResults) {
updateMasterCsvRow(sessionFolder, result.id, {
status: result.status,
findings: result.findings || '',
error: result.error || ''
})
if (result.status === 'failed') failedIds.add(result.id)
}
Bash(`rm -f "${sessionFolder}/wave-${wave}.csv"`)
}
// 6. Execute post-wave interactive tasks (Discussion Feedback)
for (const task of interactiveTasks) {
if (task.status !== 'pending') continue
const deps = task.deps.split(';').filter(Boolean)
if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
skippedIds.add(task.id)
continue
}
discussionRound++
// Discussion Feedback Gate
if (pipeline_mode === 'quick' || discussionRound > MAX_DISCUSSION_ROUNDS) {
// No discussion or max rounds reached — proceed to synthesis
if (!masterCsv.find(t => t.id === 'SYNTH-001')) {
// Create SYNTH-001 dynamically
const lastDiscuss = masterCsv.filter(t => t.id.startsWith('DISCUSS'))
.sort((a, b) => b.id.localeCompare(a.id))[0]
addTaskToMasterCsv(sessionFolder, {
id: 'SYNTH-001', title: 'Final synthesis',
description: 'Integrate all analysis into final conclusions',
role: 'synthesizer', perspective: '', dimensions: '',
discussion_round: '0', discussion_type: '',
deps: lastDiscuss ? lastDiscuss.id : '', context_from: 'all',
exec_mode: 'csv-wave', wave: String(wave + 1),
status: 'pending', findings: '', error: ''
})
maxWave = wave + 1
}
updateMasterCsvRow(sessionFolder, task.id, {
status: 'completed',
findings: `Discussion round ${discussionRound}: proceeding to synthesis`
})
continue
}
// Spawn discussion feedback agent
const feedbackAgent = spawn_agent({
message: `
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-ultra-analyze/agents/discussion-feedback.md (MUST read first)
2. Read: ${sessionFolder}/discoveries.ndjson (shared discoveries)
---
Goal: Collect user feedback on discussion round ${discussionRound}
Session: ${sessionFolder}
Discussion Round: ${discussionRound}/${MAX_DISCUSSION_ROUNDS}
Pipeline Mode: ${pipeline_mode}
### Context
The discussant has completed round ${discussionRound}. Present the user with discussion results and collect feedback on next direction.
`
})
const feedbackResult = wait({ ids: [feedbackAgent], timeout_ms: 300000 })
if (feedbackResult.timed_out) {
send_input({ id: feedbackAgent, message: "Please finalize: user did not respond, default to 'Done'." })
wait({ ids: [feedbackAgent], timeout_ms: 60000 })
}
close_agent({ id: feedbackAgent })
// Parse feedback decision: "continue_deeper" | "adjust_direction" | "done"
Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
task_id: task.id, status: "completed",
discussion_round: discussionRound,
feedback: feedbackDecision,
timestamp: getUtc8ISOString()
}))
// Handle feedback
if (feedbackDecision === 'done') {
// Create SYNTH-001 blocked by last DISCUSS task
addTaskToMasterCsv(sessionFolder, {
id: 'SYNTH-001', deps: task.id.replace('FEEDBACK', 'DISCUSS'),
role: 'synthesizer', exec_mode: 'csv-wave', wave: String(wave + 1)
})
maxWave = wave + 1
} else if (feedbackDecision === 'adjust_direction') {
// Create ANALYZE-fix-N and DISCUSS-N+1
const fixId = `ANALYZE-fix-${discussionRound}`
const nextDiscussId = `DISCUSS-${String(discussionRound + 1).padStart(3, '0')}`
addTaskToMasterCsv(sessionFolder, {
id: fixId, role: 'analyst', exec_mode: 'csv-wave', wave: String(wave + 1)
})
addTaskToMasterCsv(sessionFolder, {
id: nextDiscussId, role: 'discussant', deps: fixId,
exec_mode: 'csv-wave', wave: String(wave + 2)
})
addTaskToMasterCsv(sessionFolder, {
id: `FEEDBACK-${String(discussionRound + 1).padStart(3, '0')}`,
exec_mode: 'interactive', deps: nextDiscussId, wave: String(wave + 3)
})
maxWave = wave + 3
} else {
// continue_deeper: Create DISCUSS-N+1
const nextDiscussId = `DISCUSS-${String(discussionRound + 1).padStart(3, '0')}`
addTaskToMasterCsv(sessionFolder, {
id: nextDiscussId, role: 'discussant', exec_mode: 'csv-wave', wave: String(wave + 1)
})
addTaskToMasterCsv(sessionFolder, {
id: `FEEDBACK-${String(discussionRound + 1).padStart(3, '0')}`,
exec_mode: 'interactive', deps: nextDiscussId, wave: String(wave + 2)
})
maxWave = wave + 2
}
updateMasterCsvRow(sessionFolder, task.id, {
status: 'completed',
findings: `Discussion feedback: ${feedbackDecision}, round ${discussionRound}`
})
}
}
```
**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- Discussion loop controlled with proper round tracking
- Dynamic tasks created correctly based on user feedback
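The engine above assumes `updateMasterCsvRow` and `addTaskToMasterCsv` without showing them. Pure-function sketches of the row transforms (hypothetical shapes; the real helpers would re-serialize the rows and `Write` tasks.csv):

```javascript
// Patch one row of the parsed master CSV by task id.
function updateRow(rows, id, patch) {
  return rows.map(r => (r.id === id ? { ...r, ...patch } : r))
}
// Append a new task row, refusing duplicate ids.
function addRow(rows, task) {
  if (rows.some(r => r.id === task.id)) throw new Error(`Duplicate task id: ${task.id}`)
  return [...rows, task]
}
```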
---
### Phase 3: Post-Wave Interactive
**Objective**: Handle discussion loop completion and ensure synthesis is triggered.
After all discussion rounds are exhausted or user chooses "done":
1. Ensure SYNTH-001 exists in master CSV
2. Ensure SYNTH-001 is unblocked (blocked by last completed discussion task)
3. Execute remaining waves (synthesis)
**Success Criteria**:
- Post-wave interactive processing complete
- Interactive agents closed, results stored
---
### Phase 4: Results Aggregation
**Objective**: Generate final results and human-readable report.
```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
Write(`${sessionFolder}/results.csv`, masterCsv)
const tasks = parseCsv(masterCsv)
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
const skipped = tasks.filter(t => t.status === 'skipped')
const contextContent = `# Ultra Analyze Report
**Session**: ${sessionId}
**Topic**: ${topic}
**Pipeline**: ${pipeline_mode}
**Perspectives**: ${perspectives.join(', ')}
**Discussion Rounds**: ${discussionRound}
**Completed**: ${getUtc8ISOString()}
---
## Summary
| Metric | Count |
|--------|-------|
| Total Tasks | ${tasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| Discussion Rounds | ${discussionRound} |
---
## Wave Execution
${waveDetails}
---
## Analysis Artifacts
- Explorations: discoveries with type "exploration" in discoveries.ndjson
- Analyses: discoveries with type "analysis" in discoveries.ndjson
- Discussion: discoveries with type "discussion" in discoveries.ndjson
- Conclusions: discoveries with type "conclusion" in discoveries.ndjson
---
## Conclusions
${synthesisFindings}
`
Write(`${sessionFolder}/context.md`, contextContent)
```
If not AUTO_YES, offer completion options:
```javascript
if (!AUTO_YES) {
const answer = AskUserQuestion({
questions: [{
question: "Ultra-Analyze pipeline complete. What would you like to do?",
header: "Completion",
multiSelect: false,
options: [
{ label: "Archive & Clean (Recommended)", description: "Archive session" },
{ label: "Keep Active", description: "Keep session for follow-up" },
{ label: "Export Results", description: "Export deliverables to specified location" }
]
}]
})
}
```
**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed
- Summary displayed to user
---
## Shared Discovery Board Protocol
All agents across all waves share `discoveries.ndjson`. This enables cross-role knowledge sharing.
**Discovery Types**:
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `exploration` | `data.perspective+data.file` | `{perspective, file, relevance, summary, patterns[]}` | Explored file/module |
| `analysis` | `data.perspective+data.insight` | `{perspective, insight, confidence, evidence, file_ref}` | Analysis insight |
| `pattern` | `data.name` | `{name, file, description, type}` | Code/architecture pattern |
| `discussion_point` | `data.topic` | `{topic, perspectives[], convergence, open_questions[]}` | Discussion point |
| `recommendation` | `data.action` | `{action, rationale, priority, confidence}` | Recommendation |
| `conclusion` | `data.point` | `{point, evidence, confidence, perspectives_supporting[]}` | Final conclusion |
**Format**: NDJSON, each line is self-contained JSON:
```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"EXPLORE-001","type":"exploration","data":{"perspective":"technical","file":"src/auth/index.ts","relevance":"high","summary":"Auth module entry point with OAuth and JWT exports","patterns":["module-pattern","strategy-pattern"]}}
{"ts":"2026-03-08T10:05:00+08:00","worker":"ANALYZE-001","type":"analysis","data":{"perspective":"technical","insight":"Auth module uses strategy pattern for provider switching","confidence":"high","evidence":"src/auth/strategies/*.ts","file_ref":"src/auth/index.ts:15"}}
{"ts":"2026-03-08T10:10:00+08:00","worker":"DISCUSS-001","type":"discussion_point","data":{"topic":"Authentication scalability","perspectives":["technical","architectural"],"convergence":"Both perspectives agree on stateless JWT approach","open_questions":["Token refresh strategy for long sessions"]}}
```
**Protocol Rules**:
1. Read board before own exploration → skip covered areas
2. Write discoveries immediately via `echo >>` → don't batch
3. Deduplicate — check existing entries by type + dedup key
4. Append-only — never modify or delete existing lines
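Rules 2-4 can be combined into one append helper. A sketch, assuming the caller supplies the current board contents as a string; the dedup keys mirror the table above, and malformed lines are skipped per the error-handling policy:

```javascript
// Dedup key extractor per discovery type (from the protocol table).
const DEDUP_KEY = {
  exploration: d => `${d.perspective}+${d.file}`,
  analysis: d => `${d.perspective}+${d.insight}`,
  pattern: d => d.name,
  discussion_point: d => d.topic,
  recommendation: d => d.action,
  conclusion: d => d.point
}
// Return the board with the entry appended, or unchanged if it duplicates
// an existing entry of the same type. Never rewrites existing lines.
function appendDiscovery(existingNdjson, entry) {
  const key = DEDUP_KEY[entry.type](entry.data)
  const seen = existingNdjson.split('\n').filter(Boolean).map(line => {
    try { return JSON.parse(line) } catch (e) { return null }  // ignore corrupt lines
  }).filter(Boolean)
  if (seen.some(e => e.type === entry.type && DEDUP_KEY[e.type] && DEDUP_KEY[e.type](e.data) === key))
    return existingNdjson
  return existingNdjson + JSON.stringify(entry) + '\n'
}
```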
---
## Error Handling
| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Discussion loop exceeds 5 rounds | Force synthesis, offer continuation |
| Explorer finds nothing | Continue with limited context, note limitation |
| CLI tool unavailable | Fallback chain: gemini → codex → direct analysis |
| User timeout in discussion | Save state, default to "done", proceed to synthesis |
| Continue mode: no session found | List available sessions, prompt user to select |
---
## Core Rules
1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when user interaction is needed
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson — both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped

---
**File**: `agents/discussion-feedback.md`
# Discussion Feedback Agent
Collect user feedback after a discussion round and determine next action for the analysis pipeline.
## Identity
- **Type**: `interactive`
- **Responsibility**: User feedback collection and discussion loop control
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Present discussion results to the user clearly
- Collect explicit user feedback via AskUserQuestion
- Return structured decision for orchestrator to act on
- Respect max discussion round limits
### MUST NOT
- Perform analysis or exploration (delegate to csv-wave agents)
- Create tasks directly (orchestrator handles dynamic task creation)
- Skip user interaction (this is the user-in-the-loop checkpoint)
- Exceed the configured max discussion rounds
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load discussion results and session state |
| `AskUserQuestion` | builtin | Collect user feedback on discussion |
---
## Execution
### Phase 1: Context Loading
**Objective**: Load discussion results for user presentation
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Session folder | Yes | Path to session directory |
| Discussion round | Yes | Current round number |
| Max discussion rounds | Yes | Maximum allowed rounds |
| Pipeline mode | Yes | quick, standard, or deep |
**Steps**:
1. Read the session's discoveries.ndjson for discussion_point entries
2. Parse prev_context for the discussant's findings
3. Extract key themes, conflicts, and open questions from findings
4. Load current discussion_round from spawn message
**Output**: Discussion summary ready for user presentation
---
### Phase 2: User Feedback Collection
**Objective**: Present results and collect next-step decision
**Steps**:
1. Format discussion summary for user:
- Convergent themes identified
- Conflicting views between perspectives
- Top open questions
- Round progress (current/max)
2. Present options via AskUserQuestion:
```
AskUserQuestion({
questions: [{
question: "Discussion round <N>/<max> complete.\n\nThemes: <themes>\nConflicts: <conflicts>\nOpen Questions: <questions>\n\nWhat next?",
header: "Discussion Feedback",
multiSelect: false,
options: [
{ label: "Continue deeper", description: "Current direction is good, investigate open questions deeper" },
{ label: "Adjust direction", description: "Shift analysis focus to a different area" },
{ label: "Done", description: "Sufficient depth reached, proceed to final synthesis" }
]
}]
})
```
3. If user chooses "Adjust direction":
- Follow up with another AskUserQuestion asking for the new focus area
- Capture the adjusted focus text
**Output**: User decision and optional adjusted focus
---
### Phase 3: Decision Formatting
**Objective**: Package user decision for orchestrator
**Steps**:
1. Map user choice to decision string:
| User Choice | Decision | Additional Data |
|------------|----------|-----------------|
| Continue deeper | `continue_deeper` | None |
| Adjust direction | `adjust_direction` | `adjusted_focus: <user input>` |
| Done | `done` | None |
2. Format structured output for orchestrator
**Output**: Structured decision
---
## Structured Output Template
```
## Summary
- Discussion Round: <current>/<max>
- User Decision: continue_deeper | adjust_direction | done
## Discussion Summary Presented
- Themes: <list>
- Conflicts: <list>
- Open Questions: <list>
## Decision Details
- Decision: <decision>
- Adjusted Focus: <focus text, if adjust_direction>
- Rationale: <user's reasoning, if provided>
## Next Action (for orchestrator)
- continue_deeper: Create DISCUSS-<N+1> task, then FEEDBACK-<N+1>
- adjust_direction: Create ANALYZE-fix-<N> task, then DISCUSS-<N+1>, then FEEDBACK-<N+1>
- done: Create SYNTH-001 task blocked by last DISCUSS task
```
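The three decisions map mechanically onto new tasks. A minimal shell sketch of that mapping (the `next_tasks` helper is hypothetical — the real orchestrator writes CSV rows, not ID strings; `n` is the current discussion round):

```shell
# Hypothetical sketch of the decision -> follow-up-task mapping
next_tasks() {
  local decision="$1" n="$2"
  case "$decision" in
    continue_deeper)  echo "DISCUSS-$((n + 1)) FEEDBACK-$((n + 1))" ;;
    adjust_direction) echo "ANALYZE-fix-$n DISCUSS-$((n + 1)) FEEDBACK-$((n + 1))" ;;
    done)             echo "SYNTH-001" ;;
    *)                return 1 ;;
  esac
}
```

Deriving the follow-up tasks from the decision string alone keeps the feedback agent stateless: it only reports the choice, and the orchestrator owns task creation.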
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| User does not respond | After timeout, default to "done" and proceed to synthesis |
| Max rounds reached | Inform the user this is the final round and offer only the "Done" option |
| No discussion data found | Present what is available, note limitations |
| Timeout approaching | Output current state with default "done" decision |


@@ -0,0 +1,153 @@
# Topic Analyzer Agent
Parse analysis topic, detect dimensions, select pipeline mode, and assign perspectives.
## Identity
- **Type**: `interactive`
- **Responsibility**: Topic analysis and pipeline configuration
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Perform text-level analysis only (no source code reading)
- Produce structured output with pipeline configuration
- Detect dimensions from topic keywords
- Recommend appropriate perspectives for the topic
### MUST NOT
- Read source code or explore codebase (that is the explorer's job)
- Perform any analysis (that is the analyst's job)
- Make final pipeline decisions without providing rationale
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load project context if available |
---
## Execution
### Phase 1: Dimension Detection
**Objective**: Scan topic keywords to identify analysis dimensions
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Topic text | Yes | The analysis topic from user |
| Explicit mode | No | --mode override if provided |
**Steps**:
1. Scan topic for dimension keywords:
| Dimension | Keywords |
|-----------|----------|
| architecture | architecture, design, structure |
| implementation | implement, code, source |
| performance | performance, optimize, speed |
| security | security, auth, vulnerability |
| concept | concept, theory, principle |
| comparison | compare, vs, difference |
| decision | decision, choice, tradeoff |
2. Select matching dimensions (default to general if none match)
**Output**: List of detected dimensions
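The keyword scan in the table above can be sketched as a series of case-insensitive matches (the `detect_dims` helper name is illustrative only — the agent performs this reasoning itself):

```shell
# Hypothetical sketch of the dimension keyword scan
detect_dims() {
  local topic="$1" dims=""
  echo "$topic" | grep -qiE 'architecture|design|structure' && dims="$dims architecture"
  echo "$topic" | grep -qiE 'implement|code|source'         && dims="$dims implementation"
  echo "$topic" | grep -qiE 'performance|optimize|speed'    && dims="$dims performance"
  echo "$topic" | grep -qiE 'security|auth|vulnerability'   && dims="$dims security"
  echo "$topic" | grep -qiE 'concept|theory|principle'      && dims="$dims concept"
  echo "$topic" | grep -qiE 'compare|vs|difference'         && dims="$dims comparison"
  echo "$topic" | grep -qiE 'decision|choice|tradeoff'      && dims="$dims decision"
  [ -z "$dims" ] && dims=" general"   # default when nothing matches
  echo "${dims# }"
}
```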
---
### Phase 2: Pipeline Mode Selection
**Objective**: Determine pipeline mode and depth
**Steps**:
1. If explicit `--mode` provided, use it directly
2. Otherwise, auto-detect from complexity scoring:
| Factor | Points |
|--------|--------|
| Per detected dimension | +1 |
| Deep-mode keywords (deep, thorough, detailed, comprehensive) | +2 |
| Cross-domain (3+ dimensions) | +1 |
| Score | Pipeline Mode |
|-------|--------------|
| 1-3 | quick |
| 4-6 | standard |
| 7+ | deep |
3. Determine depth = number of selected perspectives
**Output**: Pipeline mode and depth
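The scoring and threshold tables above combine into a few lines of arithmetic (sketch only; `pipeline_mode` is a hypothetical helper — dimensions are passed as trailing arguments):

```shell
# Hypothetical sketch of complexity scoring and mode thresholds
pipeline_mode() {
  local topic="$1"; shift          # remaining args: detected dimensions
  local score=$#                   # +1 per detected dimension
  echo "$topic" | grep -qiE 'deep|thorough|detailed|comprehensive' && score=$((score + 2))
  [ $# -ge 3 ] && score=$((score + 1))   # cross-domain bonus
  if   [ "$score" -le 3 ]; then echo quick
  elif [ "$score" -le 6 ]; then echo standard
  else                          echo deep
  fi
}
```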
---
### Phase 3: Perspective Assignment
**Objective**: Select analysis perspectives based on topic and dimensions
**Steps**:
1. Map dimensions to perspectives:
| Dimension Match | Perspective | Focus |
|----------------|-------------|-------|
| architecture, implementation | technical | Implementation details, code patterns |
| architecture, security | architectural | System design, scalability |
| concept, comparison, decision | business | Value, ROI, strategy |
| domain-specific keywords | domain_expert | Domain patterns, standards |
2. Quick mode: always 1 perspective (technical by default)
3. Standard/Deep mode: 2-4 perspectives based on dimension coverage
**Output**: List of perspectives with focus areas
---
## Structured Output Template
```
## Summary
- Topic: <topic>
- Pipeline Mode: <quick|standard|deep>
- Depth: <number of perspectives>
## Dimension Detection
- Detected dimensions: <list>
- Complexity score: <score>
## Perspectives
1. <perspective>: <focus area>
2. <perspective>: <focus area>
## Discussion Configuration
- Max discussion rounds: <0|1|5>
## Pipeline Structure
- Total tasks: <count>
- Parallel stages: <description>
- Dynamic tasks possible: <yes/no>
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Topic too vague | Suggest clarifying questions, default to standard mode |
| No dimension matches | Default to "general" dimension with technical perspective |
| Timeout approaching | Output current analysis with "PARTIAL" status |


@@ -0,0 +1,169 @@
## TASK ASSIGNMENT
### MANDATORY FIRST STEPS
1. Read shared discoveries: .workflow/.csv-wave/{session-id}/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)
---
## Your Task
**Task ID**: {id}
**Title**: {title}
**Role**: {role}
**Description**: {description}
**Perspective**: {perspective}
**Dimensions**: {dimensions}
**Discussion Round**: {discussion_round}
**Discussion Type**: {discussion_type}
### Previous Tasks' Findings (Context)
{prev_context}
---
## Execution Protocol
1. **Read discoveries**: Load shared discoveries from the session's discoveries.ndjson for cross-task context
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute by role**:
### Role: explorer (EXPLORE-* tasks)
Explore codebase structure from the assigned perspective, collecting structured context for downstream analysis.
- Determine exploration strategy by perspective:
| Perspective | Focus | Search Depth |
|-------------|-------|-------------|
| general | Overall codebase structure and patterns | broad |
| technical | Implementation details, code patterns, feasibility | medium |
| architectural | System design, module boundaries, interactions | broad |
| business | Business logic, domain models, value flows | medium |
| domain_expert | Domain patterns, standards, best practices | deep |
- Use available tools (Read, Glob, Grep, Bash) to search the codebase
- Collect: relevant files (path, relevance, summary), code patterns, key findings, module relationships
- Generate questions for downstream analysis
- Focus exploration on the dimensions listed in the Dimensions field
### Role: analyst (ANALYZE-* tasks)
Perform deep analysis on exploration results from the assigned perspective.
- Load exploration results from prev_context
- Detect if this is a direction-fix task (description mentions "adjusted focus"):
- Normal: analyze from assigned perspective using corresponding exploration results
- Direction-fix: re-analyze from adjusted perspective using all available explorations
- Select analysis approach by perspective:
| Perspective | CLI Tool | Focus |
|-------------|----------|-------|
| technical | gemini | Implementation patterns, code quality, feasibility |
| architectural | gemini | System design, scalability, component interactions |
| business | gemini | Value, ROI, stakeholder impact |
| domain_expert | gemini | Domain-specific patterns, best practices |
- Use `ccw cli` for deep analysis:
```bash
ccw cli -p "PURPOSE: Deep analysis of '<topic>' from <perspective> perspective
TASK: • Analyze patterns found in exploration • Generate insights with confidence levels • Identify discussion points
MODE: analysis
CONTEXT: @**/* | Memory: Exploration findings
EXPECTED: Structured insights with confidence levels and evidence" --tool gemini --mode analysis
```
- Generate structured output:
- key_insights: [{insight, confidence (high/medium/low), evidence (file:line)}]
- key_findings: [{finding, file_ref, impact}]
- discussion_points: [questions needing cross-perspective discussion]
- open_questions: [areas needing further exploration]
- recommendations: [{action, rationale, priority}]
### Role: discussant (DISCUSS-* tasks)
Process analysis results and generate discussion summary. Strategy depends on discussion type.
- **initial**: Cross-perspective aggregation
- Aggregate all analysis results from prev_context
- Identify convergent themes across perspectives
- Identify conflicting views between perspectives
- Generate top 5 discussion points and open questions
- Produce structured round summary
- **deepen**: Deep investigation of open questions
- Use CLI tool to investigate uncertain insights:
```bash
ccw cli -p "PURPOSE: Investigate open questions and uncertain insights
TASK: • Focus on questions from previous round • Find supporting evidence • Validate uncertain insights
MODE: analysis
CONTEXT: @**/*
EXPECTED: Evidence-based findings" --tool gemini --mode analysis
```
- **direction-adjusted**: Re-analysis from adjusted focus
- Use CLI to re-analyze from adjusted perspective based on user feedback
- **specific-questions**: Targeted Q&A
- Use CLI for targeted investigation of user-specified questions
- For all types, produce round summary:
- updated_understanding: {confirmed[], corrected[], new_insights[]}
- convergent themes, conflicting views
- remaining open questions
### Role: synthesizer (SYNTH-* tasks)
Integrate all explorations, analyses, and discussions into final conclusions.
- Read all available artifacts from prev_context (explorations, analyses, discussions)
- Execute synthesis in four steps:
1. **Theme Extraction**: Identify convergent themes across perspectives, rank by cross-perspective confirmation
2. **Conflict Resolution**: Identify contradictions, present trade-off analysis
3. **Evidence Consolidation**: Deduplicate findings, aggregate by file reference, assign confidence levels
4. **Recommendation Prioritization**: Sort by priority, deduplicate, cap at 10
- Confidence levels:
| Level | Criteria |
|-------|----------|
| High | Multiple sources confirm, strong evidence |
| Medium | Single source or partial evidence |
| Low | Speculative, needs verification |
- Produce final conclusions:
- Executive summary
- Key conclusions with evidence and confidence
- Prioritized recommendations
- Open questions
- Cross-perspective synthesis (convergent themes, conflicts resolved, unique contributions)
4. **Share discoveries**: Append exploration findings to shared board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> .workflow/.csv-wave/{session-id}/discoveries.ndjson
```
Discovery types to share:
- `exploration`: {perspective, file, relevance, summary, patterns[]} — explored file/module
- `analysis`: {perspective, insight, confidence, evidence, file_ref} — analysis insight
- `pattern`: {name, file, description, type} — code/architecture pattern
- `discussion_point`: {topic, perspectives[], convergence, open_questions[]} — discussion point
- `recommendation`: {action, rationale, priority, confidence} — recommendation
- `conclusion`: {point, evidence, confidence, perspectives_supporting[]} — final conclusion
5. **Report result**: Return JSON via report_agent_job_result
---
## Output (report_agent_job_result)
Return JSON:
```json
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "error": ""
}
```
**Role-specific findings guidance**:
- **explorer**: List file count, key files, patterns found. Example: "Found 12 files related to auth. Key: src/auth/index.ts (entry), src/auth/strategies/*.ts (providers). Patterns: strategy, middleware chain."
- **analyst**: List insight count, top insights with confidence. Example: "3 insights: (1) Strategy pattern for providers [high], (2) Missing token rotation [medium], (3) No rate limiting [high]. 2 discussion points."
- **discussant**: List themes, conflicts, question count. Example: "Convergent: JWT security (2 perspectives). Conflict: middleware approach. 3 open questions on refresh tokens."
- **synthesizer**: List conclusion count, top recommendations. Example: "5 conclusions, 4 recommendations. Top: Implement refresh token rotation [high priority, high confidence]."


@@ -0,0 +1,180 @@
# Team Ultra Analyze — CSV Schema
## Master CSV: tasks.csv
### Column Definitions
#### Input Columns (Set by Decomposer)
| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"EXPLORE-001"` |
| `title` | string | Yes | Short task title | `"Explore from technical perspective"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Search codebase from technical perspective..."` |
| `role` | string | Yes | Worker role: explorer, analyst, discussant, synthesizer | `"explorer"` |
| `perspective` | string | No | Analysis perspective: technical, architectural, business, domain_expert | `"technical"` |
| `dimensions` | string | No | Analysis dimensions (semicolon-separated) | `"architecture;implementation"` |
| `discussion_round` | integer | No | Discussion round number (0 = N/A, 1+ = round) | `"1"` |
| `discussion_type` | string | No | Discussion type: initial, deepen, direction-adjusted, specific-questions | `"initial"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"EXPLORE-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"EXPLORE-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
#### Computed Columns (Set by Wave Engine)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[Task EXPLORE-001] Found 12 relevant files..."` |
#### Output Columns (Set by Agent)
| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending``completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Found 12 files related to auth module..."` |
| `error` | string | Error message if failed | `""` |
---
### exec_mode Values
| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |
Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
---
### Example Data
```csv
id,title,description,role,perspective,dimensions,discussion_round,discussion_type,deps,context_from,exec_mode,wave,status,findings,error
"EXPLORE-001","Explore from technical perspective","Search codebase from technical perspective. Collect files, patterns, and findings related to authentication module.","explorer","technical","architecture;implementation","0","","","","csv-wave","1","pending","",""
"EXPLORE-002","Explore from architectural perspective","Search codebase from architectural perspective. Focus on module boundaries, component interactions, and system design patterns.","explorer","architectural","architecture;security","0","","","","csv-wave","1","pending","",""
"ANALYZE-001","Deep analysis from technical perspective","Analyze exploration results from technical perspective. Generate insights with confidence levels and evidence references.","analyst","technical","architecture;implementation","0","","EXPLORE-001","EXPLORE-001","csv-wave","2","pending","",""
"ANALYZE-002","Deep analysis from architectural perspective","Analyze exploration results from architectural perspective. Focus on system design quality and scalability.","analyst","architectural","architecture;security","0","","EXPLORE-002","EXPLORE-002","csv-wave","2","pending","",""
"DISCUSS-001","Initial discussion round","Aggregate all analysis results across perspectives. Identify convergent themes, conflicting views, and top discussion points.","discussant","","","1","initial","ANALYZE-001;ANALYZE-002","ANALYZE-001;ANALYZE-002","csv-wave","3","pending","",""
"FEEDBACK-001","Discussion feedback gate","Collect user feedback on discussion results. Decide: continue deeper, adjust direction, or proceed to synthesis.","","","","1","","DISCUSS-001","DISCUSS-001","interactive","4","pending","",""
"SYNTH-001","Final synthesis","Integrate all explorations, analyses, and discussions into final conclusions with prioritized recommendations.","synthesizer","","","0","","FEEDBACK-001","EXPLORE-001;EXPLORE-002;ANALYZE-001;ANALYZE-002;DISCUSS-001","csv-wave","5","pending","",""
```
---
### Column Lifecycle
```
Decomposer (Phase 1) Wave Engine (Phase 2) Agent (Execution)
───────────────────── ──────────────────── ─────────────────
id ───────────► id ──────────► id
title ───────────► title ──────────► (reads)
description ───────────► description ──────────► (reads)
role ───────────► role ──────────► (reads)
perspective ───────────► perspective ──────────► (reads)
dimensions ───────────► dimensions ──────────► (reads)
discussion_round ──────► discussion_round ─────► (reads)
discussion_type ───────► discussion_type ──────► (reads)
deps ───────────► deps ──────────► (reads)
context_from ──────────► context_from ─────────► (reads)
exec_mode ───────────► exec_mode ──────────► (reads)
wave ──────────► (reads)
prev_context ──────────► (reads)
status
findings
error
```
---
## Output Schema (JSON)
Agent output via `report_agent_job_result` (csv-wave tasks):
```json
{
"id": "EXPLORE-001",
"status": "completed",
"findings": "Found 12 files related to auth module. Key files: src/auth/index.ts, src/auth/strategies/*.ts. Patterns: strategy pattern for provider switching, middleware chain for request validation.",
"error": ""
}
```
Analyst output:
```json
{
"id": "ANALYZE-001",
"status": "completed",
"findings": "3 key insights: (1) Auth uses strategy pattern [high confidence], (2) JWT validation lacks refresh token rotation [medium], (3) Rate limiting missing on auth endpoints [high]. 2 discussion points identified.",
"error": ""
}
```
Discussant output:
```json
{
"id": "DISCUSS-001",
"status": "completed",
"findings": "Convergent themes: JWT security concerns (2 perspectives agree), strategy pattern approval. Conflicts: architectural vs technical on middleware approach. Top questions: refresh token strategy, rate limit placement.",
"error": ""
}
```
Interactive tasks write their output as structured text or JSON to `interactive/{id}-result.json`.
---
## Discovery Types
| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `exploration` | `data.perspective+data.file` | `{perspective, file, relevance, summary, patterns[]}` | Explored file or module |
| `analysis` | `data.perspective+data.insight` | `{perspective, insight, confidence, evidence, file_ref}` | Analysis insight |
| `pattern` | `data.name` | `{name, file, description, type}` | Code or architecture pattern |
| `discussion_point` | `data.topic` | `{topic, perspectives[], convergence, open_questions[]}` | Discussion point |
| `recommendation` | `data.action` | `{action, rationale, priority, confidence}` | Recommendation |
| `conclusion` | `data.point` | `{point, evidence, confidence, perspectives_supporting[]}` | Final conclusion |
### Discovery NDJSON Format
```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"EXPLORE-001","type":"exploration","data":{"perspective":"technical","file":"src/auth/index.ts","relevance":"high","summary":"Auth module entry point with OAuth and JWT exports","patterns":["module-pattern","strategy-pattern"]}}
{"ts":"2026-03-08T10:01:00+08:00","worker":"EXPLORE-001","type":"pattern","data":{"name":"strategy-pattern","file":"src/auth/strategies/","description":"Provider switching via strategy pattern","type":"behavioral"}}
{"ts":"2026-03-08T10:05:00+08:00","worker":"ANALYZE-001","type":"analysis","data":{"perspective":"technical","insight":"JWT validation lacks refresh token rotation","confidence":"medium","evidence":"No rotation logic in src/auth/jwt/verify.ts","file_ref":"src/auth/jwt/verify.ts:42"}}
{"ts":"2026-03-08T10:10:00+08:00","worker":"DISCUSS-001","type":"discussion_point","data":{"topic":"JWT Security","perspectives":["technical","architectural"],"convergence":"Both agree on rotation need","open_questions":["Sliding vs fixed window?"]}}
{"ts":"2026-03-08T10:15:00+08:00","worker":"SYNTH-001","type":"conclusion","data":{"point":"Auth module needs refresh token rotation","evidence":"src/auth/jwt/verify.ts lacks rotation","confidence":"high","perspectives_supporting":["technical","architectural"]}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
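Because the board is plain NDJSON, any agent can query it with standard tools. A hedged sketch, assuming `jq` is on PATH (the sample filename is illustrative): slurp the lines into an array, then dedup `pattern` entries by their key, `data.name`.

```shell
# Sketch: dedup pattern discoveries by data.name (unique_by also sorts by key)
board=discoveries.sample.ndjson
printf '%s\n' \
  '{"ts":"t1","worker":"EXPLORE-001","type":"pattern","data":{"name":"strategy-pattern"}}' \
  '{"ts":"t2","worker":"EXPLORE-002","type":"pattern","data":{"name":"strategy-pattern"}}' \
  '{"ts":"t3","worker":"EXPLORE-002","type":"pattern","data":{"name":"middleware-chain"}}' > "$board"
jq -s -r 'map(select(.type == "pattern")) | unique_by(.data.name) | .[].data.name' "$board"
```

The same slurp-and-filter shape works for any row of the dedup-key table: swap the `type` selector and the `unique_by` path.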
---
## Cross-Mechanism Context Flow
| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |
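The second row's read side can be a single `jq` call (sketch, assuming `jq` is available and the result file follows the report JSON shape; the file is created here only so the example is self-contained):

```shell
# Sketch: read an interactive task's result and format it for prev_context
mkdir -p interactive
printf '%s' '{"id":"FEEDBACK-001","status":"completed","findings":"User chose continue_deeper","error":""}' \
  > interactive/FEEDBACK-001-result.json
findings="$(jq -r '.findings' interactive/FEEDBACK-001-result.json)"
echo "[Task FEEDBACK-001] $findings"
```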
---
## Validation Rules
| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Valid role | role in {explorer, analyst, discussant, synthesizer} (may be empty for interactive tasks, e.g. FEEDBACK-*) | "Invalid role: {role}" |
| Valid perspective | perspective in {technical, architectural, business, domain_expert, general, ""} | "Invalid perspective: {value}" |
| Discussion round non-negative | discussion_round >= 0 | "Invalid discussion_round: {value}" |
| Cross-mechanism deps | Interactive→CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
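The circular-dependency check can lean on POSIX `tsort`, which reports an error and exits non-zero when given a cycle (sketch only; the `deps` column must first be expanded into `dep task` edge pairs, one per line):

```shell
# Acyclic edges topo-sort cleanly, yielding a valid wave ordering
printf 'EXPLORE-001 ANALYZE-001\nANALYZE-001 DISCUSS-001\n' | tsort
# A cycle makes tsort fail, which maps to the "Circular dependency detected" error
printf 'A B\nB A\n' | tsort 2>/dev/null && echo "no cycle" || echo "cycle detected"
```

Using `tsort` for validation and deriving wave numbers from the same topological order keeps the two checks consistent by construction.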