mirror of https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-28 20:01:17 +08:00

feat: migrate all codex team skills from spawn_agents_on_csv to spawn_agent + wait_agent architecture

- Delete 21 old team skill directories using the CSV-wave pipeline pattern (~100+ files)
- Delete old team-lifecycle (v3) and team-planex-v2
- Create generic team-worker.toml and team-supervisor.toml (replacing tlv4-specific TOMLs)
- Convert 19 team skills from Claude Code format (Agent/SendMessage/TaskCreate) to Codex format (spawn_agent/wait_agent/tasks.json/request_user_input)
- Update team-lifecycle-v4 to use generic agent types (team_worker/team_supervisor)
- Convert all coordinator role files: dispatch.md, monitor.md, role.md
- Convert all worker role files: remove run_in_background, fix Bash syntax
- Convert all specs/pipelines.md references
- Final state: 20 team skills, 217 .md files, zero Claude Code API residuals

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---
name: team-ultra-analyze
description: Deep collaborative analysis pipeline. Multi-perspective exploration, deep analysis, user-driven discussion loops, and cross-perspective synthesis. Supports Quick, Standard, and Deep pipeline modes. Triggers on "team ultra-analyze", "team analyze".
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] [--mode quick|standard|deep] \"analysis topic\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, request_user_input
---

# Team Ultra Analyze

## Auto Mode

When `--yes` or `-y`: auto-confirm task decomposition, skip interactive validation, use defaults.

## Usage

Deep collaborative analysis: explore -> analyze -> discuss -> synthesize. Supports Quick/Standard/Deep pipeline modes with configurable depth (N parallel agents). Discussion loops enable user-guided progressive understanding.

```bash
$team-ultra-analyze "Analyze authentication module architecture and security"
$team-ultra-analyze -c 4 --mode deep "Deep analysis of payment processing pipeline"
$team-ultra-analyze -y --mode quick "Quick overview of API endpoint structure"
$team-ultra-analyze --continue "uan-auth-analysis-20260308"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--mode`: Pipeline mode override (quick|standard|deep)
- `--continue`: Resume an existing session

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---

## Overview

Deep collaborative analysis with multi-perspective exploration, deep analysis, user-driven discussion loops, and cross-perspective synthesis. Each perspective gets its own explorer and analyst, working in parallel. Discussion rounds allow the user to steer analysis depth and direction.

**Execution Model**: Hybrid. CSV wave pipeline (primary) + individual agent spawn (secondary, for the discussion feedback loop).

## Architecture

```
┌─────────────────────────────────────────────────────────────────────────┐
│                       TEAM ULTRA ANALYZE WORKFLOW                       │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  Phase 0: Pre-Wave Interactive                                          │
│  ├─ Topic parsing + dimension detection                                 │
│  ├─ Pipeline mode selection (quick/standard/deep)                       │
│  ├─ Perspective assignment                                              │
│  └─ Output: refined requirements for decomposition                      │
│                                                                         │
│  Phase 1: Requirement → CSV + Classification                            │
│  ├─ Parse topic into exploration + analysis + discussion + synthesis    │
│  ├─ Assign roles: explorer, analyst, discussant, synthesizer            │
│  ├─ Classify tasks: csv-wave | interactive (exec_mode)                  │
│  ├─ Compute dependency waves (topological sort → depth grouping)        │
│  ├─ Generate tasks.csv with wave + exec_mode columns                    │
│  └─ User validates task breakdown (skip if -y)                          │
│                                                                         │
│  Phase 2: Wave Execution Engine (Extended)                              │
│  ├─ For each wave (1..N):                                               │
│  │  ├─ Build wave CSV (filter csv-wave tasks for this wave)             │
│  │  ├─ Inject previous findings into prev_context column                │
│  │  ├─ spawn_agents_on_csv(wave CSV)                                    │
│  │  ├─ Execute post-wave interactive tasks (if any)                     │
│  │  ├─ Merge all results into master tasks.csv                          │
│  │  └─ Check: any failed? → skip dependents                             │
│  └─ discoveries.ndjson shared across all modes (append-only)            │
│                                                                         │
│  Phase 3: Post-Wave Interactive (Discussion Loop)                       │
│  ├─ After discussant completes: user feedback gate                      │
│  ├─ User chooses: continue deeper | adjust direction | done             │
│  ├─ Creates dynamic tasks (DISCUSS-N, ANALYZE-fix-N) as needed          │
│  └─ Max discussion rounds: quick=0, standard=1, deep=5                  │
│                                                                         │
│  Phase 4: Results Aggregation                                           │
│  ├─ Export final results.csv                                            │
│  ├─ Generate context.md with all findings                               │
│  ├─ Display summary: completed/failed/skipped per wave                  │
│  └─ Offer: view results | export | archive                              │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```

```
Skill(skill="team-ultra-analyze", args="<topic>")
                    |
        SKILL.md (this file) = Router
                    |
     +--------------+--------------+
     |                             |
no --role flag                --role <name>
     |                             |
Coordinator                     Worker
roles/coordinator/role.md       roles/<name>/role.md
     |
     +-- analyze -> dispatch -> spawn workers -> STOP
                       |
               +-------+-------+-------+-------+
               v       v       v       v
       [team-worker agents, each loads roles/<role>/role.md]

Pipeline (Standard mode):
  [EXPLORE-1..N](parallel) -> [ANALYZE-1..N](parallel) -> DISCUSS-001 -> SYNTH-001

Pipeline (Deep mode):
  [EXPLORE-1..N] -> [ANALYZE-1..N] -> DISCUSS-001 -> ANALYZE-fix -> DISCUSS-002 -> ... -> SYNTH-001

Pipeline (Quick mode):
  EXPLORE-001 -> ANALYZE-001 -> SYNTH-001
```
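The role routing shown above can be sketched as a small helper. This is an illustrative sketch, not part of the skill's defined API; the `routeRole` name is hypothetical.

```javascript
// Hypothetical sketch of the entry router: an optional --role flag selects
// a worker role file; otherwise the coordinator role file is loaded.
function routeRole(args) {
  const m = args.match(/--role\s+(\S+)/)
  return m
    ? { role: m[1], spec: `roles/${m[1]}/role.md` }          // worker path
    : { role: 'coordinator', spec: 'roles/coordinator/role.md' } // entry router
}
```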

---

## Role Registry

| Role | Path | Prefix | Inner Loop |
|------|------|--------|------------|
| coordinator | [roles/coordinator/role.md](roles/coordinator/role.md) | — | — |
| explorer | [roles/explorer/role.md](roles/explorer/role.md) | EXPLORE-* | false |
| analyst | [roles/analyst/role.md](roles/analyst/role.md) | ANALYZE-* | false |
| discussant | [roles/discussant/role.md](roles/discussant/role.md) | DISCUSS-* | false |
| synthesizer | [roles/synthesizer/role.md](roles/synthesizer/role.md) | SYNTH-* | false |

## Role Router

Parse `$ARGUMENTS`:
- Has `--role <name>` → Read `roles/<name>/role.md`, execute Phase 2-4
- No `--role` → `roles/coordinator/role.md`, execute entry router

## Task Classification Rules

Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, user feedback, direction control |

**Classification Decision**:

| Task Property | Classification |
|---------------|---------------|
| Codebase exploration (single perspective) | `csv-wave` |
| Parallel exploration (multiple perspectives) | `csv-wave` (parallel in same wave) |
| Deep analysis (single perspective) | `csv-wave` |
| Parallel analysis (multiple perspectives) | `csv-wave` (parallel in same wave) |
| Direction-fix analysis (adjusted focus) | `csv-wave` |
| Discussion processing (aggregate results) | `csv-wave` |
| Final synthesis (cross-perspective integration) | `csv-wave` |
| Discussion feedback gate (user interaction) | `interactive` |
| Topic clarification (Phase 0) | `interactive` |

## Shared Constants

- **Session prefix**: `UAN`
- **Session path**: `.workflow/.team/UAN-<slug>-<date>/`
- **Team name**: `ultra-analyze`
- **CLI tools**: `ccw cli --mode analysis` (read-only), `ccw cli --mode write` (modifications)
- **Message bus**: `mcp__ccw-tools__team_msg(session_id=<session-id>, ...)`

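The classification decision above amounts to a lookup: only user-facing gates run `interactive`, all work tasks run `csv-wave`. A hypothetical `classifyTask` helper (the name and the `CLARIFY-` prefix for Phase 0 topic-clarification tasks are assumptions, not part of the skill's defined API) might look like:

```javascript
// Illustrative sketch of the classification table: feedback gates and topic
// clarification are interactive; every other task is a one-shot csv-wave task.
function classifyTask(task) {
  const interactivePrefixes = ['FEEDBACK-', 'CLARIFY-'] // CLARIFY- is assumed here
  if (interactivePrefixes.some(p => task.id.startsWith(p))) return 'interactive'
  return 'csv-wave'
}
```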
---

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,role,perspective,dimensions,discussion_round,discussion_type,deps,context_from,exec_mode,wave,status,findings,error
"EXPLORE-001","Explore from technical perspective","Search codebase from technical perspective. Collect files, patterns, findings.","explorer","technical","architecture;implementation","0","","","","csv-wave","1","pending","",""
"ANALYZE-001","Deep analysis from technical perspective","Analyze exploration results from technical perspective. Generate insights with confidence levels.","analyst","technical","architecture;implementation","0","","EXPLORE-001","EXPLORE-001","csv-wave","2","pending","",""
"DISCUSS-001","Initial discussion round","Aggregate all analysis results. Identify convergent themes, conflicts, top discussion points.","discussant","","","1","initial","ANALYZE-001;ANALYZE-002","ANALYZE-001;ANALYZE-002","csv-wave","3","pending","",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description |
| `role` | Input | Worker role: explorer, analyst, discussant, synthesizer |
| `perspective` | Input | Analysis perspective: technical, architectural, business, domain_expert |
| `dimensions` | Input | Analysis dimensions (semicolon-separated): architecture, implementation, performance, security, concept, comparison, decision |
| `discussion_round` | Input | Discussion round number (0 = N/A, 1+ = round number) |
| `discussion_type` | Input | Discussion type: initial, deepen, direction-adjusted, specific-questions |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` → `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `error` | Output | Error message if failed (empty if success) |

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).

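The engine code later in this file calls a `parseCsv` helper that is not defined here. A minimal sketch compatible with the tasks.csv dialect shown above (every field double-quoted, embedded quotes escaped as `""`; newlines inside fields are not handled) might look like:

```javascript
// Minimal CSV parser sketch for the tasks.csv dialect.
// Returns an array of row objects keyed by the header line.
function parseCsv(text) {
  const lines = text.split('\n').filter(l => l.trim() !== '')
  const parseLine = (line) => {
    const cells = []
    let cur = '', inQuotes = false
    for (let i = 0; i < line.length; i++) {
      const ch = line[i]
      if (inQuotes) {
        if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ } // escaped quote
        else if (ch === '"') inQuotes = false
        else cur += ch
      } else if (ch === '"') inQuotes = true
      else if (ch === ',') { cells.push(cur); cur = '' }
      else cur += ch
    }
    cells.push(cur)
    return cells
  }
  const header = parseLine(lines[0])
  return lines.slice(1).map(line => {
    const cells = parseLine(line)
    return Object.fromEntries(header.map((h, i) => [h, cells[i] ?? '']))
  })
}
```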
---

## Agent Registry (Interactive Agents)

| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| discussion-feedback | agents/discussion-feedback.md | 2.3 (wait-respond) | Collect user feedback after discussion round, create dynamic tasks | post-wave (after discussant wave) |
| topic-analyzer | agents/topic-analyzer.md | 2.3 (wait-respond) | Parse topic, detect dimensions, select pipeline mode and perspectives | standalone (Phase 0) |

> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.

---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state: all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |

---

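Since `discoveries.ndjson` is append-only and shared across both execution modes, appending a record is a single-line write. A sketch follows; the `appendDiscovery` name and the record fields are illustrative, not a fixed schema:

```javascript
// Sketch: append one discovery record as a single NDJSON line.
// Append-only writes mean concurrent agents never rewrite earlier lines.
function appendDiscovery(sessionFolder, record) {
  const line = JSON.stringify({
    timestamp: new Date().toISOString(), // illustrative field
    ...record
  })
  require('fs').appendFileSync(`${sessionFolder}/discoveries.ndjson`, line + '\n')
}
```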
## Session Structure

```
.workflow/.csv-wave/{session-id}/
├── tasks.csv              # Master state (all tasks, both modes)
├── results.csv            # Final results export
├── discoveries.ndjson     # Shared discovery board (all agents)
├── context.md             # Human-readable report
├── wave-{N}.csv           # Temporary per-wave input (csv-wave only)
└── interactive/           # Interactive task artifacts
    └── {id}-result.json   # Per-task results
```

---

## Worker Spawn Template

Coordinator spawns workers using this template:

```javascript
spawn_agent({
  agent_type: "team_worker",
  items: [
    { type: "text", text: `## Role Assignment
role: <role>
role_spec: <skill_root>/roles/<role>/role.md
session: <session-folder>
session_id: <session-id>
requirement: <topic-description>
inner_loop: false

---
Read role_spec file (<skill_root>/roles/<role>/role.md) to load Phase 2-4 domain instructions.` },

    { type: "text", text: `## Task Context
task_id: <task-id>
title: <task-title>
description: <task-description>
pipeline_phase: <pipeline-phase>` },

    { type: "text", text: `## Upstream Context
<prev_context>` }
  ]
})
```

After spawning, use `wait_agent({ ids: [...], timeout_ms: 900000 })` to collect results, then `close_agent({ id })` each worker.

---

## Implementation

### Session Initialization

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1], 10) : 3
const modeMatch = $ARGUMENTS.match(/--mode\s+(quick|standard|deep)/)
const explicitMode = modeMatch ? modeMatch[1] : null

// Clean requirement text (remove flags)
const topic = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+|--mode\s+\w+/g, '')
  .trim()

const slug = topic.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
let sessionId = `uan-${slug}-${dateStr}`
let sessionFolder = `.workflow/.csv-wave/${sessionId}`

// Continue mode: find the most recent existing session
// (-d lists the matching directories themselves, not their contents)
if (continueMode) {
  const existing = Bash(`ls -dt .workflow/.csv-wave/uan-* 2>/dev/null | head -1`).trim()
  if (existing) {
    sessionId = existing.split('/').pop()
    sessionFolder = existing
  }
}

Bash(`mkdir -p ${sessionFolder}/interactive`)
```

---

### Phase 0: Pre-Wave Interactive

**Objective**: Parse topic, detect analysis dimensions, select pipeline mode, and assign perspectives.

**Execution**:

```javascript
const analyzer = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-ultra-analyze/agents/topic-analyzer.md (MUST read first)
2. Read: .workflow/project-tech.json (if exists)

---

Goal: Analyze topic and recommend pipeline configuration
Topic: ${topic}
Explicit Mode: ${explicitMode || 'auto-detect'}

### Task
1. Detect analysis dimensions from topic keywords:
   - architecture, implementation, performance, security, concept, comparison, decision
2. Select perspectives based on dimensions:
   - technical, architectural, business, domain_expert
3. Determine pipeline mode (if not explicitly set):
   - Complexity 1-3 → quick, 4-6 → standard, 7+ → deep
4. Return structured configuration
`
})

const analyzerResult = wait({ ids: [analyzer], timeout_ms: 120000 })

if (analyzerResult.timed_out) {
  send_input({ id: analyzer, message: "Please finalize and output current findings." })
  wait({ ids: [analyzer], timeout_ms: 60000 })
}

close_agent({ id: analyzer })

// Parse result: pipeline_mode, perspectives[], dimensions[], depth
Write(`${sessionFolder}/interactive/topic-analyzer-result.json`, JSON.stringify({
  task_id: "topic-analysis",
  status: "completed",
  pipeline_mode: parsedMode,
  perspectives: parsedPerspectives,
  dimensions: parsedDimensions,
  depth: parsedDepth,
  timestamp: getUtc8ISOString()
}))
```

If not AUTO_YES, present the configuration to the user for confirmation:

```javascript
if (!AUTO_YES) {
  const answer = request_user_input({
    questions: [{
      question: `Topic: "${topic}" — Pipeline: ${pipeline_mode}. Approve or override?`,
      header: "Config",
      id: "analysis_config",
      options: [
        { label: "Approve (Recommended)", description: `Use ${pipeline_mode} mode with ${perspectives.length} perspectives` },
        { label: "Quick", description: "1 explorer -> 1 analyst -> synthesizer (fast)" },
        { label: "Standard/Deep", description: "N explorers -> N analysts -> discussion -> synthesizer" }
      ]
    }]
  })
}
```

**Success Criteria**:
- Refined requirements available for Phase 1 decomposition
- Interactive agents closed, results stored

---

## User Commands

| Command | Action |
|---------|--------|
| `check` / `status` | Output execution status diagram, do not advance pipeline |
| `resume` / `continue` | Check worker status, advance to next pipeline step |

## Session Directory

```
.workflow/.team/UAN-{slug}-{YYYY-MM-DD}/
+-- .msg/messages.jsonl        # Message bus log
+-- .msg/meta.json             # Session metadata + cross-role state
+-- discussion.md              # Understanding evolution and discussion timeline
+-- explorations/              # Explorer output
|   +-- exploration-001.json
|   +-- exploration-002.json
+-- analyses/                  # Analyst output
|   +-- analysis-001.json
|   +-- analysis-002.json
+-- discussions/               # Discussant output
|   +-- discussion-round-001.json
+-- conclusions.json           # Synthesizer output
+-- wisdom/                    # Cross-task knowledge
|   +-- learnings.md
|   +-- decisions.md
|   +-- conventions.md
|   +-- issues.md
```

## Completion Action

When the pipeline completes, the coordinator presents the completion prompt shown after the Phase 2 engine below (Archive & Clean / Keep Active / Export Results).

---

### Phase 1: Requirement → CSV + Classification

**Objective**: Build tasks.csv from the selected pipeline mode and perspectives.

**Decomposition Rules**:

| Pipeline | Tasks | Wave Structure |
|----------|-------|---------------|
| quick | EXPLORE-001 → ANALYZE-001 → SYNTH-001 | 3 waves, serial, depth=1 |
| standard | EXPLORE-001..N → ANALYZE-001..N → DISCUSS-001 → SYNTH-001 | 4 wave groups, parallel explore+analyze |
| deep | EXPLORE-001..N → ANALYZE-001..N → DISCUSS-001 (→ dynamic tasks) → SYNTH-001 | 3+ waves, SYNTH created after discussion loop |

Where N = number of selected perspectives.

**Classification Rules**:

All work tasks (exploration, analysis, discussion processing, synthesis) are `csv-wave`. The discussion feedback gate (user interaction after the discussant completes) is `interactive`.

**Pipeline Task Definitions**:

#### Quick Pipeline (3 csv-wave tasks)

| Task ID | Role | Wave | Deps | Perspective | Description |
|---------|------|------|------|-------------|-------------|
| EXPLORE-001 | explorer | 1 | (none) | general | Explore codebase structure for analysis topic |
| ANALYZE-001 | analyst | 2 | EXPLORE-001 | technical | Deep analysis from technical perspective |
| SYNTH-001 | synthesizer | 3 | ANALYZE-001 | (all) | Integrate analysis into final conclusions |

#### Standard Pipeline (2N+2 tasks, parallel windows)

| Task ID | Role | Wave | Deps | Perspective | Description |
|---------|------|------|------|-------------|-------------|
| EXPLORE-001..N | explorer | 1 | (none) | per-perspective | Parallel codebase exploration, one per perspective |
| ANALYZE-001..N | analyst | 2 | EXPLORE-N | per-perspective | Parallel deep analysis, one per perspective |
| DISCUSS-001 | discussant | 3 | all ANALYZE-* | (all) | Aggregate analyses, identify themes and conflicts |
| FEEDBACK-001 | (interactive) | 4 | DISCUSS-001 | - | User feedback: done → create SYNTH, continue → more discussion |
| SYNTH-001 | synthesizer | 5 | FEEDBACK-001 | (all) | Cross-perspective integration and conclusions |

#### Deep Pipeline (2N+1 initial tasks + dynamic)

Same as Standard, but SYNTH-001 is omitted initially and created dynamically after the discussion loop (up to 5 rounds) completes. Additional dynamic tasks:
- `DISCUSS-N`: subsequent discussion round
- `ANALYZE-fix-N`: supplementary analysis with adjusted focus
- `SYNTH-001`: created after the final discussion round

**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
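That wave computation can be sketched as follows. Processing Kahn's algorithm level-by-level makes a task's wave one more than the deepest of its dependencies; the `computeWaves` name is illustrative, not a defined helper of this skill:

```javascript
// Sketch: Kahn's BFS topological sort with depth grouping.
// tasks: array of { id, deps } where deps is a semicolon-separated ID string.
// Returns { waves: id -> 1-based wave number, cyclic: ids in a dependency cycle }.
function computeWaves(tasks) {
  const waves = {}
  const indegree = {}    // id -> number of unresolved deps
  const dependents = {}  // id -> ids that depend on it
  for (const t of tasks) {
    const deps = t.deps.split(';').filter(Boolean)
    indegree[t.id] = deps.length
    for (const d of deps) (dependents[d] = dependents[d] || []).push(t.id)
  }
  let frontier = tasks.filter(t => indegree[t.id] === 0).map(t => t.id)
  let depth = 1
  while (frontier.length > 0) {
    const next = []
    for (const id of frontier) {
      waves[id] = depth
      for (const dep of dependents[id] || []) {
        if (--indegree[dep] === 0) next.push(dep)
      }
    }
    frontier = next
    depth++
  }
  // Any task never reached sits on a circular dependency
  const cyclic = tasks.filter(t => waves[t.id] === undefined).map(t => t.id)
  return { waves, cyclic }
}
```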

**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).

**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)

---

### Phase 2: Wave Execution Engine (Extended)

**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.

```javascript
const failedIds = new Set()
const skippedIds = new Set()
let discussionRound = 0
const MAX_DISCUSSION_ROUNDS = pipeline_mode === 'deep' ? 5 : pipeline_mode === 'standard' ? 1 : 0
let maxWave = Math.max(...parseCsv(Read(`${sessionFolder}/tasks.csv`)).map(r => parseInt(r.wave, 10)))

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\n## Wave ${wave}/${maxWave}\n`)

  // 1. Read current master CSV
  const masterCsv = parseCsv(Read(`${sessionFolder}/tasks.csv`))

  // 2. Separate csv-wave and interactive tasks for this wave
  const waveTasks = masterCsv.filter(row => parseInt(row.wave, 10) === wave)
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // 3. Skip tasks whose deps failed
  const executableCsvTasks = []
  for (const task of csvTasks) {
    const deps = task.deps.split(';').filter(Boolean)
    if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
      skippedIds.add(task.id)
      updateMasterCsvRow(sessionFolder, task.id, {
        status: 'skipped', error: 'Dependency failed or skipped'
      })
      continue
    }
    executableCsvTasks.push(task)
  }

  // 4. Build prev_context for each csv-wave task
  for (const task of executableCsvTasks) {
    const contextIds = task.context_from.split(';').filter(Boolean)
    const prevFindings = contextIds
      .map(id => {
        const prevRow = masterCsv.find(r => r.id === id)
        if (prevRow && prevRow.status === 'completed' && prevRow.findings) {
          return `[Task ${id}: ${prevRow.title}] ${prevRow.findings}`
        }
        return null
      })
      .filter(Boolean)
      .join('\n')
    task.prev_context = prevFindings || 'No previous context available'
  }

  // 5. Write wave CSV and execute csv-wave tasks
  if (executableCsvTasks.length > 0) {
    const waveHeader = 'id,title,description,role,perspective,dimensions,discussion_round,discussion_type,deps,context_from,exec_mode,wave,prev_context'
    const waveRows = executableCsvTasks.map(t =>
      [t.id, t.title, t.description, t.role, t.perspective, t.dimensions,
       t.discussion_round, t.discussion_type, t.deps, t.context_from, t.exec_mode, t.wave, t.prev_context]
        .map(cell => `"${String(cell).replace(/"/g, '""')}"`)
        .join(',')
    )
    Write(`${sessionFolder}/wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))

    const waveResult = spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: buildAnalysisInstruction(sessionFolder, wave),
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 600,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          error: { type: "string" }
        },
        required: ["id", "status", "findings"]
      }
    })

    // Merge results into master CSV
    const waveResults = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const result of waveResults) {
      updateMasterCsvRow(sessionFolder, result.id, {
        status: result.status,
        findings: result.findings || '',
        error: result.error || ''
      })
      if (result.status === 'failed') failedIds.add(result.id)
    }

    Bash(`rm -f "${sessionFolder}/wave-${wave}.csv"`)
  }

  // 6. Execute post-wave interactive tasks (Discussion Feedback)
  for (const task of interactiveTasks) {
    if (task.status !== 'pending') continue
    const deps = task.deps.split(';').filter(Boolean)
    if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
      skippedIds.add(task.id)
      continue
    }

    discussionRound++

    // Discussion Feedback Gate
    if (pipeline_mode === 'quick' || discussionRound > MAX_DISCUSSION_ROUNDS) {
      // No discussion or max rounds reached: proceed to synthesis
      if (!masterCsv.find(t => t.id === 'SYNTH-001')) {
        // Create SYNTH-001 dynamically
        const lastDiscuss = masterCsv.filter(t => t.id.startsWith('DISCUSS'))
          .sort((a, b) => b.id.localeCompare(a.id))[0]
        addTaskToMasterCsv(sessionFolder, {
          id: 'SYNTH-001', title: 'Final synthesis',
          description: 'Integrate all analysis into final conclusions',
          role: 'synthesizer', perspective: '', dimensions: '',
          discussion_round: '0', discussion_type: '',
          deps: lastDiscuss ? lastDiscuss.id : '', context_from: 'all',
          exec_mode: 'csv-wave', wave: String(wave + 1),
          status: 'pending', findings: '', error: ''
        })
        maxWave = wave + 1
      }
      updateMasterCsvRow(sessionFolder, task.id, {
        status: 'completed',
        findings: `Discussion round ${discussionRound}: proceeding to synthesis`
      })
      continue
    }

    // Spawn discussion feedback agent
    const feedbackAgent = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-ultra-analyze/agents/discussion-feedback.md (MUST read first)
2. Read: ${sessionFolder}/discoveries.ndjson (shared discoveries)

---

Goal: Collect user feedback on discussion round ${discussionRound}
Session: ${sessionFolder}
Discussion Round: ${discussionRound}/${MAX_DISCUSSION_ROUNDS}
Pipeline Mode: ${pipeline_mode}

### Context
The discussant has completed round ${discussionRound}. Present the user with discussion results and collect feedback on next direction.
`
    })

    const feedbackResult = wait({ ids: [feedbackAgent], timeout_ms: 300000 })
    if (feedbackResult.timed_out) {
      send_input({ id: feedbackAgent, message: "Please finalize: user did not respond, default to 'Done'." })
      wait({ ids: [feedbackAgent], timeout_ms: 60000 })
    }
    close_agent({ id: feedbackAgent })

    // Parse feedback decision: "continue_deeper" | "adjust_direction" | "done"
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed",
      discussion_round: discussionRound,
      feedback: feedbackDecision,
      timestamp: getUtc8ISOString()
    }))

    // Handle feedback
    if (feedbackDecision === 'done') {
      // Create SYNTH-001 blocked by the last DISCUSS task
      addTaskToMasterCsv(sessionFolder, {
        id: 'SYNTH-001', deps: task.id.replace('FEEDBACK', 'DISCUSS'),
        role: 'synthesizer', exec_mode: 'csv-wave', wave: String(wave + 1)
      })
      maxWave = wave + 1
    } else if (feedbackDecision === 'adjust_direction') {
      // Create ANALYZE-fix-N and DISCUSS-N+1
      const fixId = `ANALYZE-fix-${discussionRound}`
      const nextDiscussId = `DISCUSS-${String(discussionRound + 1).padStart(3, '0')}`
      addTaskToMasterCsv(sessionFolder, {
        id: fixId, role: 'analyst', exec_mode: 'csv-wave', wave: String(wave + 1)
      })
      addTaskToMasterCsv(sessionFolder, {
        id: nextDiscussId, role: 'discussant', deps: fixId,
        exec_mode: 'csv-wave', wave: String(wave + 2)
      })
      addTaskToMasterCsv(sessionFolder, {
        id: `FEEDBACK-${String(discussionRound + 1).padStart(3, '0')}`,
        exec_mode: 'interactive', deps: nextDiscussId, wave: String(wave + 3)
      })
      maxWave = wave + 3
    } else {
      // continue_deeper: create DISCUSS-N+1
      const nextDiscussId = `DISCUSS-${String(discussionRound + 1).padStart(3, '0')}`
      addTaskToMasterCsv(sessionFolder, {
        id: nextDiscussId, role: 'discussant', exec_mode: 'csv-wave', wave: String(wave + 1)
      })
      addTaskToMasterCsv(sessionFolder, {
        id: `FEEDBACK-${String(discussionRound + 1).padStart(3, '0')}`,
        exec_mode: 'interactive', deps: nextDiscussId, wave: String(wave + 2)
      })
      maxWave = wave + 2
    }

    updateMasterCsvRow(sessionFolder, task.id, {
      status: 'completed',
      findings: `Discussion feedback: ${feedbackDecision}, round ${discussionRound}`
    })
  }
}
```

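The engine above relies on `updateMasterCsvRow` and `addTaskToMasterCsv` helpers that this file does not define. A sketch of their core logic as pure functions over in-memory row objects follows (the real versions would additionally Read and Write `${sessionFolder}/tasks.csv` around these updates; all three names here are illustrative):

```javascript
// Sketch: patch one row by id, returning a new row array.
function updateRow(rows, id, patch) {
  return rows.map(r => (r.id === id ? { ...r, ...patch } : r))
}

// Sketch: append a new task row, filling any column absent from `task`
// with an empty string so every row matches the header.
function addRow(rows, header, task) {
  const row = Object.fromEntries(header.map(h => [h, task[h] ?? '']))
  return [...rows, row]
}

// Sketch: serialize back to the tasks.csv dialect (every field quoted,
// embedded quotes doubled), matching the wave-CSV writer in Phase 2.
function serializeCsv(header, rows) {
  const quote = (v) => `"${String(v).replace(/"/g, '""')}"`
  return [
    header.join(','),
    ...rows.map(r => header.map(h => quote(r[h] ?? '')).join(','))
  ].join('\n')
}
```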
request_user_input({
|
||||
questions: [{
|
||||
question: "Ultra-Analyze pipeline complete. What would you like to do?",
|
||||
header: "Completion",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and team resources" },
|
||||
{ label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
|
||||
{ label: "Export Results", description: "Export deliverables to a specified location, then clean" }
|
||||
]
|
||||
}]
|
||||
})
|
||||
```

**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into the master CSV before the next wave starts
- Dependent tasks skipped when a predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- Discussion loop controlled with proper round tracking
- Dynamic tasks created correctly based on user feedback

| Choice | Action |
|--------|--------|
| Archive & Clean | Update session status="completed" -> output final summary |
| Keep Active | Update session status="paused" -> output resume instructions |
| Export Results | request_user_input for target path -> copy deliverables -> Archive & Clean |

---

## Specs Reference

### Phase 3: Post-Wave Interactive

**Objective**: Handle discussion loop completion and ensure synthesis is triggered.

After all discussion rounds are exhausted or the user chooses "done":
1. Ensure SYNTH-001 exists in the master CSV
2. Ensure SYNTH-001 is unblocked (blocked by the last completed discussion task)
3. Execute the remaining waves (synthesis)

**Success Criteria**:
- Post-wave interactive processing complete
- Interactive agents closed, results stored

---

### Phase 4: Results Aggregation

**Objective**: Generate final results and a human-readable report.

```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
Write(`${sessionFolder}/results.csv`, masterCsv)

const tasks = parseCsv(masterCsv)
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
const skipped = tasks.filter(t => t.status === 'skipped')

const contextContent = `# Ultra Analyze Report

**Session**: ${sessionId}
**Topic**: ${topic}
**Pipeline**: ${pipeline_mode}
**Perspectives**: ${perspectives.join(', ')}
**Discussion Rounds**: ${discussionRound}
**Completed**: ${getUtc8ISOString()}

---

## Summary

| Metric | Count |
|--------|-------|
| Total Tasks | ${tasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| Discussion Rounds | ${discussionRound} |

---

## Wave Execution

${waveDetails}

---

## Analysis Artifacts

- Explorations: discoveries with type "exploration" in discoveries.ndjson
- Analyses: discoveries with type "analysis" in discoveries.ndjson
- Discussion: discoveries with type "discussion_point" in discoveries.ndjson
- Conclusions: discoveries with type "conclusion" in discoveries.ndjson

---

## Conclusions

${synthesisFindings}
`

Write(`${sessionFolder}/context.md`, contextContent)
```

If not AUTO_YES, offer completion options:

```javascript
if (!AUTO_YES) {
  const answer = request_user_input({
    questions: [{
      question: "Ultra-Analyze pipeline complete. Choose next action.",
      header: "Done",
      id: "completion",
      options: [
        { label: "Archive (Recommended)", description: "Archive session" },
        { label: "Keep Active", description: "Keep session for follow-up" },
        { label: "Export Results", description: "Export deliverables to specified location" }
      ]
    }]
  })
}
```

**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed
- Summary displayed to user

---

## Shared Discovery Board Protocol

All agents across all waves share `discoveries.ndjson`. This enables cross-role knowledge sharing.

**Discovery Types**:

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `exploration` | `data.perspective+data.file` | `{perspective, file, relevance, summary, patterns[]}` | Explored file/module |
| `analysis` | `data.perspective+data.insight` | `{perspective, insight, confidence, evidence, file_ref}` | Analysis insight |
| `pattern` | `data.name` | `{name, file, description, type}` | Code/architecture pattern |
| `discussion_point` | `data.topic` | `{topic, perspectives[], convergence, open_questions[]}` | Discussion point |
| `recommendation` | `data.action` | `{action, rationale, priority, confidence}` | Recommendation |
| `conclusion` | `data.point` | `{point, evidence, confidence, perspectives_supporting[]}` | Final conclusion |

**Format**: NDJSON; each line is self-contained JSON:

```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"EXPLORE-001","type":"exploration","data":{"perspective":"technical","file":"src/auth/index.ts","relevance":"high","summary":"Auth module entry point with OAuth and JWT exports","patterns":["module-pattern","strategy-pattern"]}}
{"ts":"2026-03-08T10:05:00+08:00","worker":"ANALYZE-001","type":"analysis","data":{"perspective":"technical","insight":"Auth module uses strategy pattern for provider switching","confidence":"high","evidence":"src/auth/strategies/*.ts","file_ref":"src/auth/index.ts:15"}}
{"ts":"2026-03-08T10:10:00+08:00","worker":"DISCUSS-001","type":"discussion_point","data":{"topic":"Authentication scalability","perspectives":["technical","architectural"],"convergence":"Both perspectives agree on stateless JWT approach","open_questions":["Token refresh strategy for long sessions"]}}
```

**Protocol Rules**:
1. Read the board before your own exploration → skip covered areas
2. Write discoveries immediately via `echo >>` → don't batch
3. Deduplicate — check existing entries by type + dedup key
4. Append-only — never modify or delete existing lines

---

- [specs/team-config.json](specs/team-config.json) — Team configuration and pipeline settings

## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect during wave computation, abort with an error message |
| CSV agent timeout | Mark as failed in results, continue with the wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Discussion loop exceeds 5 rounds | Force synthesis, offer continuation |
| Explorer finds nothing | Continue with limited context, note limitation |
| CLI tool unavailable | Fallback chain: gemini → codex → direct analysis |
| User timeout in discussion | Save state, default to "done", proceed to synthesis |
| Continue mode: no session found | List available sessions, prompt user to select |

---

## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and its results are merged
3. **CSV is Source of Truth**: The master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when user interaction is needed
5. **Context Propagation**: prev_context is built from the master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson — both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped
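
Rule 7 can be sketched as a pre-dispatch pass over the task list. A sketch under the assumption that tasks arrive sorted by wave and that `deps` is a semicolon-separated ID list (the real separator may differ):

```javascript
// Mark tasks whose dependencies failed or were skipped as skipped themselves.
// Assumes tasks are sorted by wave so skips propagate to transitive dependents.
function applySkipOnFailure(tasks) {
  const bad = new Set(
    tasks.filter(t => t.status === 'failed' || t.status === 'skipped').map(t => t.id)
  )
  for (const task of tasks) {
    if (task.status) continue // already resolved
    const deps = (task.deps || '').split(';').filter(Boolean)
    if (deps.some(d => bad.has(d))) {
      task.status = 'skipped'
      bad.add(task.id)
    }
  }
  return tasks
}
```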

---

## Coordinator Role Constraints (Main Agent)

**CRITICAL**: The coordinator (the main agent executing this skill) is responsible for **orchestration only**, NOT implementation.

15. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
    - Spawns agents with task assignments
    - Waits for agent callbacks
    - Merges results and coordinates workflow
    - Manages workflow transitions between phases

16. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
    - Wait patiently for `wait_agent()` calls to complete
    - NOT skip workflow steps due to perceived delays
    - NOT assume agents have failed just because they are taking time
    - Trust the timeout mechanisms defined in the skill

17. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
    - Use `send_input()` to ask questions or provide clarification
    - NOT skip the agent or move to the next phase prematurely
    - Give agents an opportunity to respond before escalating
    - Example: `send_input({ id: agent_id, message: "Please provide a status update or clarify blockers" })`

18. **No Workflow Shortcuts**: The coordinator MUST NOT:
    - Skip phases or stages defined in the workflow
    - Bypass required approval or review steps
    - Execute dependent tasks before prerequisites complete
    - Assume task completion without an explicit agent callback
    - Make up or fabricate agent results

19. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
    - Total execution time may range from 30-90 minutes or longer
    - Each phase may take 10-30 minutes depending on complexity
    - The coordinator must remain active and attentive throughout the entire process
    - Do not terminate or skip steps due to time concerns

| Scenario | Resolution |
|----------|------------|
| Unknown --role value | Error with role registry list |
| Role file not found | Error with expected path (roles/{name}/role.md) |
| Discussion loop stuck >5 rounds | Force synthesis, offer continuation |
| CLI tool unavailable | Fallback chain: gemini -> codex -> manual analysis |
| Explorer agent fails | Continue with available context, note limitation |
| Fast-advance conflict | Coordinator reconciles on next callback |
| Completion action fails | Default to Keep Active |
# Discussion Feedback Agent

Collect user feedback after a discussion round and determine the next action for the analysis pipeline.

## Identity

- **Type**: `interactive`
- **Responsibility**: User feedback collection and discussion loop control

## Boundaries

### MUST

- Load role definition via the MANDATORY FIRST STEPS pattern
- Present discussion results to the user clearly
- Collect explicit user feedback via request_user_input
- Return a structured decision for the orchestrator to act on
- Respect max discussion round limits

### MUST NOT

- Perform analysis or exploration (delegate to csv-wave agents)
- Create tasks directly (the orchestrator handles dynamic task creation)
- Skip user interaction (this is the user-in-the-loop checkpoint)
- Exceed the configured max discussion rounds

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load discussion results and session state |
| `request_user_input` | builtin | Collect user feedback on discussion |

---

## Execution

### Phase 1: Context Loading

**Objective**: Load discussion results for user presentation

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Session folder | Yes | Path to session directory |
| Discussion round | Yes | Current round number |
| Max discussion rounds | Yes | Maximum allowed rounds |
| Pipeline mode | Yes | quick, standard, or deep |

**Steps**:

1. Read the session's discoveries.ndjson for discussion_point entries
2. Parse prev_context for the discussant's findings
3. Extract key themes, conflicts, and open questions from the findings
4. Load the current discussion_round from the spawn message

**Output**: Discussion summary ready for user presentation

---

### Phase 2: User Feedback Collection

**Objective**: Present results and collect the next-step decision

**Steps**:

1. Format the discussion summary for the user:
   - Convergent themes identified
   - Conflicting views between perspectives
   - Top open questions
   - Round progress (current/max)

2. Present options via request_user_input:

```
request_user_input({
  questions: [{
    question: "Discussion round <N>/<max> complete.\n\nThemes: <themes>\nConflicts: <conflicts>\nOpen Questions: <questions>\n\nWhat next?",
    header: "Feedback",
    id: "discussion_next",
    options: [
      { label: "Continue deeper (Recommended)", description: "Current direction is good, investigate open questions deeper" },
      { label: "Adjust direction", description: "Shift analysis focus to a different area" },
      { label: "Done", description: "Sufficient depth reached, proceed to final synthesis" }
    ]
  }]
})
```

3. If the user chooses "Adjust direction":
   - Follow up with another request_user_input asking for the new focus area
   - Capture the adjusted focus text

**Output**: User decision and optional adjusted focus

---

### Phase 3: Decision Formatting

**Objective**: Package the user decision for the orchestrator

**Steps**:

1. Map the user choice to a decision string:

| User Choice | Decision | Additional Data |
|------------|----------|-----------------|
| Continue deeper | `continue_deeper` | None |
| Adjust direction | `adjust_direction` | `adjusted_focus: <user input>` |
| Done | `done` | None |

2. Format structured output for the orchestrator

**Output**: Structured decision

---

## Structured Output Template

```
## Summary
- Discussion Round: <current>/<max>
- User Decision: continue_deeper | adjust_direction | done

## Discussion Summary Presented
- Themes: <list>
- Conflicts: <list>
- Open Questions: <list>

## Decision Details
- Decision: <decision>
- Adjusted Focus: <focus text, if adjust_direction>
- Rationale: <user's reasoning, if provided>

## Next Action (for orchestrator)
- continue_deeper: Create DISCUSS-<N+1> task, then FEEDBACK-<N+1>
- adjust_direction: Create ANALYZE-fix-<N> task, then DISCUSS-<N+1>, then FEEDBACK-<N+1>
- done: Create SYNTH-001 task blocked by last DISCUSS task
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| User does not respond | After timeout, default to "done" and proceed to synthesis |
| Max rounds reached | Inform the user this is the final round, only offer the "Done" option |
| No discussion data found | Present what is available, note limitations |
| Timeout approaching | Output current state with default "done" decision |
# Topic Analyzer Agent

Parse the analysis topic, detect dimensions, select the pipeline mode, and assign perspectives.

## Identity

- **Type**: `interactive`
- **Responsibility**: Topic analysis and pipeline configuration

## Boundaries

### MUST

- Load role definition via the MANDATORY FIRST STEPS pattern
- Perform text-level analysis only (no source code reading)
- Produce structured output with the pipeline configuration
- Detect dimensions from topic keywords
- Recommend appropriate perspectives for the topic

### MUST NOT

- Read source code or explore the codebase (that is the explorer's job)
- Perform any analysis (that is the analyst's job)
- Make final pipeline decisions without providing a rationale

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load project context if available |

---

## Execution

### Phase 1: Dimension Detection

**Objective**: Scan topic keywords to identify analysis dimensions

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Topic text | Yes | The analysis topic from the user |
| Explicit mode | No | --mode override if provided |

**Steps**:

1. Scan the topic for dimension keywords:

| Dimension | Keywords |
|-----------|----------|
| architecture | architecture, design, structure |
| implementation | implement, code, source |
| performance | performance, optimize, speed |
| security | security, auth, vulnerability |
| concept | concept, theory, principle |
| comparison | compare, vs, difference |
| decision | decision, choice, tradeoff |

2. Select matching dimensions (default to general if none match)

**Output**: List of detected dimensions

---

### Phase 2: Pipeline Mode Selection

**Objective**: Determine the pipeline mode and depth

**Steps**:

1. If an explicit `--mode` is provided, use it directly
2. Otherwise, auto-detect from complexity scoring:

| Factor | Points |
|--------|--------|
| Per detected dimension | +1 |
| Deep-mode keywords (deep, thorough, detailed, comprehensive) | +2 |
| Cross-domain (3+ dimensions) | +1 |

| Score | Pipeline Mode |
|-------|--------------|
| 1-3 | quick |
| 4-6 | standard |
| 7+ | deep |

3. Determine depth = number of selected perspectives

**Output**: Pipeline mode and depth
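
The scoring above can be sketched as follows; the keyword list is abbreviated from the Phase 1 table, the thresholds follow the score table, and the function name is an illustration:

```javascript
const DEEP_KEYWORDS = ['deep', 'thorough', 'detailed', 'comprehensive']

function selectPipelineMode(topic, dimensions, explicitMode = null) {
  if (explicitMode) return explicitMode // --mode override wins
  let score = dimensions.length // +1 per detected dimension
  const lower = topic.toLowerCase()
  if (DEEP_KEYWORDS.some(k => lower.includes(k))) score += 2 // deep-mode keywords
  if (dimensions.length >= 3) score += 1 // cross-domain bonus
  if (score <= 3) return 'quick'
  if (score <= 6) return 'standard'
  return 'deep'
}
```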

---

### Phase 3: Perspective Assignment

**Objective**: Select analysis perspectives based on topic and dimensions

**Steps**:

1. Map dimensions to perspectives:

| Dimension Match | Perspective | Focus |
|----------------|-------------|-------|
| architecture, implementation | technical | Implementation details, code patterns |
| architecture, security | architectural | System design, scalability |
| concept, comparison, decision | business | Value, ROI, strategy |
| domain-specific keywords | domain_expert | Domain patterns, standards |

2. Quick mode: always 1 perspective (technical by default)
3. Standard/Deep mode: 2-4 perspectives based on dimension coverage

**Output**: List of perspectives with focus areas

---

## Structured Output Template

```
## Summary
- Topic: <topic>
- Pipeline Mode: <quick|standard|deep>
- Depth: <number of perspectives>

## Dimension Detection
- Detected dimensions: <list>
- Complexity score: <score>

## Perspectives
1. <perspective>: <focus area>
2. <perspective>: <focus area>

## Discussion Configuration
- Max discussion rounds: <0|1|5>

## Pipeline Structure
- Total tasks: <count>
- Parallel stages: <description>
- Dynamic tasks possible: <yes/no>
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Topic too vague | Suggest clarifying questions, default to standard mode |
| No dimension matches | Default to the "general" dimension with a technical perspective |
| Timeout approaching | Output current analysis with "PARTIAL" status |
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS
1. Read shared discoveries: .workflow/.csv-wave/{session-id}/discoveries.ndjson (if it exists; skip if not)
2. Read project context: .workflow/project-tech.json (if it exists)

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Role**: {role}
**Description**: {description}
**Perspective**: {perspective}
**Dimensions**: {dimensions}
**Discussion Round**: {discussion_round}
**Discussion Type**: {discussion_type}

### Previous Tasks' Findings (Context)
{prev_context}

---

## Execution Protocol

1. **Read discoveries**: Load shared discoveries from the session's discoveries.ndjson for cross-task context
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute by role**:

### Role: explorer (EXPLORE-* tasks)
Explore the codebase structure from the assigned perspective, collecting structured context for downstream analysis.

- Determine the exploration strategy by perspective:

| Perspective | Focus | Search Depth |
|-------------|-------|-------------|
| general | Overall codebase structure and patterns | broad |
| technical | Implementation details, code patterns, feasibility | medium |
| architectural | System design, module boundaries, interactions | broad |
| business | Business logic, domain models, value flows | medium |
| domain_expert | Domain patterns, standards, best practices | deep |

- Use the available tools (Read, Glob, Grep, Bash) to search the codebase
- Collect: relevant files (path, relevance, summary), code patterns, key findings, module relationships
- Generate questions for downstream analysis
- Focus exploration on the dimensions listed in the Dimensions field

### Role: analyst (ANALYZE-* tasks)
Perform deep analysis on exploration results from the assigned perspective.

- Load exploration results from prev_context
- Detect whether this is a direction-fix task (description mentions "adjusted focus"):
  - Normal: analyze from the assigned perspective using the corresponding exploration results
  - Direction-fix: re-analyze from the adjusted perspective using all available explorations

- Select the analysis approach by perspective:

| Perspective | CLI Tool | Focus |
|-------------|----------|-------|
| technical | gemini | Implementation patterns, code quality, feasibility |
| architectural | gemini | System design, scalability, component interactions |
| business | gemini | Value, ROI, stakeholder impact |
| domain_expert | gemini | Domain-specific patterns, best practices |

- Use `ccw cli` for deep analysis:
  ```bash
  ccw cli -p "PURPOSE: Deep analysis of '<topic>' from <perspective> perspective
  TASK: • Analyze patterns found in exploration • Generate insights with confidence levels • Identify discussion points
  MODE: analysis
  CONTEXT: @**/* | Memory: Exploration findings
  EXPECTED: Structured insights with confidence levels and evidence" --tool gemini --mode analysis
  ```

- Generate structured output:
  - key_insights: [{insight, confidence (high/medium/low), evidence (file:line)}]
  - key_findings: [{finding, file_ref, impact}]
  - discussion_points: [questions needing cross-perspective discussion]
  - open_questions: [areas needing further exploration]
  - recommendations: [{action, rationale, priority}]

### Role: discussant (DISCUSS-* tasks)
Process analysis results and generate a discussion summary. Strategy depends on the discussion type.

- **initial**: Cross-perspective aggregation
  - Aggregate all analysis results from prev_context
  - Identify convergent themes across perspectives
  - Identify conflicting views between perspectives
  - Generate the top 5 discussion points and open questions
  - Produce a structured round summary

- **deepen**: Deep investigation of open questions
  - Use the CLI tool to investigate uncertain insights:
  ```bash
  ccw cli -p "PURPOSE: Investigate open questions and uncertain insights
  TASK: • Focus on questions from previous round • Find supporting evidence • Validate uncertain insights
  MODE: analysis
  CONTEXT: @**/*
  EXPECTED: Evidence-based findings" --tool gemini --mode analysis
  ```

- **direction-adjusted**: Re-analysis from adjusted focus
  - Use the CLI to re-analyze from the adjusted perspective based on user feedback

- **specific-questions**: Targeted Q&A
  - Use the CLI for targeted investigation of user-specified questions

- For all types, produce a round summary:
  - updated_understanding: {confirmed[], corrected[], new_insights[]}
  - convergent themes, conflicting views
  - remaining open questions

### Role: synthesizer (SYNTH-* tasks)
Integrate all explorations, analyses, and discussions into final conclusions.

- Read all available artifacts from prev_context (explorations, analyses, discussions)
- Execute synthesis in four steps:
  1. **Theme Extraction**: Identify convergent themes across perspectives, rank by cross-perspective confirmation
  2. **Conflict Resolution**: Identify contradictions, present trade-off analysis
  3. **Evidence Consolidation**: Deduplicate findings, aggregate by file reference, assign confidence levels
  4. **Recommendation Prioritization**: Sort by priority, deduplicate, cap at 10

- Confidence levels:

| Level | Criteria |
|-------|----------|
| High | Multiple sources confirm, strong evidence |
| Medium | Single source or partial evidence |
| Low | Speculative, needs verification |

- Produce final conclusions:
  - Executive summary
  - Key conclusions with evidence and confidence
  - Prioritized recommendations
  - Open questions
  - Cross-perspective synthesis (convergent themes, conflicts resolved, unique contributions)

4. **Share discoveries**: Append exploration findings to the shared board:
   ```bash
   echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> .workflow/.csv-wave/{session-id}/discoveries.ndjson
   ```

   Discovery types to share:
   - `exploration`: {perspective, file, relevance, summary, patterns[]} — explored file/module
   - `analysis`: {perspective, insight, confidence, evidence, file_ref} — analysis insight
   - `pattern`: {name, file, description, type} — code/architecture pattern
   - `discussion_point`: {topic, perspectives[], convergence, open_questions[]} — discussion point
   - `recommendation`: {action, rationale, priority, confidence} — recommendation
   - `conclusion`: {point, evidence, confidence, perspectives_supporting[]} — final conclusion

5. **Report result**: Return JSON via report_agent_job_result

---

## Output (report_agent_job_result)

Return JSON:

```
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "error": ""
}
```

**Role-specific findings guidance**:
- **explorer**: List file count, key files, patterns found. Example: "Found 12 files related to auth. Key: src/auth/index.ts (entry), src/auth/strategies/*.ts (providers). Patterns: strategy, middleware chain."
- **analyst**: List insight count, top insights with confidence. Example: "3 insights: (1) Strategy pattern for providers [high], (2) Missing token rotation [medium], (3) No rate limiting [high]. 2 discussion points."
- **discussant**: List themes, conflicts, question count. Example: "Convergent: JWT security (2 perspectives). Conflict: middleware approach. 3 open questions on refresh tokens."
- **synthesizer**: List conclusion count, top recommendations. Example: "5 conclusions, 4 recommendations. Top: Implement refresh token rotation [high priority, high confidence]."
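
Assembling the payload with the 500-character cap can be sketched as follows; the field shapes follow the template above, `report_agent_job_result` is the platform tool that receives the object, and the helper name is an illustration:

```javascript
// Build the result object, truncating findings to the 500-char limit.
function buildJobResult(id, status, findings, error = '') {
  const capped = findings.length > 500 ? findings.slice(0, 497) + '...' : findings
  return { id, status, findings: capped, error }
}
```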

.codex/skills/team-ultra-analyze/roles/analyst/role.md

---
role: analyst
prefix: ANALYZE
inner_loop: false
additional_prefixes: [ANALYZE-fix]
message_types:
  success: analysis_ready
  error: error
---

# Deep Analyst

Perform deep multi-perspective analysis on exploration results via CLI tools. Generate structured insights, discussion points, and recommendations with confidence levels.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Exploration results | `<session>/explorations/*.json` | Yes |

1. Extract session path, topic, perspective, and dimensions from the task description
2. Detect direction-fix mode: `type:\s*direction-fix` with `adjusted_focus:\s*(.+)`
3. Load the corresponding exploration results:

| Condition | Source |
|-----------|--------|
| Direction fix | Read ALL exploration files, merge context |
| Normal ANALYZE-N | Read the exploration matching number N |
| Fallback | Read the first available exploration file |

4. Select the CLI tool by perspective:

| Perspective | CLI Tool | Rule Template |
|-------------|----------|---------------|
| technical | gemini | analysis-analyze-code-patterns |
| architectural | claude | analysis-review-architecture |
| business | codex | analysis-analyze-code-patterns |
| domain_expert | gemini | analysis-analyze-code-patterns |
| direction-fix (any) | gemini | analysis-diagnose-bug-root-cause |
|
||||
|
||||
## Phase 3: Deep Analysis via CLI
|
||||
|
||||
Build analysis prompt with exploration context:
|
||||
|
||||
```
|
||||
PURPOSE: <Normal: "Deep analysis of '<topic>' from <perspective> perspective">
|
||||
<Fix: "Supplementary analysis with adjusted focus on '<adjusted_focus>'">
|
||||
Success: Actionable insights with confidence levels and evidence references
|
||||
|
||||
PRIOR EXPLORATION CONTEXT:
|
||||
- Key files: <top 5-8 files from exploration>
|
||||
- Patterns found: <top 3-5 patterns>
|
||||
- Key findings: <top 3-5 findings>
|
||||
|
||||
TASK:
|
||||
- <perspective-specific analysis tasks>
|
||||
- Generate structured findings with confidence levels (high/medium/low)
|
||||
- Identify discussion points requiring user input
|
||||
- List open questions needing further exploration
|
||||
|
||||
MODE: analysis
|
||||
CONTEXT: @**/* | Topic: <topic>
|
||||
EXPECTED: Structured analysis with: key_insights, key_findings, discussion_points, open_questions, recommendations
|
||||
CONSTRAINTS: Focus on <perspective> perspective | <dimensions>
|
||||
```
|
||||
|
||||
Execute: `ccw cli -p "<prompt>" --tool <cli-tool> --mode analysis --rule <rule>`
|
||||
|
||||
## Phase 4: Result Aggregation
|
||||
|
||||
Write analysis output to `<session>/analyses/analysis-<num>.json`:
|
||||
|
||||
```json
|
||||
{
|
||||
"perspective": "<perspective>",
|
||||
"dimensions": ["<dim1>", "<dim2>"],
|
||||
"is_direction_fix": false,
|
||||
"key_insights": [{"insight": "...", "confidence": "high", "evidence": "file:line"}],
|
||||
"key_findings": [{"finding": "...", "file_ref": "...", "impact": "..."}],
|
||||
"discussion_points": ["..."],
|
||||
"open_questions": ["..."],
|
||||
"recommendations": [{"action": "...", "rationale": "...", "priority": "high"}],
|
||||
"_metadata": {"cli_tool": "...", "cli_rule": "...", "perspective": "...", "timestamp": "..."}
|
||||
}
|
||||
```
|
||||
|
||||
Update `<session>/wisdom/.msg/meta.json` under `analyst` namespace:
|
||||
- Read existing -> merge `{ "analyst": { perspective, insight_count, finding_count, is_direction_fix } }` -> write back
|
||||
@@ -0,0 +1,73 @@
# Analyze Task

Parse topic -> detect pipeline mode and perspectives -> output analysis config.

**CONSTRAINT**: Text-level analysis only. NO source code reading, NO codebase exploration.

## Signal Detection

### Pipeline Mode Detection

Parse `--mode` from arguments first. If not specified, auto-detect from topic description:

| Condition | Mode | Depth |
|-----------|------|-------|
| `--mode=quick` or topic contains "quick/overview/fast" | Quick | 1 |
| `--mode=deep` or topic contains "deep/thorough/detailed/comprehensive" | Deep | N (from perspectives) |
| Default (no match) | Standard | N (from perspectives) |

### Dimension Detection

Scan topic keywords to select analysis perspectives:

| Dimension | Keywords |
|-----------|----------|
| architecture | architecture, design, structure |
| implementation | implement, code, source |
| performance | performance, optimize, speed |
| security | security, auth, vulnerability |
| concept | concept, theory, principle |
| comparison | compare, vs, difference |
| decision | decision, choice, tradeoff |

**Depth** = number of selected perspectives. Quick mode always uses depth=1.

## Pipeline Mode Rules

| Mode | Task Structure |
|------|----------------|
| quick | EXPLORE-001 -> ANALYZE-001 -> SYNTH-001 (serial, depth=1) |
| standard | EXPLORE-001..N (parallel) -> ANALYZE-001..N (parallel) -> DISCUSS-001 -> SYNTH-001 |
| deep | Same as standard but SYNTH-001 omitted (created dynamically after discussion loop) |

## Output

Write analysis config to coordinator state (not a file), to be used by dispatch.md:

```json
{
  "pipeline_mode": "<quick|standard|deep>",
  "depth": <number>,
  "perspectives": ["<perspective1>", "<perspective2>"],
  "topic": "<original topic>",
  "dimensions": ["<dim1>", "<dim2>"]
}
```

## Complexity Scoring

| Factor | Points |
|--------|--------|
| Per perspective | +1 |
| Deep mode | +2 |
| Cross-domain (3+ perspectives) | +1 |

Results: 1-3 Quick, 4-6 Standard, 7+ Deep (if not explicitly set)

## Discussion Loop Configuration

| Mode | Max Discussion Rounds |
|------|----------------------|
| quick | 0 |
| standard | 1 |
| deep | 5 |
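The detection and scoring rules above can be sketched as follows. A minimal sketch: function names are illustrative, and the "+2 Deep mode" factor is omitted since it only applies once a mode is already known; keyword lists and thresholds come directly from the tables above.

```javascript
// Keyword table from "Dimension Detection".
const DIMENSION_KEYWORDS = {
  architecture: ['architecture', 'design', 'structure'],
  implementation: ['implement', 'code', 'source'],
  performance: ['performance', 'optimize', 'speed'],
  security: ['security', 'auth', 'vulnerability'],
  concept: ['concept', 'theory', 'principle'],
  comparison: ['compare', 'vs', 'difference'],
  decision: ['decision', 'choice', 'tradeoff']
};

// Select perspectives by scanning topic keywords.
function detectDimensions(topic) {
  const t = topic.toLowerCase();
  return Object.keys(DIMENSION_KEYWORDS)
    .filter(dim => DIMENSION_KEYWORDS[dim].some(kw => t.includes(kw)));
}

// Explicit --mode wins; otherwise match topic keywords; otherwise fall back
// to complexity scoring (+1 per perspective, +1 if cross-domain with 3+).
function detectMode(topic, explicitMode) {
  if (explicitMode) return explicitMode;
  const t = topic.toLowerCase();
  if (/\b(quick|overview|fast)\b/.test(t)) return 'quick';
  if (/\b(deep|thorough|detailed|comprehensive)\b/.test(t)) return 'deep';
  const perspectives = detectDimensions(topic);
  const score = perspectives.length + (perspectives.length >= 3 ? 1 : 0);
  return score <= 3 ? 'quick' : score <= 6 ? 'standard' : 'deep';
}
```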
@@ -0,0 +1,225 @@
# Command: Dispatch

Create the analysis task chain with correct dependencies and structured task descriptions. Supports Quick, Standard, and Deep pipeline modes.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| User topic | From coordinator Phase 1 | Yes |
| Session folder | From coordinator Phase 2 | Yes |
| Pipeline mode | From coordinator Phase 1 | Yes |
| Perspectives | From coordinator Phase 1 (dimension detection) | Yes |

1. Load topic, pipeline mode, and selected perspectives from coordinator state
2. Load pipeline stage definitions from SKILL.md Task Metadata Registry
3. Determine depth = number of selected perspectives (Quick: always 1)

## Phase 3: Task Chain Creation

### Task Entry Template

Each task in tasks.json `tasks` object:
```json
{
  "<TASK-ID>": {
    "title": "<concise title>",
    "description": "PURPOSE: <what this task achieves> | Success: <measurable completion criteria>\nTASK:\n - <step 1: specific action>\n - <step 2: specific action>\n - <step 3: specific action>\nCONTEXT:\n - Session: <session-folder>\n - Topic: <analysis-topic>\n - Perspective: <perspective or 'all'>\n - Upstream artifacts: <artifact-1>, <artifact-2>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <deliverable path> + <quality criteria>\nCONSTRAINTS: <scope limits, focus areas>\n---\nInnerLoop: false",
    "role": "<role-name>",
    "prefix": "<PREFIX>",
    "deps": ["<dependency-list>"],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

### Mode Router

| Mode | Action |
|------|--------|
| `quick` | Create 3 tasks: EXPLORE-001 -> ANALYZE-001 -> SYNTH-001 |
| `standard` | Create N explorers + N analysts + DISCUSS-001 + SYNTH-001 |
| `deep` | Same as standard but omit SYNTH-001 (created after discussion loop) |

---

### Quick Mode Task Chain

**EXPLORE-001** (explorer):
```json
{
  "EXPLORE-001": {
    "title": "Explore codebase structure for analysis topic",
    "description": "PURPOSE: Explore codebase structure for analysis topic | Success: Key files, patterns, and findings collected\nTASK:\n - Detect project structure and relevant modules\n - Search for code related to analysis topic\n - Collect file references, patterns, and key findings\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Perspective: general\n - Dimensions: <dimensions>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/explorations/exploration-001.json | Structured exploration with files and findings\nCONSTRAINTS: Focus on <topic> scope\n---\nInnerLoop: false",
    "role": "explorer",
    "prefix": "EXPLORE",
    "deps": [],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

**ANALYZE-001** (analyst):
```json
{
  "ANALYZE-001": {
    "title": "Deep analysis from technical perspective",
    "description": "PURPOSE: Deep analysis of topic from technical perspective | Success: Actionable insights with confidence levels\nTASK:\n - Load exploration results and build analysis context\n - Analyze from technical perspective across selected dimensions\n - Generate insights, findings, discussion points, recommendations\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Perspective: technical\n - Dimensions: <dimensions>\n - Upstream artifacts: explorations/exploration-001.json\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/analyses/analysis-001.json | Structured analysis with evidence\nCONSTRAINTS: Focus on technical perspective | <dimensions>\n---\nInnerLoop: false",
    "role": "analyst",
    "prefix": "ANALYZE",
    "deps": ["EXPLORE-001"],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

**SYNTH-001** (synthesizer):
```json
{
  "SYNTH-001": {
    "title": "Integrate analysis into final conclusions",
    "description": "PURPOSE: Integrate analysis into final conclusions | Success: Executive summary with recommendations\nTASK:\n - Load all exploration, analysis, and discussion artifacts\n - Extract themes, consolidate evidence, prioritize recommendations\n - Write conclusions and update discussion.md\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Upstream artifacts: explorations/*.json, analyses/*.json\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/conclusions.json + discussion.md update | Final conclusions with confidence levels\nCONSTRAINTS: Pure integration, no new exploration\n---\nInnerLoop: false",
    "role": "synthesizer",
    "prefix": "SYNTH",
    "deps": ["ANALYZE-001"],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

---

### Standard Mode Task Chain

Create tasks in dependency order with parallel exploration and analysis windows:

**EXPLORE-001..N** (explorer, parallel): One per perspective. Each receives a unique agent name (explorer-1, explorer-2, ...) for task discovery matching.

```json
{
  "EXPLORE-<NNN>": {
    "title": "Explore codebase from <perspective> angle",
    "description": "PURPOSE: Explore codebase from <perspective> angle | Success: Perspective-specific files and patterns collected\nTASK:\n - Search codebase from <perspective> perspective\n - Collect files, patterns, findings relevant to this angle\n - Generate questions for downstream analysis\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Perspective: <perspective>\n - Dimensions: <dimensions>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/explorations/exploration-<NNN>.json\nCONSTRAINTS: Focus on <perspective> angle\n---\nInnerLoop: false",
    "role": "explorer",
    "prefix": "EXPLORE",
    "deps": [],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

**ANALYZE-001..N** (analyst, parallel): One per perspective. Each blocked by its corresponding EXPLORE-N.

```json
{
  "ANALYZE-<NNN>": {
    "title": "Deep analysis from <perspective> perspective",
    "description": "PURPOSE: Deep analysis from <perspective> perspective | Success: Insights with confidence and evidence\nTASK:\n - Load exploration-<NNN> results\n - Analyze from <perspective> perspective\n - Generate insights, discussion points, open questions\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Perspective: <perspective>\n - Dimensions: <dimensions>\n - Upstream artifacts: explorations/exploration-<NNN>.json\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/analyses/analysis-<NNN>.json\nCONSTRAINTS: <perspective> perspective | <dimensions>\n---\nInnerLoop: false",
    "role": "analyst",
    "prefix": "ANALYZE",
    "deps": ["EXPLORE-<NNN>"],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

**DISCUSS-001** (discussant): Blocked by all ANALYZE tasks.

```json
{
  "DISCUSS-001": {
    "title": "Process analysis results into discussion summary",
    "description": "PURPOSE: Process analysis results into discussion summary | Success: Convergent themes and discussion points identified\nTASK:\n - Aggregate all analysis results across perspectives\n - Identify convergent themes and conflicting views\n - Generate top discussion points and open questions\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Round: 1\n - Type: initial\n - Upstream artifacts: analyses/*.json\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/discussions/discussion-round-001.json + discussion.md update\nCONSTRAINTS: Aggregate only, no new exploration\n---\nInnerLoop: false",
    "role": "discussant",
    "prefix": "DISCUSS",
    "deps": ["ANALYZE-001", "...", "ANALYZE-<N>"],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

**SYNTH-001** (synthesizer): Blocked by DISCUSS-001.

```json
{
  "SYNTH-001": {
    "title": "Cross-perspective integration into final conclusions",
    "description": "PURPOSE: Cross-perspective integration into final conclusions | Success: Executive summary with prioritized recommendations\n...same as Quick mode SYNTH-001 but with discussion artifacts...",
    "role": "synthesizer",
    "prefix": "SYNTH",
    "deps": ["DISCUSS-001"],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

---

### Deep Mode Task Chain

Same as Standard mode, but **omit SYNTH-001**. It will be created dynamically after the discussion loop completes, with deps on the last DISCUSS-N task.

---

## Discussion Loop Task Creation

Dynamic tasks added to tasks.json during discussion loop:

**DISCUSS-N** (subsequent rounds):
```json
{
  "DISCUSS-<NNN>": {
    "title": "Process discussion round <N>",
    "description": "PURPOSE: Process discussion round <N> | Success: Updated understanding with user feedback integrated\nTASK:\n - Process user feedback: <feedback>\n - Execute <type> discussion strategy\n - Update discussion timeline\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Round: <N>\n - Type: <deepen|direction-adjusted|specific-questions>\n - User feedback: <feedback>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/discussions/discussion-round-<NNN>.json\n---\nInnerLoop: false",
    "role": "discussant",
    "prefix": "DISCUSS",
    "deps": [],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

**ANALYZE-fix-N** (direction adjustment):
```json
{
  "ANALYZE-fix-<N>": {
    "title": "Supplementary analysis with adjusted focus",
    "description": "PURPOSE: Supplementary analysis with adjusted focus | Success: New insights from adjusted direction\nTASK:\n - Re-analyze from adjusted perspective: <adjusted_focus>\n - Build on previous exploration findings\n - Generate updated discussion points\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Type: direction-fix\n - Adjusted focus: <adjusted_focus>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/analyses/analysis-fix-<N>.json\n---\nInnerLoop: false",
    "role": "analyst",
    "prefix": "ANALYZE",
    "deps": [],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

## Phase 4: Validation

Verify task chain integrity:

| Check | Method | Expected |
|-------|--------|----------|
| Task count correct | tasks.json count | quick: 3, standard: 2N+2, deep: 2N+1 |
| Dependencies correct | Trace deps | Acyclic, correct ordering |
| All descriptions have PURPOSE/TASK/CONTEXT/EXPECTED | Pattern check | All present |
| Session path in every task | Check CONTEXT | Session: <folder> present |
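The integrity checks above can be sketched as one validation pass. A minimal sketch: the function name is illustrative; the count formulas, acyclicity requirement, and required description sections come from the table above.

```javascript
// Validate a tasks.json `tasks` object for a given pipeline mode and
// perspective count n. Returns true only if all three checks pass.
function validateTaskChain(tasks, mode, n) {
  // 1) Task count: quick = 3, standard = 2N+2, deep = 2N+1.
  const expected = mode === 'quick' ? 3 : mode === 'standard' ? 2 * n + 2 : 2 * n + 1;
  if (Object.keys(tasks).length !== expected) return false;

  // 2) Deps must be acyclic with no dangling references (DFS coloring:
  //    undefined = unvisited, 1 = on current path, 2 = finished).
  const state = {};
  const visit = id => {
    if (!tasks[id]) return false;      // dangling dependency
    if (state[id] === 1) return false; // back edge -> cycle
    if (state[id] === 2) return true;
    state[id] = 1;
    const ok = (tasks[id].deps || []).every(visit);
    state[id] = 2;
    return ok;
  };
  if (!Object.keys(tasks).every(visit)) return false;

  // 3) Every description carries the required sections.
  return Object.values(tasks).every(t =>
    ['PURPOSE', 'TASK', 'CONTEXT', 'EXPECTED'].every(k => t.description.includes(k)));
}
```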
@@ -0,0 +1,327 @@
# Command: Monitor

Handle all coordinator monitoring events: status checks, pipeline advancement, discussion loop control, and completion. Uses spawn_agent + wait_agent for synchronous coordination.

## Constants

| Key | Value |
|-----|-------|
| WORKER_AGENT | team_worker |
| MAX_DISCUSSION_ROUNDS_QUICK | 0 |
| MAX_DISCUSSION_ROUNDS_STANDARD | 1 |
| MAX_DISCUSSION_ROUNDS_DEEP | 5 |

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Session state | tasks.json | Yes |
| Trigger event | From Entry Router detection | Yes |
| Pipeline mode | From tasks.json `pipeline_mode` | Yes |
| Discussion round | From tasks.json `discussion_round` | Yes |

1. Load tasks.json for current state, `pipeline_mode`, `discussion_round`
2. Read tasks from tasks.json to get current task statuses
3. Identify trigger event type from Entry Router
4. Compute max discussion rounds from pipeline mode:

```
MAX_ROUNDS = pipeline_mode === 'deep' ? 5
           : pipeline_mode === 'standard' ? 1
           : 0
```

## Phase 3: Event Handlers

### handleCallback

Triggered when a worker completes (wait_agent returns).

1. Determine role from completed task prefix, then resolve completed tasks:

**Role detection** (from task prefix):

| Task Prefix | Role |
|-------------|------|
| `EXPLORE-*` | explorer |
| `ANALYZE-*` | analyst |
| `DISCUSS-*` | discussant |
| `SYNTH-*` | synthesizer |

2. Mark task completed in tasks.json:

```
state.tasks[taskId].status = 'completed'
```

3. Record completion in session state via team_msg

4. **Role-specific post-completion logic**:

| Completed Role | Pipeline Mode | Post-Completion Action |
|---------------|---------------|------------------------|
| explorer | all | Log: exploration ready. Proceed to handleSpawnNext |
| analyst | all | Log: analysis ready. Proceed to handleSpawnNext |
| discussant | all | **Discussion feedback gate** (see below) |
| synthesizer | all | Proceed to handleComplete |

5. **Discussion Feedback Gate** (when discussant completes):

When a DISCUSS-* task completes, the coordinator collects user feedback BEFORE spawning the next task.

```
// Read current discussion_round from tasks.json
discussion_round = state.discussion_round || 0
discussion_round++

// Update tasks.json
state.discussion_round = discussion_round

// Check if discussion loop applies
IF pipeline_mode === 'quick':
  // No discussion in quick mode -- proceed to handleSpawnNext (SYNTH)
  -> handleSpawnNext

ELSE IF discussion_round >= MAX_ROUNDS:
  // Reached max rounds -- force proceed to synthesis
  Log: "Max discussion rounds reached, proceeding to synthesis"
  IF no SYNTH-001 task exists in tasks.json:
    Create SYNTH-001 task in tasks.json with deps on last DISCUSS task
  -> handleSpawnNext

ELSE:
  // Collect user feedback
  request_user_input({
    questions: [{
      question: "Discussion round <N> complete. What next?",
      header: "Discussion Feedback",
      multiSelect: false,
      options: [
        { label: "Continue deeper", description: "Current direction is good, go deeper" },
        { label: "Adjust direction", description: "Shift analysis focus" },
        { label: "Done", description: "Sufficient depth, proceed to synthesis" }
      ]
    }]
  })
```

6. **Feedback handling** (after request_user_input returns):

| Feedback | Action |
|----------|--------|
| "Continue deeper" | Create new DISCUSS-`<N+1>` task in tasks.json (pending, no deps). Record decision in discussion.md. Proceed to handleSpawnNext |
| "Adjust direction" | request_user_input for new focus. Create ANALYZE-fix-`<N>` task in tasks.json (pending). Create DISCUSS-`<N+1>` task (pending, deps: [ANALYZE-fix-`<N>`]). Record direction change in discussion.md. Proceed to handleSpawnNext |
| "Done" | Check if SYNTH-001 already exists in tasks.json: if yes, ensure deps is updated to reference last DISCUSS task; if no, create SYNTH-001 (pending, deps: [last DISCUSS]). Record decision in discussion.md. Proceed to handleSpawnNext |

**Dynamic task creation** -- add entries to tasks.json `tasks` object:

DISCUSS-N (subsequent round):
```json
{
  "DISCUSS-<NNN>": {
    "title": "Process discussion round <N>",
    "description": "PURPOSE: Process discussion round <N> | Success: Updated understanding\nTASK:\n - Process previous round results\n - Execute <type> discussion strategy\n - Update discussion timeline\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Round: <N>\n - Type: <deepen|direction-adjusted|specific-questions>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/discussions/discussion-round-<NNN>.json\n---\nInnerLoop: false",
    "role": "discussant",
    "prefix": "DISCUSS",
    "deps": [],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

ANALYZE-fix-N (direction adjustment):
```json
{
  "ANALYZE-fix-<N>": {
    "title": "Supplementary analysis with adjusted focus",
    "description": "PURPOSE: Supplementary analysis with adjusted focus | Success: New insights from adjusted direction\nTASK:\n - Re-analyze from adjusted perspective: <adjusted_focus>\n - Build on previous exploration findings\n - Generate updated discussion points\nCONTEXT:\n - Session: <session-folder>\n - Topic: <topic>\n - Type: direction-fix\n - Adjusted focus: <adjusted_focus>\n - Shared memory: <session>/wisdom/.msg/meta.json\nEXPECTED: <session>/analyses/analysis-fix-<N>.json\n---\nInnerLoop: false",
    "role": "analyst",
    "prefix": "ANALYZE",
    "deps": [],
    "status": "pending",
    "findings": "",
    "error": ""
  }
}
```

SYNTH-001 (created dynamically -- check existence first):
```javascript
// Guard: only create if SYNTH-001 doesn't exist yet in tasks.json
if (!state.tasks['SYNTH-001']) {
  state.tasks['SYNTH-001'] = {
    title: "Integrate all analysis into final conclusions",
    description: "PURPOSE: Integrate all analysis into final conclusions | Success: Executive summary with recommendations...",
    role: "synthesizer",
    prefix: "SYNTH",
    deps: ["<last-DISCUSS-task-id>"],
    status: "pending",
    findings: "",
    error: ""
  }
} else {
  // Always update deps to reference the last DISCUSS task
  state.tasks['SYNTH-001'].deps = ["<last-DISCUSS-task-id>"]
}
```

7. Record user feedback to decision_trail via team_msg:

```
mcp__ccw-tools__team_msg({
  operation: "log", session_id: sessionId, from: "coordinator",
  type: "state_update",
  data: { decision_trail_entry: {
    round: discussion_round,
    decision: feedback,
    context: "User feedback at discussion round N",
    timestamp: current ISO timestamp
  }}
})
```

8. Proceed to handleSpawnNext

### handleSpawnNext

Find and spawn the next ready tasks.

1. Read tasks.json, find tasks where:
   - Status is "pending"
   - All deps tasks have status "completed"

2. For each ready task, determine role from task prefix:

| Task Prefix | Role | Role Spec |
|-------------|------|-----------|
| `EXPLORE-*` | explorer | `<skill_root>/roles/explorer/role.md` |
| `ANALYZE-*` | analyst | `<skill_root>/roles/analyst/role.md` |
| `DISCUSS-*` | discussant | `<skill_root>/roles/discussant/role.md` |
| `SYNTH-*` | synthesizer | `<skill_root>/roles/synthesizer/role.md` |

3. Spawn team_worker for each ready task:

```javascript
// 1) Update status in tasks.json
state.tasks[taskId].status = 'in_progress'

// 2) Spawn worker
const agentId = spawn_agent({
  agent_type: "team_worker",
  items: [
    { type: "text", text: `## Role Assignment
role: ${role}
role_spec: ${skillRoot}/roles/${role}/role.md
session: ${sessionFolder}
session_id: ${sessionId}
requirement: ${taskDescription}
agent_name: ${agentName}
inner_loop: false` },

    { type: "text", text: `## Current Task
- Task ID: ${taskId}
- Task: ${taskSubject}

Read role_spec file to load Phase 2-4 domain instructions.
Execute built-in Phase 1 (task discovery, owner=${agentName}) -> role-spec Phase 2-4 -> built-in Phase 5 (report).` }
  ]
})

// 3) Track agent
state.active_agents[taskId] = { agentId, role, started_at: now }
```

After spawning all ready tasks:

```javascript
// 4) Batch wait for all spawned workers
const agentIds = Object.values(state.active_agents)
  .map(a => a.agentId)
wait_agent({ ids: agentIds })

// 5) Collect results and update tasks.json
for (const [taskId, agent] of Object.entries(state.active_agents)) {
  state.tasks[taskId].status = 'completed'
  delete state.active_agents[taskId]
}
```

4. **Parallel spawn rules**:

| Mode | Stage | Spawn Behavior |
|------|-------|---------------|
| quick | All stages | One worker at a time (serial pipeline) |
| standard/deep | EXPLORE phase | Spawn all EXPLORE-001..N in parallel, wait_agent for all |
| standard/deep | ANALYZE phase | Spawn all ANALYZE-001..N in parallel, wait_agent for all |
| all | DISCUSS phase | One discussant at a time |
| all | SYNTH phase | One synthesizer |

5. **STOP** after processing -- wait for next event
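The ready-task scan and prefix-to-role mapping in handleSpawnNext can be sketched as follows. A minimal sketch with illustrative function names; the readiness rule (pending + all deps completed) and the prefix table come from the steps above.

```javascript
// Map a task prefix to its worker role, per the role-spec table.
const PREFIX_ROLES = {
  EXPLORE: 'explorer',
  ANALYZE: 'analyst',
  DISCUSS: 'discussant',
  SYNTH: 'synthesizer'
};

// A task is ready when it is pending and every dependency is completed.
function findReadyTasks(tasks) {
  return Object.entries(tasks)
    .filter(([, t]) =>
      t.status === 'pending' &&
      (t.deps || []).every(dep => tasks[dep] && tasks[dep].status === 'completed'))
    .map(([id]) => id);
}

// "ANALYZE-fix-2" -> "ANALYZE" -> "analyst"
function roleForTask(taskId) {
  return PREFIX_ROLES[taskId.split('-')[0]];
}
```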
### handleCheck

Output current pipeline status from tasks.json without advancing.

```
Pipeline Status (<mode> mode):
[DONE] EXPLORE-001 (explorer) -> exploration-001.json
[DONE] EXPLORE-002 (explorer) -> exploration-002.json
[DONE] ANALYZE-001 (analyst) -> analysis-001.json
[RUN] ANALYZE-002 (analyst) -> analyzing...
[WAIT] DISCUSS-001 (discussant) -> blocked by ANALYZE-002
[----] SYNTH-001 (synthesizer) -> blocked by DISCUSS-001

Discussion Rounds: 0/<max>
Pipeline Mode: <mode>
Session: <session-id>
```

Output status -- do NOT advance pipeline.

### handleResume

Resume pipeline after user pause or interruption.

1. Audit tasks.json for inconsistencies:
   - Tasks stuck in "in_progress" -> reset to "pending"
   - Tasks with completed deps but still "pending" -> include in spawn list
2. Proceed to handleSpawnNext

### handleComplete

Triggered when all pipeline tasks are completed.

**Completion check**:

| Mode | Completion Condition |
|------|---------------------|
| quick | EXPLORE-001 + ANALYZE-001 + SYNTH-001 all completed |
| standard | All EXPLORE + ANALYZE + DISCUSS-001 + SYNTH-001 completed |
| deep | All EXPLORE + ANALYZE + all DISCUSS-N + SYNTH-001 completed |

1. Verify all tasks completed in tasks.json. If any not completed, return to handleSpawnNext
2. If all completed, **inline-execute coordinator Phase 5** (report + completion action). Do NOT STOP here -- continue directly into Phase 5 within the same turn.

## Phase 4: State Persistence

After every handler execution **except handleComplete**:

1. Update tasks.json with current state:
   - `discussion_round`: current round count
   - `active_agents`: list of in-progress agents
2. Verify task list consistency (no orphan tasks, no broken dependencies)
3. **STOP** and wait for next event

> **handleComplete exception**: handleComplete does NOT STOP -- it transitions directly to coordinator Phase 5.

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Worker spawn fails | Retry once. If still fails, report to user via request_user_input: retry / skip / abort |
| Discussion loop exceeds max rounds | Force create SYNTH-001, proceed to synthesis |
| Synthesis fails | Report partial results from analyses and discussions |
| Pipeline stall (no ready + no running) | Check deps chains, report blockage to user |
| Missing task artifacts | Log warning, continue with available data |
223
.codex/skills/team-ultra-analyze/roles/coordinator/role.md
Normal file
@@ -0,0 +1,223 @@
# Coordinator - Ultra Analyze Team

**Role**: coordinator
**Type**: Orchestrator
**Team**: ultra-analyze

Orchestrates the analysis pipeline: topic clarification, pipeline mode selection, task dispatch, discussion loop management, and final synthesis. Spawns team_worker agents for all worker roles.

## Boundaries

### MUST

- Use the `team_worker` agent type for all worker spawns (NOT `general-purpose`)
- Follow the Command Execution Protocol for dispatch and monitor commands
- Respect pipeline stage dependencies (deps)
- Stop after spawning workers -- wait for results via wait_agent
- Handle the discussion loop with a maximum of 5 rounds (Deep mode)
- Execute the completion action in Phase 5

### MUST NOT

- Implement domain logic (exploring, analyzing, discussing, synthesizing) -- workers handle this
- Spawn workers without creating tasks first
- Skip checkpoints when configured
- Force-advance the pipeline past failed stages
- Directly call cli-explore-agent or CLI analysis tools, or execute codebase exploration

---

## Command Execution Protocol

When the coordinator needs to execute a command (dispatch, monitor):

1. **Read the command file**: `roles/coordinator/commands/<command-name>.md`
2. **Follow the workflow** defined in the command file (Phase 2-4 structure)
3. **Commands are inline execution guides** -- NOT separate agents or subprocesses
4. **Execute synchronously** -- complete the command workflow before proceeding

---

## Entry Router

When the coordinator is invoked, detect the invocation type:

| Detection | Condition | Handler |
|-----------|-----------|---------|
| Status check | Arguments contain "check" or "status" | -> handleCheck (monitor.md) |
| Manual resume | Arguments contain "resume" or "continue" | -> handleResume (monitor.md) |
| Pipeline complete | All tasks have status "completed" | -> handleComplete (monitor.md) |
| Interrupted session | Active/paused session exists | -> Phase 0 |
| New session | None of the above | -> Phase 1 |

For check/resume/complete: load `@commands/monitor.md`, execute the matched handler, then STOP.

### Router Implementation

1. **Load session context** (if it exists):
   - Scan `.workflow/.team/UAN-*/.msg/meta.json` for active/paused sessions
   - If found, extract the session folder path, status, and `pipeline_mode`
2. **Parse $ARGUMENTS** for detection keywords:
   - Check for the "check", "status", "resume", "continue" keywords
3. **Route to handler**:
   - For monitor handlers: read `commands/monitor.md`, execute the matched handler, STOP
   - For Phase 0: execute the Session Resume Check below
   - For Phase 1: execute Topic Understanding below

---

## Phase 0: Session Resume Check

Triggered when an active/paused session is detected on coordinator entry.

1. Load tasks.json from the detected session folder and read all tasks
2. Reconcile session state vs task status:

| Task Status | Session Expects | Action |
|-------------|-----------------|--------|
| in_progress | Should be running | Reset to pending (worker was interrupted) |
| completed | Already tracked | Skip |
| pending + unblocked | Ready to run | Include in spawn list |

3. Spawn workers for ready tasks -> Phase 4 coordination loop
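The reconcile-and-spawn logic above can be sketched as a pure function over tasks.json (a sketch: the per-task `{ status, deps }` shape is an assumption based on the tasks.json schema in Phase 2):

```javascript
// Sketch: reconcile persisted task state after an interrupted session.
// Assumes tasks.json maps task id -> { status, deps: [taskIds] }.
function reconcile(tasks) {
  const spawnList = [];
  for (const [id, task] of Object.entries(tasks)) {
    if (task.status === "in_progress") {
      // Worker was interrupted mid-run: reset so it can be respawned.
      task.status = "pending";
    }
    const unblocked = (task.deps || []).every(
      d => tasks[d] && tasks[d].status === "completed"
    );
    if (task.status === "pending" && unblocked) spawnList.push(id);
  }
  return spawnList;
}
```

Tasks whose dependencies were also interrupted stay blocked until their upstream tasks complete again.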
---

## Phase 1: Topic Understanding & Requirement Clarification

TEXT-LEVEL ONLY. No source code reading.

1. Parse the user task description from $ARGUMENTS
2. Extract explicit settings: `--mode`, scope, focus areas
3. Delegate to `@commands/analyze.md` for signal detection and pipeline mode selection
4. **Interactive clarification** (non-auto mode): use `request_user_input` for focus, perspectives, depth
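Step 2's flag extraction can be sketched as follows (a sketch: flag names follow the skill's argument hint; the scope/focus split is simplified and `--concurrency`/`--continue` are omitted):

```javascript
// Sketch: pull explicit settings out of $ARGUMENTS; everything that is
// not a recognized flag is treated as part of the topic.
function parseArgs(argv) {
  const out = { auto: false, mode: null, topic: [] };
  for (let i = 0; i < argv.length; i++) {
    const a = argv[i];
    if (a === "-y" || a === "--yes") out.auto = true;
    else if (a === "--mode") out.mode = argv[++i]; // consumes the next token
    else out.topic.push(a);
  }
  return { ...out, topic: out.topic.join(" ") };
}
```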
---

## Phase 2: Create Session + Initialize

1. Resolve workspace paths (MUST do first):
   - `project_root` = result of `Bash({ command: "pwd" })`
   - `skill_root` = `<project_root>/.codex/skills/team-ultra-analyze`
2. Generate the session ID: `UAN-{slug}-{YYYY-MM-DD}`
3. Create the session folder structure:

```
.workflow/.team/UAN-{slug}-{date}/
+-- .msg/messages.jsonl
+-- .msg/meta.json
+-- discussion.md
+-- explorations/
+-- analyses/
+-- discussions/
+-- wisdom/
    +-- learnings.md, decisions.md, conventions.md, issues.md
```

4. Write the initial tasks.json:

```json
{
  "session_id": "<id>",
  "pipeline_mode": "<Quick|Standard|Deep>",
  "topic": "<topic>",
  "perspectives": ["<perspective1>", "<perspective2>"],
  "created_at": "<ISO timestamp>",
  "discussion_round": 0,
  "active_agents": {},
  "tasks": {}
}
```

5. Initialize .msg/meta.json with pipeline metadata via team_msg:

```typescript
mcp__ccw-tools__team_msg({
  operation: "log",
  session_id: "<session-id>",
  from: "coordinator",
  type: "state_update",
  summary: "Session initialized",
  data: {
    pipeline_mode: "<Quick|Standard|Deep>",
    pipeline_stages: ["explorer", "analyst", "discussant", "synthesizer"],
    roles: ["coordinator", "explorer", "analyst", "discussant", "synthesizer"]
  }
})
```

---

## Phase 3: Create Task Chain

Execute `@commands/dispatch.md` inline (Command Execution Protocol):

1. Read `roles/coordinator/commands/dispatch.md`
2. Follow dispatch Phase 2 -> Phase 3 -> Phase 4
3. Result: all pipeline tasks created in tasks.json with correct deps
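For the Standard pipeline, the resulting task chain can be sketched as data (a sketch: the dependency shape follows specs/pipelines.md; zero-padded IDs assume at most 9 perspectives):

```javascript
// Sketch: the Standard-mode task chain for N perspectives, as dispatch.md
// would register it in tasks.json (N <= 9 for the "00N" padding).
function standardChain(perspectives) {
  const tasks = {};
  perspectives.forEach((p, i) => {
    tasks[`EXPLORE-00${i + 1}`] = { role: "explorer", perspective: p, deps: [] };
    tasks[`ANALYZE-00${i + 1}`] = { role: "analyst", perspective: p, deps: [`EXPLORE-00${i + 1}`] };
  });
  // DISCUSS waits on every analysis; SYNTH waits on the discussion.
  tasks["DISCUSS-001"] = {
    role: "discussant",
    deps: Object.keys(tasks).filter(id => id.startsWith("ANALYZE"))
  };
  tasks["SYNTH-001"] = { role: "synthesizer", deps: ["DISCUSS-001"] };
  return tasks;
}
```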
---

## Phase 4: Spawn & Coordination Loop

### Initial Spawn

Find the first unblocked tasks and spawn their workers. Use the SKILL.md Worker Spawn Template with:

- `role_spec: <skill_root>/roles/<role>/role.md`
- `inner_loop: false`

**STOP** after spawning; collect results via wait_agent.

### Coordination (via monitor.md handlers)

All subsequent coordination is handled by `commands/monitor.md` handlers, triggered after wait_agent returns.

---

## Phase 5: Report + Completion Action

### Report

1. Load session state -> count completed tasks, calculate duration
2. List deliverables:

| Deliverable | Path |
|-------------|------|
| Explorations | <session>/explorations/*.json |
| Analyses | <session>/analyses/*.json |
| Discussion | <session>/discussion.md |
| Conclusions | <session>/conclusions.json |

3. Include discussion summaries and the decision trail
4. Output the pipeline summary: task count, duration, mode
5. **Completion Action** (interactive):

```
request_user_input({
  questions: [{
    question: "Ultra-Analyze pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, clean up tasks and resources" },
      { label: "Keep Active", description: "Keep session active for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location, then clean" }
    ]
  }]
})
```

6. Handle the user's choice per the SKILL.md Completion Action section.

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Explorer finds nothing | Continue with limited context, note the limitation |
| Discussion loop stuck >5 rounds | Force synthesis, offer continuation |
| CLI unavailable | Fallback chain: gemini -> codex -> claude |
| User timeout in discussion | Save state, show resume command |
| Session folder conflict | Append timestamp suffix |
104  .codex/skills/team-ultra-analyze/roles/discussant/role.md  Normal file
@@ -0,0 +1,104 @@
---
role: discussant
prefix: DISCUSS
inner_loop: false
message_types:
  success: discussion_processed
  error: error
---

# Discussant

Process analysis results and user feedback. Execute direction adjustments, deep-dive explorations, or targeted Q&A based on the discussion type. Update the discussion timeline.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| Analysis results | `<session>/analyses/*.json` | Yes |
| Exploration results | `<session>/explorations/*.json` | No |

1. Extract the session path, topic, round, discussion type, and user feedback:

| Field | Pattern | Default |
|-------|---------|---------|
| sessionFolder | `session:\s*(.+)` | required |
| topic | `topic:\s*(.+)` | required |
| round | `round:\s*(\d+)` | 1 |
| discussType | `type:\s*(.+)` | "initial" |
| userFeedback | `user_feedback:\s*(.+)` | empty |

2. Read all analysis and exploration results
3. Aggregate current findings, insights, and open questions
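The extraction in step 1 can be sketched directly from the pattern table (a sketch: assumes each field sits on its own line of the task description):

```javascript
// Sketch: apply the field patterns above to a task description,
// falling back to the table's defaults when a field is absent.
function parseTask(desc) {
  const grab = (re, dflt) => (desc.match(re) || [, dflt])[1];
  return {
    sessionFolder: grab(/session:\s*(.+)/, undefined),
    topic: grab(/topic:\s*(.+)/, undefined),
    round: parseInt(grab(/round:\s*(\d+)/, "1"), 10),
    discussType: grab(/type:\s*(.+)/, "initial"),
    userFeedback: grab(/user_feedback:\s*(.+)/, "")
  };
}
```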
## Phase 3: Discussion Processing

Select a strategy by discussion type:

| Type | Mode | Description |
|------|------|-------------|
| initial | inline | Aggregate all analyses: convergent themes, conflicts, top discussion points |
| deepen | cli | Use a CLI tool to investigate open questions more deeply |
| direction-adjusted | cli | Re-analyze via `ccw cli` from the adjusted perspective |
| specific-questions | cli | Targeted exploration answering user questions |

**initial**: Cross-perspective summary -- identify convergent themes, conflicting views, and the top 5 discussion points and open questions from all analyses.

**deepen**: Use a CLI tool for deep investigation:

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Investigate open questions and uncertain insights; success = evidence-based findings
TASK: • Focus on open questions: <questions> • Find supporting evidence • Validate uncertain insights • Document findings
MODE: analysis
CONTEXT: @**/* | Memory: Session <session-folder>, previous analyses
EXPECTED: JSON output with investigation results | Write to <session>/discussions/deepen-<num>.json
CONSTRAINTS: Evidence-based analysis only
" --tool gemini --mode analysis --rule analysis-trace-code-execution`,
})
```

**direction-adjusted**: CLI re-analysis from the adjusted focus:

```javascript
Bash({
  command: `ccw cli -p "Re-analyze '<topic>' with adjusted focus on '<userFeedback>'" --tool gemini --mode analysis`,
})
```

**specific-questions**: Use a CLI tool for targeted Q&A:

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Answer specific user questions about <topic>; success = clear, evidence-based answers
TASK: • Answer: <userFeedback> • Provide code references • Explain context
MODE: analysis
CONTEXT: @**/* | Memory: Session <session-folder>
EXPECTED: JSON output with answers and evidence | Write to <session>/discussions/questions-<num>.json
CONSTRAINTS: Direct answers with code references
" --tool gemini --mode analysis`,
})
```

## Phase 4: Update Discussion Timeline

1. Write the round content to `<session>/discussions/discussion-round-<num>.json`:

```json
{
  "round": 1, "type": "initial", "user_feedback": "...",
  "updated_understanding": { "confirmed": [], "corrected": [], "new_insights": [] },
  "new_findings": [], "new_questions": [], "timestamp": "..."
}
```

2. Append a round section to `<session>/discussion.md`:

```markdown
### Round <N> - Discussion (<timestamp>)
#### Type: <discussType>
#### User Input: <userFeedback or "(Initial discussion round)">
#### Updated Understanding
**Confirmed**: <list> | **Corrected**: <list> | **New Insights**: <list>
#### New Findings / Open Questions
```

Update `<session>/wisdom/.msg/meta.json` under the `discussant` namespace:
- Read existing -> merge `{ "discussant": { round, type, new_insight_count, corrected_count } }` -> write back
74  .codex/skills/team-ultra-analyze/roles/explorer/role.md  Normal file
@@ -0,0 +1,74 @@
---
role: explorer
prefix: EXPLORE
inner_loop: false
message_types:
  success: exploration_ready
  error: error
---

# Codebase Explorer

Explore the codebase structure through cli-explore-agent, collecting structured context (files, patterns, findings) for downstream analysis. One explorer per analysis perspective.

## Phase 2: Context & Scope Assessment

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |

1. Load debug specs: run `ccw spec load --category debug` for known issues and root-cause notes
2. Extract the session path, topic, perspective, and dimensions from the task description:

| Field | Pattern | Default |
|-------|---------|---------|
| sessionFolder | `session:\s*(.+)` | required |
| topic | `topic:\s*(.+)` | required |
| perspective | `perspective:\s*(.+)` | "general" |
| dimensions | `dimensions:\s*(.+)` | "general" |

3. Determine the exploration number from the task subject (EXPLORE-N)
4. Build the exploration strategy by perspective:

| Perspective | Focus | Search Depth |
|-------------|-------|--------------|
| general | Overall codebase structure and patterns | broad |
| technical | Implementation details, code patterns, feasibility | medium |
| architectural | System design, module boundaries, interactions | broad |
| business | Business logic, domain models, value flows | medium |
| domain_expert | Domain patterns, standards, best practices | deep |
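The strategy lookup can be sketched as a table-driven map with a fallback (a sketch: the fallback to "general" mirrors the defaults in the field table above):

```javascript
// Sketch: resolve the exploration strategy for a perspective; unknown
// values fall back to "general" (the table above is the source of truth).
const STRATEGIES = {
  general:       { focus: "Overall codebase structure and patterns", depth: "broad" },
  technical:     { focus: "Implementation details, code patterns, feasibility", depth: "medium" },
  architectural: { focus: "System design, module boundaries, interactions", depth: "broad" },
  business:      { focus: "Business logic, domain models, value flows", depth: "medium" },
  domain_expert: { focus: "Domain patterns, standards, best practices", depth: "deep" }
};
const strategyFor = p => STRATEGIES[p] || STRATEGIES.general;
```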
## Phase 3: Codebase Exploration

Use a CLI tool for codebase exploration:

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Explore codebase for <topic> from <perspective> perspective; success = structured findings with relevant files and patterns
TASK: • Run module depth analysis • Search for topic-related patterns • Identify key files and their relationships • Extract architectural insights
MODE: analysis
CONTEXT: @**/* | Memory: Session <session-folder>, perspective <perspective>
EXPECTED: JSON output with: relevant_files (path, relevance, summary), patterns, key_findings, module_map, questions_for_analysis, _metadata (perspective, search_queries, timestamp)
CONSTRAINTS: Focus on <perspective> angle - <strategy.focus> | Write to <session>/explorations/exploration-<num>.json
" --tool gemini --mode analysis --rule analysis-analyze-code-patterns`,
})
```

**ACE fallback** (when the CLI produces no output):

```javascript
mcp__ace-tool__search_context({ project_root_path: ".", query: "<topic> <perspective>" })
```

## Phase 4: Result Validation

| Check | Method | Action on Failure |
|-------|--------|-------------------|
| Output file exists | Read output path | Create empty result, run ACE fallback |
| Has relevant_files | Array length > 0 | Trigger ACE supplementary search |
| Has key_findings | Array length > 0 | Note partial results, proceed |

Write the validated exploration to `<session>/explorations/exploration-<num>.json`.

Update `<session>/wisdom/.msg/meta.json` under the `explorer` namespace:
- Read existing -> merge `{ "explorer": { perspective, file_count, finding_count } }` -> write back
78  .codex/skills/team-ultra-analyze/roles/synthesizer/role.md  Normal file
@@ -0,0 +1,78 @@
---
role: synthesizer
prefix: SYNTH
inner_loop: false
message_types:
  success: synthesis_ready
  error: error
---

# Synthesizer

Integrate all explorations, analyses, and discussions into final conclusions. Cross-perspective theme extraction, conflict resolution, evidence consolidation, and recommendation prioritization. Pure integration role -- no external tools or CLI calls.

## Phase 2: Context Loading

| Input | Source | Required |
|-------|--------|----------|
| Task description | From task subject/description | Yes |
| Session path | Extracted from task description | Yes |
| All artifacts | `<session>/explorations/*.json`, `analyses/*.json`, `discussions/*.json` | Yes |
| Decision trail | From wisdom/.msg/meta.json | No |

1. Extract the session path and topic from the task description
2. Read all exploration, analysis, and discussion round files
3. Load the decision trail and current understanding from meta.json
4. Select a synthesis strategy:

| Condition | Strategy |
|-----------|----------|
| Single analysis, no discussions | simple (Quick mode summary) |
| Multiple analyses, >2 discussion rounds | deep (track evolution) |
| Default | standard (cross-perspective integration) |

## Phase 3: Cross-Perspective Synthesis

Execute synthesis across four dimensions:

**1. Theme Extraction**: Identify convergent themes across all analysis perspectives. Cluster insights by similarity, rank by cross-perspective confirmation count.

**2. Conflict Resolution**: Identify contradictions between perspectives. Present both sides with a trade-off analysis when irreconcilable.

**3. Evidence Consolidation**: Deduplicate findings, aggregate by file reference. Map evidence to conclusions with confidence levels:

| Level | Criteria |
|-------|----------|
| High | Multiple sources confirm, strong evidence |
| Medium | Single source or partial evidence |
| Low | Speculative, needs verification |

**4. Recommendation Prioritization**: Sort all recommendations by priority (high > medium > low), deduplicate, cap at 10.
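The sort/dedup/cap in dimension 4 can be sketched as (a sketch: deduplicating by the `action` field keeps the highest-priority occurrence, which is an assumption about the intended tie-break):

```javascript
// Sketch: priority sort, dedup by action (first = highest priority wins),
// then cap at 10, per "Recommendation Prioritization" above.
function prioritize(recs) {
  const rank = { high: 0, medium: 1, low: 2 };
  const seen = new Set();
  return recs
    .slice() // keep the input untouched
    .sort((a, b) => rank[a.priority] - rank[b.priority]) // stable sort
    .filter(r => !seen.has(r.action) && seen.add(r.action))
    .slice(0, 10);
}
```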
Integrate the decision trail from the discussion rounds into the final narrative.

## Phase 4: Write Conclusions

1. Write `<session>/conclusions.json`:

```json
{
  "session_id": "...", "topic": "...", "completed": "ISO-8601",
  "summary": "Executive summary...",
  "key_conclusions": [{"point": "...", "evidence": "...", "confidence": "high"}],
  "recommendations": [{"action": "...", "rationale": "...", "priority": "high"}],
  "open_questions": ["..."],
  "decision_trail": [{"round": 1, "decision": "...", "context": "..."}],
  "cross_perspective_synthesis": { "convergent_themes": [], "conflicts_resolved": [], "unique_contributions": [] },
  "_metadata": { "explorations": 3, "analyses": 3, "discussions": 2, "strategy": "standard" }
}
```

2. Append a conclusions section to `<session>/discussion.md`:

```markdown
## Conclusions
### Summary / Key Conclusions / Recommendations / Remaining Questions
## Decision Trail / Current Understanding (Final) / Session Statistics
```

Update `<session>/wisdom/.msg/meta.json` under the `synthesizer` namespace:
- Read existing -> merge `{ "synthesizer": { conclusion_count, recommendation_count, open_question_count } }` -> write back
@@ -1,180 +0,0 @@
# Team Ultra Analyze — CSV Schema

## Master CSV: tasks.csv

### Column Definitions

#### Input Columns (Set by Decomposer)

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"EXPLORE-001"` |
| `title` | string | Yes | Short task title | `"Explore from technical perspective"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Search codebase from technical perspective..."` |
| `role` | string | Yes | Worker role: explorer, analyst, discussant, synthesizer | `"explorer"` |
| `perspective` | string | No | Analysis perspective: technical, architectural, business, domain_expert | `"technical"` |
| `dimensions` | string | No | Analysis dimensions (semicolon-separated) | `"architecture;implementation"` |
| `discussion_round` | integer | No | Discussion round number (0 = N/A, 1+ = round) | `"1"` |
| `discussion_type` | string | No | Discussion type: initial, deepen, direction-adjusted, specific-questions | `"initial"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"EXPLORE-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"EXPLORE-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |

#### Computed Columns (Set by Wave Engine)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[Task EXPLORE-001] Found 12 relevant files..."` |
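The 1-based `wave` column can be sketched as a leveled topological pass over `deps` (a sketch: assumes the dependency graph is already validated as acyclic, per the Validation Rules):

```javascript
// Sketch: wave = 1 for tasks with no deps, otherwise 1 + the deepest dep.
// tasks: { id: { deps: [ids] } }; returns { id: waveNumber } (1-based).
function assignWaves(tasks) {
  const wave = {};
  const depth = id => {
    if (wave[id]) return wave[id]; // memoized (waves are >= 1, so truthy)
    const deps = tasks[id].deps;
    wave[id] = deps.length ? 1 + Math.max(...deps.map(depth)) : 1;
    return wave[id];
  };
  Object.keys(tasks).forEach(id => depth(id));
  return wave;
}
```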
#### Output Columns (Set by Agent)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` → `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Found 12 files related to auth module..."` |
| `error` | string | Error message if failed | `""` |

---

### exec_mode Values

| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within a wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |

Interactive tasks appear in the master CSV for dependency tracking but are NOT included in wave-{N}.csv files.

---

### Example Data

```csv
id,title,description,role,perspective,dimensions,discussion_round,discussion_type,deps,context_from,exec_mode,wave,status,findings,error
"EXPLORE-001","Explore from technical perspective","Search codebase from technical perspective. Collect files, patterns, and findings related to authentication module.","explorer","technical","architecture;implementation","0","","","","csv-wave","1","pending","",""
"EXPLORE-002","Explore from architectural perspective","Search codebase from architectural perspective. Focus on module boundaries, component interactions, and system design patterns.","explorer","architectural","architecture;security","0","","","","csv-wave","1","pending","",""
"ANALYZE-001","Deep analysis from technical perspective","Analyze exploration results from technical perspective. Generate insights with confidence levels and evidence references.","analyst","technical","architecture;implementation","0","","EXPLORE-001","EXPLORE-001","csv-wave","2","pending","",""
"ANALYZE-002","Deep analysis from architectural perspective","Analyze exploration results from architectural perspective. Focus on system design quality and scalability.","analyst","architectural","architecture;security","0","","EXPLORE-002","EXPLORE-002","csv-wave","2","pending","",""
"DISCUSS-001","Initial discussion round","Aggregate all analysis results across perspectives. Identify convergent themes, conflicting views, and top discussion points.","discussant","","","1","initial","ANALYZE-001;ANALYZE-002","ANALYZE-001;ANALYZE-002","csv-wave","3","pending","",""
"FEEDBACK-001","Discussion feedback gate","Collect user feedback on discussion results. Decide: continue deeper, adjust direction, or proceed to synthesis.","","","","1","","DISCUSS-001","DISCUSS-001","interactive","4","pending","",""
"SYNTH-001","Final synthesis","Integrate all explorations, analyses, and discussions into final conclusions with prioritized recommendations.","synthesizer","","","0","","FEEDBACK-001","EXPLORE-001;EXPLORE-002;ANALYZE-001;ANALYZE-002;DISCUSS-001","csv-wave","5","pending","",""
```

---

### Column Lifecycle

```
Decomposer (Phase 1)         Wave Engine (Phase 2)        Agent (Execution)
─────────────────────        ────────────────────         ─────────────────
id               ──────────► id               ──────────► id
title            ──────────► title            ──────────► (reads)
description      ──────────► description      ──────────► (reads)
role             ──────────► role             ──────────► (reads)
perspective      ──────────► perspective      ──────────► (reads)
dimensions       ──────────► dimensions       ──────────► (reads)
discussion_round ──────────► discussion_round ──────────► (reads)
discussion_type  ──────────► discussion_type  ──────────► (reads)
deps             ──────────► deps             ──────────► (reads)
context_from     ──────────► context_from     ──────────► (reads)
exec_mode        ──────────► exec_mode        ──────────► (reads)
                             wave             ──────────► (reads)
                             prev_context     ──────────► (reads)
                                                          status
                                                          findings
                                                          error
```

---

## Output Schema (JSON)

Agent output via `report_agent_job_result` (csv-wave tasks):

```json
{
  "id": "EXPLORE-001",
  "status": "completed",
  "findings": "Found 12 files related to auth module. Key files: src/auth/index.ts, src/auth/strategies/*.ts. Patterns: strategy pattern for provider switching, middleware chain for request validation.",
  "error": ""
}
```

Analyst output:

```json
{
  "id": "ANALYZE-001",
  "status": "completed",
  "findings": "3 key insights: (1) Auth uses strategy pattern [high confidence], (2) JWT validation lacks refresh token rotation [medium], (3) Rate limiting missing on auth endpoints [high]. 2 discussion points identified.",
  "error": ""
}
```

Discussant output:

```json
{
  "id": "DISCUSS-001",
  "status": "completed",
  "findings": "Convergent themes: JWT security concerns (2 perspectives agree), strategy pattern approval. Conflicts: architectural vs technical on middleware approach. Top questions: refresh token strategy, rate limit placement.",
  "error": ""
}
```

Interactive tasks report results as structured text or JSON written to `interactive/{id}-result.json`.

---

## Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `exploration` | `data.perspective+data.file` | `{perspective, file, relevance, summary, patterns[]}` | Explored file or module |
| `analysis` | `data.perspective+data.insight` | `{perspective, insight, confidence, evidence, file_ref}` | Analysis insight |
| `pattern` | `data.name` | `{name, file, description, type}` | Code or architecture pattern |
| `discussion_point` | `data.topic` | `{topic, perspectives[], convergence, open_questions[]}` | Discussion point |
| `recommendation` | `data.action` | `{action, rationale, priority, confidence}` | Recommendation |
| `conclusion` | `data.point` | `{point, evidence, confidence, perspectives_supporting[]}` | Final conclusion |

### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"EXPLORE-001","type":"exploration","data":{"perspective":"technical","file":"src/auth/index.ts","relevance":"high","summary":"Auth module entry point with OAuth and JWT exports","patterns":["module-pattern","strategy-pattern"]}}
{"ts":"2026-03-08T10:01:00+08:00","worker":"EXPLORE-001","type":"pattern","data":{"name":"strategy-pattern","file":"src/auth/strategies/","description":"Provider switching via strategy pattern","type":"behavioral"}}
{"ts":"2026-03-08T10:05:00+08:00","worker":"ANALYZE-001","type":"analysis","data":{"perspective":"technical","insight":"JWT validation lacks refresh token rotation","confidence":"medium","evidence":"No rotation logic in src/auth/jwt/verify.ts","file_ref":"src/auth/jwt/verify.ts:42"}}
{"ts":"2026-03-08T10:10:00+08:00","worker":"DISCUSS-001","type":"discussion_point","data":{"topic":"JWT Security","perspectives":["technical","architectural"],"convergence":"Both agree on rotation need","open_questions":["Sliding vs fixed window?"]}}
{"ts":"2026-03-08T10:15:00+08:00","worker":"SYNTH-001","type":"conclusion","data":{"point":"Auth module needs refresh token rotation","evidence":"src/auth/jwt/verify.ts lacks rotation","confidence":"high","perspectives_supporting":["technical","architectural"]}}
```

> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
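Deduplication over the shared file can be sketched from the per-type keys in the Discovery Types table (a sketch: key composition with a `|` separator is an assumption):

```javascript
// Sketch: dedup discoveries.ndjson entries using the per-type dedup keys.
const KEY = {
  exploration:      d => d.perspective + "|" + d.file,
  analysis:         d => d.perspective + "|" + d.insight,
  pattern:          d => d.name,
  discussion_point: d => d.topic,
  recommendation:   d => d.action,
  conclusion:       d => d.point
};

function dedupe(lines) {
  const seen = new Set(), out = [];
  for (const line of lines) {
    const rec = JSON.parse(line);
    const key = rec.type + ":" + KEY[rec.type](rec.data);
    if (!seen.has(key)) { seen.add(key); out.push(rec); }
  }
  return out;
}
```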
---

## Cross-Mechanism Context Flow

| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |

---

## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and are in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has a description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Valid role | role in {explorer, analyst, discussant, synthesizer} | "Invalid role: {role}" |
| Valid perspective | perspective in {technical, architectural, business, domain_expert, general, ""} | "Invalid perspective: {value}" |
| Discussion round non-negative | discussion_round >= 0 | "Invalid discussion_round: {value}" |
| Cross-mechanism deps | Interactive→CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
64  .codex/skills/team-ultra-analyze/specs/pipelines.md  Normal file
@@ -0,0 +1,64 @@
|
||||
# Pipeline Definitions — Team Ultra Analyze
|
||||
|
||||
## Pipeline Modes
|
||||
|
||||
### Quick Mode (3 tasks, serial)
|
||||
|
||||
```
|
||||
EXPLORE-001 -> ANALYZE-001 -> SYNTH-001
|
||||
```
|
||||
|
||||
| Task | Role | Dependencies |
|
||||
|------|------|-------------|
|
||||
| EXPLORE-001 | explorer | (none) |
|
||||
| ANALYZE-001 | analyst | EXPLORE-001 |
|
||||
| SYNTH-001 | synthesizer | ANALYZE-001 |
|
||||
|
||||
### Standard Mode (2N+2 tasks, parallel windows)
|
||||
|
||||
```
|
||||
[EXPLORE-001..N](parallel) -> [ANALYZE-001..N](parallel) -> DISCUSS-001 -> SYNTH-001
|
||||
```
|
||||
|
||||
| Task | Role | Dependencies |
|
||||
|------|------|-------------|
|
||||
| EXPLORE-001..N | explorer | (none, parallel) |
|
||||
| ANALYZE-001..N | analyst | corresponding EXPLORE-N |
|
||||
| DISCUSS-001 | discussant | all ANALYZE tasks |
|
||||
| SYNTH-001 | synthesizer | DISCUSS-001 |
|
||||
|
||||
### Deep Mode (2N+1 tasks initially, dynamic loop)

Same as Standard, except that SYNTH-001 is omitted at dispatch and created dynamically once the discussion loop completes.

Dynamic tasks created during the discussion loop:

- `DISCUSS-N` (round N) — created based on user feedback
- `ANALYZE-fix-N` (direction fix) — created when the user requests an adjusted focus
- `SYNTH-001` — created after the final discussion round
## Task Metadata Registry

| Task ID | Role | Dependencies | Description |
|---------|------|--------------|-------------|
| EXPLORE-1..depth | explorer | (none) | Parallel codebase exploration, one per perspective |
| ANALYZE-1..depth | analyst | EXPLORE-1..depth (all) | Parallel deep analysis, one per perspective |
| DISCUSS-001 | discussant | ANALYZE-1..depth (all) | Process analysis results, identify gaps |
| ANALYZE-fix-N | analyst | DISCUSS-N | Re-analysis for an adjusted focus (Deep mode) |
| DISCUSS-002..N | discussant | ANALYZE-fix-N | Subsequent discussion rounds (Deep mode, max 5) |
| SYNTH-001 | synthesizer | Last DISCUSS-N | Cross-perspective integration and conclusions |
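The dependency column above implies a simple readiness rule: a task becomes dispatchable once every one of its dependencies has completed. A minimal sketch, assuming the same illustrative `id -> deps` / `id -> status` encoding rather than the real tasks.json layout:

```python
def ready_tasks(tasks: dict[str, list[str]], status: dict[str, str]) -> list[str]:
    """tasks maps task id -> dependency ids; status maps task id -> lifecycle state.
    A task is ready when it is still pending and every dependency is completed."""
    return [
        tid for tid, deps in tasks.items()
        if status.get(tid) == "pending"
        and all(status.get(d) == "completed" for d in deps)
    ]
```

Calling this after each completion event yields the next dispatch window without any per-mode special casing.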
## Discussion Loop Control

| Mode | Max Rounds | Trigger |
|------|------------|---------|
| quick | 0 | No discussion |
| standard | 1 | After DISCUSS-001 |
| deep | 5 | After each DISCUSS-N |
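The per-mode caps can be enforced with a plain round counter. A sketch, assuming a caller-supplied callback that stands in for the coordinator's AskUser prompt (the callback and the mode table literal are illustrative):

```python
MAX_ROUNDS = {"quick": 0, "standard": 1, "deep": 5}

def run_discussion_loop(mode: str, ask_user_continue) -> int:
    """Run discussion rounds until the user stops or the mode's cap is reached.
    ask_user_continue(round_number) -> bool; returns the rounds actually run."""
    rounds = 0
    while rounds < MAX_ROUNDS.get(mode, 0):
        rounds += 1
        if not ask_user_continue(rounds):  # user chose to stop: proceed to synthesis
            break
    return rounds
```

Quick mode never enters the loop, and deep mode is force-stopped at 5 rounds even if the user keeps answering "continue", matching the loop-limit checkpoint below.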
## Checkpoints

| Trigger | Location | Behavior |
|---------|----------|----------|
| Discussion round (Deep mode) | After DISCUSS-N completes | Pause, AskUser for direction/continuation |
| Discussion loop limit | >5 rounds | Force synthesis, offer continuation |
| Pipeline stall | No ready + no running | Check missing tasks, report to user |
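The "pipeline stall" trigger in the last row can be detected mechanically by reporting which pending tasks are blocked and by what. A sketch, assuming the same illustrative `id -> deps` / `id -> status` encoding and a `running` state alongside the registry's statuses:

```python
def detect_stall(tasks: dict[str, list[str]], status: dict[str, str]) -> dict[str, list[str]]:
    """Return {task_id: unmet_dependencies} for every blocked pending task,
    but only when the pipeline is truly stalled (nothing ready, nothing running)."""
    pending = [t for t, s in status.items() if s == "pending"]
    running = [t for t, s in status.items() if s == "running"]
    ready = [t for t in pending
             if all(status.get(d) == "completed" for d in tasks.get(t, []))]
    if ready or running or not pending:
        return {}  # still making progress, or already drained: not a stall
    return {t: [d for d in tasks.get(t, []) if status.get(d) != "completed"]
            for t in pending}
```

A non-empty result is exactly the "check missing tasks, report to user" payload: each blocked task paired with the dependencies (failed, skipped, or never created) that keep it from becoming ready.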
129  .codex/skills/team-ultra-analyze/specs/team-config.json  Normal file
@@ -0,0 +1,129 @@
{
  "team_name": "ultra-analyze",
  "version": "1.0.0",
  "description": "Deep-analysis team. Splits the monolithic analysis workflow into 5 collaborating roles: explore → analyze → discuss → synthesize, with multiple pipeline modes and a discussion loop",
  "skill_entry": "team-ultra-analyze",
  "invocation": "Skill(skill=\"team-ultra-analyze\", args=\"--role=coordinator ...\")",

  "roles": {
    "coordinator": {
      "name": "coordinator",
      "responsibility": "Orchestration",
      "task_prefix": null,
      "description": "Analysis team coordinator. Topic clarification, pipeline selection, session management, discussion-loop driving, result reporting",
      "message_types_sent": ["pipeline_selected", "discussion_round", "direction_adjusted", "task_unblocked", "error", "shutdown"],
      "message_types_received": ["exploration_ready", "analysis_ready", "discussion_processed", "synthesis_ready", "error"],
      "commands": ["dispatch", "monitor"]
    },
    "explorer": {
      "name": "explorer",
      "responsibility": "Orchestration (codebase exploration)",
      "task_prefix": "EXPLORE-*",
      "description": "Codebase explorer. Explores the codebase in parallel from multiple angles via cli-explore-agent and gathers context",
      "message_types_sent": ["exploration_ready", "error"],
      "message_types_received": [],
      "commands": ["explore"],
      "cli_tools": ["gemini"]
    },
    "analyst": {
      "name": "analyst",
      "responsibility": "Read-only analysis (deep analysis)",
      "task_prefix": "ANALYZE-*",
      "description": "Deep analyst. Builds on exploration results to run multi-perspective deep analysis via CLI and produce structured insights",
      "message_types_sent": ["analysis_ready", "error"],
      "message_types_received": [],
      "commands": ["analyze"],
      "cli_tools": ["gemini", "codex", "claude"]
    },
    "discussant": {
      "name": "discussant",
      "responsibility": "Analysis + Exploration (discussion processing)",
      "task_prefix": "DISCUSS-*",
      "description": "Discussion processor. Adjusts the analysis direction based on user feedback and performs deeper exploration or supplementary analysis",
      "message_types_sent": ["discussion_processed", "error"],
      "message_types_received": [],
      "commands": ["deepen"],
      "cli_tools": ["gemini"]
    },
    "synthesizer": {
      "name": "synthesizer",
      "responsibility": "Read-only analysis (synthesis)",
      "task_prefix": "SYNTH-*",
      "description": "Synthesis integrator. Integrates all exploration, analysis, and discussion results across perspectives into final conclusions and recommendations",
      "message_types_sent": ["synthesis_ready", "error"],
      "message_types_received": [],
      "commands": ["synthesize"]
    }
  },

  "pipeline_modes": {
    "quick": {
      "description": "Quick analysis: single exploration → single analysis → direct synthesis",
      "stages": ["EXPLORE", "ANALYZE", "SYNTH"],
      "entry_role": "explorer",
      "estimated_time": "10-15min"
    },
    "standard": {
      "description": "Standard analysis: multi-angle parallel exploration → multi-perspective analysis → discussion → synthesis",
      "stages": ["EXPLORE-multi", "ANALYZE-multi", "DISCUSS", "SYNTH"],
      "entry_role": "explorer",
      "parallel_stages": [["EXPLORE-001", "EXPLORE-002"], ["ANALYZE-001", "ANALYZE-002"]],
      "estimated_time": "30-60min"
    },
    "deep": {
      "description": "Deep analysis: multi-exploration → multi-analysis → discussion loop → synthesis",
      "stages": ["EXPLORE-multi", "ANALYZE-multi", "DISCUSS-loop", "SYNTH"],
      "entry_role": "explorer",
      "parallel_stages": [["EXPLORE-001", "EXPLORE-002", "EXPLORE-003"], ["ANALYZE-001", "ANALYZE-002", "ANALYZE-003"]],
      "discussion_loop": { "max_rounds": 5, "participants": ["discussant", "analyst"] },
      "estimated_time": "1-2hr"
    }
  },

  "discussion_loop": {
    "max_rounds": 5,
    "trigger": "user feedback via coordinator",
    "participants": ["discussant", "analyst"],
    "flow": "coordinator(AskUser) → DISCUSS-N(deepen) → [optional ANALYZE-fix] → coordinator(AskUser) → ... → SYNTH"
  },

  "shared_memory": {
    "file": "shared-memory.json",
    "fields": {
      "explorations": { "owner": "explorer", "type": "array" },
      "analyses": { "owner": "analyst", "type": "array" },
      "discussions": { "owner": "discussant", "type": "array" },
      "synthesis": { "owner": "synthesizer", "type": "object" },
      "decision_trail": { "owner": "coordinator", "type": "array" },
      "current_understanding": { "owner": "coordinator", "type": "object" }
    }
  },

  "collaboration_patterns": [
    "CP-1: Linear Pipeline (Quick mode)",
    "CP-3: Fan-out (Explorer/Analyst parallel exploration)",
    "CP-2: Review-Fix Cycle (Discussion loop: Discussant ↔ Analyst)",
    "CP-8: User-in-the-loop (Coordinator ↔ User discussion rounds)"
  ],

  "session_directory": {
    "pattern": ".workflow/.team/UAN-{slug}-{date}",
    "subdirectories": ["explorations", "analyses", "discussions"]
  },

  "analysis_dimensions": {
    "architecture": ["架构", "architecture", "design", "structure", "设计"],
    "implementation": ["实现", "implement", "code", "coding", "代码"],
    "performance": ["性能", "performance", "optimize", "bottleneck", "优化"],
    "security": ["安全", "security", "auth", "permission", "权限"],
    "concept": ["概念", "concept", "theory", "principle", "原理"],
    "comparison": ["比较", "compare", "vs", "difference", "区别"],
    "decision": ["决策", "decision", "choice", "tradeoff", "选择"]
  },

  "analysis_perspectives": {
    "technical": { "tool": "gemini", "focus": "Implementation, code patterns, technical feasibility" },
    "architectural": { "tool": "claude", "focus": "System design, scalability, component interactions" },
    "business": { "tool": "codex", "focus": "Value, ROI, stakeholder impact, strategy" },
    "domain_expert": { "tool": "gemini", "focus": "Domain-specific patterns, best practices, standards" }
  }
}
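A few invariants in this config are easy to cross-check mechanically, for example that every role except the coordinator owns a task-ID prefix and that discussion-loop participants are declared roles. A minimal sketch over an inline excerpt of the config (in practice one would `json.load` the real team-config.json; the excerpt below is abbreviated, not the full file):

```python
import json

# Abbreviated excerpt of team-config.json, inlined for a self-contained check.
config = json.loads("""
{
  "roles": {
    "coordinator": {"task_prefix": null},
    "explorer":    {"task_prefix": "EXPLORE-*"},
    "analyst":     {"task_prefix": "ANALYZE-*"},
    "discussant":  {"task_prefix": "DISCUSS-*"},
    "synthesizer": {"task_prefix": "SYNTH-*"}
  },
  "discussion_loop": {"max_rounds": 5, "participants": ["discussant", "analyst"]}
}
""")

roles = config["roles"]
# Exactly one role (the coordinator) has no task prefix; all workers have one.
for name, spec in roles.items():
    assert (spec["task_prefix"] is None) == (name == "coordinator"), name
# Discussion-loop participants must be declared roles.
for participant in config["discussion_loop"]["participants"]:
    assert participant in roles, participant
```

Running a check like this in CI catches drift between the role table and the loop definition (e.g. a renamed role that is still listed as a participant) before a coordinator ever dispatches against the stale config.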