mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-05 16:13:08 +08:00
---
name: numerical-analysis-workflow
description: Global-to-local numerical computation project analysis workflow. Decomposes analysis into a 6-phase diamond topology (Global → Theory → Algorithm → Module → Local → Integration) with parallel analysis tracks per phase, cross-phase context propagation, and LaTeX formula support. Produces comprehensive analysis documents covering mathematical foundations, numerical stability, convergence, error bounds, and software architecture.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"project path or description\""
allowed-tools: spawn_agents_on_csv, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm track decomposition, skip interactive validation, use defaults.

# Numerical Analysis Workflow

## Usage

```bash
$numerical-analysis-workflow "Analyze the FEM solver in src/solver/"
$numerical-analysis-workflow -c 3 "Analyze CFD simulation pipeline for numerical stability"
$numerical-analysis-workflow -y "Full analysis of PDE discretization in src/pde/"
$numerical-analysis-workflow --continue "nadw-fem-solver-20260304"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---

## Overview

Six-phase diamond topology for analyzing numerical computation software projects. Each phase represents a wave; within each wave, 2-5 parallel analysis tracks produce focused documents. Context packages propagate cumulatively between waves, enabling perspective reuse — theory informs algorithm design, algorithm informs implementation, all converge at integration.

**Core workflow**: Survey → Theorize → Design → Analyze Modules → Optimize Locally → Integrate & Validate

```
┌─────────────────────────────────────────────────────────────────────────┐
│               NUMERICAL ANALYSIS DIAMOND WORKFLOW (NADW)                 │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                          │
│  Wave 1: Global Survey [3 tracks]                                        │
│  ├─ T1.1 Problem Domain Survey (math models, governing equations)        │
│  ├─ T1.2 Software Architecture Overview (modules, data flow)             │
│  └─ T1.3 Validation Strategy (benchmarks, KPIs, acceptance)              │
│      ↓ Context Package P1                                                │
│                                                                          │
│  Wave 2: Theoretical Foundations [3 tracks]                              │
│  ├─ T2.1 Mathematical Formulation (LaTeX derivation, weak forms)         │
│  ├─ T2.2 Convergence Analysis (error bounds, convergence order)          │
│  └─ T2.3 Complexity Analysis (time/space Big-O, operation counts)        │
│      ↓ Context Package P1+P2                                             │
│                                                                          │
│  Wave 3: Algorithm Design & Stability [3 tracks]                         │
│  ├─ T3.1 Algorithm Specification (method selection, pseudocode)          │
│  ├─ T3.2 Numerical Stability Report (condition numbers, error prop)      │
│  └─ T3.3 Performance Model (FLOPS, memory bandwidth, parallelism)        │
│      ↓ Context Package P1+P2+P3                                          │
│                                                                          │
│  Wave 4: Module Implementation [3 tracks]                                │
│  ├─ T4.1 Core Module Analysis (algorithm-code mapping)                   │
│  ├─ T4.2 Data Structure Review (sparse formats, memory layout)           │
│  └─ T4.3 API Contract Analysis (interfaces, error handling)              │
│      ↓ Context Package P1-P4                                             │
│                                                                          │
│  Wave 5: Local Function-Level [3 tracks]                                 │
│  ├─ T5.1 Optimization Report (hotspots, vectorization, cache)            │
│  ├─ T5.2 Edge Case Analysis (singularities, overflow, degeneracy)        │
│  └─ T5.3 Precision Audit (catastrophic cancellation, accumulation)       │
│      ↓ Context Package P1-P5                                             │
│                                                                          │
│  Wave 6: Integration & QA [3 tracks]                                     │
│  ├─ T6.1 Integration Test Plan (end-to-end, regression, benchmark)       │
│  ├─ T6.2 Benchmark Results (actual vs theoretical performance)           │
│  └─ T6.3 Final QA Report (all-phase synthesis, risk matrix, roadmap)     │
│                                                                          │
└─────────────────────────────────────────────────────────────────────────┘
```

**Diamond Topology** (Wide → Deep → Wide):
```
Wave 1: [T1.1] [T1.2] [T1.3]   ← Global fan-out
Wave 2: [T2.1] [T2.2] [T2.3]   ← Theory deep dive
Wave 3: [T3.1] [T3.2] [T3.3]   ← Algorithm bridging
Wave 4: [T4.1] [T4.2] [T4.3]   ← Module focus
Wave 5: [T5.1] [T5.2] [T5.3]   ← Local finest grain
Wave 6: [T6.1] [T6.2] [T6.3]   ← Integration convergence
```

---

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,track_role,analysis_dimension,formula_refs,precision_req,scope,deps,context_from,wave,status,findings,severity_distribution,latex_formulas,doc_path,error
"T1.1","Problem Domain Survey","Survey governing equations and mathematical models for the numerical computation project. Identify PDE types, boundary conditions, conservation laws.","Problem_Domain_Analyst","domain_modeling","","","src/**","","","1","","","","","",""
"T2.1","Mathematical Formulation","Derive precise mathematical formulations using LaTeX. Transform governing equations into weak forms suitable for discretization.","Mathematician","formula_derivation","T1.1:governing_eqs","","src/**","T1.1","T1.1","2","","","","","",""
"T3.1","Algorithm Specification","Select numerical methods and design algorithms based on theoretical analysis. Produce pseudocode for core computational kernels.","Algorithm_Designer","method_selection","T2.1:weak_forms;T2.2:convergence_conds","double","src/solver/**","T2.1","T2.1;T2.2;T2.3","3","","","","","",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (T{wave}.{track}) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description (self-contained for agent) |
| `track_role` | Input | Analysis role name (e.g., Mathematician, Stability_Analyst) |
| `analysis_dimension` | Input | Analysis focus area (domain_modeling, formula_derivation, stability, etc.) |
| `formula_refs` | Input | Semicolon-separated references to formulas from earlier tasks (TaskID:formula_name) |
| `precision_req` | Input | Required floating-point precision (float/double/quad/adaptive) |
| `scope` | Input | File/directory scope for analysis (glob pattern) |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `wave` | Computed | Wave number (1-6, from phase assignment) |
| `status` | Output | `pending` → `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries and conclusions (max 500 chars) |
| `severity_distribution` | Output | Issue counts: Critical/High/Medium/Low |
| `latex_formulas` | Output | Key LaTeX formulas discovered or derived (semicolon-separated) |
| `doc_path` | Output | Path to generated analysis document |
| `error` | Output | Error message if failed |
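
The implementation below relies on `parseCsv` / `toCsv` helpers that are never spelled out. A minimal sketch of a parser that respects this schema's quoting (commas, doubled quotes, and newlines inside quoted fields) could look like the following; the function shape is an assumption, not part of the workflow API:

```javascript
// Minimal CSV parser sketch for tasks.csv: supports quoted fields with
// embedded commas, doubled quotes (""), and embedded newlines.
// Returns one object per record, keyed by the header row.
function parseCsv(text) {
  const rows = []
  let row = [], field = '', inQuotes = false
  for (let i = 0; i < text.length; i++) {
    const ch = text[i]
    if (inQuotes) {
      if (ch === '"' && text[i + 1] === '"') { field += '"'; i++ } // escaped quote
      else if (ch === '"') inQuotes = false
      else field += ch
    } else if (ch === '"') inQuotes = true
    else if (ch === ',') { row.push(field); field = '' }
    else if (ch === '\n' || ch === '\r') {
      if (ch === '\r' && text[i + 1] === '\n') i++ // treat CRLF as one break
      row.push(field); rows.push(row); row = []; field = ''
    } else field += ch
  }
  if (field !== '' || row.length > 0) { row.push(field); rows.push(row) }
  const [header, ...records] = rows
  return records.map(r => Object.fromEntries(header.map((h, i) => [h, r[i] ?? ''])))
}
```

Note that all CSV fields come back as strings, which is why numeric columns such as `wave` need an explicit conversion before comparison.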

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column.

---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 3 |
| `discoveries.ndjson` | Shared exploration board across all agents | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 3 |
| `docs/P{N}_*.md` | Per-track analysis documents | Created by each agent |

---

## Session Structure

```
.workflow/.csv-wave/{session-id}/
├── tasks.csv              # Master state (updated per wave)
├── results.csv            # Final results export
├── discoveries.ndjson     # Shared discovery board (all agents)
├── context.md             # Human-readable report
├── docs/                  # Analysis documents per track
│   ├── P1_Domain_Survey.md
│   ├── P1_Architecture_Overview.md
│   ├── P1_Validation_Strategy.md
│   ├── P2_Mathematical_Formulation.md
│   ├── P2_Convergence_Analysis.md
│   ├── P2_Complexity_Analysis.md
│   ├── P3_Algorithm_Specification.md
│   ├── P3_Numerical_Stability_Report.md
│   ├── P3_Performance_Model.md
│   ├── P4_Module_Implementation_Analysis.md
│   ├── P4_Data_Structure_Review.md
│   ├── P4_API_Contract.md
│   ├── P5_Optimization_Report.md
│   ├── P5_Edge_Case_Analysis.md
│   ├── P5_Precision_Audit.md
│   ├── P6_Integration_Test_Plan.md
│   ├── P6_Benchmark_Results.md
│   └── P6_Final_QA_Report.md
└── wave-{N}.csv           # Temporary per-wave input (cleaned up)
```

---

## Implementation

### Session Initialization

```javascript
// UTC+8 wall-clock timestamp (epoch shifted by 8h, formatted as an ISO string)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1], 10) : 3

// Clean requirement text (strip the recognized flags)
const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `nadw-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`

Bash(`mkdir -p ${sessionFolder}/docs`)
```

---

### Phase 1: Requirement → CSV (Decomposition)

**Objective**: Analyze the target project/requirement, decompose into 18 analysis tasks (6 waves × 3 tracks), compute wave assignments, generate tasks.csv.

**Decomposition Rules**:

| Wave | Phase Name | Track Roles | Analysis Focus |
|------|-----------|-------------|----------------|
| 1 | Global Survey | Problem_Domain_Analyst, Software_Architect, Validation_Strategist | Mathematical models, architecture, validation strategy |
| 2 | Theoretical Foundations | Mathematician, Convergence_Analyst, Complexity_Analyst | Formula derivation, convergence proofs, complexity bounds |
| 3 | Algorithm Design | Algorithm_Designer, Stability_Analyst, Performance_Modeler | Method selection, numerical stability, performance prediction |
| 4 | Module Implementation | Module_Implementer, Data_Structure_Designer, Interface_Analyst | Code-algorithm mapping, data structures, API contracts |
| 5 | Local Function-Level | Code_Optimizer, Edge_Case_Analyst, Precision_Auditor | Hotspot optimization, boundary handling, float precision |
| 6 | Integration & QA | Integration_Tester, Benchmark_Engineer, QA_Auditor | End-to-end testing, benchmarks, final quality report |

**Dependency Structure** (Diamond Topology):

| Task | deps | context_from | Rationale |
|------|------|-------------|-----------|
| T1.* | (none) | (none) | Wave 1: independent, global survey |
| T2.1 | T1.1 | T1.1 | Formalization needs governing equations |
| T2.2 | T1.1 | T1.1 | Convergence analysis needs model identification |
| T2.3 | T1.1;T1.2 | T1.1;T1.2 | Complexity needs both model and architecture |
| T3.1 | T2.1 | T2.1;T2.2;T2.3 | Algorithm design needs all theory |
| T3.2 | T2.1;T2.2 | T2.1;T2.2 | Stability needs formulas and convergence |
| T3.3 | T2.3 | T1.2;T2.3 | Performance model needs architecture + complexity |
| T4.* | T3.* | T1.*;T3.* | Module analysis needs global + algorithm context |
| T5.* | T4.* | T3.*;T4.* | Local analysis needs algorithm + module context |
| T6.* | T5.* | T1.*;T2.*;T3.*;T4.*;T5.* | Integration receives ALL context |

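A decomposition that follows this table is automatically cycle-free: every dependency points to a strictly earlier wave. One hedged way to validate that invariant (and thereby the "no circular dependencies" requirement) is a check like the sketch below; the function name is illustrative:

```javascript
// Sanity-check the diamond dependency pattern: every dep must exist and
// live in a strictly earlier wave than the task that depends on it.
// Enforcing this also rules out circular dependencies.
function validateDiamondDeps(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const errors = []
  for (const t of tasks) {
    for (const dep of (t.deps || '').split(';').filter(Boolean)) {
      const d = byId.get(dep)
      if (!d) errors.push(`${t.id}: unknown dep ${dep}`)
      else if (Number(d.wave) >= Number(t.wave))
        errors.push(`${t.id} (wave ${t.wave}) depends on ${dep} (wave ${d.wave})`)
    }
  }
  return errors // empty array means the decomposition is valid
}
```

Running this right after CSV generation gives the "abort with error message" behavior the error-handling table calls for.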
**Decomposition CLI Call**:

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Decompose numerical computation project analysis into 18 tasks across 6 phases.
TASK:
• Analyze the project to identify: governing equations, numerical methods used, module structure
• Generate 18 analysis tasks (3 per phase × 6 phases) following the NADW diamond topology
• Each task must be self-contained with clear scope and analysis dimension
• Assign track_role, analysis_dimension, formula_refs, precision_req, scope for each task
• Set deps and context_from following the diamond dependency pattern
MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON with tasks array. Each task: {id, title, description, track_role, analysis_dimension, formula_refs, precision_req, scope, deps[], context_from[]}
CONSTRAINTS: Exactly 6 waves, 3 tasks per wave. Wave 1=Global, Wave 2=Theory, Wave 3=Algorithm, Wave 4=Module, Wave 5=Local, Wave 6=Integration.

PROJECT TO ANALYZE: ${requirement}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`,
  run_in_background: true
})
```

**Wave Computation**: Fixed 6-wave assignment per the diamond topology. Tasks within each wave are independent.

**CSV Generation**: Parse the JSON response, validate that all 18 tasks carry correct wave assignments, and generate tasks.csv with proper escaping.

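A hedged sketch of what "proper escaping" could mean here, following the usual CSV quoting convention (helper names are illustrative, not part of the workflow API):

```javascript
// Quote any field containing a comma, quote, or line break, and double
// embedded quotes — the standard CSV escaping rule.
function csvEscape(value) {
  const s = String(value ?? '')
  return /[",\n\r]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s
}

// Serialize one record's fields into a CSV row.
function toCsvRow(fields) {
  return fields.map(csvEscape).join(',')
}
```
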
**User Validation**: Display task breakdown grouped by wave (skip if AUTO_YES):

```
Wave 1 (Global Survey):
  T1.1 Problem Domain Survey → Problem_Domain_Analyst
  T1.2 Software Architecture Overview → Software_Architect
  T1.3 Validation Strategy → Validation_Strategist

Wave 2 (Theoretical Foundations):
  T2.1 Mathematical Formulation → Mathematician
  T2.2 Convergence Analysis → Convergence_Analyst
  T2.3 Complexity Analysis → Complexity_Analyst
...
```

**Success Criteria**:
- tasks.csv created with 18 tasks, 6 waves, valid schema
- No circular dependencies
- Each task has track_role and analysis_dimension
- User approved (or AUTO_YES)

---

### Phase 2: Wave Execution Engine

**Objective**: Execute analysis tasks wave-by-wave via spawn_agents_on_csv with cross-wave context propagation and cumulative context packages.

```javascript
// Read master CSV
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
const tasks = parseCsv(masterCsv)
const maxWave = 6

for (let wave = 1; wave <= maxWave; wave++) {
  // 1. Filter tasks for this wave (CSV fields are strings, so convert)
  const waveTasks = tasks.filter(t => Number(t.wave) === wave && t.status === 'pending')

  // 2. Skip tasks whose deps failed/skipped
  for (const task of waveTasks) {
    const depIds = task.deps.split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${depIds.filter((id, i) => ['failed','skipped'].includes(depStatuses[i])).join(', ')}`
      continue
    }
  }

  const pendingTasks = waveTasks.filter(t => t.status === 'pending')
  if (pendingTasks.length === 0) continue

  // 3. Build prev_context from context_from + master CSV findings
  for (const task of pendingTasks) {
    const contextIds = task.context_from.split(';').filter(Boolean)
    const prevFindings = contextIds.map(id => {
      const src = tasks.find(t => t.id === id)
      return src?.findings ? `[${src.id} ${src.title}]: ${src.findings}` : ''
    }).filter(Boolean).join('\n\n')

    // Also include latex_formulas from context sources
    const prevFormulas = contextIds.map(id => {
      const src = tasks.find(t => t.id === id)
      return src?.latex_formulas ? `[${src.id} formulas]: ${src.latex_formulas}` : ''
    }).filter(Boolean).join('\n')

    task.prev_context = prevFindings + (prevFormulas ? '\n\n--- Referenced Formulas ---\n' + prevFormulas : '')
  }

  // 4. Write per-wave CSV
  Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingTasks))

  // 5. Execute wave
  spawn_agents_on_csv({
    csv_path: `${sessionFolder}/wave-${wave}.csv`,
    id_column: "id",
    instruction: buildInstructionTemplate(sessionFolder, wave),
    max_concurrency: maxConcurrency,
    max_runtime_seconds: 900,
    output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
    output_schema: {
      type: "object",
      properties: {
        id: { type: "string" },
        status: { type: "string", enum: ["completed", "failed"] },
        findings: { type: "string" },
        severity_distribution: { type: "string" },
        latex_formulas: { type: "string" },
        doc_path: { type: "string" },
        error: { type: "string" }
      },
      required: ["id", "status", "findings"]
    }
  })

  // 6. Merge results into master CSV
  const waveResults = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
  for (const result of waveResults) {
    const masterTask = tasks.find(t => t.id === result.id)
    if (masterTask) {
      masterTask.status = result.status
      masterTask.findings = result.findings
      masterTask.severity_distribution = result.severity_distribution || ''
      masterTask.latex_formulas = result.latex_formulas || ''
      masterTask.doc_path = result.doc_path || ''
      masterTask.error = result.error || ''
    }
  }
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))

  // 7. Cleanup temp wave CSV
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)

  // 8. Display wave summary
  const completed = waveResults.filter(r => r.status === 'completed').length
  const failed = waveResults.filter(r => r.status === 'failed').length
  // Output: "Wave {wave}: {completed} completed, {failed} failed"
}
```

**Instruction Template** (embedded — see instructions/agent-instruction.md for standalone):

```javascript
function buildInstructionTemplate(sessionFolder, wave) {
  const phaseNames = {
    1: 'Global Survey', 2: 'Theoretical Foundations', 3: 'Algorithm Design',
    4: 'Module Implementation', 5: 'Local Function-Level', 6: 'Integration & QA'
  }
  return `## TASK ASSIGNMENT — ${phaseNames[wave]}

### MANDATORY FIRST STEPS
1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Role**: {track_role}
**Analysis Dimension**: {analysis_dimension}
**Description**: {description}
**Formula References**: {formula_refs}
**Precision Requirement**: {precision_req}
**Scope**: {scope}

### Previous Tasks' Findings (Context)
{prev_context}

---

## Execution Protocol

1. **Read discoveries**: Load ${sessionFolder}/discoveries.ndjson for shared exploration findings
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute analysis**:
   - Read target files within scope: {scope}
   - Apply analysis criteria for dimension: {analysis_dimension}
   - Document mathematical formulas in LaTeX notation ($$...$$)
   - Classify findings by severity (Critical/High/Medium/Low)
   - Include file:line references for code-related findings
4. **Generate document**: Write analysis report to ${sessionFolder}/docs/ following the standard template:
   - Metadata (Phase, Track, Date)
   - Executive Summary
   - Analysis Scope
   - Findings with severity, evidence, LaTeX formulas, impact, recommendations
   - Cross-References to other phases
   - Perspective Package (structured summary for context propagation)
5. **Share discoveries**: Append exploration findings to shared board:
   \`\`\`bash
   echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson
   \`\`\`
6. **Report result**: Return JSON via report_agent_job_result

### Discovery Types to Share
- \`governing_equation\`: {eq_name, latex, domain, boundary_conditions} — Governing equations found
- \`numerical_method\`: {method_name, type, order, stability_class} — Numerical methods identified
- \`stability_issue\`: {location, condition_number, severity, description} — Stability concerns
- \`convergence_property\`: {method, rate, order, conditions} — Convergence properties
- \`precision_risk\`: {location, operation, risk_type, recommendation} — Floating-point precision risks
- \`performance_bottleneck\`: {location, operation_count, memory_pattern, suggestion} — Performance issues
- \`architecture_pattern\`: {pattern_name, files, description} — Architecture patterns found
- \`test_gap\`: {component, missing_coverage, priority} — Missing test coverage

---

## Output (report_agent_job_result)

Return JSON:
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and conclusions (max 500 chars)",
  "severity_distribution": "Critical:N High:N Medium:N Low:N",
  "latex_formulas": "key formulas separated by semicolons",
  "doc_path": "relative path to generated analysis document",
  "error": ""
}`
}
```

**Success Criteria**:
- All 6 waves executed in order
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when a predecessor failed
- discoveries.ndjson accumulated across all waves
- Analysis documents generated in docs/ directory

---

### Phase 3: Results Aggregation

**Objective**: Generate final results and a comprehensive human-readable report synthesizing all 6 phases.

```javascript
// 1. Export final results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)

// 2. Generate context.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed').length
const failed = tasks.filter(t => t.status === 'failed').length
const skipped = tasks.filter(t => t.status === 'skipped').length

let contextMd = `# Numerical Analysis Report: ${requirement}\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n`
contextMd += `**Total Tasks**: ${tasks.length} | Completed: ${completed} | Failed: ${failed} | Skipped: ${skipped}\n\n`

// Per-wave summary
const phaseNames = ['', 'Global Survey', 'Theoretical Foundations', 'Algorithm Design',
  'Module Implementation', 'Local Function-Level', 'Integration & QA']
for (let w = 1; w <= 6; w++) {
  const waveTasks = tasks.filter(t => Number(t.wave) === w) // CSV fields are strings
  contextMd += `## Wave ${w}: ${phaseNames[w]}\n\n`
  for (const t of waveTasks) {
    contextMd += `### ${t.id}: ${t.title} [${t.status}]\n`
    contextMd += `**Role**: ${t.track_role} | **Dimension**: ${t.analysis_dimension}\n\n`
    if (t.findings) contextMd += `**Findings**: ${t.findings}\n\n`
    if (t.latex_formulas) contextMd += `**Key Formulas**:\n$$${t.latex_formulas.split(';').join('$$\n\n$$')}$$\n\n`
    if (t.severity_distribution) contextMd += `**Issues**: ${t.severity_distribution}\n\n`
    if (t.doc_path) contextMd += `**Full Report**: [${t.doc_path}](${t.doc_path})\n\n`
    contextMd += `---\n\n`
  }
}

// Collected formulas section
const allFormulas = tasks.filter(t => t.latex_formulas).flatMap(t =>
  t.latex_formulas.split(';').map(f => ({ task: t.id, formula: f.trim() }))
)
if (allFormulas.length > 0) {
  contextMd += `## Collected Mathematical Formulas\n\n`
  for (const f of allFormulas) {
    contextMd += `- **${f.task}**: $$${f.formula}$$\n`
  }
  contextMd += `\n`
}

// All discoveries summary
contextMd += `## Discovery Board Summary\n\n`
contextMd += `See: ${sessionFolder}/discoveries.ndjson\n\n`

Write(`${sessionFolder}/context.md`, contextMd)

// 3. Display summary
// Output wave-by-wave completion status table
```

**Success Criteria**:
- results.csv exported
- context.md generated with all findings, formulas, cross-references
- Summary displayed to user

---

## Shared Discovery Board Protocol

### Standard Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `governing_equation` | `eq_name` | `{eq_name, latex, domain, boundary_conditions}` | Governing equations found in the project |
| `numerical_method` | `method_name` | `{method_name, type, order, stability_class}` | Numerical methods identified |
| `stability_issue` | `location` | `{location, condition_number, severity, description}` | Numerical stability concerns |
| `convergence_property` | `method` | `{method, rate, order, conditions}` | Convergence properties proven or observed |
| `precision_risk` | `location+operation` | `{location, operation, risk_type, recommendation}` | Floating-point precision risks |
| `performance_bottleneck` | `location` | `{location, operation_count, memory_pattern, suggestion}` | Performance bottlenecks |
| `architecture_pattern` | `pattern_name` | `{pattern_name, files, description}` | Software architecture patterns |
| `test_gap` | `component` | `{component, missing_coverage, priority}` | Missing test coverage |

### Protocol Rules

1. **Read first**: Always read discoveries.ndjson before starting analysis
2. **Write immediately**: Append discoveries as soon as found, don't batch
3. **Deduplicate**: Check dedup key before appending (same key = skip)
4. **Append-only**: Never clear, modify, or recreate discoveries.ndjson
5. **Cross-wave accumulation**: Discoveries persist and accumulate across all 6 waves

### NDJSON Format

```jsonl
{"ts":"2026-03-04T10:00:00Z","worker":"T1.1","type":"governing_equation","data":{"eq_name":"Navier-Stokes","latex":"\\rho(\\frac{\\partial \\mathbf{v}}{\\partial t} + \\mathbf{v} \\cdot \\nabla \\mathbf{v}) = -\\nabla p + \\mu \\nabla^2 \\mathbf{v}","domain":"fluid_dynamics","boundary_conditions":"no-slip walls, inlet velocity"}}
{"ts":"2026-03-04T10:05:00Z","worker":"T2.2","type":"convergence_property","data":{"method":"Galerkin FEM","rate":"optimal","order":"h^{k+1} in L2","conditions":"quasi-uniform mesh, sufficient regularity"}}
{"ts":"2026-03-04T10:10:00Z","worker":"T3.2","type":"stability_issue","data":{"location":"src/solver/assembler.rs:142","condition_number":"1e12","severity":"High","description":"Ill-conditioned stiffness matrix for high aspect ratio elements"}}
```

---

## Perspective Reuse Matrix

Each phase's output serves as context for subsequent phases:

| Source Phase | P2 Reuse | P3 Reuse | P4 Reuse | P5 Reuse | P6 Reuse |
|-------------|---------|---------|---------|---------|---------|
| P1 Governing Eqs | Formalize → LaTeX | Constrain method selection | Code-equation mapping | Singularity sources | Correctness baseline |
| P1 Architecture | Constrain discretization | Parallel strategy | Module boundaries | Hotspot location | Integration scope |
| P1 Validation | - | Benchmark selection | Test criteria | Edge case sources | Final validation |
| P2 Formulas | - | Parameter constraints | Loop termination | Precision requirements | Theoretical verification |
| P2 Convergence | - | Mesh refinement strategy | Iteration control | Error tolerance | Rate verification |
| P2 Complexity | - | Performance baseline | Data structure choice | Optimization targets | Performance comparison |
| P3 Pseudocode | - | - | Implementation reference | Line-by-line audit | Regression baseline |
| P3 Stability | - | - | Precision selection | Cancellation detection | Numerical verification |
| P3 Performance | - | - | Memory layout | Vectorization targets | Benchmark targets |

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| Agent timeout | Mark as failed in results, continue with wave |
| Agent failed | Mark as failed, skip dependent tasks in later waves |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Continue mode: no session found | List available sessions, prompt user to select |
| LaTeX parse error | Store raw formula, flag for manual review |
| Scope files not found | Warn and continue with available files |
| Precision conflict between tracks | Flag in discoveries, defer to QA_Auditor in Wave 6 |
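
The first row of the table, circular-dependency detection during wave computation, can be sketched with Kahn's algorithm (the function name and dict shape are assumptions for illustration):

```python
from collections import deque

def detect_cycle(deps: dict[str, list[str]]) -> list[str]:
    """Kahn's algorithm: return the task IDs stuck in a cycle ([] if acyclic).
    deps maps each task ID to the list of its prerequisite task IDs."""
    indegree = {t: len(d) for t, d in deps.items()}
    dependents = {t: [] for t in deps}
    for t, ds in deps.items():
        for d in ds:
            dependents[d].append(t)
    ready = deque(t for t, n in indegree.items() if n == 0)
    while ready:
        t = ready.popleft()
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    # Any task whose indegree never reached zero sits on (or behind) a cycle.
    return sorted(t for t, n in indegree.items() if n > 0)
```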

---

## Quality Gates (Per-Wave)

| Wave | Gate Criteria | Threshold |
|------|--------------|-----------|
| 1 | Core model identified + architecture mapped + KPI defined | All 3 tracks completed |
| 2 | Key formulas in LaTeX + convergence conditions stated + complexity determined | All 3 tracks completed |
| 3 | Pseudocode producible + stability assessed + performance predicted | ≥ 2 of 3 tracks completed |
| 4 | Code-algorithm mapping complete + data structures reviewed + APIs documented | ≥ 2 of 3 tracks completed |
| 5 | Hotspots identified + edge cases cataloged + precision risks flagged | ≥ 2 of 3 tracks completed |
| 6 | Test plan complete + benchmarks run + QA report synthesized | All 3 tracks completed |
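
The thresholds above reduce to a simple count check; a sketch (function and dict names are illustrative, with the per-wave minimums taken from the table):

```python
# Minimum completed tracks per wave, mirroring the Quality Gates table:
# waves 1, 2 and 6 need all three tracks; waves 3-5 tolerate one failure.
REQUIRED = {1: 3, 2: 3, 3: 2, 4: 2, 5: 2, 6: 3}

def gate_passes(wave: int, tracks: list[tuple[str, str]]) -> bool:
    """tracks is a list of (track_id, status) pairs for one wave."""
    completed = sum(1 for _, status in tracks if status == "completed")
    return completed >= REQUIRED[wave]
```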

---

## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state
4. **Context Propagation**: prev_context built from master CSV findings, not from memory
5. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson
6. **Skip on Failure**: If a dependency failed, skip the dependent task
7. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
8. **LaTeX Preservation**: Mathematical formulas must be preserved in LaTeX notation throughout all phases
9. **Perspective Compounding**: Each wave MUST receive cumulative context from all preceding waves
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped

---
# Agent Instruction Template

Template for generating agent instruction prompts used in `spawn_agents_on_csv`.

## Key Concept

The instruction template is a **prompt with column placeholders** (`{column_name}`). When `spawn_agents_on_csv` executes, each agent receives the template with its row's column values substituted.

**Critical rule**: The instruction template is the ONLY context the agent has. It must be self-contained — the agent cannot access the master CSV or other agents' data.

---

## Template

```markdown
## TASK ASSIGNMENT — Numerical Analysis

### MANDATORY FIRST STEPS
1. Read shared discoveries: {session_folder}/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Role**: {track_role}
**Analysis Dimension**: {analysis_dimension}
**Description**: {description}
**Formula References**: {formula_refs}
**Precision Requirement**: {precision_req}
**Scope**: {scope}

### Previous Tasks' Findings (Context)
{prev_context}

---

## Execution Protocol

1. **Read discoveries**: Load {session_folder}/discoveries.ndjson for shared exploration findings from other tracks
2. **Use context**: Apply previous tasks' findings from prev_context above — this contains cumulative analysis from all preceding phases
3. **Execute analysis based on your role**:

#### For Domain Analysis Roles (Wave 1: Problem_Domain_Analyst, Software_Architect, Validation_Strategist)
- Survey the project codebase within scope: {scope}
- Identify mathematical models, governing equations, boundary conditions
- Map software architecture: modules, data flow, dependencies
- Define validation strategy: benchmarks, KPIs, acceptance criteria

#### For Theory Roles (Wave 2: Mathematician, Convergence_Analyst, Complexity_Analyst)
- Build on governing equations from Wave 1 context
- Derive precise mathematical formulations using LaTeX notation
- Prove or analyze convergence properties with error bounds
- Determine computational complexity (time and space)
- All formulas MUST use LaTeX: `$$formula$$`

#### For Algorithm Roles (Wave 3: Algorithm_Designer, Stability_Analyst, Performance_Modeler)
- Select numerical methods based on theoretical analysis from Wave 2
- Write algorithm pseudocode for core computational kernels
- Analyze condition numbers and error propagation
- Build performance model: FLOPS count, memory bandwidth, parallel efficiency

#### For Module Roles (Wave 4: Module_Implementer, Data_Structure_Designer, Interface_Analyst)
- Map algorithms from Wave 3 to actual code modules
- Review data structures: sparse matrix formats, mesh data, memory layout
- Document module interfaces, data contracts, error handling patterns

#### For Local Analysis Roles (Wave 5: Code_Optimizer, Edge_Case_Analyst, Precision_Auditor)
- Identify performance hotspots with file:line references
- Catalog edge cases: singularities, division by zero, overflow/underflow
- Audit floating-point operations for catastrophic cancellation, accumulation errors
- Provide specific optimization recommendations (vectorization, cache, parallelism)

#### For Integration Roles (Wave 6: Integration_Tester, Benchmark_Engineer, QA_Auditor)
- Design end-to-end test plans using benchmarks from Wave 1
- Run or plan performance benchmarks comparing actual vs theoretical (Wave 3)
- Synthesize ALL findings from Waves 1-5 into final quality report
- Produce risk matrix and improvement roadmap

4. **Generate analysis document**: Write to {session_folder}/docs/ using this template:

```markdown
# [Phase {wave}] {title}

## Metadata
- **Phase**: {wave} | **Track**: {id} | **Role**: {track_role}
- **Dimension**: {analysis_dimension}
- **Date**: [ISO8601]
- **Input Context**: Context from tasks {context_from}

## Executive Summary
[2-3 sentences: core conclusions]

## Analysis Scope
[Boundaries, assumptions, files analyzed within {scope}]

## Findings

### Finding 1: [Title]
**Severity**: Critical / High / Medium / Low
**Evidence**: [Code reference file:line or formula derivation]
$$\text{LaTeX formula if applicable}$$
**Impact**: [Effect on project correctness, performance, or stability]
**Recommendation**: [Specific actionable suggestion]

### Finding N: ...

## Mathematical Formulas
[All key formulas derived or referenced in this analysis]

## Cross-References
[References to findings from other phases/tracks]

## Perspective Package
[Structured summary for context propagation to later phases]
- Key conclusions: ...
- Formulas for reuse: ...
- Open questions: ...
- Risks identified: ...
```

5. **Share discoveries**: Append findings to shared board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> {session_folder}/discoveries.ndjson
```

6. **Report result**: Return JSON via report_agent_job_result

### Discovery Types to Share
- `governing_equation`: {eq_name, latex, domain, boundary_conditions} — Governing equations found
- `numerical_method`: {method_name, type, order, stability_class} — Numerical methods identified
- `stability_issue`: {location, condition_number, severity, description} — Stability concerns
- `convergence_property`: {method, rate, order, conditions} — Convergence properties
- `precision_risk`: {location, operation, risk_type, recommendation} — Float precision risks
- `performance_bottleneck`: {location, operation_count, memory_pattern, suggestion} — Performance issues
- `architecture_pattern`: {pattern_name, files, description} — Architecture patterns found
- `test_gap`: {component, missing_coverage, priority} — Missing test coverage

---

## Output (report_agent_job_result)

Return JSON:
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and conclusions (max 500 chars)",
  "severity_distribution": "Critical:N High:N Medium:N Low:N",
  "latex_formulas": "key formulas in LaTeX separated by semicolons",
  "doc_path": "relative path to generated analysis document (e.g., docs/P2_Mathematical_Formulation.md)",
  "error": ""
}
```

---

## Placeholder Distinction

| Syntax | Resolved By | When |
|--------|-----------|------|
| `{column_name}` | spawn_agents_on_csv | During agent execution (runtime) |
| `{session_folder}` | Wave engine | Before spawning (set in instruction string) |

The SKILL.md embeds this template with `{session_folder}` replaced by the actual session path. Column placeholders `{column_name}` remain for runtime substitution.
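
The two-stage substitution above might be sketched as follows (the `render` function is a hypothetical illustration of the order of resolution, not the tool's actual implementation):

```python
import re

def render(template: str, row: dict, session_folder: str) -> str:
    """Stage 1: the wave engine fixes {session_folder} before spawning.
    Stage 2: per-row {column_name} placeholders are filled at runtime."""
    text = template.replace("{session_folder}", session_folder)
    # Fill any remaining {name} whose name is a CSV column in this row;
    # unknown placeholders are left untouched.
    return re.sub(r"\{(\w+)\}",
                  lambda m: str(row.get(m.group(1), m.group(0))), text)
```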

---

## Instruction Size Guidelines

| Track Type | Target Length | Notes |
|-----------|-------------|-------|
| Wave 1 (Global) | 500-1000 chars | Broad survey, needs exploration guidance |
| Wave 2 (Theory) | 1000-2000 chars | Requires mathematical rigor instructions |
| Wave 3 (Algorithm) | 1000-1500 chars | Needs pseudocode format guidance |
| Wave 4 (Module) | 800-1200 chars | Focused on code-algorithm mapping |
| Wave 5 (Local) | 800-1500 chars | Detailed precision/optimization criteria |
| Wave 6 (Integration) | 1500-2500 chars | Must synthesize all prior phases |

---

# Numerical Analysis Workflow — CSV Schema

## Master CSV: tasks.csv

### Column Definitions

#### Input Columns (Set by Decomposer)

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier (T{wave}.{track}) | `"T2.1"` |
| `title` | string | Yes | Short task title | `"Mathematical Formulation"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Derive precise mathematical formulations..."` |
| `track_role` | string | Yes | Analysis role name | `"Mathematician"` |
| `analysis_dimension` | string | Yes | Analysis focus area | `"formula_derivation"` |
| `formula_refs` | string | No | References to formulas from earlier tasks (TaskID:formula_name;...) | `"T1.1:governing_eqs;T2.2:convergence_conds"` |
| `precision_req` | string | No | Required floating-point precision | `"double"` |
| `scope` | string | No | File/directory scope for analysis (glob) | `"src/solver/**"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"T2.1;T2.2"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"T1.1;T2.1"` |

#### Computed Columns (Set by Wave Engine)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-6, fixed per diamond topology) | `3` |
| `prev_context` | string | Aggregated findings + formulas from context_from tasks (per-wave CSV only) | `"[T2.1] Weak form derived..."` |

#### Output Columns (Set by Agent)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` → `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Identified CFL condition..."` |
| `severity_distribution` | string | Issue counts by severity | `"Critical:0 High:2 Medium:3 Low:1"` |
| `latex_formulas` | string | Key LaTeX formulas (semicolon-separated) | `"\\Delta t \\leq \\frac{h}{c};\\kappa(A) = \\|A\\|\\|A^{-1}\\|"` |
| `doc_path` | string | Path to generated analysis document | `"docs/P3_Numerical_Stability_Report.md"` |
| `error` | string | Error message if failed | `""` |
---

### Example Data

```csv
id,title,description,track_role,analysis_dimension,formula_refs,precision_req,scope,deps,context_from,wave,status,findings,severity_distribution,latex_formulas,doc_path,error
"T1.1","Problem Domain Survey","Survey governing equations and mathematical models. Identify PDE types, boundary conditions, conservation laws, and physical domain.","Problem_Domain_Analyst","domain_modeling","","","src/**","","","1","completed","Identified Navier-Stokes equations with k-epsilon turbulence model. Incompressible flow assumption. No-slip boundary conditions.","Critical:0 High:0 Medium:1 Low:2","\\rho(\\frac{\\partial v}{\\partial t} + v \\cdot \\nabla v) = -\\nabla p + \\mu \\nabla^2 v","docs/P1_Domain_Survey.md",""
"T2.1","Mathematical Formulation","Derive precise mathematical formulations using LaTeX. Transform governing equations into weak forms suitable for FEM discretization.","Mathematician","formula_derivation","T1.1:governing_eqs","","src/**","T1.1","T1.1","2","completed","Weak form derived for NS equations. Galerkin formulation with inf-sup stable elements (Taylor-Hood P2/P1).","Critical:0 High:0 Medium:0 Low:1","\\int_\\Omega \\mu \\nabla u : \\nabla v \\, d\\Omega - \\int_\\Omega p \\nabla \\cdot v \\, d\\Omega = \\int_\\Omega f \\cdot v \\, d\\Omega","docs/P2_Mathematical_Formulation.md",""
"T3.2","Numerical Stability Report","Analyze numerical stability of selected algorithms. Evaluate condition numbers, error propagation characteristics, and precision requirements.","Stability_Analyst","stability_analysis","T2.1:weak_forms;T2.2:convergence_conds","double","src/solver/**","T2.1;T2.2","T2.1;T2.2","3","pending","","","","",""
```

---

### Column Lifecycle

```
Decomposer (Phase 1)       Wave Engine (Phase 2)      Agent (Execution)
─────────────────────      ────────────────────       ─────────────────
id                 ──────► id                 ──────► id
title              ──────► title              ──────► (reads)
description        ──────► description        ──────► (reads)
track_role         ──────► track_role         ──────► (reads)
analysis_dimension ──────► analysis_dimension ──────► (reads)
formula_refs       ──────► formula_refs       ──────► (reads)
precision_req      ──────► precision_req      ──────► (reads)
scope              ──────► scope              ──────► (reads)
deps               ──────► deps               ──────► (reads)
context_from       ──────► context_from       ──────► (reads)
                           wave               ──────► (reads)
                           prev_context       ──────► (reads)
                                                      status
                                                      findings
                                                      severity_distribution
                                                      latex_formulas
                                                      doc_path
                                                      error
```
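
The `prev_context` step in the lifecycle above can be sketched as follows (the function name and output formatting are illustrative; per the schema, the findings and formulas of completed `context_from` tasks are aggregated from the master CSV):

```python
import csv

def build_prev_context(tasks_csv: str, context_from: str) -> str:
    """Aggregate findings + latex_formulas for the semicolon-separated
    task IDs in context_from, taking only completed tasks."""
    wanted = [t for t in context_from.split(";") if t]
    parts = []
    with open(tasks_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["id"] in wanted and row.get("status") == "completed":
                entry = f"[{row['id']}] {row['findings']}"
                if row.get("latex_formulas"):
                    entry += f" | formulas: {row['latex_formulas']}"
                parts.append(entry)
    return "\n".join(parts)
```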

---

## Output Schema (JSON)

Agent output via `report_agent_job_result`:

```json
{
  "type": "object",
  "properties": {
    "id": { "type": "string", "description": "Task ID (T{wave}.{track})" },
    "status": { "type": "string", "enum": ["completed", "failed"] },
    "findings": { "type": "string", "description": "Key discoveries, max 500 chars" },
    "severity_distribution": { "type": "string", "description": "Critical:N High:N Medium:N Low:N" },
    "latex_formulas": { "type": "string", "description": "Key formulas in LaTeX, semicolon-separated" },
    "doc_path": { "type": "string", "description": "Path to generated analysis document" },
    "error": { "type": "string", "description": "Error message if failed" }
  },
  "required": ["id", "status", "findings"]
}
```

---

## Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `governing_equation` | `eq_name` | `{eq_name, latex, domain, boundary_conditions}` | Governing equations found |
| `numerical_method` | `method_name` | `{method_name, type, order, stability_class}` | Numerical methods identified |
| `stability_issue` | `location` | `{location, condition_number, severity, description}` | Stability concerns |
| `convergence_property` | `method` | `{method, rate, order, conditions}` | Convergence properties |
| `precision_risk` | `location+operation` | `{location, operation, risk_type, recommendation}` | Float precision risks |
| `performance_bottleneck` | `location` | `{location, operation_count, memory_pattern, suggestion}` | Performance bottlenecks |
| `architecture_pattern` | `pattern_name` | `{pattern_name, files, description}` | Architecture patterns |
| `test_gap` | `component` | `{component, missing_coverage, priority}` | Missing test coverage |

### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-04T10:00:00Z","worker":"T1.1","type":"governing_equation","data":{"eq_name":"Navier-Stokes","latex":"\\rho(\\frac{\\partial v}{\\partial t} + v \\cdot \\nabla v) = -\\nabla p + \\mu \\nabla^2 v","domain":"fluid_dynamics","boundary_conditions":"no-slip walls"}}
{"ts":"2026-03-04T10:05:00Z","worker":"T2.2","type":"convergence_property","data":{"method":"Galerkin FEM","rate":"optimal","order":"h^{k+1}","conditions":"quasi-uniform mesh"}}
{"ts":"2026-03-04T10:10:00Z","worker":"T3.2","type":"stability_issue","data":{"location":"src/solver/assembler.rs:142","condition_number":"1e12","severity":"High","description":"Ill-conditioned stiffness matrix"}}
{"ts":"2026-03-04T10:15:00Z","worker":"T5.3","type":"precision_risk","data":{"location":"src/solver/residual.rs:87","operation":"subtraction of nearly equal values","risk_type":"catastrophic_cancellation","recommendation":"Use compensated summation or reformulate"}}
```

---

## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and are in earlier waves | "Invalid context_from: {id}" |
| Description non-empty | Every task has a description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Wave range | wave in {1..6} | "Invalid wave number: {wave}" |
| Track role valid | track_role matches known roles | "Unknown track_role: {role}" |
| Formula refs format | TaskID:formula_name pattern | "Malformed formula_refs: {value}" |
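
A sketch of a few of these checks (function name is illustrative; cycle detection via topological sort is left to the wave engine):

```python
def validate_tasks(rows: list[dict]) -> list[str]:
    """Check unique IDs, known deps, no self-deps, and valid status,
    using the error messages from the table above."""
    errors = []
    ids = [r["id"] for r in rows]
    for i in ids:
        if ids.count(i) > 1:
            errors.append(f"Duplicate task ID: {i}")
    known = set(ids)
    for r in rows:
        for dep in filter(None, r.get("deps", "").split(";")):
            if dep == r["id"]:
                errors.append(f"Self-dependency: {dep}")
            elif dep not in known:
                errors.append(f"Unknown dependency: {dep}")
        if r.get("status", "pending") not in {"pending", "completed", "failed", "skipped"}:
            errors.append(f"Invalid status: {r['status']}")
    return sorted(set(errors))
```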

### Analysis Dimension Values

| Dimension | Used In Wave | Description |
|-----------|-------------|-------------|
| `domain_modeling` | 1 | Physical/mathematical domain survey |
| `architecture_analysis` | 1 | Software architecture analysis |
| `validation_design` | 1 | Validation and benchmark strategy |
| `formula_derivation` | 2 | Mathematical formulation and derivation |
| `convergence_analysis` | 2 | Convergence theory and error bounds |
| `complexity_analysis` | 2 | Computational complexity analysis |
| `method_selection` | 3 | Numerical method selection and design |
| `stability_analysis` | 3 | Numerical stability assessment |
| `performance_modeling` | 3 | Performance prediction and modeling |
| `implementation_analysis` | 4 | Module-level code analysis |
| `data_structure_review` | 4 | Data structure and memory layout review |
| `interface_analysis` | 4 | API contract and interface analysis |
| `optimization` | 5 | Function-level performance optimization |
| `edge_case_analysis` | 5 | Boundary and singularity handling |
| `precision_audit` | 5 | Floating-point precision audit |
| `integration_testing` | 6 | System integration testing |
| `benchmarking` | 6 | Performance benchmarking |
| `quality_assurance` | 6 | Final quality audit and synthesis |

---

# Analysis Dimensions for Numerical Computation

Defines the 18 analysis dimensions across the 6 phases of the NADW workflow.

## Purpose

| Phase | Usage |
|-------|-------|
| Phase 1 (Decomposition) | Reference when assigning analysis_dimension to tasks |
| Phase 2 (Execution) | Agents use to understand their analysis focus |
| Phase 3 (Aggregation) | Organize findings by dimension |
---

## 1. Wave 1: Global Survey Dimensions

### 1.1 Domain Modeling (`domain_modeling`)

**Analyst Role**: Problem_Domain_Analyst

**Focus Areas**:
- Governing equations (PDEs, ODEs, integral equations)
- Physical domain and boundary conditions
- Conservation laws and constitutive relations
- Problem classification (elliptic, parabolic, hyperbolic)
- Dimensional analysis and non-dimensionalization

**Key Outputs**:
- Equation inventory with LaTeX notation
- Boundary condition catalog
- Problem classification matrix
- Physical parameter ranges

**Formula Types to Identify**:
$$\frac{\partial u}{\partial t} + \mathcal{L}u = f \quad \text{(general PDE form)}$$
$$u|_{\partial\Omega} = g \quad \text{(Dirichlet BC)}$$
$$\frac{\partial u}{\partial n}\bigg|_{\partial\Omega} = h \quad \text{(Neumann BC)}$$
### 1.2 Architecture Analysis (`architecture_analysis`)

**Analyst Role**: Software_Architect

**Focus Areas**:
- Module decomposition and dependency graph
- Data flow between computational stages
- I/O patterns (mesh input, solution output, checkpointing)
- Parallelism strategy (MPI, OpenMP, GPU)
- Build system and dependency management

**Key Outputs**:
- High-level component diagram
- Data flow diagram
- Technology stack inventory
- Parallelism strategy assessment

### 1.3 Validation Design (`validation_design`)

**Analyst Role**: Validation_Strategist

**Focus Areas**:
- Benchmark cases with known analytical solutions
- Manufactured solution methodology
- Grid convergence study design
- Key Performance Indicators (KPIs)
- Acceptance criteria definition

**Key Outputs**:
- Benchmark case catalog
- Validation methodology matrix
- KPI definitions with targets
- Acceptance test specifications

---

## 2. Wave 2: Theoretical Foundation Dimensions

### 2.1 Formula Derivation (`formula_derivation`)

**Analyst Role**: Mathematician

**Focus Areas**:
- Strong-to-weak form transformation
- Discretization schemes (FEM, FDM, FVM, spectral)
- Variational formulations
- Linearization techniques (Newton, Picard)
- Stabilization methods (SUPG, GLS, VMS)

**Key Formula Templates**:
$$\text{Weak form: } a(u,v) = l(v) \quad \forall v \in V_h$$
$$a(u,v) = \int_\Omega \nabla u \cdot \nabla v \, d\Omega$$
$$l(v) = \int_\Omega f v \, d\Omega + \int_{\Gamma_N} g v \, dS$$

### 2.2 Convergence Analysis (`convergence_analysis`)

**Analyst Role**: Convergence_Analyst

**Focus Areas**:
- A priori error estimates
- A posteriori error estimators
- Convergence order verification
- Lax equivalence theorem applicability
- CFL conditions for time-dependent problems

**Key Formula Templates**:
$$\|u - u_h\|_{L^2} \leq C h^{k+1} |u|_{H^{k+1}} \quad \text{(optimal L2 rate)}$$
$$\|u - u_h\|_{H^1} \leq C h^k |u|_{H^{k+1}} \quad \text{(optimal H1 rate)}$$
$$\Delta t \leq \frac{C h}{\|v\|_\infty} \quad \text{(CFL condition)}$$
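
The a priori rates above can be checked numerically: given errors $e_1, e_2$ measured on two grids of size $h_1, h_2$, the observed order is $p = \log(e_1/e_2) / \log(h_1/h_2)$. A minimal sketch:

```python
import math

def observed_order(h_coarse: float, e_coarse: float,
                   h_fine: float, e_fine: float) -> float:
    """Observed convergence order from errors on two successive grids:
    p = log(e_coarse/e_fine) / log(h_coarse/h_fine)."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)
```

Halving h while the error drops by a factor of four indicates second-order convergence, matching the $h^{k+1}$ L2 estimate for $k = 1$.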

### 2.3 Complexity Analysis (`complexity_analysis`)

**Analyst Role**: Complexity_Analyst

**Focus Areas**:
- Assembly operation counts
- Solver complexity (direct vs iterative)
- Preconditioner cost analysis
- Memory scaling with problem size
- Communication overhead in parallel settings

**Key Formula Templates**:
$$T_{assembly} = O(N_{elem} \cdot p^{2d}) \quad \text{(FEM assembly)}$$
$$T_{solve} = O(N^{3/2}) \quad \text{(2D direct)}, \quad O(N \log N) \quad \text{(multigrid)}$$
$$M_{storage} = O(nnz) \quad \text{(sparse storage)}$$

---

## 3. Wave 3: Algorithm Design Dimensions

### 3.1 Method Selection (`method_selection`)

**Analyst Role**: Algorithm_Designer

**Focus Areas**:
- Spatial discretization method selection
- Time integration scheme selection
- Linear/nonlinear solver selection
- Preconditioner selection
- Mesh generation strategy

**Decision Criteria**:

| Criterion | Weight | Metrics |
|-----------|--------|---------|
| Accuracy order | High | Convergence rate, error bounds |
| Stability | High | Unconditional vs conditional, CFL |
| Efficiency | Medium | FLOPS per DOF, memory per DOF |
| Parallelizability | Medium | Communication-to-computation ratio |
| Implementation complexity | Low | Lines of code, library availability |

### 3.2 Stability Analysis (`stability_analysis`)

**Analyst Role**: Stability_Analyst

**Focus Areas**:
- Von Neumann stability analysis
- Matrix condition numbers
- Amplification factors
- Inf-sup (LBB) stability for mixed methods
- Mesh-dependent stability bounds

**Key Formula Templates**:
$$\kappa(A) = \|A\| \cdot \|A^{-1}\| \quad \text{(condition number)}$$
$$|g(\xi)| \leq 1 \quad \forall \xi \quad \text{(von Neumann stability)}$$
$$\inf_{q_h \in Q_h} \sup_{v_h \in V_h} \frac{b(v_h, q_h)}{\|v_h\| \|q_h\|} \geq \beta > 0 \quad \text{(inf-sup)}$$
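
As an illustration of $\kappa(A)$, the condition number of a $2 \times 2$ matrix in the infinity norm can be evaluated in closed form (a hand-rolled sketch, not tied to any solver in the project):

```python
def cond_inf_2x2(a: float, b: float, c: float, d: float) -> float:
    """kappa_inf(A) = ||A||_inf * ||A^{-1}||_inf for A = [[a, b], [c, d]].
    The infinity norm is the maximum absolute row sum."""
    det = a * d - b * c
    norm_a = max(abs(a) + abs(b), abs(c) + abs(d))
    # A^{-1} = (1/det) * [[d, -b], [-c, a]]
    norm_inv = max(abs(d) + abs(b), abs(c) + abs(a)) / abs(det)
    return norm_a * norm_inv
```

A nearly singular matrix such as [[1, 1], [1, 1.0001]] yields a condition number around 4e4, flagging loss of roughly four to five significant digits in a linear solve.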

### 3.3 Performance Modeling (`performance_modeling`)

**Analyst Role**: Performance_Modeler

**Focus Areas**:
- Arithmetic intensity (FLOPS/byte)
- Roofline model analysis
- Strong/weak scaling prediction
- Memory bandwidth bottleneck identification
- Cache utilization estimates

**Key Formula Templates**:
$$AI = \frac{\text{FLOPS}}{\text{Bytes transferred}} \quad \text{(arithmetic intensity)}$$
$$P_{max} = \min(P_{peak}, AI \times BW_{mem}) \quad \text{(roofline bound)}$$
$$E_{parallel}(p) = \frac{T_1}{p \cdot T_p} \quad \text{(parallel efficiency)}$$
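
The roofline bound above is a one-liner; a sketch with hypothetical hardware numbers for illustration:

```python
def roofline_bound(ai_flops_per_byte: float,
                   peak_gflops: float, mem_bw_gb_s: float) -> float:
    """Attainable performance: P_max = min(P_peak, AI * BW_mem).
    Below the ridge point the kernel is memory-bound; above it, compute-bound."""
    return min(peak_gflops, ai_flops_per_byte * mem_bw_gb_s)
```

For example, a sparse matrix-vector product with AI around 0.25 FLOP/byte on a machine with 200 GB/s of memory bandwidth is capped near 50 GFLOP/s regardless of peak compute.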

---

## 4. Wave 4: Module Implementation Dimensions

### 4.1 Implementation Analysis (`implementation_analysis`)

**Focus**: Algorithm-to-code mapping, implementation correctness, coding patterns

### 4.2 Data Structure Review (`data_structure_review`)

**Focus**: Sparse matrix formats (CSR/CSC/COO), mesh data structures, memory layout optimization

### 4.3 Interface Analysis (`interface_analysis`)

**Focus**: Module APIs, data contracts between components, error handling patterns

---
|
|
||||||
## 5. Wave 5: Local Function-Level Dimensions
|
|
||||||
|
|
||||||
### 5.1 Optimization (`optimization`)
|
|
||||||
|
|
||||||
**Focus**: Hotspot identification, vectorization opportunities, cache optimization, loop restructuring
|
|
||||||
|
|
||||||
### 5.2 Edge Case Analysis (`edge_case_analysis`)
|
|
||||||
|
|
||||||
**Focus**: Division by zero, matrix singularity, degenerate mesh elements, boundary layer singularities
|
|
||||||
|
|
||||||
### 5.3 Precision Audit (`precision_audit`)
|
|
||||||
|
|
||||||
**Focus**: Catastrophic cancellation, accumulation errors, mixed-precision opportunities, compensated algorithms
|
|
||||||
|
|
||||||
**Critical Patterns to Detect**:

| Pattern | Risk | Mitigation |
|---------|------|-----------|
| `a - b` where `a ≈ b` | Catastrophic cancellation | Reformulate or use higher precision |
| `sum += small_value` in loop | Accumulation error | Kahan summation |
| `1.0/x` where `x → 0` | Overflow/loss of significance | Guard with threshold |
| Mixed float32/float64 | Silent precision loss | Explicit type annotations |
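A minimal sketch of the Kahan (compensated) summation mitigation listed above, contrasted with naive accumulation:

```javascript
// Kahan summation: carry a running compensation term that recovers
// the low-order bits lost in each floating-point addition.
function kahanSum(values) {
  let sum = 0.0
  let c = 0.0              // compensation for lost low-order bits
  for (const v of values) {
    const y = v - c        // apply the compensation
    const t = sum + y      // low-order bits of y may be lost here
    c = (t - sum) - y      // recover what was lost
    sum = t
  }
  return sum
}

// Ten million additions of 0.1: naive summation drifts measurably,
// Kahan stays at 1000000 to machine precision.
const xs = new Array(1e7).fill(0.1)
console.log(kahanSum(xs))
console.log(xs.reduce((s, x) => s + x, 0))  // drifts away from 1000000
```

The compensated version costs roughly 4x the arithmetic but keeps the error independent of the number of terms, which is why it is the standard fix for long reduction loops.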
---

## 6. Wave 6: Integration & QA Dimensions

### 6.1 Integration Testing (`integration_testing`)

**Focus**: End-to-end test design, regression suite, manufactured solutions verification

### 6.2 Benchmarking (`benchmarking`)

**Focus**: Actual vs predicted performance, scalability tests, profiling results

### 6.3 Quality Assurance (`quality_assurance`)

**Focus**: All-phase synthesis, risk matrix, improvement roadmap, final recommendations
@@ -1,214 +0,0 @@
# Phase Topology — Diamond Deep Tree

Wave coordination patterns for the Numerical Analysis Diamond Workflow (NADW).

## Purpose

| Phase | Usage |
|-------|-------|
| Phase 1 (Decomposition) | Reference when assigning waves and dependencies |
| Phase 2 (Execution) | Context flow between waves |
| Phase 3 (Aggregation) | Structure the final report by topology |

---

## 1. Topology Overview

The NADW uses a **Staged Diamond** topology — six sequential waves, each with 3 parallel tracks. Context flows cumulatively from earlier waves to later ones.

```
Wave 1: [T1.1] [T1.2] [T1.3]   Global Survey (3 parallel)
          ↓ Context P1
Wave 2: [T2.1] [T2.2] [T2.3]   Theory (3 parallel)
          ↓ Context P1+P2
Wave 3: [T3.1] [T3.2] [T3.3]   Algorithm (3 parallel)
          ↓ Context P1+P2+P3
Wave 4: [T4.1] [T4.2] [T4.3]   Module (3 parallel)
          ↓ Context P1-P4
Wave 5: [T5.1] [T5.2] [T5.3]   Local (3 parallel)
          ↓ Context P1-P5
Wave 6: [T6.1] [T6.2] [T6.3]   Integration (3 parallel)
```
---

## 2. Wave Definitions

### Wave 1: Global Survey

| Property | Value |
|----------|-------|
| Phase Name | Global Survey |
| Track Count | 3 |
| Dependencies | None (entry wave) |
| Context Input | Project codebase only |
| Context Output | Governing equations, architecture map, validation KPIs |
| Max Parallelism | 3 |

**Tracks**:

| ID | Role | Dimension | Scope |
|----|------|-----------|-------|
| T1.1 | Problem_Domain_Analyst | domain_modeling | Full project |
| T1.2 | Software_Architect | architecture_analysis | Full project |
| T1.3 | Validation_Strategist | validation_design | Full project |

### Wave 2: Theoretical Foundations

| Property | Value |
|----------|-------|
| Phase Name | Theoretical Foundations |
| Track Count | 3 |
| Dependencies | Wave 1 |
| Context Input | Context Package P1 |
| Context Output | LaTeX formulas, convergence theorems, complexity bounds |
| Max Parallelism | 3 |

**Tracks**:

| ID | Role | Dimension | Deps | context_from |
|----|------|-----------|------|-------------|
| T2.1 | Mathematician | formula_derivation | T1.1 | T1.1 |
| T2.2 | Convergence_Analyst | convergence_analysis | T1.1 | T1.1 |
| T2.3 | Complexity_Analyst | complexity_analysis | T1.1;T1.2 | T1.1;T1.2 |
### Wave 3: Algorithm Design & Stability

| Property | Value |
|----------|-------|
| Phase Name | Algorithm Design |
| Track Count | 3 |
| Dependencies | Wave 2 |
| Context Input | Context Package P1+P2 |
| Context Output | Pseudocode, stability conditions, performance model |
| Max Parallelism | 3 |

**Tracks**:

| ID | Role | Dimension | Deps | context_from |
|----|------|-----------|------|-------------|
| T3.1 | Algorithm_Designer | method_selection | T2.1 | T2.1;T2.2;T2.3 |
| T3.2 | Stability_Analyst | stability_analysis | T2.1;T2.2 | T2.1;T2.2 |
| T3.3 | Performance_Modeler | performance_modeling | T2.3 | T1.2;T2.3 |

### Wave 4: Module Implementation

| Property | Value |
|----------|-------|
| Phase Name | Module Implementation |
| Track Count | 3 |
| Dependencies | Wave 3 |
| Context Input | Context Package P1-P3 |
| Context Output | Code-algorithm mapping, data structure decisions, API contracts |
| Max Parallelism | 3 |

**Tracks**:

| ID | Role | Dimension | Deps | context_from |
|----|------|-----------|------|-------------|
| T4.1 | Module_Implementer | implementation_analysis | T3.1 | T1.2;T3.1 |
| T4.2 | Data_Structure_Designer | data_structure_review | T3.1;T3.3 | T2.3;T3.1;T3.3 |
| T4.3 | Interface_Analyst | interface_analysis | T3.1 | T1.2;T3.1 |

### Wave 5: Local Function-Level

| Property | Value |
|----------|-------|
| Phase Name | Local Function-Level |
| Track Count | 3 |
| Dependencies | Wave 4 |
| Context Input | Context Package P1-P4 |
| Context Output | Optimization recommendations, edge case catalog, precision risk matrix |
| Max Parallelism | 3 |

**Tracks**:

| ID | Role | Dimension | Deps | context_from |
|----|------|-----------|------|-------------|
| T5.1 | Code_Optimizer | optimization | T4.1 | T3.3;T4.1 |
| T5.2 | Edge_Case_Analyst | edge_case_analysis | T4.1 | T1.1;T3.2;T4.1 |
| T5.3 | Precision_Auditor | precision_audit | T4.1;T4.2 | T3.2;T4.1;T4.2 |

### Wave 6: Integration & QA

| Property | Value |
|----------|-------|
| Phase Name | Integration & QA |
| Track Count | 3 |
| Dependencies | Wave 5 |
| Context Input | Context Package P1-P5 (ALL cumulative) |
| Context Output | Final test plan, benchmark report, QA assessment |
| Max Parallelism | 3 |

**Tracks**:

| ID | Role | Dimension | Deps | context_from |
|----|------|-----------|------|-------------|
| T6.1 | Integration_Tester | integration_testing | T5.1;T5.2 | T1.3;T5.1;T5.2 |
| T6.2 | Benchmark_Engineer | benchmarking | T5.1 | T1.3;T3.3;T5.1 |
| T6.3 | QA_Auditor | quality_assurance | T5.1;T5.2;T5.3 | T1.1;T2.1;T3.1;T4.1;T5.1;T5.2;T5.3 |
---

## 3. Context Flow Map

### Directed Context (prev_context column)

```
T1.1 ──► T2.1, T2.2, T2.3
T1.2 ──► T2.3, T3.3, T4.1, T4.3
T1.3 ──► T6.1, T6.2
T2.1 ──► T3.1, T3.2
T2.2 ──► T3.1, T3.2
T2.3 ──► T3.1, T3.3, T4.2
T3.1 ──► T4.1, T4.2, T4.3
T3.2 ──► T5.2, T5.3
T3.3 ──► T4.2, T5.1, T6.2
T4.1 ──► T5.1, T5.2, T5.3
T4.2 ──► T5.3
T5.1 ──► T6.1, T6.2
T5.2 ──► T6.1
T5.3 ──► T6.3
```

### Broadcast Context (discoveries.ndjson)

All agents read/append to the same discoveries.ndjson. Key discovery types flow across waves:

```
Wave 1: governing_equation, architecture_pattern ──► all subsequent waves
Wave 2: convergence_property ──► Waves 3-6
Wave 3: stability_issue, numerical_method ──► Waves 4-6
Wave 4: (implementation findings) ──► Waves 5-6
Wave 5: precision_risk, performance_bottleneck ──► Wave 6
```

---
## 4. Perspective Reuse Matrix

How each wave's output is reused by later waves:

| Source | P2 Reuse | P3 Reuse | P4 Reuse | P5 Reuse | P6 Reuse |
|--------|---------|---------|---------|---------|---------|
| **P1 Equations** | Formalize → LaTeX | Constrain methods | Code-eq mapping | Singularity sources | Correctness baseline |
| **P1 Architecture** | Constrain discretization | Parallel strategy | Module boundaries | Hotspot location | Integration scope |
| **P1 Validation** | - | Benchmark selection | Test criteria | Edge case sources | Final validation |
| **P2 Formulas** | - | Parameter constraints | Loop termination | Precision requirements | Theory verification |
| **P2 Convergence** | - | Mesh refinement | Iteration control | Error tolerance | Rate verification |
| **P2 Complexity** | - | Performance baseline | Data structure choice | Optimization targets | Perf comparison |
| **P3 Pseudocode** | - | - | Impl reference | Line-by-line audit | Regression baseline |
| **P3 Stability** | - | - | Precision selection | Cancellation detection | Numerical verification |
| **P3 Performance** | - | - | Memory layout | Vectorization targets | Benchmark targets |
| **P4 Modules** | - | - | - | Function-level focus | Module test plan |
| **P5 Optimization** | - | - | - | - | Performance tests |
| **P5 Edge Cases** | - | - | - | - | Regression tests |
| **P5 Precision** | - | - | - | - | Numerical tests |

---

## 5. Diamond Properties

| Property | Value |
|----------|-------|
| Total Waves | 6 |
| Total Tasks | 18 (3 per wave) |
| Max Parallelism per Wave | 3 |
| Widest Context Fan-in | T6.3 (receives from 7 tasks) |
| Deepest Dependency Chain | T1.1 → T2.1 → T3.1 → T4.1 → T5.1 → T6.1 (depth 6) |
| Context Accumulation | Cumulative (each wave adds to previous context) |
| Topology Type | Staged Parallel with Diamond convergence at Wave 6 |
@@ -1,173 +0,0 @@
# Quality Standards for Numerical Analysis Workflow

Quality assessment criteria for NADW analysis reports.

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| Phase 2 (Execution) | Guide agent analysis quality | All dimensions |
| Phase 3 (Aggregation) | Score generated reports | Quality Gates |

---
## Quality Dimensions

### 1. Mathematical Rigor (30%)

| Score | Criteria |
|-------|----------|
| 100% | All formulas correct, properly derived, LaTeX well-formatted, error bounds proven |
| 80% | Formulas correct, some derivation steps skipped, bounds stated without full proof |
| 60% | Key formulas present, some notation inconsistencies, bounds estimated |
| 40% | Formulas incomplete or contain errors |
| 0% | No mathematical content |

**Checklist**:
- [ ] Governing equations identified and written in LaTeX
- [ ] Weak forms correctly derived from strong forms
- [ ] Convergence order stated with conditions
- [ ] Error bounds provided (a priori or a posteriori)
- [ ] CFL/stability conditions explicitly stated
- [ ] Condition numbers estimated for key matrices
- [ ] Complexity bounds (time and space) determined
- [ ] LaTeX notation consistent throughout all documents
### 2. Code-Theory Mapping (25%)

| Score | Criteria |
|-------|----------|
| 100% | Every algorithm mapped to code with file:line references, data structures justified |
| 80% | Major algorithms mapped, most references accurate |
| 60% | Key mappings present, some code references missing |
| 40% | Superficial mapping, few code references |
| 0% | No code-theory connection |

**Checklist**:
- [ ] Each numerical method traced to implementing function/module
- [ ] Data structures justified against algorithm requirements
- [ ] Sparse matrix format matched to access patterns
- [ ] Time integration scheme identified in code
- [ ] Boundary condition implementation verified
- [ ] Solver configuration traced to convergence requirements
- [ ] Preconditioner choice justified
### 3. Numerical Quality Assessment (25%)

| Score | Criteria |
|-------|----------|
| 100% | Stability fully analyzed, precision risks cataloged, all edge cases covered |
| 80% | Stability assessed, major precision risks found, common edge cases covered |
| 60% | Basic stability check, some precision risks, incomplete edge cases |
| 40% | Superficial stability mention, few precision issues found |
| 0% | No numerical quality analysis |

**Checklist**:
- [ ] Condition numbers estimated for key operations
- [ ] Catastrophic cancellation risks identified with file:line
- [ ] Accumulation error potential assessed
- [ ] Float precision choices justified (float32 vs float64)
- [ ] Edge cases cataloged (singularities, degenerate inputs)
- [ ] Overflow/underflow risks identified
- [ ] Mixed-precision operations flagged
### 4. Cross-Phase Coherence (20%)

| Score | Criteria |
|-------|----------|
| 100% | All 6 phases connected, findings build on each other, no contradictions |
| 80% | Most phases connected, minor gaps in context propagation |
| 60% | Key connections present, some phases isolated |
| 40% | Limited cross-referencing between phases |
| 0% | Phases completely isolated |

**Checklist**:
- [ ] Wave 2 formulas reference Wave 1 governing equations
- [ ] Wave 3 algorithms justified by Wave 2 theory
- [ ] Wave 4 implementation verified against Wave 3 pseudocode
- [ ] Wave 5 optimization targets from Wave 3 performance model
- [ ] Wave 5 precision requirements from Wave 2/3 analysis
- [ ] Wave 6 test plan covers findings from all prior waves
- [ ] Wave 6 benchmarks compare against Wave 3 predictions
- [ ] No contradictory findings between phases
- [ ] Discoveries board used for cross-track sharing

---
## Quality Gates (Per-Wave)

| Wave | Phase | Gate Criteria | Required Tracks |
|------|-------|--------------|-----------------|
| 1 | Global Survey | Core model identified + architecture mapped + ≥1 KPI | 3/3 completed |
| 2 | Theory | Key formulas LaTeX'd + convergence stated + complexity determined | 3/3 completed |
| 3 | Algorithm | Pseudocode produced + stability assessed + performance predicted | ≥2/3 completed |
| 4 | Module | Code-algorithm mapping + data structures reviewed + APIs documented | ≥2/3 completed |
| 5 | Local | Hotspots identified + edge cases cataloged + precision risks flagged | ≥2/3 completed |
| 6 | Integration | Test plan complete + benchmarks planned + QA report synthesized | 3/3 completed |

---

## Overall Quality Gates

| Gate | Threshold | Action |
|------|-----------|--------|
| PASS | >= 80% across all dimensions | Report ready for delivery |
| REVIEW | 70-79% in any dimension | Flag dimension for improvement, user decides |
| FAIL | < 70% in any dimension | Block delivery, identify gaps, suggest re-analysis |

---
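The overall gate logic above can be sketched as a small decision function; the score object shape and dimension names here are hypothetical helpers, not part of the spec:

```javascript
// Apply the overall quality gates to per-dimension scores in [0, 1]:
// PASS if every dimension >= 0.80, REVIEW if every dimension >= 0.70
// (with at least one below 0.80), FAIL if any dimension < 0.70.
function overallGate(scores) {
  const values = Object.values(scores)
  if (values.every(s => s >= 0.80)) return 'PASS'
  if (values.every(s => s >= 0.70)) return 'REVIEW'
  return 'FAIL'
}

console.log(overallGate({ rigor: 0.85, mapping: 0.82, numerics: 0.90, coherence: 0.81 }))  // PASS
console.log(overallGate({ rigor: 0.85, mapping: 0.75, numerics: 0.90, coherence: 0.81 }))  // REVIEW
console.log(overallGate({ rigor: 0.85, mapping: 0.65, numerics: 0.90, coherence: 0.81 }))  // FAIL
```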
## Issue Classification

### Errors (Must Fix)

- Missing governing equation identification (Wave 1)
- LaTeX formulas with mathematical errors (Wave 2)
- Algorithm pseudocode that doesn't match convergence requirements (Wave 3)
- Code references to non-existent files/functions (Wave 4)
- Unidentified catastrophic cancellation in critical path (Wave 5)
- Test plan that doesn't cover identified stability issues (Wave 6)
- Contradictory findings between phases
- Missing context propagation (later phase ignores earlier findings)

### Warnings (Should Fix)

- Formulas without derivation steps
- Convergence bounds stated without proof or reference
- Missing edge case for known singularity
- Performance model without memory bandwidth consideration
- Data structure choice not justified
- Test plan without manufactured solution verification
- Benchmark without theoretical baseline comparison

### Notes (Nice to Have)

- Additional bibliography references
- Alternative algorithm comparisons
- Extended precision sensitivity analysis
- Scaling prediction beyond current problem size
- Code style or naming convention suggestions

---
## Severity Levels for Findings

| Severity | Definition | Example |
|----------|-----------|---------|
| **Critical** | Incorrect results or numerical failure | Wrong boundary condition → divergent solution |
| **High** | Significant accuracy or performance degradation | Condition number 10^15 → double precision insufficient |
| **Medium** | Suboptimal but functional | O(N^2) where O(N log N) is possible |
| **Low** | Minor improvement opportunity | Unnecessary array copy in non-critical path |

---

## Document Quality Metrics

| Metric | Target | Measurement |
|--------|--------|-------------|
| Formula coverage | ≥ 90% of core equations in LaTeX | Count identified vs documented |
| Code reference density | ≥ 1 file:line per finding | Count references per finding |
| Cross-phase references | ≥ 3 per document (Waves 3-6) | Count cross-references |
| Severity distribution | ≥ 1 per severity level | Count per level |
| Discovery board contributions | ≥ 2 per track | Count NDJSON entries per worker |
| Perspective package | Present in every document | Boolean per document |
798
.codex/skills/project-documentation-workflow/SKILL.md
Normal file
@@ -0,0 +1,798 @@
---
name: project-documentation-workflow
description: Wave-based comprehensive project documentation generator with dynamic task decomposition. Analyzes project structure and generates appropriate documentation tasks, computes optimal execution waves via topological sort, produces complete documentation suite including architecture, methods, theory, features, usage, and design philosophy.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"project path or description\""
allowed-tools: spawn_agents_on_csv, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.

# Project Documentation Workflow (Optimized)

## Usage

```bash
$project-documentation-workflow "Document the authentication module in src/auth/"
$project-documentation-workflow -c 4 "Generate full docs for the FEM solver project"
$project-documentation-workflow -y "Document entire codebase with architecture and API"
$project-documentation-workflow --continue "doc-auth-module-20260304"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` + `results.csv` + `discoveries.ndjson` + `wave-summaries/` + `docs/` (complete documentation set)

---
## Overview

**Optimized version**: dynamic task decomposition + topological-sort wave computation + inter-wave synthesis steps.

```
┌─────────────────────────────────────────────────────────────────────────┐
│ PROJECT DOCUMENTATION WORKFLOW (Dynamic & Optimized)                    │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│ Phase 0: Dynamic Decomposition                                          │
│   ├─ Analyze project structure, complexity, domain                      │
│   ├─ Generate appropriate documentation tasks (dynamic count)           │
│   ├─ Compute task dependencies (deps)                                   │
│   ├─ Compute execution waves (topological sort)                         │
│   └─ User validates task breakdown (skip if -y)                         │
│                                                                         │
│ Phase 1: Wave Execution (with Inter-Wave Synthesis)                     │
│   ├─ For each wave (1..N, dynamically computed):                        │
│   │   ├─ Load Wave Summary from previous wave                           │
│   │   ├─ Build wave CSV with prev_context injection                     │
│   │   ├─ spawn_agents_on_csv(wave CSV)                                  │
│   │   ├─ Collect results, merge into master tasks.csv                   │
│   │   ├─ Generate Wave Summary (inter-wave synthesis)                   │
│   │   └─ Check: any failed? → skip dependents                           │
│   └─ discoveries.ndjson shared across all waves                         │
│                                                                         │
│ Phase 2: Results Aggregation                                            │
│   ├─ Export final results.csv                                           │
│   ├─ Generate context.md with all findings                              │
│   ├─ Generate docs/index.md navigation                                  │
│   └─ Display summary: completed/failed/skipped per wave                 │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```

---
## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,doc_type,target_scope,doc_sections,formula_support,priority,deps,context_from,wave,status,findings,doc_path,key_discoveries,error
"doc-001","Project Overview","Write an overall overview of the project","overview","README.md,package.json","purpose,background,positioning,audience","false","high","","","1","pending","","","",""
```

**Columns**:

| Column | Type | Required | Description |
|--------|------|----------|-------------|
| `id` | string | Yes | Task ID (doc-NNN, auto-generated) |
| `title` | string | Yes | Document title |
| `description` | string | Yes | Detailed task description |
| `doc_type` | enum | Yes | `overview\|architecture\|theory\|implementation\|feature\|api\|usage\|synthesis` |
| `target_scope` | string | Yes | File scope (glob pattern) |
| `doc_sections` | string | Yes | Required sections (comma-separated) |
| `formula_support` | boolean | No | LaTeX formula support |
| `priority` | enum | No | `high\|medium\|low` (for task ordering) |
| `deps` | string | No | Dependency task IDs (semicolon-separated) |
| `context_from` | string | No | Context source task IDs |
| `wave` | integer | Computed | Wave number (computed by topological sort) |
| `status` | enum | Output | `pending→completed\|failed\|skipped` |
| `findings` | string | Output | Key findings summary |
| `doc_path` | string | Output | Generated document path |
| `key_discoveries` | string | Output | Key discoveries (JSON) |
| `error` | string | Output | Error message |

---
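As a sketch of how a row conforming to this schema might be produced (the `toCsvRow` helper and the sample task are ours, not part of the spec), every field is quoted and embedded quotes are doubled:

```javascript
// Column order must match the tasks.csv header above.
const COLUMNS = [
  'id', 'title', 'description', 'doc_type', 'target_scope', 'doc_sections',
  'formula_support', 'priority', 'deps', 'context_from', 'wave', 'status',
  'findings', 'doc_path', 'key_discoveries', 'error',
]

// Quote every field and escape embedded double quotes per RFC 4180.
const toCsvRow = (task) =>
  COLUMNS.map(col => `"${String(task[col] ?? '').replace(/"/g, '""')}"`).join(',')

const row = toCsvRow({
  id: 'doc-002',
  title: 'System Architecture',
  description: 'Document module boundaries and data flow',
  doc_type: 'architecture',
  target_scope: 'src/**',
  doc_sections: 'overview,modules,data-flow',
  formula_support: 'false',
  priority: 'high',
  deps: 'doc-001',
  context_from: 'doc-001',
  wave: 2,
  status: 'pending',
})
console.log(row)
```

Quoting every field keeps commas inside `doc_sections` and `target_scope` from breaking the column layout.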
## Implementation

### Session Initialization

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3

const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `doc-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`

Bash(`mkdir -p ${sessionFolder}/docs ${sessionFolder}/wave-summaries`)

// Initialize discoveries.ndjson
Write(`${sessionFolder}/discoveries.ndjson`, `# Discovery Board - ${sessionId}\n# Format: NDJSON\n`)
```

---
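The slug and session-ID derivation above can be traced standalone; here the `$ARGUMENTS` pseudo-variable is replaced by a plain string for illustration:

```javascript
// Hypothetical invocation arguments; $ARGUMENTS is only available
// inside the skill runtime.
const args = '-c 4 "Generate full docs for the FEM solver project"'

// Strip flags, leaving the free-text requirement.
const requirement = args
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

// Lowercase, collapse non-alphanumeric runs to '-', cap at 40 chars.
const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)

// UTC+8 date stamp, YYYYMMDD.
const dateStr = new Date(Date.now() + 8 * 60 * 60 * 1000)
  .toISOString().substring(0, 10).replace(/-/g, '')

console.log(`doc-${slug}-${dateStr}`)
```

Note that the surrounding quotes in the requirement also collapse to hyphens, so slugs can begin or end with `-`; callers that care can trim those.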
### Phase 0: Dynamic Task Decomposition

**Objective**: Analyze the project and dynamically generate appropriate documentation tasks.

#### Step 1: Project Analysis

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Analyze the project and determine appropriate documentation tasks.
TASK:
1. Scan project structure to identify:
   - Project type (library/application/service/CLI/tool)
   - Primary language(s) and frameworks
   - Project scale (small/medium/large based on file count and complexity)
   - Key modules and their purposes
   - Existing documentation (README, docs/, etc.)

2. Determine documentation needs based on project characteristics:
   - For ALL projects: overview, tech-stack, directory-structure
   - For libraries: api-reference, usage-guide, best-practices
   - For applications: system-architecture, feature-list, usage-guide
   - For numerical/scientific projects: theoretical-foundations (with formula_support=true)
   - For services: api-reference, module-interactions, deployment
   - For complex projects (>50 files): add design-patterns, data-model
   - For simple projects (<10 files): reduce to essential docs only

3. Generate task list with:
   - Unique task IDs (doc-001, doc-002, ...)
   - Appropriate doc_type for each task
   - Target scope (glob patterns) based on actual project structure
   - Required sections for each document type
   - Dependencies (deps) between related tasks
   - Context sources (context_from) for information flow
   - Priority (high for essential docs, medium for useful, low for optional)

4. Task dependency rules:
   - overview tasks: no deps (Wave 1)
   - architecture tasks: depend on overview tasks
   - implementation tasks: depend on architecture tasks
   - feature/api tasks: depend on implementation
   - synthesis tasks: depend on most other tasks

MODE: analysis
CONTEXT: @**/*
EXPECTED: JSON with:
- project_info: {type, scale, languages, frameworks, modules[]}
- recommended_waves: number of waves suggested
- tasks: [{id, title, description, doc_type, target_scope, doc_sections, formula_support, priority, deps[], context_from[]}]

CONSTRAINTS:
- Small projects: 5-8 tasks max
- Medium projects: 10-15 tasks
- Large projects: 15-25 tasks
- Each doc_type should appear at most once unless justified
- deps must form a valid DAG (no cycles)

PROJECT TO ANALYZE: ${requirement}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`,
  run_in_background: true
})
```
#### Step 2: Topological Sort (Wave Computation)

```javascript
function computeWaves(tasks) {
  // Build adjacency list
  const graph = new Map()
  const inDegree = new Map()
  const taskMap = new Map()

  for (const task of tasks) {
    taskMap.set(task.id, task)
    graph.set(task.id, [])
    inDegree.set(task.id, 0)
  }

  // Fill edges based on deps
  for (const task of tasks) {
    const deps = task.deps.filter(d => taskMap.has(d))
    for (const dep of deps) {
      graph.get(dep).push(task.id)
      inDegree.set(task.id, inDegree.get(task.id) + 1)
    }
  }

  // Kahn's BFS algorithm
  const waves = []
  let currentWave = []

  // Start with tasks that have no dependencies
  for (const [id, degree] of inDegree) {
    if (degree === 0) currentWave.push(id)
  }

  while (currentWave.length > 0) {
    waves.push([...currentWave])
    const nextWave = []

    for (const id of currentWave) {
      for (const neighbor of graph.get(id)) {
        inDegree.set(neighbor, inDegree.get(neighbor) - 1)
        if (inDegree.get(neighbor) === 0) {
          nextWave.push(neighbor)
        }
      }
    }

    currentWave = nextWave
  }

  // Assign wave numbers
  for (let w = 0; w < waves.length; w++) {
    for (const id of waves[w]) {
      taskMap.get(id).wave = w + 1
    }
  }

  // Check for cycles
  const assignedCount = tasks.filter(t => t.wave > 0).length
  if (assignedCount < tasks.length) {
    throw new Error(`Circular dependency detected! Only ${assignedCount}/${tasks.length} tasks assigned.`)
  }

  return {
    tasks: tasks,
    waveCount: waves.length,
    waveDistribution: waves.map((w, i) => ({ wave: i + 1, tasks: w.length }))
  }
}
```
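
To illustrate the layering, here is a self-contained toy version of the same Kahn-style pass (the task IDs and dependencies are invented for the example, not taken from a real project):

```javascript
// Toy illustration of wave layering: group tasks into waves so that every
// task lands one wave after its last dependency.
function layer(tasks) {
  const indeg = new Map(tasks.map(t => [t.id, 0]))
  const out = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) {
    for (const d of t.deps) {
      out.get(d).push(t.id)
      indeg.set(t.id, indeg.get(t.id) + 1)
    }
  }
  const waves = []
  let current = tasks.filter(t => indeg.get(t.id) === 0).map(t => t.id)
  while (current.length > 0) {
    waves.push(current)
    const next = []
    for (const id of current) {
      for (const n of out.get(id)) {
        indeg.set(n, indeg.get(n) - 1)
        if (indeg.get(n) === 0) next.push(n)
      }
    }
    current = next
  }
  return waves
}

const waves = layer([
  { id: 'doc-001', deps: [] },
  { id: 'doc-002', deps: [] },
  { id: 'doc-003', deps: ['doc-001'] },
  { id: 'doc-004', deps: ['doc-001', 'doc-002'] },
  { id: 'doc-005', deps: ['doc-003', 'doc-004'] },
])
// waves → [['doc-001','doc-002'], ['doc-003','doc-004'], ['doc-005']]
```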

#### Step 3: User Validation

```javascript
// Parse decomposition result
const analysisResult = JSON.parse(decompositionOutput)
const { tasks, project_info } = analysisResult

// Compute waves
const { tasks: tasksWithWaves, waveCount: computedWaves, waveDistribution } = computeWaves(tasks)

// Display to user (skip if AUTO_YES)
if (!AUTO_YES) {
  console.log(`
╔════════════════════════════════════════════════════════════════╗
║                    PROJECT ANALYSIS RESULT                     ║
╠════════════════════════════════════════════════════════════════╣
║ Type: ${project_info.type.padEnd(20)} Scale: ${project_info.scale.padEnd(10)} ║
║ Languages: ${project_info.languages.join(', ').substring(0, 40).padEnd(40)} ║
║ Modules: ${project_info.modules.length} identified ║
╠════════════════════════════════════════════════════════════════╣
║ WAVE DISTRIBUTION (${computedWaves} waves, ${tasksWithWaves.length} tasks) ║
${waveDistribution.map(w => `║ Wave ${w.wave}: ${w.tasks} tasks${' '.repeat(50 - w.tasks.toString().length)}║`).join('\n')}
╚════════════════════════════════════════════════════════════════╝
`)

  // Show tasks by wave
  for (let w = 1; w <= computedWaves; w++) {
    const waveTasks = tasksWithWaves.filter(t => t.wave === w)
    console.log(`\nWave ${w}:`)
    for (const t of waveTasks) {
      console.log(`  ${t.id}: ${t.title} [${t.doc_type}]`)
    }
  }

  const confirm = AskUserQuestion("Proceed with this task breakdown?")
  if (!confirm) {
    console.log("Aborted. Use --continue to resume with modified tasks.")
    return
  }
}

// Generate tasks.csv (every task starts as pending so the wave loop can pick it up)
for (const t of tasksWithWaves) t.status = 'pending'
Write(`${sessionFolder}/tasks.csv`, toCsv(tasksWithWaves))
Write(`${sessionFolder}/project-info.json`, JSON.stringify(project_info, null, 2))
```

---

### Phase 1: Wave Execution (with Inter-Wave Synthesis)

**Key Optimization**: Add Wave Summary generation between waves for better context propagation.

```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
// CSV values are strings; normalize wave to a number for strict comparisons
tasks.forEach(t => { t.wave = Number(t.wave) })
const maxWave = Math.max(...tasks.map(t => t.wave))

for (let wave = 1; wave <= maxWave; wave++) {
  console.log('\n' + '='.repeat(60))
  console.log(`Wave ${wave}/${maxWave}`)
  console.log('='.repeat(60))

  // 1. Load Wave Summary from previous wave
  const waveSummaryPath = `${sessionFolder}/wave-summaries/wave-${wave-1}-summary.md`
  let prevWaveSummary = ''
  if (wave > 1 && fileExists(waveSummaryPath)) {
    prevWaveSummary = Read(waveSummaryPath)
    console.log(`Loaded Wave ${wave-1} Summary (${prevWaveSummary.length} chars)`)
  }

  // 2. Filter tasks for this wave
  const waveTasks = tasks.filter(t => t.wave === wave && t.status === 'pending')

  // 3. Check dependencies
  for (const task of waveTasks) {
    const depIds = (task.deps || '').split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${depIds.filter((id, i) =>
        ['failed', 'skipped'].includes(depStatuses[i])).join(', ')}`
    }
  }

  const pendingTasks = waveTasks.filter(t => t.status === 'pending')
  if (pendingTasks.length === 0) {
    console.log(`Wave ${wave}: No pending tasks, skipping...`)
    continue
  }

  // 4. Build enhanced prev_context
  for (const task of pendingTasks) {
    // a. From context_from tasks
    const contextIds = (task.context_from || '').split(';').filter(Boolean)
    const prevFindings = contextIds.map(id => {
      const src = tasks.find(t => t.id === id)
      if (!src?.findings) return ''
      return `## [${src.id}] ${src.title}\n${src.findings}`
    }).filter(Boolean).join('\n\n')

    // b. From previous wave summary (HIGH DENSITY CONTEXT)
    const waveContext = prevWaveSummary ?
      `\n\n## Wave ${wave-1} Summary\n${prevWaveSummary}` : ''

    // c. From discoveries.ndjson (relevant entries)
    const discoveriesPath = `${sessionFolder}/discoveries.ndjson`
    const discoveries = fileExists(discoveriesPath) ? Read(discoveriesPath) : ''
    const relevantDiscoveries = discoveries
      .split('\n')
      .filter(line => line.startsWith('{'))
      .map(line => JSON.parse(line))
      .filter(d => isRelevantDiscovery(d, task))
      .slice(0, 10) // Limit to 10 most relevant
      .map(d => `- [${d.type}] ${JSON.stringify(d.data)}`)
      .join('\n')

    const discoveryContext = relevantDiscoveries ?
      `\n\n## Relevant Discoveries\n${relevantDiscoveries}` : ''

    task.prev_context = prevFindings + waveContext + discoveryContext
  }

  // 5. Write wave CSV
  Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingTasks))

  // 6. Execute wave
  spawn_agents_on_csv({
    csv_path: `${sessionFolder}/wave-${wave}.csv`,
    id_column: "id",
    instruction: buildOptimizedInstruction(sessionFolder, wave),
    max_concurrency: maxConcurrency,
    max_runtime_seconds: 900,
    output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
    output_schema: {
      type: "object",
      properties: {
        id: { type: "string" },
        status: { type: "string", enum: ["completed", "failed"] },
        findings: { type: "string" },
        doc_path: { type: "string" },
        key_discoveries: { type: "string" },
        error: { type: "string" }
      }
    }
  })

  // 7. Merge results
  const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
  for (const r of results) {
    const t = tasks.find(t => t.id === r.id)
    if (t) Object.assign(t, r)
  }
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))

  // 8. Generate Wave Summary (NEW: Inter-Wave Synthesis)
  // Use the merged task rows so the summary sees title/doc_type, not just result columns
  const completedThisWave = tasks.filter(t => t.wave === wave && t.status === 'completed')
  if (completedThisWave.length > 0) {
    const waveSummary = generateWaveSummary(wave, completedThisWave, tasks)
    Write(`${sessionFolder}/wave-summaries/wave-${wave}-summary.md`, waveSummary)
    console.log(`Generated Wave ${wave} Summary`)
  }

  // 9. Cleanup temp files
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)

  // 10. Display wave summary
  const completed = results.filter(r => r.status === 'completed').length
  const failed = results.filter(r => r.status === 'failed').length
  console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed`)
}
```

---

### Wave Summary Generation (Inter-Wave Synthesis)

```javascript
function generateWaveSummary(waveNum, completedTasks, allTasks) {
  let summary = `# Wave ${waveNum} Summary\n\n`
  summary += `**Completed Tasks**: ${completedTasks.length}\n\n`

  // Group by doc_type
  const byType = {}
  for (const task of completedTasks) {
    const type = task.doc_type || 'unknown'
    if (!byType[type]) byType[type] = []
    byType[type].push(task)
  }

  for (const [type, tasks] of Object.entries(byType)) {
    summary += `## ${type.toUpperCase()}\n\n`
    for (const t of tasks) {
      summary += `### ${t.title}\n`
      if (t.findings) {
        summary += `${t.findings.substring(0, 300)}${t.findings.length > 300 ? '...' : ''}\n\n`
      }
      if (t.key_discoveries) {
        try {
          const discoveries = JSON.parse(t.key_discoveries)
          summary += `**Key Points**:\n`
          for (const d of discoveries.slice(0, 3)) {
            summary += `- ${d.name || d.type}: ${d.description || JSON.stringify(d).substring(0, 100)}\n`
          }
          summary += '\n'
        } catch (e) {
          // key_discoveries is agent-supplied JSON; ignore malformed entries
        }
      }
    }
  }

  // Add cross-references for next wave
  const nextWaveTasks = allTasks.filter(t => t.wave === waveNum + 1)
  if (nextWaveTasks.length > 0) {
    summary += `## Context for Wave ${waveNum + 1}\n\n`
    summary += `Next wave will focus on: ${nextWaveTasks.map(t => t.title).join(', ')}\n`
  }

  return summary
}

function isRelevantDiscovery(discovery, task) {
  // Check if a discovery is relevant to the task
  const taskScope = task.target_scope || ''
  const taskType = task.doc_type || ''

  // Always include matching discovery types for matching task types
  if (taskType === 'architecture' && discovery.type.includes('component')) return true
  if (taskType === 'implementation' && discovery.type.includes('algorithm')) return true
  if (taskType === 'api' && discovery.type.includes('api')) return true

  // Check file relevance
  if (discovery.data?.file) {
    return taskScope.includes(discovery.data.file.split('/')[0])
  }

  return false
}
```

---

### Optimized Instruction Template

```javascript
function buildOptimizedInstruction(sessionFolder, wave) {
  return `## DOCUMENTATION TASK — Wave ${wave}

### ⚠️ MANDATORY FIRST STEPS (DO NOT SKIP)

1. **CHECK DISCOVERIES FIRST** (avoid duplicate work):
   \`\`\`bash
   # Search for existing discoveries about your topic
   grep -i "{doc_type}" ${sessionFolder}/discoveries.ndjson
   grep -i "{target_keywords}" ${sessionFolder}/discoveries.ndjson
   \`\`\`

2. **Read Wave Summary** (high-density context):
   - Read: ${sessionFolder}/wave-summaries/wave-${wave-1}-summary.md (if exists)

3. **Read prev_context** (provided below)

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Document Type**: {doc_type}
**Target Scope**: {target_scope}
**Required Sections**: {doc_sections}
**LaTeX Support**: {formula_support}
**Priority**: {priority}

### Task Description
{description}

### Previous Context (USE THIS!)
{prev_context}

---

## Execution Protocol

### Step 1: Discovery Check (MANDATORY)
Before reading any source files:
- Search discoveries.ndjson for existing findings
- Note any pre-discovered components, patterns, algorithms
- Avoid re-documenting what's already found

### Step 2: Scope Analysis
- Read files matching \`{target_scope}\`
- Identify key structures, functions, classes
- Extract relevant code patterns

### Step 3: Context Integration
- Build on findings from prev_context
- Reference Wave Summary insights
- Connect to discoveries from other agents

### Step 4: Document Generation
**Output Path**: Determine based on doc_type:
- \`overview\` → \`docs/01-overview/\`
- \`architecture\` → \`docs/02-architecture/\`
- \`implementation\` → \`docs/03-implementation/\`
- \`feature\` → \`docs/04-features/\`
- \`api\` → \`docs/04-features/\`
- \`usage\` → \`docs/04-features/\`
- \`synthesis\` → \`docs/05-synthesis/\`

**Document Structure**:
\`\`\`markdown
# {Title}

## Overview
[Brief introduction]

## {Required Section 1}
[Content with code examples]

## {Required Section 2}
[Content with diagrams if applicable]

...

## Code Examples
\`\`\`{language}
// file:line references
\`\`\`

## Cross-References
- Related: [Doc](path)
- Depends: [Prereq](path)

## Summary
[Key takeaways]
\`\`\`

### Step 5: Share Discoveries (MANDATORY)
Append to discovery board:
\`\`\`bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<TYPE>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson
\`\`\`

**Discovery Types**:
- \`component_found\`: {name, type, file, purpose}
- \`pattern_found\`: {pattern_name, location, description}
- \`algorithm_found\`: {name, file, complexity, purpose}
- \`formula_found\`: {name, latex, file, context}
- \`feature_found\`: {name, entry_point, description}
- \`api_found\`: {endpoint, file, parameters, returns}
- \`config_found\`: {name, file, type, default_value}

### Step 6: Report
\`\`\`json
{
  "id": "{id}",
  "status": "completed",
  "findings": "Key discoveries (max 500 chars, structured for context propagation)",
  "doc_path": "docs/XX-category/filename.md",
  "key_discoveries": "[{\"name\":\"...\",\"type\":\"...\",\"description\":\"...\",\"file\":\"...\"}]",
  "error": ""
}
\`\`\`

---

## Quality Requirements

| Requirement | Criteria |
|-------------|----------|
| Section Coverage | ALL sections in doc_sections present |
| Code References | Include file:line for code |
| Discovery Sharing | At least 2 discoveries shared |
| Context Usage | Reference prev_context findings |
| Cross-References | Link to related docs |
`
}
```

---

### Phase 2: Results Aggregation

```javascript
// 1. Generate docs/index.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
tasks.forEach(t => { t.wave = Number(t.wave) }) // CSV values are strings
const completed = tasks.filter(t => t.status === 'completed')

// Group by doc_type for navigation
const byType = {}
for (const t of completed) {
  const type = t.doc_type || 'other'
  if (!byType[type]) byType[type] = []
  byType[type].push(t)
}

let index = `# Project Documentation Index\n\n`
index += `**Generated**: ${getUtc8ISOString().substring(0, 10)}\n`
index += `**Total Documents**: ${completed.length}\n\n`

const typeLabels = {
  overview: '📋 Overview',
  architecture: '🏗️ Architecture',
  implementation: '⚙️ Implementation',
  theory: '📐 Theory',
  feature: '✨ Features',
  api: '🔌 API',
  usage: '📖 Usage',
  synthesis: '💡 Synthesis'
}

for (const [type, typeTasks] of Object.entries(byType)) {
  const label = typeLabels[type] || type
  index += `## ${label}\n\n`
  for (const t of typeTasks) {
    index += `- [${t.title}](${t.doc_path})\n`
  }
  index += `\n`
}

// Add wave summaries reference (index.md lives in docs/, so step up one level)
index += `## 📊 Execution Reports\n\n`
index += `- [Wave Summaries](../wave-summaries/)\n`
index += `- [Full Context](../context.md)\n`

Write(`${sessionFolder}/docs/index.md`, index)

// 2. Export results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)

// 3. Generate context.md
const projectInfo = JSON.parse(Read(`${sessionFolder}/project-info.json`))
let contextMd = `# Documentation Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`

contextMd += `## Project Info\n`
contextMd += `- **Type**: ${projectInfo.type}\n`
contextMd += `- **Scale**: ${projectInfo.scale}\n`
contextMd += `- **Languages**: ${projectInfo.languages?.join(', ') || 'N/A'}\n\n`

const statusCounts = {
  completed: tasks.filter(t => t.status === 'completed').length,
  failed: tasks.filter(t => t.status === 'failed').length,
  skipped: tasks.filter(t => t.status === 'skipped').length
}
contextMd += `## Summary\n`
contextMd += `| Status | Count |\n`
contextMd += `|--------|-------|\n`
contextMd += `| ✅ Completed | ${statusCounts.completed} |\n`
contextMd += `| ❌ Failed | ${statusCounts.failed} |\n`
contextMd += `| ⏭️ Skipped | ${statusCounts.skipped} |\n\n`

// Per-wave summary
const maxWave = Math.max(...tasks.map(t => t.wave))
contextMd += `## Wave Execution\n\n`
for (let w = 1; w <= maxWave; w++) {
  const waveTasks = tasks.filter(t => t.wave === w)
  contextMd += `### Wave ${w}\n\n`
  for (const t of waveTasks) {
    const icon = t.status === 'completed' ? '✅' : t.status === 'failed' ? '❌' : '⏭️'
    contextMd += `${icon} **${t.title}** [${t.doc_type}]\n`
    if (t.findings) {
      contextMd += `  ${t.findings.substring(0, 200)}${t.findings.length > 200 ? '...' : ''}\n`
    }
    if (t.doc_path) {
      contextMd += `  → [${t.doc_path}](${t.doc_path})\n`
    }
    contextMd += `\n`
  }
}

Write(`${sessionFolder}/context.md`, contextMd)

// 4. Display final summary
console.log(`
╔════════════════════════════════════════════════════════════════╗
║                    DOCUMENTATION COMPLETE                      ║
╠════════════════════════════════════════════════════════════════╣
║ ✅ Completed: ${statusCounts.completed.toString().padStart(2)} tasks ║
║ ❌ Failed: ${statusCounts.failed.toString().padStart(2)} tasks ║
║ ⏭️ Skipped: ${statusCounts.skipped.toString().padStart(2)} tasks ║
╠════════════════════════════════════════════════════════════════╣
║ Output: ${sessionFolder.padEnd(50)} ║
╚════════════════════════════════════════════════════════════════╝
`)
```

---

## Optimized Output Structure

```
.workflow/.csv-wave/doc-{slug}-{date}/
├── project-info.json        # Project analysis result
├── tasks.csv                # Master CSV (dynamically generated tasks)
├── results.csv              # Final results
├── discoveries.ndjson       # Discovery board
├── context.md               # Execution report
│
├── wave-summaries/          # NEW: per-wave summaries
│   ├── wave-1-summary.md
│   ├── wave-2-summary.md
│   └── ...
│
└── docs/
    ├── index.md             # Documentation navigation
    ├── 01-overview/
    ├── 02-architecture/
    ├── 03-implementation/
    ├── 04-features/
    └── 05-synthesis/
```

---

## Optimization Summary

| Optimization | Original | Optimized |
|--------------|----------|-----------|
| **Task count** | Fixed 17 tasks | Dynamically generated (5-25, based on project scale) |
| **Wave computation** | Hardcoded 5 waves | Computed dynamically via topological sort |
| **Context propagation** | prev_context only | prev_context + Wave Summary + Discoveries |
| **Discovery usage** | Relies on agent initiative | Mandatory first-step check |
| **Findings density** | Raw findings | Structured Wave Summary |

---

## Core Rules

1. **Dynamic First**: The task list is generated dynamically, never predefined
2. **Wave Order is Sacred**: Wave order is determined by topological sort
3. **Discovery Check Mandatory**: Always check the discovery board first
4. **Wave Summary**: Generate a summary at the end of each wave
5. **Context Compound**: Context accumulates and propagates across waves
6. **Quality Gates**: Every document must cover all doc_sections
7. **DO NOT STOP**: Keep executing until all waves complete

---

# Agent Instruction Template (Optimized)

Enhanced instruction template with mandatory discovery check and wave summary integration.

## Purpose

| Phase | Usage |
|-------|-------|
| Wave Execution | Injected as `instruction` parameter to `spawn_agents_on_csv` |

---

## Optimized Instruction Template

`````markdown
## DOCUMENTATION TASK — Wave {wave_number}

### ⚠️ MANDATORY FIRST STEPS (DO NOT SKIP)

**CRITICAL**: Complete these steps BEFORE reading any source files!

1. **🔍 CHECK DISCOVERIES FIRST** (avoid duplicate work):
   ```bash
   # Search for existing discoveries about your topic
   cat {session_folder}/discoveries.ndjson | grep -i "{doc_type}"
   cat {session_folder}/discoveries.ndjson | grep -i "component\|algorithm\|pattern"
   ```

   **What to look for**:
   - Already discovered components in your scope
   - Existing pattern definitions
   - Pre-documented algorithms

2. **📊 Read Wave Summary** (high-density context):
   - File: {session_folder}/wave-summaries/wave-{prev_wave}-summary.md
   - This contains synthesized findings from the previous wave
   - **USE THIS** - it's pre-digested context!

3. **📋 Read prev_context** (provided below)

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Document Type**: {doc_type}
**Target Scope**: {target_scope}
**Required Sections**: {doc_sections}
**LaTeX Support**: {formula_support}
**Priority**: {priority}

### Task Description
{description}

### Previous Context (USE THIS!)
{prev_context}

---

## Execution Protocol

### Step 1: Discovery Check (MANDATORY - 2 min max)

Before reading ANY source files, check discoveries.ndjson:

| Discovery Type | What It Tells You | Skip If Found |
|----------------|-------------------|---------------|
| `component_found` | Existing component | Reuse description |
| `pattern_found` | Design patterns | Reference existing |
| `algorithm_found` | Core algorithms | Don't re-analyze |
| `api_found` | API signatures | Copy from discovery |

**Goal**: Avoid duplicating work already done by previous agents.

### Step 2: Scope Analysis

Read files matching `{target_scope}`:
- Identify key structures, functions, classes
- Extract relevant code patterns
- Note file:line references for examples

### Step 3: Context Integration

Synthesize from multiple sources:
- **Wave Summary**: High-density insights from the previous wave
- **prev_context**: Specific findings from context_from tasks
- **Discoveries**: Cross-cutting findings from all agents

### Step 4: Document Generation

**Determine Output Path** by doc_type:

| doc_type | Output Directory |
|----------|-----------------|
| `overview` | `docs/01-overview/` |
| `architecture` | `docs/02-architecture/` |
| `implementation` | `docs/03-implementation/` |
| `theory` | `docs/03-implementation/` |
| `feature` | `docs/04-features/` |
| `api` | `docs/04-features/` |
| `usage` | `docs/04-features/` |
| `synthesis` | `docs/05-synthesis/` |

**Document Structure** (ALL sections required):

````markdown
# {Title}

## Overview
[Brief introduction - 2-3 sentences]

## {Required Section 1}
[Content with code examples and file:line references]

## {Required Section 2}
[Content with Mermaid diagrams if applicable]

... (repeat for ALL sections in doc_sections)

## Code Examples
```{language}
// src/module/file.ts:42-56
function example() {
  // ...
}
```

## Diagrams (if applicable)
```mermaid
graph TD
  A[Component] --> B[Dependency]
```

## Cross-References
- Related: [Document](path/to/related.md)
- Depends on: [Prerequisite](path/to/prereq.md)
- See also: [Reference](path/to/ref.md)

## Summary
[3-5 key takeaways in bullet points]
````

### Step 5: Share Discoveries (MANDATORY)

After completing analysis, share findings:

```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<TYPE>","data":{...}}' >> {session_folder}/discoveries.ndjson
```

**Discovery Types** (use appropriate type):

| Type | When to Use | Data Fields |
|------|-------------|-------------|
| `component_found` | Found a significant module/class | `{name, type, file, purpose}` |
| `pattern_found` | Identified design pattern | `{pattern_name, location, description}` |
| `algorithm_found` | Core algorithm identified | `{name, file, complexity, purpose}` |
| `formula_found` | Mathematical formula (theory docs) | `{name, latex, file, context}` |
| `feature_found` | User-facing feature | `{name, entry_point, description}` |
| `api_found` | API endpoint or function | `{endpoint, file, parameters, returns}` |
| `config_found` | Configuration option | `{name, file, type, default_value}` |

**Share at least 2 discoveries** per task.

### Step 6: Report Results

```json
{
  "id": "{id}",
  "status": "completed",
  "findings": "Structured summary (max 500 chars). Include: 1) Main components found, 2) Key patterns, 3) Critical insights. Format for easy parsing by next wave.",
  "doc_path": "docs/XX-category/filename.md",
  "key_discoveries": "[{\"name\":\"ComponentA\",\"type\":\"class\",\"description\":\"Handles X\",\"file\":\"src/a.ts:10\"}]",
  "error": ""
}
```

---

## Quality Checklist

Before reporting complete, verify:

- [ ] **All sections present**: Every section in `doc_sections` is included
- [ ] **Code references**: Include `file:line` for code examples
- [ ] **Discovery sharing**: At least 2 discoveries added to board
- [ ] **Context usage**: Referenced findings from prev_context or Wave Summary
- [ ] **Cross-references**: Links to related documentation
- [ ] **Summary**: Clear key takeaways

---

## LaTeX Formatting (when formula_support=true)

Use `$$...$$` for display math:

```markdown
The weak form is defined as:

$$
a(u,v) = \int_\Omega \nabla u \cdot \nabla v \, d\Omega
$$

Where:
- $u$ is the trial function
- $v$ is the test function
```

---

## Tips for Effective Documentation

1. **Be Precise**: Use specific file:line references
2. **Be Concise**: Summarize key points, don't copy-paste entire files
3. **Be Connected**: Reference related docs and discoveries
4. **Be Structured**: Follow the required section order
5. **Be Helpful**: Include practical examples and use cases
`````

---

## Placeholder Reference

| Placeholder | Resolved By | Description |
|-------------|-------------|-------------|
| `{wave_number}` | Wave Engine | Current wave number |
| `{session_folder}` | Wave Engine | Session directory path |
| `{prev_wave}` | Wave Engine | Previous wave number (wave_number - 1) |
| `{id}` | CSV row | Task ID |
| `{title}` | CSV row | Document title |
| `{doc_type}` | CSV row | Document type |
| `{target_scope}` | CSV row | File glob pattern |
| `{doc_sections}` | CSV row | Required sections |
| `{formula_support}` | CSV row | LaTeX support flag |
| `{priority}` | CSV row | Task priority |
| `{description}` | CSV row | Task description |
| `{prev_context}` | Wave Engine | Aggregated context |
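
How the Wave Engine performs this substitution is not specified here; a hypothetical `renderInstruction` helper illustrates the idea (the function name and regex are assumptions for illustration, not part of the engine):

```javascript
// Replace {placeholder} tokens with values; unknown tokens are left intact
// so unresolved placeholders remain visible in the rendered instruction.
function renderInstruction(template, values) {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in values ? String(values[key]) : match)
}

const out = renderInstruction('Task {id}: {title} (wave {wave_number})', {
  id: 'doc-001',
  title: 'Overview',
  wave_number: 1,
})
// out → 'Task doc-001: Overview (wave 1)'
```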
|
||||||
|
|
||||||
|
---

## Optimization Notes

This instruction template includes:

1. **Mandatory Discovery Check**: Forces agents to check discoveries.ndjson first
2. **Wave Summary Integration**: References previous wave's synthesized findings
3. **Structured Reporting**: Findings formatted for easy parsing by next wave
4. **Quality Checklist**: Explicit verification before completion
5. **Discovery Requirements**: Minimum 2 discoveries per task
@@ -0,0 +1,208 @@
# CSV Schema — Project Documentation Workflow (Optimized)

Dynamic task decomposition with topological wave computation.

## tasks.csv (Master State)

### Column Definitions

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Task ID (doc-NNN, auto-generated) | `"doc-001"` |
| `title` | string | Yes | Document title | `"System Architecture Diagram"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Draw the system architecture diagram..."` |
| `doc_type` | enum | Yes | Document type | `"architecture"` |
| `target_scope` | string | Yes | File scope (glob pattern) | `"src/**"` |
| `doc_sections` | string | Yes | Required sections (comma-separated) | `"components,dependencies"` |
| `formula_support` | boolean | No | LaTeX formula support needed | `"true"` |
| `priority` | enum | No | Task priority | `"high"` |
| `deps` | string | No | Dependency task IDs (semicolon-separated) | `"doc-001;doc-002"` |
| `context_from` | string | No | Context source task IDs | `"doc-001;doc-003"` |
| `wave` | integer | Computed | Wave number (computed by topological sort) | `1` |
| `status` | enum | Output | `pending` → `completed`/`failed`/`skipped` | `"completed"` |
| `findings` | string | Output | Key findings summary (max 500 chars) | `"Found 3 main components..."` |
| `doc_path` | string | Output | Generated document path | `"docs/02-architecture/system-architecture.md"` |
| `key_discoveries` | string | Output | Key discoveries (JSON array) | `"[{\"name\":\"...\",\"type\":\"...\"}]"` |
| `error` | string | Output | Error message if failed | `""` |

### doc_type Values

| Value | Typical Wave | Description |
|-------|--------------|-------------|
| `overview` | 1 | Project overview, tech stack, structure |
| `architecture` | 2 | System architecture, patterns, interactions |
| `implementation` | 3 | Algorithms, data structures, utilities |
| `theory` | 3 | Mathematical foundations, formulas (LaTeX) |
| `feature` | 4 | Feature documentation |
| `usage` | 4 | Usage guide, installation, configuration |
| `api` | 4 | API reference |
| `synthesis` | 5+ | Design philosophy, best practices, summary |

### priority Values

| Value | Description | Typical Use |
|-------|-------------|-------------|
| `high` | Essential document | overview, architecture |
| `medium` | Useful but optional | implementation details |
| `low` | Nice to have | extended examples |

---

## Dynamic Task Generation

### Task Count Guidelines

| Project Scale | File Count | Recommended Tasks | Waves |
|--------------|------------|-------------------|-------|
| Small | < 20 files | 5-8 tasks | 2-3 |
| Medium | 20-100 files | 10-15 tasks | 3-4 |
| Large | > 100 files | 15-25 tasks | 4-6 |

### Project Type → Task Templates

| Project Type | Essential Tasks | Optional Tasks |
|-------------|-----------------|----------------|
| **Library** | overview, api-reference, usage-guide | design-patterns, best-practices |
| **Application** | overview, architecture, feature-list, usage-guide | api-reference, deployment |
| **Service/API** | overview, architecture, api-reference | module-interactions, deployment |
| **CLI Tool** | overview, usage-guide, api-reference | architecture |
| **Numerical/Scientific** | overview, architecture, theoretical-foundations | algorithms, data-structures |

---

## Wave Computation (Topological Sort)

### Algorithm: Kahn's BFS

```
Input: tasks with deps field
Output: tasks with wave field

1. Build adjacency list from deps
2. Initialize in-degree for each task
3. Queue tasks with in-degree 0 (Wave 1)
4. While queue not empty:
   a. Current wave = all queued tasks
   b. For each completed task, decrement dependents' in-degree
   c. Queue tasks with in-degree 0 for next wave
5. Assign wave numbers
6. Detect cycles: if unassigned tasks remain → circular dependency
```
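The steps above can be sketched directly in JavaScript. A minimal illustration, assuming tasks are objects with an `id` and a semicolon-separated `deps` string as in tasks.csv (`computeWaves` is a hypothetical name, not the engine's actual API):

```javascript
// Kahn's BFS: assign each task the earliest wave in which all its deps are done.
function computeWaves(tasks) {
  const inDegree = new Map(tasks.map(t => [t.id, 0]));
  const dependents = new Map(tasks.map(t => [t.id, []]));

  // Build adjacency (dependents) list and in-degrees from the deps field.
  for (const task of tasks) {
    const deps = task.deps ? task.deps.split(';').filter(Boolean) : [];
    inDegree.set(task.id, deps.length);
    for (const dep of deps) dependents.get(dep).push(task.id);
  }

  const waves = new Map();
  let frontier = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id);
  let wave = 1;

  while (frontier.length > 0) {
    const next = [];
    for (const id of frontier) {
      waves.set(id, wave);
      for (const dep of dependents.get(id)) {
        inDegree.set(dep, inDegree.get(dep) - 1);
        if (inDegree.get(dep) === 0) next.push(dep);
      }
    }
    frontier = next;
    wave += 1;
  }

  // Any task never reaching in-degree 0 sits on a cycle.
  if (waves.size !== tasks.length) throw new Error('Circular dependency detected');
  return waves;
}
```

Tasks with no dependencies land in Wave 1, and each wave contains exactly the tasks whose dependencies all completed in earlier waves.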
### Dependency Rules

| doc_type | Typical deps | Rationale |
|----------|--------------|-----------|
| `overview` | (none) | Foundation tasks |
| `architecture` | `overview` tasks | Needs project understanding |
| `implementation` | `architecture` tasks | Needs design context |
| `theory` | `overview` + `architecture` | Needs model understanding |
| `feature` | `implementation` tasks | Needs code knowledge |
| `api` | `implementation` tasks | Needs function signatures |
| `usage` | `feature` tasks | Needs feature knowledge |
| `synthesis` | Most other tasks | Integrates all findings |

---

## Example CSV (Small Project - 7 tasks, 5 waves)

```csv
id,title,description,doc_type,target_scope,doc_sections,formula_support,priority,deps,context_from,wave,status,findings,doc_path,key_discoveries,error
"doc-001","Project Overview","Write the project overview","overview","README.md,package.json","purpose,background,audience","false","high","","","1","pending","","","",""
"doc-002","Tech Stack","Analyze the tech stack","overview","package.json,tsconfig.json","languages,frameworks,dependencies","false","medium","","doc-001","1","pending","","","",""
"doc-003","System Architecture","Draw the architecture diagram","architecture","src/**","components,dependencies,dataflow","false","high","doc-001","doc-001;doc-002","2","pending","","","",""
"doc-004","Core Algorithms","Document the core algorithms","implementation","src/core/**","algorithms,complexity,examples","false","high","doc-003","doc-003","3","pending","","","",""
"doc-005","API Reference","API documentation","api","src/**/*.ts","endpoints,parameters,examples","false","high","doc-003","doc-003;doc-004","3","pending","","","",""
"doc-006","Usage Guide","Usage instructions","usage","README.md,examples/**","installation,configuration,running","false","high","doc-004;doc-005","doc-004;doc-005","4","pending","","","",""
"doc-007","Best Practices","Recommended usage","synthesis","src/**,examples/**","recommendations,pitfalls,examples","false","medium","doc-006","doc-004;doc-005;doc-006","5","pending","","","",""
```

### Computed Wave Distribution

| Wave | Tasks | Parallelism |
|------|-------|-------------|
| 1 | doc-001, doc-002 | 2 concurrent |
| 2 | doc-003 | 1 (sequential) |
| 3 | doc-004, doc-005 | 2 concurrent |
| 4 | doc-006 | 1 (sequential) |
| 5 | doc-007 | 1 (sequential) |

---

## Per-Wave CSV (Temporary)

Extra columns added by Wave Engine:

| Column | Type | Description |
|--------|------|-------------|
| `prev_context` | string | Aggregated findings + Wave Summary + Relevant Discoveries |

### prev_context Assembly

```javascript
prev_context =
  // 1. From context_from tasks
  context_from.map(id => tasks[id].findings).join('\n\n') +

  // 2. From Wave Summary (if wave > 1)
  '\n\n## Previous Wave Summary\n' + waveSummary +

  // 3. From Discoveries (filtered by relevance)
  '\n\n## Relevant Discoveries\n' + relevantDiscoveries
```
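A runnable version of the assembly above, assuming tasks are keyed by `id` and the summary and discovery strings are already rendered (the function name and argument shapes are illustrative, not the engine's actual interface):

```javascript
// Assemble prev_context from upstream findings, the previous wave summary,
// and any relevant discoveries; the summary section only applies from Wave 2 on.
function buildPrevContext(contextFrom, tasks, waveNumber, waveSummary, relevantDiscoveries) {
  const parts = [contextFrom.map(id => tasks[id].findings).join('\n\n')];
  if (waveNumber > 1 && waveSummary) {
    parts.push('## Previous Wave Summary\n' + waveSummary);
  }
  if (relevantDiscoveries) {
    parts.push('## Relevant Discoveries\n' + relevantDiscoveries);
  }
  return parts.join('\n\n');
}
```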
---

## Output Schema (Agent Report)

```json
{
  "id": "doc-003",
  "status": "completed",
  "findings": "Identified 4 core components: Parser, Analyzer, Generator, Exporter. Data flows left-to-right with feedback loop for error recovery. Main entry point is src/index.ts.",
  "doc_path": "docs/02-architecture/system-architecture.md",
  "key_discoveries": "[{\"name\":\"Parser\",\"type\":\"component\",\"file\":\"src/parser/index.ts\",\"description\":\"Transforms input to AST\"}]",
  "error": ""
}
```

---

## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in task list | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No cycles | Topological sort completes | "Circular dependency involving: {ids}" |
| Context valid | All context_from IDs in earlier or same wave | "Invalid context_from: {id}" |
| Valid doc_type | doc_type ∈ enum values | "Invalid doc_type: {type}" |
| Valid priority | priority ∈ {high,medium,low} | "Invalid priority: {priority}" |
| Status enum | status ∈ {pending,completed,failed,skipped} | "Invalid status" |
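The first three rules can be checked in a few lines. A sketch assuming the same task shape as elsewhere in this schema, with `deps` as a semicolon-separated string (`validateTasks` is a hypothetical helper):

```javascript
// Check unique IDs, known dependencies, and no self-dependencies;
// returns the list of error messages (empty means valid).
function validateTasks(tasks) {
  const errors = [];
  const ids = new Set();

  for (const task of tasks) {
    if (ids.has(task.id)) errors.push(`Duplicate task ID: ${task.id}`);
    ids.add(task.id);
  }

  for (const task of tasks) {
    const deps = task.deps ? task.deps.split(';').filter(Boolean) : [];
    for (const dep of deps) {
      if (dep === task.id) errors.push(`Self-dependency: ${task.id}`);
      else if (!ids.has(dep)) errors.push(`Unknown dependency: ${dep}`);
    }
  }

  return errors;
}
```

Cycle detection is not repeated here; it falls out of the topological sort, which fails when unassigned tasks remain.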
---

## Wave Summary Schema

Each wave generates a summary file: `wave-summaries/wave-{N}-summary.md`

```markdown
# Wave {N} Summary

**Completed Tasks**: {count}

## By Document Type

### {doc_type}

#### {task.title}

{task.findings (truncated to 300 chars)}

**Key Points**:
- {discovery.name}: {discovery.description}
...

## Context for Wave {N+1}

Next wave will focus on: {next_wave_task_titles}
```
@@ -0,0 +1,195 @@
# Quality Standards for Documentation

Quality assessment criteria for generated documentation.

---

## Quality Dimensions

### 1. Completeness (30%)

| Score | Criteria |
|-------|----------|
| 100% | All required sections present, all topics covered |
| 80% | All sections present, some topics incomplete |
| 60% | Most sections present, some missing |
| 40% | Several sections missing |
| 0% | Major sections absent |

**Checklist per Document**:
- [ ] All sections in `doc_sections` are present
- [ ] Each section has substantial content (not just placeholders)
- [ ] Cross-references to related documentation
- [ ] Code examples where applicable
- [ ] Diagrams where applicable

### 2. Accuracy (25%)

| Score | Criteria |
|-------|----------|
| 100% | All information correct, file references accurate |
| 80% | Minor inaccuracies, most references correct |
| 60% | Some errors, most file references work |
| 40% | Multiple errors, broken references |
| 0% | Significant inaccuracies |

**Checklist**:
- [ ] Code examples compile/run correctly
- [ ] File paths and line numbers are accurate
- [ ] API signatures match actual implementation
- [ ] Configuration examples are valid
- [ ] Dependencies and versions are correct

### 3. Clarity (25%)

| Score | Criteria |
|-------|----------|
| 100% | Clear, concise, well-organized, easy to navigate |
| 80% | Clear, minor organization issues |
| 60% | Understandable but could be clearer |
| 40% | Confusing in places |
| 0% | Unclear, hard to follow |

**Checklist**:
- [ ] Logical flow and organization
- [ ] Clear headings and subheadings
- [ ] Appropriate level of detail
- [ ] Well-formatted code blocks
- [ ] Readable diagrams

### 4. Context Integration (20%)

| Score | Criteria |
|-------|----------|
| 100% | Builds on previous waves, references earlier findings |
| 80% | Good context usage, minor gaps |
| 60% | Some context usage, could be more |
| 40% | Limited context usage |
| 0% | No context from previous waves |

**Checklist**:
- [ ] References findings from context_from tasks
- [ ] Uses discoveries.ndjson
- [ ] Cross-references related documentation
- [ ] Builds on previous wave outputs
- [ ] Maintains consistency across documents

---

## Quality Gates (Per-Wave)

| Wave | Gate Criteria | Required Completion |
|------|--------------|---------------------|
| 1 | Overview + Tech Stack + Directory | 3/3 completed |
| 2 | Architecture docs with diagrams | ≥3/4 completed |
| 3 | Implementation details with code | ≥3/4 completed |
| 4 | Feature + Usage + API docs | ≥2/3 completed |
| 5 | Synthesis with README | 3/3 completed |

---

## Document Quality Metrics

| Metric | Target | Measurement |
|--------|--------|-------------|
| Section coverage | 100% | Required sections present |
| Code example density | ≥1 per major topic | Code blocks per section |
| File reference accuracy | ≥95% | Valid file:line references |
| Cross-reference density | ≥2 per document | Links to other docs |
| Diagram presence | Required for architecture | Diagrams in arch docs |

---

## Issue Severity Levels

### Critical (Must Fix)
- Missing required section
- Broken cross-references
- Incorrect API documentation
- Code examples that don't work
- File references to non-existent files

### High (Should Fix)
- Incomplete section content
- Minor inaccuracies
- Missing diagrams in architecture docs
- Outdated dependency versions
- Unclear explanations

### Medium (Nice to Fix)
- Formatting issues
- Suboptimal organization
- Missing optional content
- Minor typos

### Low (Optional)
- Style consistency
- Additional examples
- More detailed explanations

---

## Quality Scoring

### Per-Document Score

```
Score = (Completeness × 0.30) + (Accuracy × 0.25) + (Clarity × 0.25) + (Context × 0.20)
```

### Overall Session Score

```
Score = Σ(Document Score × Weight) / Σ(Weights)
```
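Both formulas translate directly to code. A sketch assuming dimension scores on a 0–100 scale and each document carrying the weight of its wave (function names are illustrative):

```javascript
// Weighted sum of the four quality dimensions (weights total 1.0).
function documentScore({ completeness, accuracy, clarity, context }) {
  return completeness * 0.30 + accuracy * 0.25 + clarity * 0.25 + context * 0.20;
}

// Weighted average of document scores across the session.
function sessionScore(docs) {
  const totalWeight = docs.reduce((sum, d) => sum + d.weight, 0);
  return docs.reduce((sum, d) => sum + d.score * d.weight, 0) / totalWeight;
}
```

A document scoring 100 on every dimension scores 100 overall, and heavier waves (architecture, synthesis) pull the session score proportionally harder.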
**Weights by Wave**:

| Wave | Weight | Rationale |
|------|--------|-----------|
| 1 | 1.0 | Foundation |
| 2 | 1.5 | Core architecture |
| 3 | 1.5 | Implementation depth |
| 4 | 1.0 | User-facing |
| 5 | 2.0 | Synthesis quality |

### Quality Thresholds

| Score | Status | Action |
|-------|--------|--------|
| ≥85% | PASS | Ready for delivery |
| 70-84% | REVIEW | Flag for improvement |
| <70% | FAIL | Block, require re-generation |

---

## Documentation Style Guide

### Markdown Standards
- Use ATX headings (# ## ###)
- Code blocks with language hints
- Mermaid for diagrams
- Tables for structured data

### Code Reference Format
```
src/module/file.ts:42-56
```

### Cross-Reference Format
```markdown
See: [Document Title](path/to/doc.md#section)
```

### LaTeX Format (when formula_support=true)
```markdown
$$
\frac{\partial u}{\partial t} = \nabla^2 u
$$
```

### Diagram Format
```mermaid
graph TD
    A[Start] --> B[Process]
    B --> C[End]
```
@@ -11,11 +11,13 @@ import {
   ChevronDown,
   ChevronUp,
   Lock,
+  Link,
 } from 'lucide-react';
 import { Card } from '@/components/ui/Card';
 import { Badge } from '@/components/ui/Badge';
 import { cn } from '@/lib/utils';
 import type { McpServer } from '@/lib/api';
+import { isHttpMcpServer, isStdioMcpServer } from '@/lib/api';
 
 // ========== Types ==========
@@ -40,6 +42,20 @@ export function CodexMcpCard({
 }: CodexMcpCardProps) {
   const { formatMessage } = useIntl();
 
+  const isHttp = isHttpMcpServer(server);
+  const isStdio = isStdioMcpServer(server);
+
+  // Get display text for server summary line
+  const getServerSummary = () => {
+    if (isHttp) {
+      return server.url;
+    }
+    if (isStdio) {
+      return `${server.command || ''} ${server.args?.join(' ') || ''}`.trim();
+    }
+    return '';
+  };
+
   return (
     <Card className={cn('overflow-hidden', !enabled && 'opacity-60')}>
       {/* Header */}
@@ -63,6 +79,13 @@ export function CodexMcpCard({
             <span className="text-sm font-medium text-foreground">
               {server.name}
             </span>
+            {/* Transport type badge */}
+            {isHttp && (
+              <Badge variant="outline" className="text-xs text-blue-600 border-blue-300">
+                <Link className="w-3 h-3 mr-1" />
+                {formatMessage({ id: 'mcp.transport.http' })}
+              </Badge>
+            )}
             {/* Read-only badge */}
             <Badge variant="secondary" className="text-xs flex items-center gap-1">
               <Lock className="w-3 h-3" />
@@ -75,8 +98,8 @@ export function CodexMcpCard({
             </Badge>
           )}
         </div>
-        <p className="text-sm text-muted-foreground mt-1 font-mono">
-          {server.command} {server.args?.join(' ') || ''}
+        <p className="text-sm text-muted-foreground mt-1 font-mono truncate max-w-md" title={getServerSummary()}>
+          {getServerSummary()}
         </p>
       </div>
     </div>
@@ -100,6 +123,61 @@ export function CodexMcpCard({
       {/* Expanded Content */}
       {isExpanded && (
         <div className="border-t border-border p-4 space-y-3 bg-muted/30">
+          {/* HTTP Server Details */}
+          {isHttp && (
+            <>
+              {/* URL */}
+              <div>
+                <p className="text-xs text-muted-foreground mb-1">{formatMessage({ id: 'mcp.http.url' })}</p>
+                <code className="text-sm bg-background px-2 py-1 rounded block overflow-x-auto break-all">
+                  {server.url}
+                </code>
+              </div>
+
+              {/* HTTP Headers - show count only for read-only card */}
+              {(server.headers || server.httpHeaders || server.bearerTokenEnvVar) && (
+                <div>
+                  <p className="text-xs text-muted-foreground mb-1">{formatMessage({ id: 'mcp.http.headers' })}</p>
+                  <div className="space-y-1">
+                    {server.headers && Object.entries(server.headers).map(([name]) => (
+                      <div key={name} className="flex items-center gap-2 text-sm">
+                        <Badge variant="secondary" className="font-mono">{name}</Badge>
+                        <span className="text-muted-foreground">=</span>
+                        <code className="text-xs bg-background px-2 py-1 rounded flex-1 overflow-x-auto">
+                          ****
+                        </code>
+                      </div>
+                    ))}
+                    {server.httpHeaders && Object.entries(server.httpHeaders).map(([name]) => (
+                      <div key={name} className="flex items-center gap-2 text-sm">
+                        <Badge variant="secondary" className="font-mono">{name}</Badge>
+                        <span className="text-muted-foreground">=</span>
+                        <code className="text-xs bg-background px-2 py-1 rounded flex-1 overflow-x-auto">
+                          ****
+                        </code>
+                      </div>
+                    ))}
+                    {server.bearerTokenEnvVar && (
+                      <div className="flex items-center gap-2 text-sm">
+                        <Badge variant="secondary" className="font-mono">Authorization</Badge>
+                        <span className="text-muted-foreground">=</span>
+                        <code className="text-xs bg-background px-2 py-1 rounded flex-1 overflow-x-auto">
+                          Bearer $${server.bearerTokenEnvVar}
+                        </code>
+                        <Badge variant="outline" className="text-xs text-blue-500">
+                          {formatMessage({ id: 'mcp.http.envVar' })}
+                        </Badge>
+                      </div>
+                    )}
+                  </div>
+                </div>
+              )}
+            </>
+          )}
+
+          {/* STDIO Server Details */}
+          {isStdio && (
+            <>
           {/* Command details */}
           <div>
             <p className="text-xs text-muted-foreground mb-1">{formatMessage({ id: 'mcp.command' })}</p>
@@ -113,7 +191,7 @@ export function CodexMcpCard({
           <div>
             <p className="text-xs text-muted-foreground mb-1">{formatMessage({ id: 'mcp.args' })}</p>
             <div className="flex flex-wrap gap-1">
-              {server.args.map((arg, idx) => (
+              {server.args.map((arg: string, idx: number) => (
                 <Badge key={idx} variant="outline" className="font-mono text-xs">
                   {arg}
                 </Badge>
@@ -139,6 +217,8 @@ export function CodexMcpCard({
             </div>
           </div>
           )}
+            </>
+          )}
 
           {/* Read-only notice */}
           <div className="flex items-center gap-2 px-3 py-2 bg-muted/50 rounded-md border border-border">
@@ -15,6 +15,9 @@ import {
   Edit3,
   Trash2,
   Lock,
+  Link,
+  Eye,
+  EyeOff,
 } from 'lucide-react';
 import {
   AlertDialog,
@@ -32,6 +35,7 @@ import { Card } from '@/components/ui/Card';
 import { Badge } from '@/components/ui/Badge';
 import { cn } from '@/lib/utils';
 import type { McpServer } from '@/lib/api';
+import { isHttpMcpServer, isStdioMcpServer } from '@/lib/api';
 
 // ========== Types ==========
@@ -54,6 +58,14 @@ export interface CodexMcpEditableCardProps {
 
 // ========== Component ==========
 
+/**
+ * Mask a header value for display (show first 4 chars + ***)
+ */
+function maskHeaderValue(value: string): string {
+  if (!value || value.length <= 4) return '****';
+  return value.substring(0, 4) + '****';
+}
+
 export function CodexMcpEditableCard({
   server,
   enabled,
@@ -67,6 +79,58 @@ export function CodexMcpEditableCard({
 }: CodexMcpEditableCardProps) {
   const { formatMessage } = useIntl();
   const [isConfirmDeleteOpen, setIsConfirmDeleteOpen] = useState(false);
+  const [showHeaderValues, setShowHeaderValues] = useState<Record<string, boolean>>({});
+
+  const isHttp = isHttpMcpServer(server);
+  const isStdio = isStdioMcpServer(server);
+
+  // Get display text for server summary line
+  const getServerSummary = () => {
+    if (isHttp) {
+      return server.url;
+    }
+    return `${server.command || ''} ${server.args?.join(' ') || ''}`.trim();
+  };
+
+  // Get all headers for HTTP server (Codex format: http_headers, env_http_headers, bearerTokenEnvVar)
+  const getHttpHeaders = (): Array<{ name: string; value: string; isEnvVar?: boolean }> => {
+    if (!isHttp) return [];
+
+    const headers: Array<{ name: string; value: string; isEnvVar?: boolean }> = [];
+
+    // Codex format: httpHeaders object
+    if (server.httpHeaders) {
+      Object.entries(server.httpHeaders).forEach(([name, value]) => {
+        headers.push({ name, value });
+      });
+    }
+
+    // Codex format: bearerTokenEnvVar (adds Authorization header)
+    if (server.bearerTokenEnvVar) {
+      headers.push({
+        name: 'Authorization',
+        value: `Bearer $${server.bearerTokenEnvVar}`,
+        isEnvVar: true,
+      });
+    }
+
+    // Claude format: headers object (for compatibility)
+    if (server.headers) {
+      Object.entries(server.headers).forEach(([name, value]) => {
+        headers.push({ name, value });
+      });
+    }
+
+    return headers;
+  };
+
+  // Toggle header value visibility
+  const toggleHeaderValue = (headerName: string) => {
+    setShowHeaderValues(prev => ({
+      ...prev,
+      [headerName]: !prev[headerName]
+    }));
+  };
+
   // Handle toggle with optimistic update
   const handleToggle = async () => {
@@ -111,6 +175,13 @@ export function CodexMcpEditableCard({
             <span className="text-sm font-medium text-foreground">
               {server.name}
             </span>
+            {/* Transport type badge */}
+            {isHttp && (
+              <Badge variant="outline" className="text-xs text-blue-600 border-blue-300">
+                <Link className="w-3 h-3 mr-1" />
+                {formatMessage({ id: 'mcp.transport.http' })}
+              </Badge>
+            )}
             {isEditable ? (
               <>
                 {/* Editable badge with actions */}
@@ -141,8 +212,8 @@ export function CodexMcpEditableCard({
               </>
             )}
         </div>
-        <p className="text-sm text-muted-foreground mt-1 font-mono">
-          {server.command} {server.args?.join(' ') || ''}
+        <p className="text-sm text-muted-foreground mt-1 font-mono truncate max-w-md" title={getServerSummary()}>
+          {getServerSummary()}
        </p>
      </div>
    </div>
@@ -234,6 +305,60 @@ export function CodexMcpEditableCard({
       {/* Expanded Content */}
       {isExpanded && (
         <div className="border-t border-border p-4 space-y-3 bg-muted/30">
+          {/* HTTP Server Details */}
+          {isHttp && (
+            <>
+              {/* URL */}
+              <div>
+                <p className="text-xs text-muted-foreground mb-1">{formatMessage({ id: 'mcp.http.url' })}</p>
+                <code className="text-sm bg-background px-2 py-1 rounded block overflow-x-auto break-all">
+                  {server.url}
+                </code>
+              </div>
+
+              {/* HTTP Headers */}
+              {getHttpHeaders().length > 0 && (
+                <div>
+                  <p className="text-xs text-muted-foreground mb-1">{formatMessage({ id: 'mcp.http.headers' })}</p>
+                  <div className="space-y-1">
+                    {getHttpHeaders().map((header) => (
+                      <div key={header.name} className="flex items-center gap-2 text-sm">
+                        <Badge variant="secondary" className="font-mono">{header.name}</Badge>
+                        <span className="text-muted-foreground">=</span>
+                        <code className="text-xs bg-background px-2 py-1 rounded flex-1 overflow-x-auto">
+                          {showHeaderValues[header.name] ? header.value : maskHeaderValue(header.value)}
+                        </code>
+                        <Button
+                          variant="ghost"
+                          size="sm"
+                          className="h-6 w-6 p-0"
+                          onClick={() => toggleHeaderValue(header.name)}
+                          title={showHeaderValues[header.name]
+                            ? formatMessage({ id: 'mcp.http.hideValue' })
+                            : formatMessage({ id: 'mcp.http.showValue' })}
+                        >
+                          {showHeaderValues[header.name] ? (
+                            <EyeOff className="w-3 h-3 text-muted-foreground" />
+                          ) : (
+                            <Eye className="w-3 h-3 text-muted-foreground" />
+                          )}
+                        </Button>
+                        {header.isEnvVar && (
+                          <Badge variant="outline" className="text-xs text-blue-500">
+                            {formatMessage({ id: 'mcp.http.envVar' })}
+                          </Badge>
+                        )}
+                      </div>
+                    ))}
+                  </div>
+                </div>
+              )}
+            </>
+          )}
+
+          {/* STDIO Server Details */}
+          {isStdio && (
+            <>
           {/* Command details */}
           <div>
             <p className="text-xs text-muted-foreground mb-1">{formatMessage({ id: 'mcp.command' })}</p>
@@ -273,6 +398,8 @@ export function CodexMcpEditableCard({
             </div>
           </div>
           )}
+            </>
+          )}
 
           {/* Notice based on editable state */}
           <div className={cn(
@@ -17,7 +17,7 @@ import {
 import { Checkbox } from '@/components/ui/Checkbox';
 import { Badge } from '@/components/ui/Badge';
 import { useMcpServers } from '@/hooks';
-import { crossCliCopy, fetchCodexMcpServers } from '@/lib/api';
+import { crossCliCopy, fetchCodexMcpServers, isHttpMcpServer, isStdioMcpServer } from '@/lib/api';
 import { cn } from '@/lib/utils';
 import { useWorkflowStore, selectProjectPath } from '@/stores/workflowStore';
 
@@ -41,7 +41,8 @@ export interface CrossCliCopyButtonProps {
 
 interface ServerCheckboxItem {
 name: string;
-command: string;
+/** Display text - command for STDIO, URL for HTTP */
+displayText: string;
 enabled: boolean;
 selected: boolean;
 }
@@ -78,7 +79,7 @@ export function CrossCliCopyButton({
 setServerItems(
 servers.map((s) => ({
 name: s.name,
-command: s.command,
+displayText: isHttpMcpServer(s) ? s.url : (isStdioMcpServer(s) ? s.command : ''),
 enabled: s.enabled,
 selected: false,
 }))
@@ -91,7 +92,7 @@ export function CrossCliCopyButton({
 setServerItems(
 (codex.servers ?? []).map((s) => ({
 name: s.name,
-command: s.command,
+displayText: isHttpMcpServer(s) ? s.url : (isStdioMcpServer(s) ? s.command : ''),
 enabled: s.enabled,
 selected: false,
 }))
@@ -281,7 +282,7 @@ export function CrossCliCopyButton({
 )}
 </div>
 <p className="text-xs text-muted-foreground font-mono truncate">
-{server.command}
+{server.displayText}
 </p>
 </label>
 </div>
@@ -10,7 +10,7 @@ import { Checkbox } from '@/components/ui/Checkbox';
 import { Badge } from '@/components/ui/Badge';
 import { Button } from '@/components/ui/Button';
 import { useMcpServers } from '@/hooks';
-import { crossCliCopy, fetchCodexMcpServers } from '@/lib/api';
+import { crossCliCopy, fetchCodexMcpServers, isHttpMcpServer, isStdioMcpServer } from '@/lib/api';
 import { cn } from '@/lib/utils';
 import { useWorkflowStore, selectProjectPath } from '@/stores/workflowStore';
 
@@ -25,7 +25,8 @@ export interface CrossCliSyncPanelProps {
 
 interface ServerCheckboxItem {
 name: string;
-command: string;
+/** Display text - command for STDIO, URL for HTTP */
+displayText: string;
 enabled: boolean;
 selected: boolean;
 }
@@ -64,7 +65,7 @@ export function CrossCliSyncPanel({ onSuccess, className }: CrossCliSyncPanelPro
 setCodexServers(
 (codex.servers ?? []).map((s) => ({
 name: s.name,
-command: s.command,
+displayText: isHttpMcpServer(s) ? s.url : (isStdioMcpServer(s) ? s.command : ''),
 enabled: s.enabled,
 selected: false,
 }))
@@ -323,7 +324,7 @@ export function CrossCliSyncPanel({ onSuccess, className }: CrossCliSyncPanelPro
 )}
 </div>
 <p className="text-xs text-muted-foreground font-mono truncate">
-{server.command}
+{server.displayText}
 </p>
 </label>
 </div>
@@ -421,7 +422,7 @@ export function CrossCliSyncPanel({ onSuccess, className }: CrossCliSyncPanelPro
 )}
 </div>
 <p className="text-xs text-muted-foreground font-mono truncate">
-{server.command}
+{server.displayText}
 </p>
 </label>
 </div>
@@ -32,6 +32,8 @@ import {
 saveMcpTemplate,
 type McpServer,
 type McpProjectConfigType,
+isStdioMcpServer,
+isHttpMcpServer,
 } from '@/lib/api';
 import { mcpServersKeys, useMcpTemplates } from '@/hooks';
 import { cn } from '@/lib/utils';
@@ -122,11 +124,6 @@ function HttpHeadersInput({ headers, onChange, disabled }: HttpHeadersInputProps
 });
 };
 
-const maskValue = (value: string) => {
-if (!value) return '';
-return '*'.repeat(Math.min(value.length, 8));
-};
-
 return (
 <div className="space-y-2">
 {headers.map((header) => (
@@ -246,9 +243,8 @@ export function McpServerDialog({
 // Helper to detect transport type from server data
 const detectTransportType = useCallback((serverData: McpServer | undefined): McpTransportType => {
 if (!serverData) return 'stdio';
-// If server has url field (from extended McpServer type), it's HTTP
-const extendedServer = serverData as McpServer & { url?: string };
-if (extendedServer.url) return 'http';
+// Use type guard to check for HTTP server
+if (isHttpMcpServer(serverData)) return 'http';
 return 'stdio';
 }, []);
 
@@ -258,17 +254,12 @@ export function McpServerDialog({
 const detectedType = detectTransportType(server);
 setTransportType(detectedType);
 
-const extendedServer = server as McpServer & {
-url?: string;
-http_headers?: Record<string, string>;
-env_http_headers?: Record<string, string>;
-bearer_token_env_var?: string;
-};
-
-// Parse HTTP headers if present
+// Parse HTTP headers if present (for HTTP servers)
 const httpHeaders: HttpHeader[] = [];
-if (extendedServer.http_headers) {
-Object.entries(extendedServer.http_headers).forEach(([name, value], idx) => {
+if (isHttpMcpServer(server)) {
+// HTTP server - extract headers
+if (server.httpHeaders) {
+Object.entries(server.httpHeaders).forEach(([name, value], idx) => {
 httpHeaders.push({
 id: `header-http-${idx}`,
 name,
@@ -277,31 +268,42 @@ export function McpServerDialog({
 });
 });
 }
-if (extendedServer.env_http_headers) {
-Object.entries(extendedServer.env_http_headers).forEach(([name, envVar], idx) => {
+if (server.envHttpHeaders) {
+// envHttpHeaders is an array of header names that get values from env vars
+server.envHttpHeaders.forEach((headerName, idx) => {
 httpHeaders.push({
 id: `header-env-${idx}`,
-name,
-value: envVar,
+name: headerName,
+value: '', // Env var name is not stored in value
 isEnvVar: true,
 });
 });
 }
+}
 
+// Get STDIO fields safely using type guard
+const stdioCommand = isStdioMcpServer(server) ? server.command : '';
+const stdioArgs = isStdioMcpServer(server) ? (server.args || []) : [];
+const stdioEnv = isStdioMcpServer(server) ? (server.env || {}) : {};
+
+// Get HTTP fields safely using type guard
+const httpUrl = isHttpMcpServer(server) ? server.url : '';
+const httpBearerToken = isHttpMcpServer(server) ? (server.bearerTokenEnvVar || '') : '';
+
 setFormData({
 name: server.name,
-command: server.command || '',
-args: server.args || [],
-env: server.env || {},
-url: extendedServer.url || '',
+command: stdioCommand,
+args: stdioArgs,
+env: stdioEnv,
+url: httpUrl,
 headers: httpHeaders,
-bearerTokenEnvVar: extendedServer.bearer_token_env_var || '',
+bearerTokenEnvVar: httpBearerToken,
 scope: server.scope,
 enabled: server.enabled,
 });
-setArgsInput((server.args || []).join(', '));
+setArgsInput(stdioArgs.join(', '));
 setEnvInput(
-Object.entries(server.env || {})
+Object.entries(stdioEnv)
 .map(([k, v]) => `${k}=${v}`)
 .join('\n')
 );
@@ -488,50 +490,46 @@ export function McpServerDialog({
 return;
 }
 
-// Build server config based on transport type
-const serverConfig: McpServer & {
-url?: string;
-http_headers?: Record<string, string>;
-env_http_headers?: Record<string, string>;
-bearer_token_env_var?: string;
-} = {
+// Build server config based on transport type using discriminated union
+let serverConfig: McpServer;
+
+if (transportType === 'stdio') {
+serverConfig = {
 name: formData.name,
+transport: 'stdio',
+command: formData.command,
+args: formData.args.length > 0 ? formData.args : undefined,
+env: Object.keys(formData.env).length > 0 ? formData.env : undefined,
 scope: formData.scope,
 enabled: formData.enabled,
 };
 
-if (transportType === 'stdio') {
-serverConfig.command = formData.command;
-serverConfig.args = formData.args;
-serverConfig.env = formData.env;
 } else {
-// HTTP transport
-serverConfig.url = formData.url;
-serverConfig.command = ''; // Empty command for HTTP servers
-
-// Separate headers into static and env-based
+// HTTP transport - separate headers into static and env-based
 const httpHeaders: Record<string, string> = {};
-const envHttpHeaders: Record<string, string> = {};
+const envHttpHeaders: string[] = [];
 
 formData.headers.forEach((h) => {
 if (h.name.trim()) {
 if (h.isEnvVar) {
-envHttpHeaders[h.name.trim()] = h.value.trim();
+// For env-based headers, store the header name that will be populated from env var
+envHttpHeaders.push(h.name.trim());
 } else {
 httpHeaders[h.name.trim()] = h.value.trim();
 }
 }
 });
 
-if (Object.keys(httpHeaders).length > 0) {
-serverConfig.http_headers = httpHeaders;
-}
-if (Object.keys(envHttpHeaders).length > 0) {
-serverConfig.env_http_headers = envHttpHeaders;
-}
-if (formData.bearerTokenEnvVar.trim()) {
-serverConfig.bearer_token_env_var = formData.bearerTokenEnvVar.trim();
-}
+serverConfig = {
+name: formData.name,
+transport: 'http',
+url: formData.url,
+headers: Object.keys(httpHeaders).length > 0 ? httpHeaders : undefined,
+httpHeaders: Object.keys(httpHeaders).length > 0 ? httpHeaders : undefined,
+envHttpHeaders: envHttpHeaders.length > 0 ? envHttpHeaders : undefined,
+bearerTokenEnvVar: formData.bearerTokenEnvVar.trim() || undefined,
+scope: formData.scope,
+enabled: formData.enabled,
+};
 }
 
 // Save as template if checked (only for STDIO)
@@ -19,12 +19,27 @@ import {
 import { useProjectOperations, useMcpServerMutations } from '@/hooks';
 import { cn } from '@/lib/utils';
 import type { McpServer } from '@/lib/api';
+import { isHttpMcpServer, isStdioMcpServer } from '@/lib/api';
 
 // ========== Types ==========
 
-export interface OtherProjectServer extends McpServer {
+/**
+ * Server from another project
+ * Contains base server fields plus project metadata
+ */
+export interface OtherProjectServer {
+name: string;
+enabled: boolean;
+/** Display text - command for STDIO, URL for HTTP */
+displayText: string;
+/** Transport type */
+transport: 'stdio' | 'http';
+/** Project path */
 projectPath: string;
+/** Project name */
 projectName: string;
+/** Original server data */
+originalServer: McpServer;
 }
 
 export interface OtherProjectsSectionProps {
@@ -63,12 +78,17 @@ export function OtherProjectsSection({
 
 for (const [path, serverList] of Object.entries(response.servers)) {
 const projectName = path.split(/[/\\]/).filter(Boolean).pop() || path;
-for (const server of (serverList as Omit<McpServer, 'scope'>[])) {
+for (const server of (serverList as McpServer[])) {
 servers.push({
-...server,
-scope: 'project',
+name: server.name,
+enabled: server.enabled,
+displayText: isHttpMcpServer(server)
+? server.url
+: (isStdioMcpServer(server) ? server.command : ''),
+transport: server.transport,
 projectPath: path,
 projectName,
+originalServer: server,
 });
 }
 }
@@ -88,14 +108,14 @@ export function OtherProjectsSection({
 // Generate a unique name by combining project name and server name
 const uniqueName = `${server.projectName}-${server.name}`.toLowerCase().replace(/\s+/g, '-');
 
-await createServer({
+// Create server based on transport type
+const serverToCreate: McpServer = {
+...server.originalServer,
 name: uniqueName,
-command: server.command,
-args: server.args,
-env: server.env,
 scope: 'project',
-enabled: server.enabled,
-});
+};
+
+await createServer(serverToCreate);
 
 onImportSuccess?.(uniqueName, server.projectPath);
 } catch (error) {
@@ -203,7 +223,7 @@ export function OtherProjectsSection({
 )}
 </div>
 <p className="text-xs text-muted-foreground font-mono truncate">
-{server.command} {(server.args || []).join(' ')}
+{server.displayText}
 </p>
 <p className="text-xs text-muted-foreground truncate">
 <span className="font-medium">{server.projectName}</span>
@@ -27,6 +27,17 @@
 "command": "Command",
 "args": "Arguments",
 "env": "Environment Variables",
+"transport": {
+"http": "HTTP",
+"stdio": "STDIO"
+},
+"http": {
+"url": "Server URL",
+"headers": "HTTP Headers",
+"showValue": "Show value",
+"hideValue": "Hide value",
+"envVar": "Env Var"
+},
 "codex": {
 "configPath": "Config Path",
 "readOnly": "Read-only",
@@ -22,6 +22,9 @@ import {
 ChevronUp,
 BookmarkPlus,
 AlertTriangle,
+Link,
+Eye,
+EyeOff,
 } from 'lucide-react';
 import { Card } from '@/components/ui/Card';
 import { Button } from '@/components/ui/Button';
@@ -54,6 +57,9 @@ import {
 type McpServer,
 type McpServerConflict,
 type CcwMcpConfig,
+type HttpMcpServer,
+isHttpMcpServer,
+isStdioMcpServer,
 } from '@/lib/api';
 import { cn } from '@/lib/utils';
 
@@ -70,8 +76,68 @@ interface McpServerCardProps {
 conflictInfo?: McpServerConflict;
 }
 
+/**
+ * Mask a header value for display (show first 4 chars + ***)
+ */
+function maskHeaderValue(value: string): string {
+if (!value || value.length <= 4) return '****';
+return value.substring(0, 4) + '****';
+}
+
 function McpServerCard({ server, isExpanded, onToggleExpand, onToggle, onEdit, onDelete, onSaveAsTemplate, conflictInfo }: McpServerCardProps) {
 const { formatMessage } = useIntl();
+const [showHeaderValues, setShowHeaderValues] = useState<Record<string, boolean>>({});
+
+const isHttp = isHttpMcpServer(server);
+const isStdio = isStdioMcpServer(server);
+
+// Get display text for server summary line
+const getServerSummary = () => {
+if (isHttp) {
+return server.url;
+}
+return `${server.command} ${server.args?.join(' ') || ''}`.trim();
+};
+
+// Get all headers for HTTP server (combine Claude + Codex formats)
+const getHttpHeaders = (): Array<{ name: string; value: string; isEnvVar?: boolean }> => {
+if (!isHttp) return [];
+
+const headers: Array<{ name: string; value: string; isEnvVar?: boolean }> = [];
+
+// Claude format: headers object
+if (server.headers) {
+Object.entries(server.headers).forEach(([name, value]) => {
+headers.push({ name, value });
+});
+}
+
+// Codex format: httpHeaders object
+if (server.httpHeaders) {
+Object.entries(server.httpHeaders).forEach(([name, value]) => {
+headers.push({ name, value });
+});
+}
+
+// Codex format: bearerTokenEnvVar (adds Authorization header)
+if (server.bearerTokenEnvVar) {
+headers.push({
+name: 'Authorization',
+value: `Bearer $${server.bearerTokenEnvVar}`,
+isEnvVar: true,
+});
+}
+
+return headers;
+};
+
+// Toggle header value visibility
+const toggleHeaderValue = (headerName: string) => {
+setShowHeaderValues(prev => ({
+...prev,
+[headerName]: !prev[headerName]
+}));
+};
 
 return (
 <Card className={cn('overflow-hidden', !server.enabled && 'opacity-60')}>
@@ -96,6 +162,13 @@ function McpServerCard({ server, isExpanded, onToggleExpand, onToggle, onEdit, o
 <span className="text-sm font-medium text-foreground">
 {server.name}
 </span>
+{/* Transport type badge */}
+{isHttp && (
+<Badge variant="outline" className="text-xs text-blue-600 border-blue-300">
+<Link className="w-3 h-3 mr-1" />
+{formatMessage({ id: 'mcp.transport.http' })}
+</Badge>
+)}
 <Badge variant={server.scope === 'global' ? 'default' : 'secondary'} className="text-xs">
 {server.scope === 'global' ? (
 <><Globe className="w-3 h-3 mr-1" />{formatMessage({ id: 'mcp.scope.global' })}</>
@@ -115,8 +188,8 @@ function McpServerCard({ server, isExpanded, onToggleExpand, onToggle, onEdit, o
 </Badge>
 )}
 </div>
-<p className="text-sm text-muted-foreground mt-1 font-mono">
-{server.command} {server.args?.join(' ') || ''}
+<p className="text-sm text-muted-foreground mt-1 font-mono truncate max-w-md" title={getServerSummary()}>
+{getServerSummary()}
 </p>
 </div>
 </div>
@@ -178,6 +251,60 @@ function McpServerCard({ server, isExpanded, onToggleExpand, onToggle, onEdit, o
 {/* Expanded Content */}
 {isExpanded && (
 <div className="border-t border-border p-4 space-y-3 bg-muted/30">
+{/* HTTP Server Details */}
+{isHttp && (
+<>
+{/* URL */}
+<div>
+<p className="text-xs text-muted-foreground mb-1">{formatMessage({ id: 'mcp.http.url' })}</p>
+<code className="text-sm bg-background px-2 py-1 rounded block overflow-x-auto break-all">
+{server.url}
+</code>
+</div>
+
+{/* HTTP Headers */}
+{getHttpHeaders().length > 0 && (
+<div>
+<p className="text-xs text-muted-foreground mb-1">{formatMessage({ id: 'mcp.http.headers' })}</p>
+<div className="space-y-1">
+{getHttpHeaders().map((header) => (
+<div key={header.name} className="flex items-center gap-2 text-sm">
+<Badge variant="secondary" className="font-mono">{header.name}</Badge>
+<span className="text-muted-foreground">=</span>
+<code className="text-xs bg-background px-2 py-1 rounded flex-1 overflow-x-auto">
+{showHeaderValues[header.name] ? header.value : maskHeaderValue(header.value)}
+</code>
+<Button
+variant="ghost"
+size="sm"
+className="h-6 w-6 p-0"
+onClick={() => toggleHeaderValue(header.name)}
+title={showHeaderValues[header.name]
+? formatMessage({ id: 'mcp.http.hideValue' })
+: formatMessage({ id: 'mcp.http.showValue' })}
+>
+{showHeaderValues[header.name] ? (
+<EyeOff className="w-3 h-3 text-muted-foreground" />
+) : (
+<Eye className="w-3 h-3 text-muted-foreground" />
+)}
+</Button>
+{header.isEnvVar && (
+<Badge variant="outline" className="text-xs text-blue-500">
+{formatMessage({ id: 'mcp.http.envVar' })}
+</Badge>
+)}
+</div>
+))}
+</div>
+</div>
+)}
+</>
+)}
+
+{/* STDIO Server Details */}
+{isStdio && (
+<>
 {/* Command details */}
 <div>
 <p className="text-xs text-muted-foreground mb-1">{formatMessage({ id: 'mcp.command' })}</p>
@@ -217,6 +344,8 @@ function McpServerCard({ server, isExpanded, onToggleExpand, onToggle, onEdit, o
 </div>
 </div>
 )}
+</>
+)}
 
 {/* Conflict warning panel */}
 {conflictInfo && (
@@ -609,18 +738,28 @@ export function McpManagerPage() {
 };
 
 // Filter servers by search query
-const filteredServers = servers.filter((s) =>
-s.name.toLowerCase().includes(searchQuery.toLowerCase()) ||
-s.command.toLowerCase().includes(searchQuery.toLowerCase())
-);
+const filteredServers = servers.filter((s) => {
+const query = searchQuery.toLowerCase();
+const nameMatch = s.name.toLowerCase().includes(query);
+// For STDIO servers, search in command; for HTTP servers, search in url
+const transportMatch = isHttpMcpServer(s)
+? s.url?.toLowerCase().includes(query)
+: s.command?.toLowerCase().includes(query);
+return nameMatch || transportMatch;
+});
 
 // Filter Codex servers by search query
 const codexServers = codexQuery.data?.servers ?? [];
 const codexConfigPath = codexQuery.data?.configPath ?? '';
-const filteredCodexServers = codexServers.filter((s) =>
-s.name.toLowerCase().includes(searchQuery.toLowerCase()) ||
-s.command.toLowerCase().includes(searchQuery.toLowerCase())
-);
+const filteredCodexServers = codexServers.filter((s) => {
+const query = searchQuery.toLowerCase();
+const nameMatch = s.name.toLowerCase().includes(query);
+// For STDIO servers, search in command; for HTTP servers, search in url
+const transportMatch = isHttpMcpServer(s)
+? s.url?.toLowerCase().includes(query)
+: s.command?.toLowerCase().includes(query);
+return nameMatch || transportMatch;
+});
 
 const currentServers = cliMode === 'codex' ? filteredCodexServers : filteredServers;
 const currentExpanded = cliMode === 'codex' ? codexExpandedServers : expandedServers;
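The whole diff leans on `McpServer` being a discriminated union on `transport`, with the type guards `isHttpMcpServer` / `isStdioMcpServer` exported from `@/lib/api`. Those definitions are not part of this commit, so the following is only a sketch of what they likely look like, with field names inferred from the usages above (`url`, `headers`, `httpHeaders`, `envHttpHeaders`, `bearerTokenEnvVar`, `command`, `args`, `env`):

```typescript
// Hypothetical sketch of the McpServer union assumed by this commit.
// The real definitions live in '@/lib/api' and may differ.
interface BaseMcpServer {
  name: string;
  scope: 'global' | 'project';
  enabled: boolean;
}

interface StdioMcpServer extends BaseMcpServer {
  transport: 'stdio';
  command: string;
  args?: string[];
  env?: Record<string, string>;
}

interface HttpMcpServer extends BaseMcpServer {
  transport: 'http';
  url: string;
  headers?: Record<string, string>;     // Claude format
  httpHeaders?: Record<string, string>; // Codex format
  envHttpHeaders?: string[];            // header names filled from env vars
  bearerTokenEnvVar?: string;
}

type McpServer = StdioMcpServer | HttpMcpServer;

// Type guards narrow the union so transport-specific fields need no casts.
function isHttpMcpServer(s: McpServer): s is HttpMcpServer {
  return s.transport === 'http';
}

function isStdioMcpServer(s: McpServer): s is StdioMcpServer {
  return s.transport === 'stdio';
}

// Mirrors the `displayText` computation introduced across the panels:
function displayText(s: McpServer): string {
  return isHttpMcpServer(s) ? s.url : s.command;
}
```

This pattern is why the commit can delete the `server as McpServer & { url?: string; ... }` casts in `McpServerDialog`: once `transport` is the discriminant, narrowing via the guard makes the extra intersection types unnecessary.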