diff --git a/.claude/skills/team-arch-opt/roles/coordinator/role.md b/.claude/skills/team-arch-opt/roles/coordinator/role.md index 4f6b6b80..c44f82d6 100644 --- a/.claude/skills/team-arch-opt/roles/coordinator/role.md +++ b/.claude/skills/team-arch-opt/roles/coordinator/role.md @@ -186,13 +186,25 @@ Bash("mkdir -p .workflow//artifacts/pipelines/A .workflow//wisdom/.msg/meta.json", { "session_id": "", "requirement": "", "parallel_mode": "" }) +4. Initialize meta.json with pipeline metadata: +```typescript +// Use team_msg to write pipeline metadata to .msg/meta.json +mcp__ccw-tools__team_msg({ + operation: "log", + session_id: "", + from: "coordinator", + type: "state_update", + summary: "Session initialized", + data: { + pipeline_mode: "", + pipeline_stages: ["analyzer", "designer", "refactorer", "validator", "reviewer"], + roles: ["coordinator", "analyzer", "designer", "refactorer", "validator", "reviewer"], + team_name: "arch-opt" + } +}) ``` -4. Create team: +5. Create team: ``` TeamCreate({ team_name: "arch-opt" }) diff --git a/.claude/skills/team-brainstorm/roles/coordinator/role.md b/.claude/skills/team-brainstorm/roles/coordinator/role.md index e9845357..2cada1cd 100644 --- a/.claude/skills/team-brainstorm/roles/coordinator/role.md +++ b/.claude/skills/team-brainstorm/roles/coordinator/role.md @@ -186,22 +186,26 @@ Bash("mkdir -p .workflow/.team//ideas .workflow/.team//c } ``` -4. Initialize .msg/meta.json: - -```json -{ - "session_id": "", - "team_name": "brainstorm", - "topic": "", - "pipeline": "", - "angles": [], - "gc_round": 0, - "generated_ideas": [], - "critique_insights": [], - "synthesis_themes": [], - "evaluation_scores": [], - "status": "active" -} +4. 
Initialize meta.json with pipeline metadata:
+```typescript
+// Use team_msg to write pipeline metadata to .msg/meta.json
+mcp__ccw-tools__team_msg({
+  operation: "log",
+  session_id: "",
+  from: "coordinator",
+  type: "state_update",
+  summary: "Session initialized",
+  data: {
+    pipeline_mode: "",
+    pipeline_stages: ["ideator", "challenger", "synthesizer", "evaluator"],
+    roles: ["coordinator", "ideator", "challenger", "synthesizer", "evaluator"],
+    team_name: "brainstorm",
+    topic: "",
+    angles: ["", ""],
+    gc_round: 0,
+    status: "active"
+  }
+})
```

5. Create team:

diff --git a/.claude/skills/team-coordinate-v2/roles/coordinator/role.md b/.claude/skills/team-coordinate-v2/roles/coordinator/role.md
index df230a43..65888eac 100644
--- a/.claude/skills/team-coordinate-v2/roles/coordinator/role.md
+++ b/.claude/skills/team-coordinate-v2/roles/coordinator/role.md
@@ -200,8 +200,24 @@ Regardless of complexity score or role count, coordinator MUST:
- `explorations/cache-index.json` (`{ "entries": [] }`)
- `discussions/` (empty directory)
-9. **Initialize cross-role state** via team_msg:
-   - `team_msg(operation="log", session_id=, from="coordinator", type="state_update", data={})`
+9. **Initialize pipeline metadata** via team_msg:
+```typescript
+// Use team_msg to write pipeline metadata to .msg/meta.json
+// Note: roles are dynamic here; at execution time, replace the placeholders with the actual role list generated in task-analysis.json
+mcp__ccw-tools__team_msg({
+  operation: "log",
+  session_id: "",
+  from: "coordinator",
+  type: "state_update",
+  summary: "Session initialized",
+  data: {
+    pipeline_mode: "",
+    pipeline_stages: ["", "", "<...dynamic-roles>"],
+    roles: ["coordinator", "", "", "<...dynamic-roles>"],
+    team_name: "" // Extracted from the session ID or task description
+  }
+})
+```

10.
**Write team-session.json** with: session_id, task_description, status="active", roles, pipeline (empty), active_workers=[], completion_action="interactive", created_at diff --git a/.claude/skills/team-frontend/roles/coordinator/role.md b/.claude/skills/team-frontend/roles/coordinator/role.md index 0797d8a9..33cad6b5 100644 --- a/.claude/skills/team-frontend/roles/coordinator/role.md +++ b/.claude/skills/team-frontend/roles/coordinator/role.md @@ -161,10 +161,22 @@ Bash("mkdir -p .workflow/.team/FE--/{.msg,wisdom,analysis,arch } ``` -3. Initialize .msg/meta.json: - -``` -Write("/.msg/meta.json", { "session_id": "", "requirement": "", "pipeline_mode": "" }) +3. Initialize meta.json with pipeline metadata: +```typescript +// Use team_msg to write pipeline metadata to .msg/meta.json +mcp__ccw-tools__team_msg({ + operation: "log", + session_id: "", + from: "coordinator", + type: "state_update", + summary: "Session initialized", + data: { + pipeline_mode: "", + pipeline_stages: ["analyst", "architect", "developer", "qa"], + roles: ["coordinator", "analyst", "architect", "developer", "qa"], + team_name: "frontend" + } +}) ``` 4. Create team: diff --git a/.claude/skills/team-issue/roles/coordinator/role.md b/.claude/skills/team-issue/roles/coordinator/role.md index da3c31df..430788b9 100644 --- a/.claude/skills/team-issue/roles/coordinator/role.md +++ b/.claude/skills/team-issue/roles/coordinator/role.md @@ -160,21 +160,22 @@ Bash("ccw issue list --status registered,pending --json") Bash("mkdir -p .workflow/.team-plan/issue/explorations .workflow/.team-plan/issue/solutions .workflow/.team-plan/issue/audits .workflow/.team-plan/issue/queue .workflow/.team-plan/issue/builds .workflow/.team-plan/issue/wisdom") ``` -2. 
Write session state to `.msg/meta.json`: - -```json -{ - "session_id": "", - "status": "active", - "team_name": "issue", - "mode": "", - "issue_ids": [], - "requirement": "", - "execution_method": "", - "code_review": "", - "timestamp": "", - "fix_cycles": {} -} +2. Initialize meta.json with pipeline metadata: +```typescript +// Use team_msg to write pipeline metadata to .msg/meta.json +mcp__ccw-tools__team_msg({ + operation: "log", + session_id: "", + from: "coordinator", + type: "state_update", + summary: "Session initialized", + data: { + pipeline_mode: "", + pipeline_stages: ["explorer", "planner", "reviewer", "integrator", "implementer"], + roles: ["coordinator", "explorer", "planner", "reviewer", "integrator", "implementer"], + team_name: "issue" + } +}) ``` 3. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md) diff --git a/.claude/skills/team-iterdev/roles/coordinator/role.md b/.claude/skills/team-iterdev/roles/coordinator/role.md index 728a68d9..b0618d6a 100644 --- a/.claude/skills/team-iterdev/roles/coordinator/role.md +++ b/.claude/skills/team-iterdev/roles/coordinator/role.md @@ -162,20 +162,22 @@ Bash("mkdir -p .workflow/.team//design .workflow/.team// } ``` -7. Initialize .msg/meta.json: - -```json -{ - "session_id": "", - "requirement": "", - "pipeline": "", - "architecture_decisions": [], - "implementation_context": [], - "review_feedback_trends": [], - "gc_round": 0, - "max_gc_rounds": 3, - "sprint_history": [] -} +7. 
Initialize meta.json with pipeline metadata: +```typescript +// Use team_msg to write pipeline metadata to .msg/meta.json +mcp__ccw-tools__team_msg({ + operation: "log", + session_id: "", + from: "coordinator", + type: "state_update", + summary: "Session initialized", + data: { + pipeline_mode: "", + pipeline_stages: ["architect", "developer", "tester", "reviewer"], + roles: ["coordinator", "architect", "developer", "tester", "reviewer"], + team_name: "iterdev" + } +}) ``` --- diff --git a/.claude/skills/team-lifecycle-v5/roles/coordinator/role.md b/.claude/skills/team-lifecycle-v5/roles/coordinator/role.md index 1f90f623..a59acbec 100644 --- a/.claude/skills/team-lifecycle-v5/roles/coordinator/role.md +++ b/.claude/skills/team-lifecycle-v5/roles/coordinator/role.md @@ -127,17 +127,37 @@ For callback/check/resume: load `commands/monitor.md` and execute the appropriat 4. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md) 5. Initialize explorations directory with empty cache-index.json 6. Write team-session.json +7. **Initialize meta.json with pipeline metadata** (CRITICAL for UI): + +```typescript +// Use team_msg to write pipeline metadata to .msg/meta.json +mcp__ccw-tools__team_msg({ + operation: "log", + session_id: "", + from: "coordinator", + type: "state_update", + summary: "Session initialized", + data: { + pipeline_mode: "", // e.g., "full", "spec-only", "impl-only" + pipeline_stages: ["analyst", "writer", "planner", "executor", "tester", "reviewer"], // Roles as stages + roles: ["coordinator", "analyst", "writer", "planner", "executor", "tester", "reviewer"], + team_name: "lifecycle" // Skill name for badge display + } +}) +``` + +**pipeline_stages format**: Array of role names that represent pipeline stages. The UI will display these as the pipeline workflow. 
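Since the UI depends on this payload, a shape check can catch a malformed `data` object before it reaches the badge/pipeline display. The sketch below is illustrative only: the `PipelineMeta` interface and `isPipelineMeta` guard are assumed names, not part of ccw-tools.

```typescript
// Illustrative shape of the pipeline metadata written to .msg/meta.json.
// PipelineMeta and isPipelineMeta are hypothetical helpers, not ccw-tools API.
interface PipelineMeta {
  pipeline_mode: string;      // e.g. "full", "spec-only", "impl-only"
  pipeline_stages: string[];  // roles the UI displays as pipeline stages
  roles: string[];            // pipeline_stages plus "coordinator"
  team_name: string;          // skill name for badge display
}

function isPipelineMeta(d: unknown): d is PipelineMeta {
  const m = d as PipelineMeta;
  return typeof m?.pipeline_mode === "string" &&
    Array.isArray(m?.pipeline_stages) &&
    Array.isArray(m?.roles) &&
    m.roles.includes("coordinator") &&
    typeof m.team_name === "string";
}
```

A guard like this would reject a payload that lists stages but omits `coordinator` from `roles`, which is the most likely copy-paste mistake across the skills patched here.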
**Task counts by mode**: -| Mode | Tasks | -|------|-------| -| spec-only | 6 | -| impl-only | 4 | -| fe-only | 3 | -| fullstack | 6 | -| full-lifecycle | 10 | -| full-lifecycle-fe | 12 | +| Mode | Tasks | pipeline_stages | +|------|-------|-----------------| +| spec-only | 6 | `["analyst", "writer", "reviewer"]` | +| impl-only | 4 | `["planner", "executor", "tester", "reviewer"]` | +| fe-only | 3 | `["planner", "fe-developer", "fe-qa"]` | +| fullstack | 6 | `["planner", "executor", "fe-developer", "tester", "fe-qa", "reviewer"]` | +| full-lifecycle | 10 | `["analyst", "writer", "planner", "executor", "tester", "reviewer"]` | +| full-lifecycle-fe | 12 | `["analyst", "writer", "planner", "executor", "fe-developer", "tester", "fe-qa", "reviewer"]` | --- diff --git a/.claude/skills/team-perf-opt/roles/coordinator/role.md b/.claude/skills/team-perf-opt/roles/coordinator/role.md index b5259f14..2cc84f5f 100644 --- a/.claude/skills/team-perf-opt/roles/coordinator/role.md +++ b/.claude/skills/team-perf-opt/roles/coordinator/role.md @@ -186,10 +186,22 @@ Bash("mkdir -p .workflow//artifacts/pipelines/A .workflow//.msg/meta.json", { "session_id": "", "requirement": "", "parallel_mode": "" }) +3. Initialize meta.json with pipeline metadata: +```typescript +// Use team_msg to write pipeline metadata to .msg/meta.json +mcp__ccw-tools__team_msg({ + operation: "log", + session_id: "", + from: "coordinator", + type: "state_update", + summary: "Session initialized", + data: { + pipeline_mode: "", + pipeline_stages: ["profiler", "strategist", "optimizer", "benchmarker", "reviewer"], + roles: ["coordinator", "profiler", "strategist", "optimizer", "benchmarker", "reviewer"], + team_name: "perf-opt" + } +}) ``` 4. 
Create team: diff --git a/.claude/skills/team-planex/roles/coordinator/role.md b/.claude/skills/team-planex/roles/coordinator/role.md index 03601ee6..2c2419c8 100644 --- a/.claude/skills/team-planex/roles/coordinator/role.md +++ b/.claude/skills/team-planex/roles/coordinator/role.md @@ -95,18 +95,24 @@ For callback/check/resume: load `commands/monitor.md` and execute the appropriat 3. Create subdirectories: `artifacts/solutions/`, `wisdom/` 4. Call `TeamCreate` with team name (default: "planex") 5. Initialize wisdom files (learnings.md, decisions.md, conventions.md, issues.md) -6. Write .msg/meta.json: - -``` -{ +6. Initialize meta.json with pipeline metadata: +```typescript +// Use team_msg to write pipeline metadata to .msg/meta.json +mcp__ccw-tools__team_msg({ + operation: "log", session_id: "", - input_type: "", - input: "", - execution_method: "", - status: "active", - active_workers: [], - started_at: "" -} + from: "coordinator", + type: "state_update", + summary: "Session initialized", + data: { + pipeline_mode: "plan-execute", + pipeline_stages: ["planner", "executor"], + roles: ["coordinator", "planner", "executor"], + team_name: "planex", + input_type: "", + execution_method: "" + } +}) ``` --- diff --git a/.claude/skills/team-quality-assurance/roles/coordinator/role.md b/.claude/skills/team-quality-assurance/roles/coordinator/role.md index b42a7c1a..b35598f9 100644 --- a/.claude/skills/team-quality-assurance/roles/coordinator/role.md +++ b/.claude/skills/team-quality-assurance/roles/coordinator/role.md @@ -139,23 +139,34 @@ For callback/check/resume/complete: load `commands/monitor.md` and execute match 1. Generate session ID 2. Create session folder 3. Call TeamCreate with team name -4. Initialize .msg/meta.json with empty fields +4. 
Initialize meta.json with pipeline metadata and shared state: +```typescript +// Use team_msg to write pipeline metadata to .msg/meta.json +mcp__ccw-tools__team_msg({ + operation: "log", + session_id: "", + from: "coordinator", + type: "state_update", + summary: "Session initialized", + data: { + pipeline_mode: "", + pipeline_stages: ["scout", "strategist", "generator", "executor", "analyst"], + roles: ["coordinator", "scout", "strategist", "generator", "executor", "analyst"], + team_name: "quality-assurance", + discovered_issues: [], + test_strategy: {}, + generated_tests: {}, + execution_results: {}, + defect_patterns: [], + coverage_history: [], + quality_score: null + } +}) +``` + 5. Initialize wisdom directory (learnings.md, decisions.md, conventions.md, issues.md) 6. Write session file with: session_id, mode, scope, status="active" -**Shared Memory Structure**: -``` -{ - "discovered_issues": [], - "test_strategy": {}, - "generated_tests": {}, - "execution_results": {}, - "defect_patterns": [], - "coverage_history": [], - "quality_score": null -} -``` - **Success**: Team created, session file written, shared memory initialized. --- diff --git a/.claude/skills/team-review/roles/coordinator/role.md b/.claude/skills/team-review/roles/coordinator/role.md index 244c8e55..f531f842 100644 --- a/.claude/skills/team-review/roles/coordinator/role.md +++ b/.claude/skills/team-review/roles/coordinator/role.md @@ -197,7 +197,26 @@ Bash("ccw team log --session-id --from coordinator --type dispatch_ │ └── meta.json ``` -3. Initialize .msg/meta.json with: workflow_id, mode, target, dimensions, auto flag +3. 
Initialize .msg/meta.json with pipeline metadata: +```typescript +// Use team_msg to write pipeline metadata to .msg/meta.json +mcp__ccw-tools__team_msg({ + operation: "log", + session_id: "", + from: "coordinator", + type: "state_update", + summary: "Session initialized", + data: { + pipeline_mode: "", + pipeline_stages: ["scanner", "reviewer", "fixer"], + roles: ["coordinator", "scanner", "reviewer", "fixer"], + team_name: "review", + target: "", + dimensions: "", + auto_confirm: "" + } +}) +``` **Success**: Session folder created, shared memory initialized. diff --git a/.claude/skills/team-roadmap-dev/roles/coordinator/role.md b/.claude/skills/team-roadmap-dev/roles/coordinator/role.md index 648eb97b..b9c28329 100644 --- a/.claude/skills/team-roadmap-dev/roles/coordinator/role.md +++ b/.claude/skills/team-roadmap-dev/roles/coordinator/role.md @@ -198,8 +198,27 @@ Delegate to `commands/roadmap-discuss.md`: **Workflow**: 1. Call `TeamCreate({ team_name: "roadmap-dev" })` -2. Spawn worker roles (see SKILL.md Coordinator Spawn Template) -3. Load `commands/dispatch.md` for task chain creation + +2. Initialize meta.json with pipeline metadata: +```typescript +// Use team_msg to write pipeline metadata to .msg/meta.json +mcp__ccw-tools__team_msg({ + operation: "log", + session_id: "", + from: "coordinator", + type: "state_update", + summary: "Session initialized", + data: { + pipeline_mode: "roadmap-driven", + pipeline_stages: ["planner", "executor", "verifier"], + roles: ["coordinator", "planner", "executor", "verifier"], + team_name: "roadmap-dev" + } +}) +``` + +3. Spawn worker roles (see SKILL.md Coordinator Spawn Template) +4. 
Load `commands/dispatch.md` for task chain creation

| Step | Action |
|------|--------|

diff --git a/.claude/skills/team-tech-debt/roles/coordinator/role.md b/.claude/skills/team-tech-debt/roles/coordinator/role.md
index 50f20c89..5a7d8c1b 100644
--- a/.claude/skills/team-tech-debt/roles/coordinator/role.md
+++ b/.claude/skills/team-tech-debt/roles/coordinator/role.md
@@ -224,17 +224,30 @@ Bash("ccw team log --session-id --from coordinator --type ")

+3. Initialize meta.json with pipeline metadata and debt tracking state:
+```typescript
+// Use team_msg to write pipeline metadata to .msg/meta.json
+mcp__ccw-tools__team_msg({
+  operation: "log",
+  session_id: "",
+  from: "coordinator",
+  type: "state_update",
+  summary: "Session initialized",
+  data: {
+    pipeline_mode: "",
+    pipeline_stages: ["scanner", "assessor", "planner", "executor", "validator"],
+    roles: ["coordinator", "scanner", "assessor", "planner", "executor", "validator"],
+    team_name: "tech-debt",
+    debt_inventory: [],
+    priority_matrix: {},
+    remediation_plan: {},
+    fix_results: {},
+    validation_results: {},
+    debt_score_before: null,
+    debt_score_after: null
+  }
+})
+```

4. Call TeamCreate with team name "tech-debt"

diff --git a/.claude/skills/team-testing/roles/coordinator/role.md b/.claude/skills/team-testing/roles/coordinator/role.md
index b70c5000..7cd2c727 100644
--- a/.claude/skills/team-testing/roles/coordinator/role.md
+++ b/.claude/skills/team-testing/roles/coordinator/role.md
@@ -157,10 +157,22 @@ Bash("mkdir -p .workflow/.team/TST--/strategy .workflow/.team/TST-/wisdom/.msg/meta.json", { "session_id": "", "requirement": "", "pipeline": "" })

+3. Initialize .msg/meta.json with pipeline metadata:
+```typescript
+// Use team_msg to write pipeline metadata to .msg/meta.json
+mcp__ccw-tools__team_msg({
+  operation: "log",
+  session_id: "",
+  from: "coordinator",
+  type: "state_update",
+  summary: "Session initialized",
+  data: {
+    pipeline_mode: "",
+    pipeline_stages: ["strategist", "generator", "executor", "analyst"],
+    roles: ["coordinator", "strategist", "generator", "executor", "analyst"],
+    team_name: "testing"
+  }
+})
```

4.
Create team: diff --git a/.claude/skills/team-uidesign/roles/coordinator/role.md b/.claude/skills/team-uidesign/roles/coordinator/role.md index 6c0638d3..51bfb00c 100644 --- a/.claude/skills/team-uidesign/roles/coordinator/role.md +++ b/.claude/skills/team-uidesign/roles/coordinator/role.md @@ -170,10 +170,22 @@ Bash("mkdir -p .workflow/.team/UDS--/research .workflow/.team/UDS-/wisdom/.msg/meta.json", { "session_id": "", "requirement": "", "pipeline": "" }) +3. Initialize .msg/meta.json with pipeline metadata: +```typescript +// Use team_msg to write pipeline metadata to .msg/meta.json +mcp__ccw-tools__team_msg({ + operation: "log", + session_id: "", + from: "coordinator", + type: "state_update", + summary: "Session initialized", + data: { + pipeline_mode: "", + pipeline_stages: ["researcher", "designer", "reviewer", "implementer"], + roles: ["coordinator", "researcher", "designer", "reviewer", "implementer"], + team_name: "uidesign" + } +}) ``` 4. Create team: diff --git a/.claude/skills/team-ultra-analyze/roles/coordinator/role.md b/.claude/skills/team-ultra-analyze/roles/coordinator/role.md index e3d37e62..6a34dde7 100644 --- a/.claude/skills/team-ultra-analyze/roles/coordinator/role.md +++ b/.claude/skills/team-ultra-analyze/roles/coordinator/role.md @@ -156,7 +156,24 @@ TeamCreate({ team_name: "ultra-analyze" }) ``` 3. Write session.json with mode, requirement, timestamp -4. Initialize .msg/meta.json +4. Initialize .msg/meta.json with pipeline metadata: +```typescript +// Use team_msg to write pipeline metadata to .msg/meta.json +mcp__ccw-tools__team_msg({ + operation: "log", + session_id: "", + from: "coordinator", + type: "state_update", + summary: "Session initialized", + data: { + pipeline_mode: "", + pipeline_stages: ["explorer", "analyst", "discussant", "synthesizer"], + roles: ["coordinator", "explorer", "analyst", "discussant", "synthesizer"], + team_name: "ultra-analyze" + } +}) +``` + 5. 
Call `TeamCreate({ team_name: "ultra-analyze" })` --- diff --git a/.codex/skills/numerical-analysis-workflow/SKILL.md b/.codex/skills/numerical-analysis-workflow/SKILL.md new file mode 100644 index 00000000..16f5e0a4 --- /dev/null +++ b/.codex/skills/numerical-analysis-workflow/SKILL.md @@ -0,0 +1,630 @@ +--- +name: numerical-analysis-workflow +description: Global-to-local numerical computation project analysis workflow. Decomposes analysis into 6-phase diamond topology (Global → Theory → Algorithm → Module → Local → Integration) with parallel analysis tracks per phase, cross-phase context propagation, and LaTeX formula support. Produces comprehensive analysis documents covering mathematical foundations, numerical stability, convergence, error bounds, and software architecture. +argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"project path or description\"" +allowed-tools: spawn_agents_on_csv, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion +--- + +## Auto Mode + +When `--yes` or `-y`: Auto-confirm track decomposition, skip interactive validation, use defaults. + +# Numerical Analysis Workflow + +## Usage + +```bash +$numerical-analysis-workflow "Analyze the FEM solver in src/solver/" +$numerical-analysis-workflow -c 3 "Analyze CFD simulation pipeline for numerical stability" +$numerical-analysis-workflow -y "Full analysis of PDE discretization in src/pde/" +$numerical-analysis-workflow --continue "nadw-fem-solver-20260304" +``` + +**Flags**: +- `-y, --yes`: Skip all confirmations (auto mode) +- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3) +- `--continue`: Resume existing session + +**Output Directory**: `.workflow/.csv-wave/{session-id}/` +**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report) + +--- + +## Overview + +Six-phase diamond topology for analyzing numerical computation software projects. 
Each phase represents a wave; within each wave, 2-5 parallel analysis tracks produce focused documents. Context packages propagate cumulatively between waves, enabling perspective reuse — theory informs algorithm design, algorithm informs implementation, all converge at integration. + +**Core workflow**: Survey → Theorize → Design → Analyze Modules → Optimize Locally → Integrate & Validate + +``` +┌─────────────────────────────────────────────────────────────────────────┐ +│ NUMERICAL ANALYSIS DIAMOND WORKFLOW (NADW) │ +├─────────────────────────────────────────────────────────────────────────┤ +│ │ +│ Wave 1: Global Survey [3 tracks] │ +│ ├─ T1.1 Problem Domain Survey (math models, governing equations) │ +│ ├─ T1.2 Software Architecture Overview (modules, data flow) │ +│ └─ T1.3 Validation Strategy (benchmarks, KPIs, acceptance) │ +│ ↓ Context Package P1 │ +│ │ +│ Wave 2: Theoretical Foundations [3 tracks] │ +│ ├─ T2.1 Mathematical Formulation (LaTeX derivation, weak forms) │ +│ ├─ T2.2 Convergence Analysis (error bounds, convergence order) │ +│ └─ T2.3 Complexity Analysis (time/space Big-O, operation counts) │ +│ ↓ Context Package P1+P2 │ +│ │ +│ Wave 3: Algorithm Design & Stability [3 tracks] │ +│ ├─ T3.1 Algorithm Specification (method selection, pseudocode) │ +│ ├─ T3.2 Numerical Stability Report (condition numbers, error prop) │ +│ └─ T3.3 Performance Model (FLOPS, memory bandwidth, parallelism) │ +│ ↓ Context Package P1+P2+P3 │ +│ │ +│ Wave 4: Module Implementation [3 tracks] │ +│ ├─ T4.1 Core Module Analysis (algorithm-code mapping) │ +│ ├─ T4.2 Data Structure Review (sparse formats, memory layout) │ +│ └─ T4.3 API Contract Analysis (interfaces, error handling) │ +│ ↓ Context Package P1-P4 │ +│ │ +│ Wave 5: Local Function-Level [3 tracks] │ +│ ├─ T5.1 Optimization Report (hotspots, vectorization, cache) │ +│ ├─ T5.2 Edge Case Analysis (singularities, overflow, degeneracy) │ +│ └─ T5.3 Precision Audit (catastrophic cancellation, accumulation) │ +│ ↓ Context 
Package P1-P5                                     │
│                                                                         │
│  Wave 6: Integration & QA [3 tracks]                                    │
│  ├─ T6.1 Integration Test Plan (end-to-end, regression, benchmark)      │
│  ├─ T6.2 Benchmark Results (actual vs theoretical performance)          │
│  └─ T6.3 Final QA Report (all-phase synthesis, risk matrix, roadmap)    │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```

**Diamond Topology** (Wide → Deep → Wide):
```
Wave 1: [T1.1] [T1.2] [T1.3]   ← Global fan-out
Wave 2: [T2.1] [T2.2] [T2.3]   ← Theory deep-dive
Wave 3: [T3.1] [T3.2] [T3.3]   ← Algorithm bridge
Wave 4: [T4.1] [T4.2] [T4.3]   ← Module focus
Wave 5: [T5.1] [T5.2] [T5.3]   ← Local finest grain
Wave 6: [T6.1] [T6.2] [T6.3]   ← Integration convergence
```

---

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,track_role,analysis_dimension,formula_refs,precision_req,scope,deps,context_from,wave,status,findings,severity_distribution,latex_formulas,doc_path,error
"T1.1","Problem Domain Survey","Survey governing equations and mathematical models for the numerical computation project. Identify PDE types, boundary conditions, conservation laws.","Problem_Domain_Analyst","domain_modeling","","","src/**","","","1","","","","","",""
"T2.1","Mathematical Formulation","Derive precise mathematical formulations using LaTeX. Transform governing equations into weak forms suitable for discretization.","Mathematician","formula_derivation","T1.1:governing_eqs","","src/**","T1.1","T1.1","2","","","","","",""
"T3.1","Algorithm Specification","Select numerical methods and design algorithms based on theoretical analysis.
Produce pseudocode for core computational kernels.","Algorithm_Designer","method_selection","T2.1:weak_forms;T2.2:convergence_conds","double","src/solver/**","T2.1","T2.1;T2.2;T2.3","3","","","","","","" +``` + +**Columns**: + +| Column | Phase | Description | +|--------|-------|-------------| +| `id` | Input | Unique task identifier (T{wave}.{track}) | +| `title` | Input | Short task title | +| `description` | Input | Detailed task description (self-contained for agent) | +| `track_role` | Input | Analysis role name (e.g., Mathematician, Stability_Analyst) | +| `analysis_dimension` | Input | Analysis focus area (domain_modeling, formula_derivation, stability, etc.) | +| `formula_refs` | Input | Semicolon-separated references to formulas from earlier tasks (TaskID:formula_name) | +| `precision_req` | Input | Required floating-point precision (float/double/quad/adaptive) | +| `scope` | Input | File/directory scope for analysis (glob pattern) | +| `deps` | Input | Semicolon-separated dependency task IDs | +| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs | +| `wave` | Computed | Wave number (1-6, from phase assignment) | +| `status` | Output | `pending` → `completed` / `failed` / `skipped` | +| `findings` | Output | Key discoveries and conclusions (max 500 chars) | +| `severity_distribution` | Output | Issue counts: Critical/High/Medium/Low | +| `latex_formulas` | Output | Key LaTeX formulas discovered or derived (semicolon-separated) | +| `doc_path` | Output | Path to generated analysis document | +| `error` | Output | Error message if failed | + +### Per-Wave CSV (Temporary) + +Each wave generates a temporary `wave-{N}.csv` with extra `prev_context` column. 
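The `formula_refs` and `context_from` cells use a small semicolon-separated format (`TaskID:formula_name;...`). A sketch of how an agent might split such a cell into structured references; the `parseFormulaRefs` helper is illustrative, not part of the workflow's own code:

```typescript
// Illustrative helper: split a formula_refs cell like
// "T2.1:weak_forms;T2.2:convergence_conds" into structured references.
function parseFormulaRefs(cell: string): { taskId: string; formula: string }[] {
  return cell
    .split(";")
    .filter(Boolean) // tolerate empty cells and trailing semicolons
    .map((ref) => {
      const [taskId, formula] = ref.split(":");
      return { taskId, formula };
    });
}
```

Splitting on `:` only once per ref keeps the format extensible if formula names ever contain underscores or digits, and an empty cell simply yields no references.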
+ +--- + +## Output Artifacts + +| File | Purpose | Lifecycle | +|------|---------|-----------| +| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave | +| `wave-{N}.csv` | Per-wave input (temporary) | Created before wave, deleted after | +| `results.csv` | Final export of all task results | Created in Phase 3 | +| `discoveries.ndjson` | Shared exploration board across all agents | Append-only, carries across waves | +| `context.md` | Human-readable execution report | Created in Phase 3 | +| `docs/P{N}_*.md` | Per-track analysis documents | Created by each agent | + +--- + +## Session Structure + +``` +.workflow/.csv-wave/{session-id}/ +├── tasks.csv # Master state (updated per wave) +├── results.csv # Final results export +├── discoveries.ndjson # Shared discovery board (all agents) +├── context.md # Human-readable report +├── docs/ # Analysis documents per track +│ ├── P1_Domain_Survey.md +│ ├── P1_Architecture_Overview.md +│ ├── P1_Validation_Strategy.md +│ ├── P2_Mathematical_Formulation.md +│ ├── P2_Convergence_Analysis.md +│ ├── P2_Complexity_Analysis.md +│ ├── P3_Algorithm_Specification.md +│ ├── P3_Numerical_Stability_Report.md +│ ├── P3_Performance_Model.md +│ ├── P4_Module_Implementation_Analysis.md +│ ├── P4_Data_Structure_Review.md +│ ├── P4_API_Contract.md +│ ├── P5_Optimization_Report.md +│ ├── P5_Edge_Case_Analysis.md +│ ├── P5_Precision_Audit.md +│ ├── P6_Integration_Test_Plan.md +│ ├── P6_Benchmark_Results.md +│ └── P6_Final_QA_Report.md +└── wave-{N}.csv # Temporary per-wave input (cleaned up) +``` + +--- + +## Implementation + +### Session Initialization + +```javascript +const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString() + +// Parse flags +const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y') +const continueMode = $ARGUMENTS.includes('--continue') +const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/) +const maxConcurrency = 
concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3 + +// Clean requirement text +const requirement = $ARGUMENTS + .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '') + .trim() + +const slug = requirement.toLowerCase() + .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-') + .substring(0, 40) +const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '') +const sessionId = `nadw-${slug}-${dateStr}` +const sessionFolder = `.workflow/.csv-wave/${sessionId}` + +Bash(`mkdir -p ${sessionFolder}/docs`) +``` + +--- + +### Phase 1: Requirement → CSV (Decomposition) + +**Objective**: Analyze the target project/requirement, decompose into 18 analysis tasks (6 waves × 3 tracks), compute wave assignments, generate tasks.csv. + +**Decomposition Rules**: + +| Wave | Phase Name | Track Roles | Analysis Focus | +|------|-----------|-------------|----------------| +| 1 | Global Survey | Problem_Domain_Analyst, Software_Architect, Validation_Strategist | Mathematical models, architecture, validation strategy | +| 2 | Theoretical Foundations | Mathematician, Convergence_Analyst, Complexity_Analyst | Formula derivation, convergence proofs, complexity bounds | +| 3 | Algorithm Design | Algorithm_Designer, Stability_Analyst, Performance_Modeler | Method selection, numerical stability, performance prediction | +| 4 | Module Implementation | Module_Implementer, Data_Structure_Designer, Interface_Analyst | Code-algorithm mapping, data structures, API contracts | +| 5 | Local Function-Level | Code_Optimizer, Edge_Case_Analyst, Precision_Auditor | Hotspot optimization, boundary handling, float precision | +| 6 | Integration & QA | Integration_Tester, Benchmark_Engineer, QA_Auditor | End-to-end testing, benchmarks, final quality report | + +**Dependency Structure** (Diamond Topology): + +| Task | deps | context_from | Rationale | +|------|------|-------------|-----------| +| T1.* | (none) | (none) | Wave 1: independent, global survey | +| T2.1 | T1.1 | T1.1 | Formalization needs 
governing equations | +| T2.2 | T1.1 | T1.1 | Convergence analysis needs model identification | +| T2.3 | T1.1;T1.2 | T1.1;T1.2 | Complexity needs both model and architecture | +| T3.1 | T2.1 | T2.1;T2.2;T2.3 | Algorithm design needs all theory | +| T3.2 | T2.1;T2.2 | T2.1;T2.2 | Stability needs formulas and convergence | +| T3.3 | T2.3 | T1.2;T2.3 | Performance model needs architecture + complexity | +| T4.* | T3.* | T1.*;T3.* | Module analysis needs global + algorithm context | +| T5.* | T4.* | T3.*;T4.* | Local analysis needs algorithm + module context | +| T6.* | T5.* | T1.*;T2.*;T3.*;T4.*;T5.* | Integration receives ALL context | + +**Decomposition CLI Call**: + +```javascript +Bash({ + command: `ccw cli -p "PURPOSE: Decompose numerical computation project analysis into 18 tasks across 6 phases. +TASK: + • Analyze the project to identify: governing equations, numerical methods used, module structure + • Generate 18 analysis tasks (3 per phase × 6 phases) following the NADW diamond topology + • Each task must be self-contained with clear scope and analysis dimension + • Assign track_role, analysis_dimension, formula_refs, precision_req, scope for each task + • Set deps and context_from following the diamond dependency pattern +MODE: analysis +CONTEXT: @**/* +EXPECTED: JSON with tasks array. Each task: {id, title, description, track_role, analysis_dimension, formula_refs, precision_req, scope, deps[], context_from[]} +CONSTRAINTS: Exactly 6 waves, 3 tasks per wave. Wave 1=Global, Wave 2=Theory, Wave 3=Algorithm, Wave 4=Module, Wave 5=Local, Wave 6=Integration. + +PROJECT TO ANALYZE: ${requirement}" --tool gemini --mode analysis --rule planning-breakdown-task-steps`, + run_in_background: true +}) +``` + +**Wave Computation**: Fixed 6-wave assignment per the diamond topology. Tasks within each wave are independent. + +**CSV Generation**: Parse JSON response, validate 18 tasks with correct wave assignments, generate tasks.csv with proper escaping. 
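"Proper escaping" matters here because `description` and `findings` cells routinely contain commas and quotes. A minimal RFC 4180-style sketch of what that escaping could look like; `csvField` and `toCsvRow` are assumed names standing in for the `toCsv` helper referenced later, not its actual implementation:

```typescript
// RFC 4180-style escaping sketch: wrap every field in quotes and double any
// embedded quotes, so commas and quotes inside findings survive round-trips.
function csvField(value: unknown): string {
  return '"' + String(value ?? "").replace(/"/g, '""') + '"';
}

function toCsvRow(fields: unknown[]): string {
  return fields.map(csvField).join(",");
}
```

Quoting every field unconditionally is slightly verbose but avoids per-field decisions and matches the style of the tasks.csv example above.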
+
+**User Validation**: Display task breakdown grouped by wave (skip if AUTO_YES):
+
+```
+Wave 1 (Global Survey):
+  T1.1 Problem Domain Survey → Problem_Domain_Analyst
+  T1.2 Software Architecture Overview → Software_Architect
+  T1.3 Validation Strategy → Validation_Strategist
+
+Wave 2 (Theoretical Foundations):
+  T2.1 Mathematical Formulation → Mathematician
+  T2.2 Convergence Analysis → Convergence_Analyst
+  T2.3 Complexity Analysis → Complexity_Analyst
+...
+```
+
+**Success Criteria**:
+- tasks.csv created with 18 tasks, 6 waves, valid schema
+- No circular dependencies
+- Each task has track_role and analysis_dimension
+- User approved (or AUTO_YES)
+
+---
+
+### Phase 2: Wave Execution Engine
+
+**Objective**: Execute analysis tasks wave-by-wave via spawn_agents_on_csv with cross-wave context propagation and cumulative context packages.
+
+```javascript
+// Read master CSV
+const masterCsv = Read(`${sessionFolder}/tasks.csv`)
+const tasks = parseCsv(masterCsv)
+const maxWave = 6
+
+for (let wave = 1; wave <= maxWave; wave++) {
+  // 1. Filter tasks for this wave (coerce wave: CSV values parse as strings)
+  const waveTasks = tasks.filter(t => Number(t.wave) === wave && t.status === 'pending')
+
+  // 2. Skip tasks whose deps failed/skipped
+  for (const task of waveTasks) {
+    const depIds = task.deps.split(';').filter(Boolean)
+    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
+    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
+      task.status = 'skipped'
+      task.error = `Dependency failed: ${depIds.filter((id, i) => ['failed','skipped'].includes(depStatuses[i])).join(', ')}`
+    }
+  }
+
+  const pendingTasks = waveTasks.filter(t => t.status === 'pending')
+  if (pendingTasks.length === 0) {
+    // Persist newly skipped statuses before moving on, otherwise they are lost
+    Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
+    continue
+  }
+
+  // 3.
Build prev_context from context_from + master CSV findings + for (const task of pendingTasks) { + const contextIds = task.context_from.split(';').filter(Boolean) + const prevFindings = contextIds.map(id => { + const src = tasks.find(t => t.id === id) + return src?.findings ? `[${src.id} ${src.title}]: ${src.findings}` : '' + }).filter(Boolean).join('\n\n') + + // Also include latex_formulas from context sources + const prevFormulas = contextIds.map(id => { + const src = tasks.find(t => t.id === id) + return src?.latex_formulas ? `[${src.id} formulas]: ${src.latex_formulas}` : '' + }).filter(Boolean).join('\n') + + task.prev_context = prevFindings + (prevFormulas ? '\n\n--- Referenced Formulas ---\n' + prevFormulas : '') + } + + // 4. Write per-wave CSV + Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingTasks)) + + // 5. Execute wave + spawn_agents_on_csv({ + csv_path: `${sessionFolder}/wave-${wave}.csv`, + id_column: "id", + instruction: buildInstructionTemplate(sessionFolder, wave), + max_concurrency: maxConcurrency, + max_runtime_seconds: 900, + output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`, + output_schema: { + type: "object", + properties: { + id: { type: "string" }, + status: { type: "string", enum: ["completed", "failed"] }, + findings: { type: "string" }, + severity_distribution: { type: "string" }, + latex_formulas: { type: "string" }, + doc_path: { type: "string" }, + error: { type: "string" } + }, + required: ["id", "status", "findings"] + } + }) + + // 6. 
Merge results into master CSV + const waveResults = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`)) + for (const result of waveResults) { + const masterTask = tasks.find(t => t.id === result.id) + if (masterTask) { + masterTask.status = result.status + masterTask.findings = result.findings + masterTask.severity_distribution = result.severity_distribution || '' + masterTask.latex_formulas = result.latex_formulas || '' + masterTask.doc_path = result.doc_path || '' + masterTask.error = result.error || '' + } + } + Write(`${sessionFolder}/tasks.csv`, toCsv(tasks)) + + // 7. Cleanup temp wave CSV + Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`) + + // 8. Display wave summary + const completed = waveResults.filter(r => r.status === 'completed').length + const failed = waveResults.filter(r => r.status === 'failed').length + // Output: "Wave {wave}: {completed} completed, {failed} failed" +} +``` + +**Instruction Template** (embedded — see instructions/agent-instruction.md for standalone): + +```javascript +function buildInstructionTemplate(sessionFolder, wave) { + const phaseNames = { + 1: 'Global Survey', 2: 'Theoretical Foundations', 3: 'Algorithm Design', + 4: 'Module Implementation', 5: 'Local Function-Level', 6: 'Integration & QA' + } + return `## TASK ASSIGNMENT — ${phaseNames[wave]} + +### MANDATORY FIRST STEPS +1. Read shared discoveries: ${sessionFolder}/discoveries.ndjson (if exists, skip if not) +2. Read project context: .workflow/project-tech.json (if exists) + +--- + +## Your Task + +**Task ID**: {id} +**Title**: {title} +**Role**: {track_role} +**Analysis Dimension**: {analysis_dimension} +**Description**: {description} +**Formula References**: {formula_refs} +**Precision Requirement**: {precision_req} +**Scope**: {scope} + +### Previous Tasks' Findings (Context) +{prev_context} + +--- + +## Execution Protocol + +1. 
**Read discoveries**: Load ${sessionFolder}/discoveries.ndjson for shared exploration findings +2. **Use context**: Apply previous tasks' findings from prev_context above +3. **Execute analysis**: + - Read target files within scope: {scope} + - Apply analysis criteria for dimension: {analysis_dimension} + - Document mathematical formulas in LaTeX notation ($$...$$) + - Classify findings by severity (Critical/High/Medium/Low) + - Include file:line references for code-related findings +4. **Generate document**: Write analysis report to ${sessionFolder}/docs/ following the standard template: + - Metadata (Phase, Track, Date) + - Executive Summary + - Analysis Scope + - Findings with severity, evidence, LaTeX formulas, impact, recommendations + - Cross-References to other phases + - Perspective Package (structured summary for context propagation) +5. **Share discoveries**: Append exploration findings to shared board: + \`\`\`bash + echo '{"ts":"","worker":"{id}","type":"","data":{...}}' >> ${sessionFolder}/discoveries.ndjson + \`\`\` +6. 
**Report result**: Return JSON via report_agent_job_result + +### Discovery Types to Share +- \`governing_equation\`: {eq_name, latex, domain, boundary_conditions} — Governing equations found +- \`numerical_method\`: {method_name, type, order, stability_class} — Numerical methods identified +- \`stability_issue\`: {location, condition_number, severity, description} — Stability concerns +- \`convergence_property\`: {method, rate, order, conditions} — Convergence properties +- \`precision_risk\`: {location, operation, risk_type, recommendation} — Floating-point precision risks +- \`performance_bottleneck\`: {location, operation_count, memory_pattern, suggestion} — Performance issues +- \`architecture_pattern\`: {pattern_name, files, description} — Architecture patterns found +- \`test_gap\`: {component, missing_coverage, priority} — Missing test coverage + +--- + +## Output (report_agent_job_result) + +Return JSON: +{ + "id": "{id}", + "status": "completed" | "failed", + "findings": "Key discoveries and conclusions (max 500 chars)", + "severity_distribution": "Critical:N High:N Medium:N Low:N", + "latex_formulas": "key formulas separated by semicolons", + "doc_path": "relative path to generated analysis document", + "error": "" +}` +} +``` + +**Success Criteria**: +- All 6 waves executed in order +- Each wave's results merged into master CSV before next wave starts +- Dependent tasks skipped when predecessor failed +- discoveries.ndjson accumulated across all waves +- Analysis documents generated in docs/ directory + +--- + +### Phase 3: Results Aggregation + +**Objective**: Generate final results and comprehensive human-readable report synthesizing all 6 phases. + +```javascript +// 1. Export final results.csv +Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`) + +// 2. 
Generate context.md +const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`)) +const completed = tasks.filter(t => t.status === 'completed').length +const failed = tasks.filter(t => t.status === 'failed').length +const skipped = tasks.filter(t => t.status === 'skipped').length + +let contextMd = `# Numerical Analysis Report: ${requirement}\n\n` +contextMd += `**Session**: ${sessionId}\n` +contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n` +contextMd += `**Total Tasks**: ${tasks.length} | Completed: ${completed} | Failed: ${failed} | Skipped: ${skipped}\n\n` + +// Per-wave summary +const phaseNames = ['', 'Global Survey', 'Theoretical Foundations', 'Algorithm Design', + 'Module Implementation', 'Local Function-Level', 'Integration & QA'] +for (let w = 1; w <= 6; w++) { + const waveTasks = tasks.filter(t => t.wave === w) + contextMd += `## Wave ${w}: ${phaseNames[w]}\n\n` + for (const t of waveTasks) { + contextMd += `### ${t.id}: ${t.title} [${t.status}]\n` + contextMd += `**Role**: ${t.track_role} | **Dimension**: ${t.analysis_dimension}\n\n` + if (t.findings) contextMd += `**Findings**: ${t.findings}\n\n` + if (t.latex_formulas) contextMd += `**Key Formulas**:\n$$${t.latex_formulas.split(';').join('$$\n\n$$')}$$\n\n` + if (t.severity_distribution) contextMd += `**Issues**: ${t.severity_distribution}\n\n` + if (t.doc_path) contextMd += `**Full Report**: [${t.doc_path}](${t.doc_path})\n\n` + contextMd += `---\n\n` + } +} + +// Collected formulas section +const allFormulas = tasks.filter(t => t.latex_formulas).flatMap(t => + t.latex_formulas.split(';').map(f => ({ task: t.id, formula: f.trim() })) +) +if (allFormulas.length > 0) { + contextMd += `## Collected Mathematical Formulas\n\n` + for (const f of allFormulas) { + contextMd += `- **${f.task}**: $$${f.formula}$$\n` + } + contextMd += `\n` +} + +// All discoveries summary +contextMd += `## Discovery Board Summary\n\n` +contextMd += `See: ${sessionFolder}/discoveries.ndjson\n\n` + 
+Write(`${sessionFolder}/context.md`, contextMd) + +// 3. Display summary +// Output wave-by-wave completion status table +``` + +**Success Criteria**: +- results.csv exported +- context.md generated with all findings, formulas, cross-references +- Summary displayed to user + +--- + +## Shared Discovery Board Protocol + +### Standard Discovery Types + +| Type | Dedup Key | Data Schema | Description | +|------|-----------|-------------|-------------| +| `governing_equation` | `eq_name` | `{eq_name, latex, domain, boundary_conditions}` | Governing equations found in the project | +| `numerical_method` | `method_name` | `{method_name, type, order, stability_class}` | Numerical methods identified | +| `stability_issue` | `location` | `{location, condition_number, severity, description}` | Numerical stability concerns | +| `convergence_property` | `method` | `{method, rate, order, conditions}` | Convergence properties proven or observed | +| `precision_risk` | `location+operation` | `{location, operation, risk_type, recommendation}` | Floating-point precision risks | +| `performance_bottleneck` | `location` | `{location, operation_count, memory_pattern, suggestion}` | Performance bottlenecks | +| `architecture_pattern` | `pattern_name` | `{pattern_name, files, description}` | Software architecture patterns | +| `test_gap` | `component` | `{component, missing_coverage, priority}` | Missing test coverage | + +### Protocol Rules + +1. **Read first**: Always read discoveries.ndjson before starting analysis +2. **Write immediately**: Append discoveries as soon as found, don't batch +3. **Deduplicate**: Check dedup key before appending (same key = skip) +4. **Append-only**: Never clear, modify, or recreate discoveries.ndjson +5. 
**Cross-wave accumulation**: Discoveries persist and accumulate across all 6 waves + +### NDJSON Format + +```jsonl +{"ts":"2026-03-04T10:00:00Z","worker":"T1.1","type":"governing_equation","data":{"eq_name":"Navier-Stokes","latex":"\\rho(\\frac{\\partial \\mathbf{v}}{\\partial t} + \\mathbf{v} \\cdot \\nabla \\mathbf{v}) = -\\nabla p + \\mu \\nabla^2 \\mathbf{v}","domain":"fluid_dynamics","boundary_conditions":"no-slip walls, inlet velocity"}} +{"ts":"2026-03-04T10:05:00Z","worker":"T2.2","type":"convergence_property","data":{"method":"Galerkin FEM","rate":"optimal","order":"h^{k+1} in L2","conditions":"quasi-uniform mesh, sufficient regularity"}} +{"ts":"2026-03-04T10:10:00Z","worker":"T3.2","type":"stability_issue","data":{"location":"src/solver/assembler.rs:142","condition_number":"1e12","severity":"High","description":"Ill-conditioned stiffness matrix for high aspect ratio elements"}} +``` + +--- + +## Perspective Reuse Matrix + +Each phase's output serves as context for subsequent phases: + +| Source Phase | P2 Reuse | P3 Reuse | P4 Reuse | P5 Reuse | P6 Reuse | +|-------------|---------|---------|---------|---------|---------| +| P1 Governing Eqs | Formalize → LaTeX | Constrain method selection | Code-equation mapping | Singularity sources | Correctness baseline | +| P1 Architecture | Constrain discretization | Parallel strategy | Module boundaries | Hotspot location | Integration scope | +| P1 Validation | - | Benchmark selection | Test criteria | Edge case sources | Final validation | +| P2 Formulas | - | Parameter constraints | Loop termination | Precision requirements | Theoretical verification | +| P2 Convergence | - | Mesh refinement strategy | Iteration control | Error tolerance | Rate verification | +| P2 Complexity | - | Performance baseline | Data structure choice | Optimization targets | Performance comparison | +| P3 Pseudocode | - | - | Implementation reference | Line-by-line audit | Regression baseline | +| P3 Stability | - | - | Precision 
selection | Cancellation detection | Numerical verification | +| P3 Performance | - | - | Memory layout | Vectorization targets | Benchmark targets | + +--- + +## Error Handling + +| Error | Resolution | +|-------|------------| +| Circular dependency | Detect in wave computation, abort with error message | +| Agent timeout | Mark as failed in results, continue with wave | +| Agent failed | Mark as failed, skip dependent tasks in later waves | +| All agents in wave failed | Log error, offer retry or abort | +| CSV parse error | Validate CSV format before execution, show line number | +| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries | +| Continue mode: no session found | List available sessions, prompt user to select | +| LaTeX parse error | Store raw formula, flag for manual review | +| Scope files not found | Warn and continue with available files | +| Precision conflict between tracks | Flag in discoveries, defer to QA_Auditor in Wave 6 | + +--- + +## Quality Gates (Per-Wave) + +| Wave | Gate Criteria | Threshold | +|------|--------------|-----------| +| 1 | Core model identified + architecture mapped + KPI defined | All 3 tracks completed | +| 2 | Key formulas in LaTeX + convergence conditions stated + complexity determined | All 3 tracks completed | +| 3 | Pseudocode producible + stability assessed + performance predicted | ≥ 2 of 3 tracks completed | +| 4 | Code-algorithm mapping complete + data structures reviewed + APIs documented | ≥ 2 of 3 tracks completed | +| 5 | Hotspots identified + edge cases cataloged + precision risks flagged | ≥ 2 of 3 tracks completed | +| 6 | Test plan complete + benchmarks run + QA report synthesized | All 3 tracks completed | + +--- + +## Core Rules + +1. **Start Immediately**: First action is session initialization, then Phase 1 +2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged +3. 
**CSV is Source of Truth**: Master tasks.csv holds all state +4. **Context Propagation**: prev_context built from master CSV findings, not from memory +5. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson +6. **Skip on Failure**: If a dependency failed, skip the dependent task +7. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged +8. **LaTeX Preservation**: Mathematical formulas must be preserved in LaTeX notation throughout all phases +9. **Perspective Compounding**: Each wave MUST receive cumulative context from all preceding waves +10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped diff --git a/.codex/skills/numerical-analysis-workflow/instructions/agent-instruction.md b/.codex/skills/numerical-analysis-workflow/instructions/agent-instruction.md new file mode 100644 index 00000000..bfb42002 --- /dev/null +++ b/.codex/skills/numerical-analysis-workflow/instructions/agent-instruction.md @@ -0,0 +1,179 @@ +# Agent Instruction Template + +Template for generating agent instruction prompts used in `spawn_agents_on_csv`. + +## Key Concept + +The instruction template is a **prompt with column placeholders** (`{column_name}`). When `spawn_agents_on_csv` executes, each agent receives the template with its row's column values substituted. + +**Critical rule**: The instruction template is the ONLY context the agent has. It must be self-contained — the agent cannot access the master CSV or other agents' data. + +--- + +## Template + +```markdown +## TASK ASSIGNMENT — Numerical Analysis + +### MANDATORY FIRST STEPS +1. Read shared discoveries: {session_folder}/discoveries.ndjson (if exists, skip if not) +2. 
Read project context: .workflow/project-tech.json (if exists) + +--- + +## Your Task + +**Task ID**: {id} +**Title**: {title} +**Role**: {track_role} +**Analysis Dimension**: {analysis_dimension} +**Description**: {description} +**Formula References**: {formula_refs} +**Precision Requirement**: {precision_req} +**Scope**: {scope} + +### Previous Tasks' Findings (Context) +{prev_context} + +--- + +## Execution Protocol + +1. **Read discoveries**: Load {session_folder}/discoveries.ndjson for shared exploration findings from other tracks +2. **Use context**: Apply previous tasks' findings from prev_context above — this contains cumulative analysis from all preceding phases +3. **Execute analysis based on your role**: + + #### For Domain Analysis Roles (Wave 1: Problem_Domain_Analyst, Software_Architect, Validation_Strategist) + - Survey the project codebase within scope: {scope} + - Identify mathematical models, governing equations, boundary conditions + - Map software architecture: modules, data flow, dependencies + - Define validation strategy: benchmarks, KPIs, acceptance criteria + + #### For Theory Roles (Wave 2: Mathematician, Convergence_Analyst, Complexity_Analyst) + - Build on governing equations from Wave 1 context + - Derive precise mathematical formulations using LaTeX notation + - Prove or analyze convergence properties with error bounds + - Determine computational complexity (time and space) + - All formulas MUST use LaTeX: `$$formula$$` + + #### For Algorithm Roles (Wave 3: Algorithm_Designer, Stability_Analyst, Performance_Modeler) + - Select numerical methods based on theoretical analysis from Wave 2 + - Write algorithm pseudocode for core computational kernels + - Analyze condition numbers and error propagation + - Build performance model: FLOPS count, memory bandwidth, parallel efficiency + + #### For Module Roles (Wave 4: Module_Implementer, Data_Structure_Designer, Interface_Analyst) + - Map algorithms from Wave 3 to actual code modules + - Review 
data structures: sparse matrix formats, mesh data, memory layout
+   - Document module interfaces, data contracts, error handling patterns
+
+   #### For Local Analysis Roles (Wave 5: Code_Optimizer, Edge_Case_Analyst, Precision_Auditor)
+   - Identify performance hotspots with file:line references
+   - Catalog edge cases: singularities, division by zero, overflow/underflow
+   - Audit floating-point operations for catastrophic cancellation, accumulation errors
+   - Provide specific optimization recommendations (vectorization, cache, parallelism)
+
+   #### For Integration Roles (Wave 6: Integration_Tester, Benchmark_Engineer, QA_Auditor)
+   - Design end-to-end test plans using benchmarks from Wave 1
+   - Run or plan performance benchmarks comparing actual vs theoretical (Wave 3)
+   - Synthesize ALL findings from Waves 1-5 into final quality report
+   - Produce risk matrix and improvement roadmap
+
+4. **Generate analysis document**: Write to {session_folder}/docs/ using this template:
+
+   ~~~markdown
+   # [Phase {wave}] {title}
+
+   ## Metadata
+   - **Phase**: {wave} | **Track**: {id} | **Role**: {track_role}
+   - **Dimension**: {analysis_dimension}
+   - **Date**: [ISO8601]
+   - **Input Context**: Context from tasks {context_from}
+
+   ## Executive Summary
+   [2-3 sentences: core conclusions]
+
+   ## Analysis Scope
+   [Boundaries, assumptions, files analyzed within {scope}]
+
+   ## Findings
+
+   ### Finding 1: [Title]
+   **Severity**: Critical / High / Medium / Low
+   **Evidence**: [Code reference file:line or formula derivation]
+   $$\text{LaTeX formula if applicable}$$
+   **Impact**: [Effect on project correctness, performance, or stability]
+   **Recommendation**: [Specific actionable suggestion]
+
+   ### Finding N: ...
+
+   ## Mathematical Formulas
+   [All key formulas derived or referenced in this analysis]
+
+   ## Cross-References
+   [References to findings from other phases/tracks]
+
+   ## Perspective Package
+   [Structured summary for context propagation to later phases]
+   - Key conclusions: ...
+   - Formulas for reuse: ...
+   - Open questions: ...
+   - Risks identified: ...
+   ~~~
+
+5. **Share discoveries**: Append findings to shared board:
+   ~~~bash
+   echo '{"ts":"","worker":"{id}","type":"","data":{...}}' >> {session_folder}/discoveries.ndjson
+   ~~~
+
+6. **Report result**: Return JSON via report_agent_job_result
+
+### Discovery Types to Share
+- `governing_equation`: {eq_name, latex, domain, boundary_conditions} — Governing equations found
+- `numerical_method`: {method_name, type, order, stability_class} — Numerical methods identified
+- `stability_issue`: {location, condition_number, severity, description} — Stability concerns
+- `convergence_property`: {method, rate, order, conditions} — Convergence properties
+- `precision_risk`: {location, operation, risk_type, recommendation} — Float precision risks
+- `performance_bottleneck`: {location, operation_count, memory_pattern, suggestion} — Performance issues
+- `architecture_pattern`: {pattern_name, files, description} — Architecture patterns found
+- `test_gap`: {component, missing_coverage, priority} — Missing test coverage
+
+---
+
+## Output (report_agent_job_result)
+
+Return JSON:
+{
+  "id": "{id}",
+  "status": "completed" | "failed",
+  "findings": "Key discoveries and conclusions (max 500 chars)",
+  "severity_distribution": "Critical:N High:N Medium:N Low:N",
+  "latex_formulas": "key formulas in LaTeX separated by semicolons",
+  "doc_path": "relative path to generated analysis document (e.g., docs/P2_Mathematical_Formulation.md)",
+  "error": ""
+}
+```
+
+---
+
+## Placeholder Distinction
+
+| Syntax | Resolved By | When |
+|--------|-----------|------|
+| `{column_name}` | spawn_agents_on_csv | During agent execution (runtime) |
+| `{session_folder}` | Wave engine | Before spawning (set in instruction string) |
+
+The SKILL.md embeds this template with `{session_folder}` replaced by the actual session path. Column placeholders `{column_name}` remain for runtime substitution.
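The two-stage resolution above can be sketched in a few lines (function names are illustrative; the real substitution is performed internally by the wave engine and by `spawn_agents_on_csv`):

```javascript
// Stage 1 (wave engine, before spawning): resolve only {session_folder};
// every {column_name} placeholder survives untouched.
function bakeSessionFolder(template, sessionFolder) {
  return template.split('{session_folder}').join(sessionFolder)
}

// Stage 2 (models what spawn_agents_on_csv does per CSV row at runtime):
// replace {col} with the row's value; unknown placeholders are left intact.
function substituteRow(instruction, row) {
  return instruction.replace(/\{(\w+)\}/g, (match, col) => (col in row ? String(row[col]) : match))
}
```

Running stage 1 then stage 2 over a template line yields a fully concrete instruction for one agent.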
+ +--- + +## Instruction Size Guidelines + +| Track Type | Target Length | Notes | +|-----------|-------------|-------| +| Wave 1 (Global) | 500-1000 chars | Broad survey, needs exploration guidance | +| Wave 2 (Theory) | 1000-2000 chars | Requires mathematical rigor instructions | +| Wave 3 (Algorithm) | 1000-1500 chars | Needs pseudocode format guidance | +| Wave 4 (Module) | 800-1200 chars | Focused on code-algorithm mapping | +| Wave 5 (Local) | 800-1500 chars | Detailed precision/optimization criteria | +| Wave 6 (Integration) | 1500-2500 chars | Must synthesize all prior phases | diff --git a/.codex/skills/numerical-analysis-workflow/schemas/tasks-schema.md b/.codex/skills/numerical-analysis-workflow/schemas/tasks-schema.md new file mode 100644 index 00000000..bd6faacb --- /dev/null +++ b/.codex/skills/numerical-analysis-workflow/schemas/tasks-schema.md @@ -0,0 +1,162 @@ +# Numerical Analysis Workflow — CSV Schema + +## Master CSV: tasks.csv + +### Column Definitions + +#### Input Columns (Set by Decomposer) + +| Column | Type | Required | Description | Example | +|--------|------|----------|-------------|---------| +| `id` | string | Yes | Unique task identifier (T{wave}.{track}) | `"T2.1"` | +| `title` | string | Yes | Short task title | `"Mathematical Formulation"` | +| `description` | string | Yes | Detailed task description (self-contained) | `"Derive precise mathematical formulations..."` | +| `track_role` | string | Yes | Analysis role name | `"Mathematician"` | +| `analysis_dimension` | string | Yes | Analysis focus area | `"formula_derivation"` | +| `formula_refs` | string | No | References to formulas from earlier tasks (TaskID:formula_name;...) 
| `"T1.1:governing_eqs;T2.2:convergence_conds"` | +| `precision_req` | string | No | Required floating-point precision | `"double"` | +| `scope` | string | No | File/directory scope for analysis (glob) | `"src/solver/**"` | +| `deps` | string | No | Semicolon-separated dependency task IDs | `"T2.1;T2.2"` | +| `context_from` | string | No | Semicolon-separated task IDs for context | `"T1.1;T2.1"` | + +#### Computed Columns (Set by Wave Engine) + +| Column | Type | Description | Example | +|--------|------|-------------|---------| +| `wave` | integer | Wave number (1-6, fixed per diamond topology) | `3` | +| `prev_context` | string | Aggregated findings + formulas from context_from tasks (per-wave CSV only) | `"[T2.1] Weak form derived..."` | + +#### Output Columns (Set by Agent) + +| Column | Type | Description | Example | +|--------|------|-------------|---------| +| `status` | enum | `pending` → `completed` / `failed` / `skipped` | `"completed"` | +| `findings` | string | Key discoveries (max 500 chars) | `"Identified CFL condition..."` | +| `severity_distribution` | string | Issue counts by severity | `"Critical:0 High:2 Medium:3 Low:1"` | +| `latex_formulas` | string | Key LaTeX formulas (semicolon-separated) | `"\\Delta t \\leq \\frac{h}{c};\\kappa(A) = \\|A\\|\\|A^{-1}\\|"` | +| `doc_path` | string | Path to generated analysis document | `"docs/P3_Numerical_Stability_Report.md"` | +| `error` | string | Error message if failed | `""` | + +--- + +### Example Data + +```csv +id,title,description,track_role,analysis_dimension,formula_refs,precision_req,scope,deps,context_from,wave,status,findings,severity_distribution,latex_formulas,doc_path,error +"T1.1","Problem Domain Survey","Survey governing equations and mathematical models. Identify PDE types, boundary conditions, conservation laws, and physical domain.","Problem_Domain_Analyst","domain_modeling","","","src/**","","","1","completed","Identified Navier-Stokes equations with k-epsilon turbulence model. 
Incompressible flow assumption. No-slip boundary conditions.","Critical:0 High:0 Medium:1 Low:2","\\rho(\\frac{\\partial v}{\\partial t} + v \\cdot \\nabla v) = -\\nabla p + \\mu \\nabla^2 v","docs/P1_Domain_Survey.md","" +"T2.1","Mathematical Formulation","Derive precise mathematical formulations using LaTeX. Transform governing equations into weak forms suitable for FEM discretization.","Mathematician","formula_derivation","T1.1:governing_eqs","","src/**","T1.1","T1.1","2","completed","Weak form derived for NS equations. Galerkin formulation with inf-sup stable elements (Taylor-Hood P2/P1).","Critical:0 High:0 Medium:0 Low:1","\\int_\\Omega \\mu \\nabla u : \\nabla v \\, d\\Omega - \\int_\\Omega p \\nabla \\cdot v \\, d\\Omega = \\int_\\Omega f \\cdot v \\, d\\Omega","docs/P2_Mathematical_Formulation.md","" +"T3.2","Numerical Stability Report","Analyze numerical stability of selected algorithms. Evaluate condition numbers, error propagation characteristics, and precision requirements.","Stability_Analyst","stability_analysis","T2.1:weak_forms;T2.2:convergence_conds","double","src/solver/**","T2.1;T2.2","T2.1;T2.2","3","pending","","","","","" +``` + +--- + +### Column Lifecycle + +``` +Decomposer (Phase 1) Wave Engine (Phase 2) Agent (Execution) +───────────────────── ──────────────────── ───────────────── +id ───────────► id ──────────► id +title ───────────► title ──────────► (reads) +description ───────────► description ──────────► (reads) +track_role ───────────► track_role ──────────► (reads) +analysis_dimension ─────► analysis_dimension ────► (reads) +formula_refs ──────────► formula_refs ─────────► (reads) +precision_req ─────────► precision_req ─────────► (reads) +scope ───────────► scope ──────────► (reads) +deps ───────────► deps ──────────► (reads) +context_from───────────► context_from──────────► (reads) + wave ──────────► (reads) + prev_context ──────────► (reads) + status + findings + severity_distribution + latex_formulas + doc_path + error +``` + 
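The `deps` column flowing through this lifecycle can be sanity-checked before the wave engine runs. A minimal validation sketch, assuming tasks have already been parsed from the CSV (the helper name is illustrative, not part of the skill's toolset):

```javascript
// Pre-flight dependency validation sketch: unique IDs, known deps,
// no self-deps, and cycle detection via Kahn's algorithm.
function validateDeps(tasks) {
  const ids = new Set()
  for (const t of tasks) {
    if (ids.has(t.id)) throw new Error(`Duplicate task ID: ${t.id}`)
    ids.add(t.id)
  }
  const depsOf = t => (t.deps || '').split(';').filter(Boolean)
  for (const t of tasks) {
    for (const d of depsOf(t)) {
      if (d === t.id) throw new Error(`Self-dependency: ${t.id}`)
      if (!ids.has(d)) throw new Error(`Unknown dependency: ${d}`)
    }
  }
  // Kahn's algorithm: if the topological ordering misses tasks, a cycle exists
  const indegree = new Map(tasks.map(t => [t.id, depsOf(t).length]))
  const queue = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  let ordered = 0
  while (queue.length) {
    const done = queue.shift()
    ordered++
    for (const t of tasks) {
      if (!depsOf(t).includes(done)) continue
      indegree.set(t.id, indegree.get(t.id) - 1)
      if (indegree.get(t.id) === 0) queue.push(t.id)
    }
  }
  if (ordered < tasks.length) throw new Error('Circular dependency detected')
}
```

Any thrown message maps directly onto an error row in the Validation Rules table.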
+--- + +## Output Schema (JSON) + +Agent output via `report_agent_job_result`: + +```json +{ + "type": "object", + "properties": { + "id": { "type": "string", "description": "Task ID (T{wave}.{track})" }, + "status": { "type": "string", "enum": ["completed", "failed"] }, + "findings": { "type": "string", "description": "Key discoveries, max 500 chars" }, + "severity_distribution": { "type": "string", "description": "Critical:N High:N Medium:N Low:N" }, + "latex_formulas": { "type": "string", "description": "Key formulas in LaTeX, semicolon-separated" }, + "doc_path": { "type": "string", "description": "Path to generated analysis document" }, + "error": { "type": "string", "description": "Error message if failed" } + }, + "required": ["id", "status", "findings"] +} +``` + +--- + +## Discovery Types + +| Type | Dedup Key | Data Schema | Description | +|------|-----------|-------------|-------------| +| `governing_equation` | `eq_name` | `{eq_name, latex, domain, boundary_conditions}` | Governing equations found | +| `numerical_method` | `method_name` | `{method_name, type, order, stability_class}` | Numerical methods identified | +| `stability_issue` | `location` | `{location, condition_number, severity, description}` | Stability concerns | +| `convergence_property` | `method` | `{method, rate, order, conditions}` | Convergence properties | +| `precision_risk` | `location+operation` | `{location, operation, risk_type, recommendation}` | Float precision risks | +| `performance_bottleneck` | `location` | `{location, operation_count, memory_pattern, suggestion}` | Performance bottlenecks | +| `architecture_pattern` | `pattern_name` | `{pattern_name, files, description}` | Architecture patterns | +| `test_gap` | `component` | `{component, missing_coverage, priority}` | Missing test coverage | + +### Discovery NDJSON Format + +```jsonl 
+{"ts":"2026-03-04T10:00:00Z","worker":"T1.1","type":"governing_equation","data":{"eq_name":"Navier-Stokes","latex":"\\rho(\\frac{\\partial v}{\\partial t} + v \\cdot \\nabla v) = -\\nabla p + \\mu \\nabla^2 v","domain":"fluid_dynamics","boundary_conditions":"no-slip walls"}} +{"ts":"2026-03-04T10:05:00Z","worker":"T2.2","type":"convergence_property","data":{"method":"Galerkin FEM","rate":"optimal","order":"h^{k+1}","conditions":"quasi-uniform mesh"}} +{"ts":"2026-03-04T10:10:00Z","worker":"T3.2","type":"stability_issue","data":{"location":"src/solver/assembler.rs:142","condition_number":"1e12","severity":"High","description":"Ill-conditioned stiffness matrix"}} +{"ts":"2026-03-04T10:15:00Z","worker":"T5.3","type":"precision_risk","data":{"location":"src/solver/residual.rs:87","operation":"subtraction of nearly equal values","risk_type":"catastrophic_cancellation","recommendation":"Use compensated summation or reformulate"}} +``` + +--- + +## Validation Rules + +| Rule | Check | Error | +|------|-------|-------| +| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" | +| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" | +| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" | +| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" | +| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" | +| Description non-empty | Every task has description | "Empty description for task: {id}" | +| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" | +| Wave range | wave in {1..6} | "Invalid wave number: {wave}" | +| Track role valid | track_role matches known roles | "Unknown track_role: {role}" | +| Formula refs format | TaskID:formula_name pattern | "Malformed formula_refs: {value}" | + +### Analysis Dimension Values + +| Dimension | Used In Wave | Description | 
+|-----------|-------------|-------------|
+| `domain_modeling` | 1 | Physical/mathematical domain survey |
+| `architecture_analysis` | 1 | Software architecture analysis |
+| `validation_design` | 1 | Validation and benchmark strategy |
+| `formula_derivation` | 2 | Mathematical formulation and derivation |
+| `convergence_analysis` | 2 | Convergence theory and error bounds |
+| `complexity_analysis` | 2 | Computational complexity analysis |
+| `method_selection` | 3 | Numerical method selection and design |
+| `stability_analysis` | 3 | Numerical stability assessment |
+| `performance_modeling` | 3 | Performance prediction and modeling |
+| `implementation_analysis` | 4 | Module-level code analysis |
+| `data_structure_review` | 4 | Data structure and memory layout review |
+| `interface_analysis` | 4 | API contract and interface analysis |
+| `optimization` | 5 | Function-level performance optimization |
+| `edge_case_analysis` | 5 | Boundary and singularity handling |
+| `precision_audit` | 5 | Floating-point precision audit |
+| `integration_testing` | 6 | System integration testing |
+| `benchmarking` | 6 | Performance benchmarking |
+| `quality_assurance` | 6 | Final quality audit and synthesis |
diff --git a/.codex/skills/numerical-analysis-workflow/specs/analysis-dimensions.md b/.codex/skills/numerical-analysis-workflow/specs/analysis-dimensions.md
new file mode 100644
index 00000000..2d4229d9
--- /dev/null
+++ b/.codex/skills/numerical-analysis-workflow/specs/analysis-dimensions.md
@@ -0,0 +1,237 @@
+# Analysis Dimensions for Numerical Computation
+
+Defines the 18 analysis dimensions across the six analysis waves of the NADW workflow.
+
+## Purpose
+
+| Phase | Usage |
+|-------|-------|
+| Phase 1 (Decomposition) | Reference when assigning analysis_dimension to tasks |
+| Phase 2 (Execution) | Agents use to understand their analysis focus |
+| Phase 3 (Aggregation) | Organize findings by dimension |
+
+---
+
+## 1. 
Wave 1: Global Survey Dimensions + +### 1.1 Domain Modeling (`domain_modeling`) + +**Analyst Role**: Problem_Domain_Analyst + +**Focus Areas**: +- Governing equations (PDEs, ODEs, integral equations) +- Physical domain and boundary conditions +- Conservation laws and constitutive relations +- Problem classification (elliptic, parabolic, hyperbolic) +- Dimensional analysis and non-dimensionalization + +**Key Outputs**: +- Equation inventory with LaTeX notation +- Boundary condition catalog +- Problem classification matrix +- Physical parameter ranges + +**Formula Types to Identify**: +$$\frac{\partial u}{\partial t} + \mathcal{L}u = f \quad \text{(general PDE form)}$$ +$$u|_{\partial\Omega} = g \quad \text{(Dirichlet BC)}$$ +$$\frac{\partial u}{\partial n}|_{\partial\Omega} = h \quad \text{(Neumann BC)}$$ + +### 1.2 Architecture Analysis (`architecture_analysis`) + +**Analyst Role**: Software_Architect + +**Focus Areas**: +- Module decomposition and dependency graph +- Data flow between computational stages +- I/O patterns (mesh input, solution output, checkpointing) +- Parallelism strategy (MPI, OpenMP, GPU) +- Build system and dependency management + +**Key Outputs**: +- High-level component diagram +- Data flow diagram +- Technology stack inventory +- Parallelism strategy assessment + +### 1.3 Validation Design (`validation_design`) + +**Analyst Role**: Validation_Strategist + +**Focus Areas**: +- Benchmark cases with known analytical solutions +- Manufactured solution methodology +- Grid convergence study design +- Key Performance Indicators (KPIs) +- Acceptance criteria definition + +**Key Outputs**: +- Benchmark case catalog +- Validation methodology matrix +- KPI definitions with targets +- Acceptance test specifications + +--- + +## 2. 
Wave 2: Theoretical Foundation Dimensions + +### 2.1 Formula Derivation (`formula_derivation`) + +**Analyst Role**: Mathematician + +**Focus Areas**: +- Strong-to-weak form transformation +- Discretization schemes (FEM, FDM, FVM, spectral) +- Variational formulations +- Linearization techniques (Newton, Picard) +- Stabilization methods (SUPG, GLS, VMS) + +**Key Formula Templates**: +$$\text{Weak form: } a(u,v) = l(v) \quad \forall v \in V_h$$ +$$a(u,v) = \int_\Omega \nabla u \cdot \nabla v \, d\Omega$$ +$$l(v) = \int_\Omega f v \, d\Omega + \int_{\Gamma_N} g v \, dS$$ + +### 2.2 Convergence Analysis (`convergence_analysis`) + +**Analyst Role**: Convergence_Analyst + +**Focus Areas**: +- A priori error estimates +- A posteriori error estimators +- Convergence order verification +- Lax equivalence theorem applicability +- CFL conditions for time-dependent problems + +**Key Formula Templates**: +$$\|u - u_h\|_{L^2} \leq C h^{k+1} |u|_{H^{k+1}} \quad \text{(optimal L2 rate)}$$ +$$\|u - u_h\|_{H^1} \leq C h^k |u|_{H^{k+1}} \quad \text{(optimal H1 rate)}$$ +$$\Delta t \leq \frac{C h}{\|v\|_\infty} \quad \text{(CFL condition)}$$ + +### 2.3 Complexity Analysis (`complexity_analysis`) + +**Analyst Role**: Complexity_Analyst + +**Focus Areas**: +- Assembly operation counts +- Solver complexity (direct vs iterative) +- Preconditioner cost analysis +- Memory scaling with problem size +- Communication overhead in parallel settings + +**Key Formula Templates**: +$$T_{assembly} = O(N_{elem} \cdot p^{2d}) \quad \text{(FEM assembly)}$$ +$$T_{solve} = O(N^{3/2}) \quad \text{(2D direct)}, \quad O(N \log N) \quad \text{(multigrid)}$$ +$$M_{storage} = O(nnz) \quad \text{(sparse storage)}$$ + +--- + +## 3. 
Wave 3: Algorithm Design Dimensions + +### 3.1 Method Selection (`method_selection`) + +**Analyst Role**: Algorithm_Designer + +**Focus Areas**: +- Spatial discretization method selection +- Time integration scheme selection +- Linear/nonlinear solver selection +- Preconditioner selection +- Mesh generation strategy + +**Decision Criteria**: + +| Criterion | Weight | Metrics | +|-----------|--------|---------| +| Accuracy order | High | Convergence rate, error bounds | +| Stability | High | Unconditional vs conditional, CFL | +| Efficiency | Medium | FLOPS per DOF, memory per DOF | +| Parallelizability | Medium | Communication-to-computation ratio | +| Implementation complexity | Low | Lines of code, library availability | + +### 3.2 Stability Analysis (`stability_analysis`) + +**Analyst Role**: Stability_Analyst + +**Focus Areas**: +- Von Neumann stability analysis +- Matrix condition numbers +- Amplification factors +- Inf-sup (LBB) stability for mixed methods +- Mesh-dependent stability bounds + +**Key Formula Templates**: +$$\kappa(A) = \|A\| \cdot \|A^{-1}\| \quad \text{(condition number)}$$ +$$|g(\xi)| \leq 1 \quad \forall \xi \quad \text{(von Neumann stability)}$$ +$$\inf_{q_h \in Q_h} \sup_{v_h \in V_h} \frac{b(v_h, q_h)}{\|v_h\| \|q_h\|} \geq \beta > 0 \quad \text{(inf-sup)}$$ + +### 3.3 Performance Modeling (`performance_modeling`) + +**Analyst Role**: Performance_Modeler + +**Focus Areas**: +- Arithmetic intensity (FLOPS/byte) +- Roofline model analysis +- Strong/weak scaling prediction +- Memory bandwidth bottleneck identification +- Cache utilization estimates + +**Key Formula Templates**: +$$AI = \frac{\text{FLOPS}}{\text{Bytes transferred}} \quad \text{(arithmetic intensity)}$$ +$$P_{max} = \min(P_{peak}, AI \times BW_{mem}) \quad \text{(roofline bound)}$$ +$$E_{parallel}(p) = \frac{T_1}{p \cdot T_p} \quad \text{(parallel efficiency)}$$ + +--- + +## 4. 
Wave 4: Module Implementation Dimensions + +### 4.1 Implementation Analysis (`implementation_analysis`) + +**Focus**: Algorithm-to-code mapping, implementation correctness, coding patterns + +### 4.2 Data Structure Review (`data_structure_review`) + +**Focus**: Sparse matrix formats (CSR/CSC/COO), mesh data structures, memory layout optimization + +### 4.3 Interface Analysis (`interface_analysis`) + +**Focus**: Module APIs, data contracts between components, error handling patterns + +--- + +## 5. Wave 5: Local Function-Level Dimensions + +### 5.1 Optimization (`optimization`) + +**Focus**: Hotspot identification, vectorization opportunities, cache optimization, loop restructuring + +### 5.2 Edge Case Analysis (`edge_case_analysis`) + +**Focus**: Division by zero, matrix singularity, degenerate mesh elements, boundary layer singularities + +### 5.3 Precision Audit (`precision_audit`) + +**Focus**: Catastrophic cancellation, accumulation errors, mixed-precision opportunities, compensated algorithms + +**Critical Patterns to Detect**: + +| Pattern | Risk | Mitigation | +|---------|------|-----------| +| `a - b` where `a ≈ b` | Catastrophic cancellation | Reformulate or use higher precision | +| `sum += small_value` in loop | Accumulation error | Kahan summation | +| `1.0/x` where `x → 0` | Overflow/loss of significance | Guard with threshold | +| Mixed float32/float64 | Silent precision loss | Explicit type annotations | + +--- + +## 6. 
Wave 6: Integration & QA Dimensions + +### 6.1 Integration Testing (`integration_testing`) + +**Focus**: End-to-end test design, regression suite, manufactured solutions verification + +### 6.2 Benchmarking (`benchmarking`) + +**Focus**: Actual vs predicted performance, scalability tests, profiling results + +### 6.3 Quality Assurance (`quality_assurance`) + +**Focus**: All-phase synthesis, risk matrix, improvement roadmap, final recommendations diff --git a/.codex/skills/numerical-analysis-workflow/specs/phase-topology.md b/.codex/skills/numerical-analysis-workflow/specs/phase-topology.md new file mode 100644 index 00000000..bf7e0b33 --- /dev/null +++ b/.codex/skills/numerical-analysis-workflow/specs/phase-topology.md @@ -0,0 +1,214 @@ +# Phase Topology — Diamond Deep Tree + +Wave coordination patterns for the Numerical Analysis Diamond Workflow (NADW). + +## Purpose + +| Phase | Usage | +|-------|-------| +| Phase 1 (Decomposition) | Reference when assigning waves and dependencies | +| Phase 2 (Execution) | Context flow between waves | +| Phase 3 (Aggregation) | Structure the final report by topology | + +--- + +## 1. Topology Overview + +The NADW uses a **Staged Diamond** topology — six sequential waves, each with 3 parallel tracks. Context flows cumulatively from earlier waves to later ones. + +``` + Wave 1: [T1.1] [T1.2] [T1.3] Global Survey (3 parallel) + ↓ Context P1 + Wave 2: [T2.1] [T2.2] [T2.3] Theory (3 parallel) + ↓ Context P1+P2 + Wave 3: [T3.1] [T3.2] [T3.3] Algorithm (3 parallel) + ↓ Context P1+P2+P3 + Wave 4: [T4.1] [T4.2] [T4.3] Module (3 parallel) + ↓ Context P1-P4 + Wave 5: [T5.1] [T5.2] [T5.3] Local (3 parallel) + ↓ Context P1-P5 + Wave 6: [T6.1] [T6.2] [T6.3] Integration (3 parallel) +``` + +--- + +## 2. 
Wave Definitions + +### Wave 1: Global Survey + +| Property | Value | +|----------|-------| +| Phase Name | Global Survey | +| Track Count | 3 | +| Dependencies | None (entry wave) | +| Context Input | Project codebase only | +| Context Output | Governing equations, architecture map, validation KPIs | +| Max Parallelism | 3 | + +**Tracks**: +| ID | Role | Dimension | Scope | +|----|------|-----------|-------| +| T1.1 | Problem_Domain_Analyst | domain_modeling | Full project | +| T1.2 | Software_Architect | architecture_analysis | Full project | +| T1.3 | Validation_Strategist | validation_design | Full project | + +### Wave 2: Theoretical Foundations + +| Property | Value | +|----------|-------| +| Phase Name | Theoretical Foundations | +| Track Count | 3 | +| Dependencies | Wave 1 | +| Context Input | Context Package P1 | +| Context Output | LaTeX formulas, convergence theorems, complexity bounds | +| Max Parallelism | 3 | + +**Tracks**: +| ID | Role | Dimension | Deps | context_from | +|----|------|-----------|------|-------------| +| T2.1 | Mathematician | formula_derivation | T1.1 | T1.1 | +| T2.2 | Convergence_Analyst | convergence_analysis | T1.1 | T1.1 | +| T2.3 | Complexity_Analyst | complexity_analysis | T1.1;T1.2 | T1.1;T1.2 | + +### Wave 3: Algorithm Design & Stability + +| Property | Value | +|----------|-------| +| Phase Name | Algorithm Design | +| Track Count | 3 | +| Dependencies | Wave 2 | +| Context Input | Context Package P1+P2 | +| Context Output | Pseudocode, stability conditions, performance model | +| Max Parallelism | 3 | + +**Tracks**: +| ID | Role | Dimension | Deps | context_from | +|----|------|-----------|------|-------------| +| T3.1 | Algorithm_Designer | method_selection | T2.1 | T2.1;T2.2;T2.3 | +| T3.2 | Stability_Analyst | stability_analysis | T2.1;T2.2 | T2.1;T2.2 | +| T3.3 | Performance_Modeler | performance_modeling | T2.3 | T1.2;T2.3 | + +### Wave 4: Module Implementation + +| Property | Value | +|----------|-------| +| Phase 
Name | Module Implementation | +| Track Count | 3 | +| Dependencies | Wave 3 | +| Context Input | Context Package P1-P3 | +| Context Output | Code-algorithm mapping, data structure decisions, API contracts | +| Max Parallelism | 3 | + +**Tracks**: +| ID | Role | Dimension | Deps | context_from | +|----|------|-----------|------|-------------| +| T4.1 | Module_Implementer | implementation_analysis | T3.1 | T1.2;T3.1 | +| T4.2 | Data_Structure_Designer | data_structure_review | T3.1;T3.3 | T2.3;T3.1;T3.3 | +| T4.3 | Interface_Analyst | interface_analysis | T3.1 | T1.2;T3.1 | + +### Wave 5: Local Function-Level + +| Property | Value | +|----------|-------| +| Phase Name | Local Function-Level | +| Track Count | 3 | +| Dependencies | Wave 4 | +| Context Input | Context Package P1-P4 | +| Context Output | Optimization recommendations, edge case catalog, precision risk matrix | +| Max Parallelism | 3 | + +**Tracks**: +| ID | Role | Dimension | Deps | context_from | +|----|------|-----------|------|-------------| +| T5.1 | Code_Optimizer | optimization | T4.1 | T3.3;T4.1 | +| T5.2 | Edge_Case_Analyst | edge_case_analysis | T4.1 | T1.1;T3.2;T4.1 | +| T5.3 | Precision_Auditor | precision_audit | T4.1;T4.2 | T3.2;T4.1;T4.2 | + +### Wave 6: Integration & QA + +| Property | Value | +|----------|-------| +| Phase Name | Integration & QA | +| Track Count | 3 | +| Dependencies | Wave 5 | +| Context Input | Context Package P1-P5 (ALL cumulative) | +| Context Output | Final test plan, benchmark report, QA assessment | +| Max Parallelism | 3 | + +**Tracks**: +| ID | Role | Dimension | Deps | context_from | +|----|------|-----------|------|-------------| +| T6.1 | Integration_Tester | integration_testing | T5.1;T5.2 | T1.3;T5.1;T5.2 | +| T6.2 | Benchmark_Engineer | benchmarking | T5.1 | T1.3;T3.3;T5.1 | +| T6.3 | QA_Auditor | quality_assurance | T5.1;T5.2;T5.3 | T1.1;T2.1;T3.1;T4.1;T5.1;T5.2;T5.3 | + +--- + +## 3. 
Context Flow Map

+### Directed Context (context_from column)
+
+```
+T1.1 ──► T2.1, T2.2, T2.3, T5.2, T6.3
+T1.2 ──► T2.3, T3.3, T4.1, T4.3
+T1.3 ──► T6.1, T6.2
+T2.1 ──► T3.1, T3.2, T6.3
+T2.2 ──► T3.1, T3.2
+T2.3 ──► T3.1, T3.3, T4.2
+T3.1 ──► T4.1, T4.2, T4.3, T6.3
+T3.2 ──► T5.2, T5.3
+T3.3 ──► T4.2, T5.1, T6.2
+T4.1 ──► T5.1, T5.2, T5.3, T6.3
+T4.2 ──► T5.3
+T5.1 ──► T6.1, T6.2, T6.3
+T5.2 ──► T6.1, T6.3
+T5.3 ──► T6.3
+```
+
+### Broadcast Context (discoveries.ndjson)
+
+All agents read/append to the same discoveries.ndjson. Key discovery types flow across waves:
+
+```
+Wave 1: governing_equation, architecture_pattern ──► all subsequent waves
+Wave 2: convergence_property ──► Wave 3-6
+Wave 3: stability_issue, numerical_method ──► Wave 4-6
+Wave 4: (implementation findings) ──► Wave 5-6
+Wave 5: precision_risk, performance_bottleneck ──► Wave 6
+```
+
+---
+
+## 4. Perspective Reuse Matrix
+
+How each wave's output is reused by later waves:
+
+| Source | P2 Reuse | P3 Reuse | P4 Reuse | P5 Reuse | P6 Reuse |
+|--------|---------|---------|---------|---------|---------|
+| **P1 Equations** | Formalize → LaTeX | Constrain methods | Code-eq mapping | Singularity sources | Correctness baseline |
+| **P1 Architecture** | Constrain discretization | Parallel strategy | Module boundaries | Hotspot location | Integration scope |
+| **P1 Validation** | - | Benchmark selection | Test criteria | Edge case sources | Final validation |
+| **P2 Formulas** | - | Parameter constraints | Loop termination | Precision requirements | Theory verification |
+| **P2 Convergence** | - | Mesh refinement | Iteration control | Error tolerance | Rate verification |
+| **P2 Complexity** | - | Performance baseline | Data structure choice | Optimization targets | Perf comparison |
+| **P3 Pseudocode** | - | - | Impl reference | Line-by-line audit | Regression baseline |
+| **P3 Stability** | - | - | Precision selection | Cancellation detection | Numerical verification |
+| **P3 Performance** | - | - | Memory layout | 
Vectorization targets | Benchmark targets | +| **P4 Modules** | - | - | - | Function-level focus | Module test plan | +| **P5 Optimization** | - | - | - | - | Performance tests | +| **P5 Edge Cases** | - | - | - | - | Regression tests | +| **P5 Precision** | - | - | - | - | Numerical tests | + +--- + +## 5. Diamond Properties + +| Property | Value | +|----------|-------| +| Total Waves | 6 | +| Total Tasks | 18 (3 per wave) | +| Max Parallelism per Wave | 3 | +| Widest Context Fan-in | T6.3 (receives from 7 tasks) | +| Deepest Dependency Chain | T1.1 → T2.1 → T3.1 → T4.1 → T5.1 → T6.1 (depth 6) | +| Context Accumulation | Cumulative (each wave adds to previous context) | +| Topology Type | Staged Parallel with Diamond convergence at Wave 6 | diff --git a/.codex/skills/numerical-analysis-workflow/specs/quality-standards.md b/.codex/skills/numerical-analysis-workflow/specs/quality-standards.md new file mode 100644 index 00000000..51bd0417 --- /dev/null +++ b/.codex/skills/numerical-analysis-workflow/specs/quality-standards.md @@ -0,0 +1,173 @@ +# Quality Standards for Numerical Analysis Workflow + +Quality assessment criteria for NADW analysis reports. + +## When to Use + +| Phase | Usage | Section | +|-------|-------|---------| +| Phase 2 (Execution) | Guide agent analysis quality | All dimensions | +| Phase 3 (Aggregation) | Score generated reports | Quality Gates | + +--- + +## Quality Dimensions + +### 1. 
Mathematical Rigor (30%) + +| Score | Criteria | +|-------|----------| +| 100% | All formulas correct, properly derived, LaTeX well-formatted, error bounds proven | +| 80% | Formulas correct, some derivation steps skipped, bounds stated without full proof | +| 60% | Key formulas present, some notation inconsistencies, bounds estimated | +| 40% | Formulas incomplete or contain errors | +| 0% | No mathematical content | + +**Checklist**: +- [ ] Governing equations identified and written in LaTeX +- [ ] Weak forms correctly derived from strong forms +- [ ] Convergence order stated with conditions +- [ ] Error bounds provided (a priori or a posteriori) +- [ ] CFL/stability conditions explicitly stated +- [ ] Condition numbers estimated for key matrices +- [ ] Complexity bounds (time and space) determined +- [ ] LaTeX notation consistent throughout all documents + +### 2. Code-Theory Mapping (25%) + +| Score | Criteria | +|-------|----------| +| 100% | Every algorithm mapped to code with file:line references, data structures justified | +| 80% | Major algorithms mapped, most references accurate | +| 60% | Key mappings present, some code references missing | +| 40% | Superficial mapping, few code references | +| 0% | No code-theory connection | + +**Checklist**: +- [ ] Each numerical method traced to implementing function/module +- [ ] Data structures justified against algorithm requirements +- [ ] Sparse matrix format matched to access patterns +- [ ] Time integration scheme identified in code +- [ ] Boundary condition implementation verified +- [ ] Solver configuration traced to convergence requirements +- [ ] Preconditioner choice justified + +### 3. 
Numerical Quality Assessment (25%) + +| Score | Criteria | +|-------|----------| +| 100% | Stability fully analyzed, precision risks cataloged, all edge cases covered | +| 80% | Stability assessed, major precision risks found, common edge cases covered | +| 60% | Basic stability check, some precision risks, incomplete edge cases | +| 40% | Superficial stability mention, few precision issues found | +| 0% | No numerical quality analysis | + +**Checklist**: +- [ ] Condition numbers estimated for key operations +- [ ] Catastrophic cancellation risks identified with file:line +- [ ] Accumulation error potential assessed +- [ ] Float precision choices justified (float32 vs float64) +- [ ] Edge cases cataloged (singularities, degenerate inputs) +- [ ] Overflow/underflow risks identified +- [ ] Mixed-precision operations flagged + +### 4. Cross-Phase Coherence (20%) + +| Score | Criteria | +|-------|----------| +| 100% | All 6 phases connected, findings build on each other, no contradictions | +| 80% | Most phases connected, minor gaps in context propagation | +| 60% | Key connections present, some phases isolated | +| 40% | Limited cross-referencing between phases | +| 0% | Phases completely isolated | + +**Checklist**: +- [ ] Wave 2 formulas reference Wave 1 governing equations +- [ ] Wave 3 algorithms justified by Wave 2 theory +- [ ] Wave 4 implementation verified against Wave 3 pseudocode +- [ ] Wave 5 optimization targets from Wave 3 performance model +- [ ] Wave 5 precision requirements from Wave 2/3 analysis +- [ ] Wave 6 test plan covers findings from all prior waves +- [ ] Wave 6 benchmarks compare against Wave 3 predictions +- [ ] No contradictory findings between phases +- [ ] Discoveries board used for cross-track sharing + +--- + +## Quality Gates (Per-Wave) + +| Wave | Phase | Gate Criteria | Required Tracks | +|------|-------|--------------|-----------------| +| 1 | Global Survey | Core model identified + architecture mapped + ≥1 KPI | 3/3 completed | +| 2 
| Theory | Key formulas LaTeX'd + convergence stated + complexity determined | 3/3 completed | +| 3 | Algorithm | Pseudocode produced + stability assessed + performance predicted | ≥2/3 completed | +| 4 | Module | Code-algorithm mapping + data structures reviewed + APIs documented | ≥2/3 completed | +| 5 | Local | Hotspots identified + edge cases cataloged + precision risks flagged | ≥2/3 completed | +| 6 | Integration | Test plan complete + benchmarks planned + QA report synthesized | 3/3 completed | + +--- + +## Overall Quality Gates + +| Gate | Threshold | Action | +|------|-----------|--------| +| PASS | >= 80% across all dimensions | Report ready for delivery | +| REVIEW | 70-79% in any dimension | Flag dimension for improvement, user decides | +| FAIL | < 70% in any dimension | Block delivery, identify gaps, suggest re-analysis | + +--- + +## Issue Classification + +### Errors (Must Fix) + +- Missing governing equation identification (Wave 1) +- LaTeX formulas with mathematical errors (Wave 2) +- Algorithm pseudocode that doesn't match convergence requirements (Wave 3) +- Code references to non-existent files/functions (Wave 4) +- Unidentified catastrophic cancellation in critical path (Wave 5) +- Test plan that doesn't cover identified stability issues (Wave 6) +- Contradictory findings between phases +- Missing context propagation (later phase ignores earlier findings) + +### Warnings (Should Fix) + +- Formulas without derivation steps +- Convergence bounds stated without proof or reference +- Missing edge case for known singularity +- Performance model without memory bandwidth consideration +- Data structure choice not justified +- Test plan without manufactured solution verification +- Benchmark without theoretical baseline comparison + +### Notes (Nice to Have) + +- Additional bibliography references +- Alternative algorithm comparisons +- Extended precision sensitivity analysis +- Scaling prediction beyond current problem size +- Code style or naming 
convention suggestions + +--- + +## Severity Levels for Findings + +| Severity | Definition | Example | +|----------|-----------|---------| +| **Critical** | Incorrect results or numerical failure | Wrong boundary condition → divergent solution | +| **High** | Significant accuracy or performance degradation | Condition number 10^15 → double precision insufficient | +| **Medium** | Suboptimal but functional | O(N^2) where O(N log N) is possible | +| **Low** | Minor improvement opportunity | Unnecessary array copy in non-critical path | + +--- + +## Document Quality Metrics + +| Metric | Target | Measurement | +|--------|--------|-------------| +| Formula coverage | ≥ 90% of core equations in LaTeX | Count identified vs documented | +| Code reference density | ≥ 1 file:line per finding | Count references per finding | +| Cross-phase references | ≥ 3 per document (Waves 3-6) | Count cross-references | +| Severity distribution | ≥ 1 per severity level | Count per level | +| Discovery board contributions | ≥ 2 per track | Count NDJSON entries per worker | +| Perspective package | Present in every document | Boolean per document | diff --git a/ccw/src/tools/team-msg.ts b/ccw/src/tools/team-msg.ts index c5bb4435..42757fc2 100644 --- a/ccw/src/tools/team-msg.ts +++ b/ccw/src/tools/team-msg.ts @@ -77,8 +77,8 @@ export function inferTeamStatus(team: string): TeamMeta['status'] { export function getEffectiveTeamMeta(team: string): TeamMeta { const meta = readTeamMeta(team); if (meta) { - // Enrich from legacy files if role_state/pipeline_mode missing - if (!meta.role_state || !meta.pipeline_mode) { + // Enrich from legacy files if key fields are missing + if (!meta.role_state || !meta.pipeline_mode || !meta.roles || !meta.pipeline_stages) { const legacyData = readLegacyFiles(team); if (!meta.pipeline_mode && legacyData.pipeline_mode) { meta.pipeline_mode = legacyData.pipeline_mode; @@ -92,6 +92,9 @@ export function getEffectiveTeamMeta(team: string): TeamMeta { if 
(!meta.team_name && legacyData.team_name) { meta.team_name = legacyData.team_name; } + if (!meta.roles && legacyData.roles) { + meta.roles = legacyData.roles; + } } return meta; } @@ -155,7 +158,14 @@ function readLegacyFiles(team: string): Partial { if (!result.pipeline_stages && session.pipeline_stages) result.pipeline_stages = session.pipeline_stages; if (session.team_name) result.team_name = session.team_name; if (session.task_description) result.task_description = session.task_description; - if (session.roles) result.roles = session.roles; + // Handle both string[] and { name: string }[] formats + if (session.roles && Array.isArray(session.roles)) { + if (typeof session.roles[0] === 'string') { + result.roles = session.roles; + } else if (typeof session.roles[0] === 'object' && session.roles[0] !== null && 'name' in session.roles[0]) { + result.roles = session.roles.map((r: { name: string }) => r.name); + } + } } catch { /* ignore parse errors */ } } diff --git a/docs/commands/claude/index.md b/docs/commands/claude/index.md index ed5d94d9..90bc3091 100644 --- a/docs/commands/claude/index.md +++ b/docs/commands/claude/index.md @@ -227,6 +227,6 @@ ccw cli -p "Review code quality" --tool gemini --mode analysis --rule analysis-r ## Related Documentation -- [Skills Reference](../skills/) +- [Skills Reference](../../skills/) - [CLI Invocation System](../../features/cli.md) - [Workflow Guide](../../guide/ch04-workflow-basics.md) diff --git a/docs/commands/claude/issue.md b/docs/commands/claude/issue.md index 3c4edf2c..f3ccf3c5 100644 --- a/docs/commands/claude/issue.md +++ b/docs/commands/claude/issue.md @@ -291,4 +291,4 @@ graph TD - [Workflow Commands](./workflow.md) - [Core Orchestration](./core-orchestration.md) -- [Team System](../../features/) +- [Team System](../../features/spec) diff --git a/docs/commands/claude/ui-design.md b/docs/commands/claude/ui-design.md index b3462923..3dadd9ca 100644 --- a/docs/commands/claude/ui-design.md +++ 
b/docs/commands/claude/ui-design.md @@ -304,4 +304,4 @@ graph TD - [Core Orchestration](./core-orchestration.md) - [Workflow Commands](./workflow.md) -- [Brainstorming](../../features/) +- [Brainstorming](../../features/spec) diff --git a/docs/commands/codex/index.md b/docs/commands/codex/index.md index 1829ef5e..d75c414d 100644 --- a/docs/commands/codex/index.md +++ b/docs/commands/codex/index.md @@ -53,4 +53,4 @@ CONSTRAINTS: [focus constraints] - [Claude Commands](../claude/) - [CLI Invocation System](../../features/cli.md) -- [Code Review](../../features/) +- [Code Review](../../features/spec) diff --git a/docs/commands/codex/review.md b/docs/commands/codex/review.md index 266695d7..dee10369 100644 --- a/docs/commands/codex/review.md +++ b/docs/commands/codex/review.md @@ -194,4 +194,4 @@ git log --oneline -10 - [Prep Prompts](./prep.md) - [CLI Tool Commands](../claude/cli.md) -- [Code Review](../../features/) +- [Code Review](../../features/spec) diff --git a/docs/components/index.md b/docs/components/index.md index 17a099b1..fd2735a8 100644 --- a/docs/components/index.md +++ b/docs/components/index.md @@ -33,53 +33,53 @@ |-----------|-------------|-------| | [Button](/components/ui/button) | Clickable action buttons with variants and sizes | `variant`, `size`, `asChild` | | [Input](/components/ui/input) | Text input field | `error` | -| [Textarea](/components/ui/textarea) | Multi-line text input | `error` | +| Textarea | Multi-line text input | `error` | | [Select](/components/ui/select) | Dropdown selection (Radix) | Select components | | [Checkbox](/components/ui/checkbox) | Boolean checkbox (Radix) | `checked`, `onCheckedChange` | -| [Switch](/components/ui/switch) | Toggle switch | `checked`, `onCheckedChange` | +| Switch | Toggle switch | `checked`, `onCheckedChange` | ### Layout Components | Component | Description | Props | |-----------|-------------|-------| | [Card](/components/ui/card) | Content container with header/footer | Nested components | -| 
[Separator](/components/ui/separator) | Visual divider | `orientation` | -| [ScrollArea](/components/ui/scroll-area) | Custom scrollbar container | - | +| Separator | Visual divider | `orientation` | +| ScrollArea | Custom scrollbar container | - | ### Feedback Components | Component | Description | Props | |-----------|-------------|-------| | [Badge](/components/ui/badge) | Status indicator label | `variant` | -| [Progress](/components/ui/progress) | Progress bar | `value` | -| [Alert](/components/ui/alert) | Notification message | `variant` | -| [Toast](/components/ui/toast) | Temporary notification (Radix) | Toast components | +| Progress | Progress bar | `value` | +| Alert | Notification message | `variant` | +| Toast | Temporary notification (Radix) | Toast components | ### Navigation Components | Component | Description | Props | |-----------|-------------|-------| -| [Tabs](/components/ui/tabs) | Tab navigation (Radix) | Tabs components | -| [TabsNavigation](/components/ui/tabs-navigation) | Custom tab bar | `tabs`, `value`, `onValueChange` | -| [Breadcrumb](/components/ui/breadcrumb) | Navigation breadcrumb | Breadcrumb components | +| Tabs | Tab navigation (Radix) | Tabs components | +| TabsNavigation | Custom tab bar | `tabs`, `value`, `onValueChange` | +| Breadcrumb | Navigation breadcrumb | Breadcrumb components | ### Overlay Components | Component | Description | Props | |-----------|-------------|-------| -| [Dialog](/components/ui/dialog) | Modal dialog (Radix) | `open`, `onOpenChange` | -| [Drawer](/components/ui/drawer) | Side panel (Radix) | `open`, `onOpenChange` | -| [Dropdown Menu](/components/ui/dropdown) | Context menu (Radix) | Dropdown components | -| [Popover](/components/ui/popover) | Floating content (Radix) | `open`, `onOpenChange` | -| [Tooltip](/components/ui/tooltip) | Hover tooltip (Radix) | `content` | -| [AlertDialog](/components/ui/alert-dialog) | Confirmation dialog (Radix) | Dialog components | +| Dialog | Modal dialog (Radix) 
| `open`, `onOpenChange` | +| Drawer | Side panel (Radix) | `open`, `onOpenChange` | +| Dropdown Menu | Context menu (Radix) | Dropdown components | +| Popover | Floating content (Radix) | `open`, `onOpenChange` | +| Tooltip | Hover tooltip (Radix) | `content` | +| AlertDialog | Confirmation dialog (Radix) | Dialog components | ### Disclosure Components | Component | Description | Props | |-----------|-------------|-------| -| [Collapsible](/components/ui/collapsible) | Expand/collapse content (Radix) | `open`, `onOpenChange` | -| [Accordion](/components/ui/accordion) | Collapsible sections (Radix) | Accordion components | +| Collapsible | Expand/collapse content (Radix) | `open`, `onOpenChange` | +| Accordion | Collapsible sections (Radix) | Accordion components | --- diff --git a/docs/components/ui/badge.md b/docs/components/ui/badge.md index 45d66b25..c508aebf 100644 --- a/docs/components/ui/badge.md +++ b/docs/components/ui/badge.md @@ -116,4 +116,4 @@ Badge with brand gradient background for featured or highlighted items. 
- [Card](/components/ui/card) - [Button](/components/ui/button) -- [Avatar](/components/ui/avatar) +- Avatar diff --git a/docs/components/ui/button.md b/docs/components/ui/button.md index 44f66c94..0b72953f 100644 --- a/docs/components/ui/button.md +++ b/docs/components/ui/button.md @@ -77,4 +77,4 @@ Gradient Primary buttons use the primary theme gradient with an enhanced glow ef - [Input](/components/ui/input) - [Select](/components/ui/select) -- [Dialog](/components/ui/dialog) +- Dialog diff --git a/docs/components/ui/card.md b/docs/components/ui/card.md index 7ddb8472..1ba99584 100644 --- a/docs/components/ui/card.md +++ b/docs/components/ui/card.md @@ -104,4 +104,4 @@ All Card components accept standard HTML div attributes: - [Button](/components/ui/button) - [Badge](/components/ui/badge) -- [Separator](/components/ui/separator) +- Separator diff --git a/docs/components/ui/checkbox.md b/docs/components/ui/checkbox.md index baab0434..776232bd 100644 --- a/docs/components/ui/checkbox.md +++ b/docs/components/ui/checkbox.md @@ -117,4 +117,4 @@ const state = ref('indeterminate') - [Input](/components/ui/input) - [Select](/components/ui/select) -- [Radio Group](/components/ui/radio-group) +- Radio Group diff --git a/docs/features/discovery.md b/docs/features/discovery.md index 3b6d25e8..6d057bd3 100644 --- a/docs/features/discovery.md +++ b/docs/features/discovery.md @@ -160,4 +160,4 @@ const { sessions, findings, /* ... 
*/ } = useIssueDiscovery({ - [Issue Hub](/features/issue-hub) - Unified issues, queue, and discovery management - [Queue](/features/queue) - Issue execution queue management - [Issues Panel](/features/issue-hub) - Issue list and GitHub sync -- [Terminal Dashboard](/features/terminal-dashboard) - Real-time session monitoring +- [Terminal Dashboard](/features/terminal) - Real-time session monitoring diff --git a/docs/features/sessions.md b/docs/features/sessions.md index 25b6639e..987ae89f 100644 --- a/docs/features/sessions.md +++ b/docs/features/sessions.md @@ -183,5 +183,5 @@ interface SessionsFilter { - [Dashboard](/features/dashboard) - Overview of all sessions with statistics - [Lite Tasks](/features/tasks-history) - Lite-plan and multi-cli-plan task management -- [Terminal Dashboard](/features/terminal-dashboard) - Terminal-first session monitoring +- [Terminal Dashboard](/features/terminal) - Terminal-first session monitoring - [Orchestrator](/features/orchestrator) - Workflow template editor diff --git a/docs/guide/ccwmcp.md b/docs/guide/ccwmcp.md index 9f4a51eb..d48d1963 100644 --- a/docs/guide/ccwmcp.md +++ b/docs/guide/ccwmcp.md @@ -291,4 +291,4 @@ edit_file( - [MCP Setup Guide](./mcp-setup.md) - Configure external MCP servers - [Installation](./installation.md) - CCW installation guide -- [Team Workflows](../skills/team-workflows.md) - team-lifecycle documentation +- [Team Workflows](/workflows/teams) - team-lifecycle documentation diff --git a/docs/skills/_shared/SKILL-DESIGN-SPEC.md b/docs/skills/_shared/SKILL-DESIGN-SPEC.md new file mode 100644 index 00000000..68f9d0aa --- /dev/null +++ b/docs/skills/_shared/SKILL-DESIGN-SPEC.md @@ -0,0 +1,13 @@ +# CCW Skills Design Specification + +> **Note**: This document is a placeholder. The full specification is being developed. + +## Overview + +This document defines the design standards and conventions for CCW skills. 
+ +## Related Documents + +- [Document Standards](/skills/specs/document-standards) +- [Quality Gates](/skills/specs/quality-gates) +- [Reference Docs Spec](/skills/specs/reference-docs-spec) diff --git a/docs/skills/claude-index.md b/docs/skills/claude-index.md index a6f3186a..ecefa5fe 100644 --- a/docs/skills/claude-index.md +++ b/docs/skills/claude-index.md @@ -258,7 +258,7 @@ memory/ - [Claude Commands](../commands/claude/) - [Codex Skills](./codex-index.md) -- [Features](../features/) +- [Features](../features/spec) ## Statistics diff --git a/docs/skills/claude-meta.md b/docs/skills/claude-meta.md index 8c1c5fa8..bfc7d136 100644 --- a/docs/skills/claude-meta.md +++ b/docs/skills/claude-meta.md @@ -230,7 +230,7 @@ Artifacts N×Role Synthesis 1×Role **Core Specifications** (required): | Document | Purpose | Priority | |----------|---------|----------| -| [../_shared/SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) | General design spec — Defines structure, naming, quality standards for all Skills | **P0 - Critical** | +| [_shared/SKILL-DESIGN-SPEC.md](./_shared/SKILL-DESIGN-SPEC.md) | General design spec — Defines structure, naming, quality standards for all Skills | **P0 - Critical** | | [specs/reference-docs-spec.md](specs/reference-docs-spec.md) | Reference doc generation spec — Ensures generated Skills have appropriate stage-based reference docs | **P0 - Critical** | **Template Files** (read before generation): diff --git a/docs/skills/codex-index.md b/docs/skills/codex-index.md index f18187d0..4767148b 100644 --- a/docs/skills/codex-index.md +++ b/docs/skills/codex-index.md @@ -440,7 +440,7 @@ Explore → Document → Log → Analyze → Correct Understanding → Fix → V ## Related Documentation - [Claude Skills](./claude-index.md) -- [Feature Documentation](../features/) +- [Feature Documentation](../features/spec) ## Best Practices diff --git a/docs/zh/commands/claude/index.md b/docs/zh/commands/claude/index.md index b1f66357..8096e3a5 100644 --- 
a/docs/zh/commands/claude/index.md +++ b/docs/zh/commands/claude/index.md @@ -225,6 +225,6 @@ ccw cli -p "审查代码质量" --tool gemini --mode analysis --rule analysis-re ## 相关文档 -- [Skills 参考](../skills/) +- [Skills 参考](../../skills/) - [CLI 调用系统](../../features/cli.md) - [工作流指南](../../guide/ch04-workflow-basics.md) diff --git a/docs/zh/commands/claude/issue.md b/docs/zh/commands/claude/issue.md index 4638fad5..6148ed93 100644 --- a/docs/zh/commands/claude/issue.md +++ b/docs/zh/commands/claude/issue.md @@ -291,4 +291,4 @@ graph TD - [工作流命令](./workflow.md) - [核心编排](./core-orchestration.md) -- [团队系统](../../features/) +- [团队系统](../../features/spec) diff --git a/docs/zh/commands/claude/ui-design.md b/docs/zh/commands/claude/ui-design.md index 59eb526f..a50c6b3f 100644 --- a/docs/zh/commands/claude/ui-design.md +++ b/docs/zh/commands/claude/ui-design.md @@ -304,4 +304,4 @@ graph TD - [核心编排](./core-orchestration.md) - [工作流命令](./workflow.md) -- [头脑风暴](../../features/) +- [头脑风暴](../../features/spec) diff --git a/docs/zh/commands/codex/index.md b/docs/zh/commands/codex/index.md index fa43674b..8ece9a95 100644 --- a/docs/zh/commands/codex/index.md +++ b/docs/zh/commands/codex/index.md @@ -53,4 +53,4 @@ CONSTRAINTS: [关注约束] - [Claude Commands](../claude/) - [CLI 调用系统](../../features/cli.md) -- [代码审查](../../features/) +- [代码审查](../../features/spec) diff --git a/docs/zh/commands/codex/review.md b/docs/zh/commands/codex/review.md index 71f6ba19..6668d748 100644 --- a/docs/zh/commands/codex/review.md +++ b/docs/zh/commands/codex/review.md @@ -194,4 +194,4 @@ git log --oneline -10 - [Prep 提示](./prep.md) - [CLI 工具命令](../claude/cli.md) -- [代码审查](../../features/) +- [代码审查](../../features/spec) diff --git a/docs/zh/components/index.md b/docs/zh/components/index.md index 09cbbd0c..3a43846f 100644 --- a/docs/zh/components/index.md +++ b/docs/zh/components/index.md @@ -34,53 +34,53 @@ |------|------|------| | [Button](/components/ui/button) | 可点击的操作按钮,带变体和尺寸 | `variant`, `size`, `asChild` | | 
[Input](/components/ui/input) | 文本输入字段 | `error` | -| [Textarea](/components/ui/textarea) | 多行文本输入 | `error` | +| Textarea | 多行文本输入 | `error` | | [Select](/components/ui/select) | 下拉选择(Radix) | Select 组件 | | [Checkbox](/components/ui/checkbox) | 布尔复选框(Radix) | `checked`, `onCheckedChange` | -| [Switch](/components/ui/switch) | 切换开关 | `checked`, `onCheckedChange` | +| Switch | 切换开关 | `checked`, `onCheckedChange` | ### 布局组件 | 组件 | 描述 | Props | |------|------|------| | [Card](/components/ui/card) | 带标题/脚注的内容容器 | 嵌套组件 | -| [Separator](/components/ui/separator) | 视觉分隔符 | `orientation` | -| [ScrollArea](/components/ui/scroll-area) | 自定义滚动条容器 | - | +| Separator | 视觉分隔符 | `orientation` | +| ScrollArea | 自定义滚动条容器 | - | ### 反馈组件 | 组件 | 描述 | Props | |------|------|------| | [Badge](/components/ui/badge) | 状态指示器标签 | `variant` | -| [Progress](/components/ui/progress) | 进度条 | `value` | -| [Alert](/components/ui/alert) | 通知消息 | `variant` | -| [Toast](/components/ui/toast) | 临时通知(Radix) | Toast 组件 | +| Progress | 进度条 | `value` | +| Alert | 通知消息 | `variant` | +| Toast | 临时通知(Radix) | Toast 组件 | ### 导航组件 | 组件 | 描述 | Props | |------|------|------| -| [Tabs](/components/ui/tabs) | 标签页导航(Radix) | Tabs 组件 | -| [TabsNavigation](/components/ui/tabs-navigation) | 自定义标签栏 | `tabs`, `value`, `onValueChange` | -| [Breadcrumb](/components/ui/breadcrumb) | 导航面包屑 | Breadcrumb 组件 | +| Tabs | 标签页导航(Radix) | Tabs 组件 | +| TabsNavigation | 自定义标签栏 | `tabs`, `value`, `onValueChange` | +| Breadcrumb | 导航面包屑 | Breadcrumb 组件 | ### 叠加层组件 | 组件 | 描述 | Props | |------|------|------| -| [Dialog](/components/ui/dialog) | 模态对话框(Radix) | `open`, `onOpenChange` | -| [Drawer](/components/ui/drawer) | 侧边面板(Radix) | `open`, `onOpenChange` | -| [Dropdown Menu](/components/ui/dropdown) | 上下文菜单(Radix) | Dropdown 组件 | -| [Popover](/components/ui/popover) | 浮动内容(Radix) | `open`, `onOpenChange` | -| [Tooltip](/components/ui/tooltip) | 悬停工具提示(Radix) | `content` | -| [AlertDialog](/components/ui/alert-dialog) | 确认对话框(Radix) | 
Dialog 组件 | +| Dialog | 模态对话框(Radix) | `open`, `onOpenChange` | +| Drawer | 侧边面板(Radix) | `open`, `onOpenChange` | +| Dropdown Menu | 上下文菜单(Radix) | Dropdown 组件 | +| Popover | 浮动内容(Radix) | `open`, `onOpenChange` | +| Tooltip | 悬停工具提示(Radix) | `content` | +| AlertDialog | 确认对话框(Radix) | Dialog 组件 | ### 展开组件 | 组件 | 描述 | Props | |------|------|------| -| [Collapsible](/components/ui/collapsible) | 展开/折叠内容(Radix) | `open`, `onOpenChange` | -| [Accordion](/components/ui/accordion) | 可折叠部分(Radix) | Accordion 组件 | +| Collapsible | 展开/折叠内容(Radix) | `open`, `onOpenChange` | +| Accordion | 可折叠部分(Radix) | Accordion 组件 | --- diff --git a/docs/zh/components/ui/badge.md b/docs/zh/components/ui/badge.md index da2b974d..e8731816 100644 --- a/docs/zh/components/ui/badge.md +++ b/docs/zh/components/ui/badge.md @@ -116,4 +116,4 @@ Badge 徽章组件用于以紧凑形式显示状态、类别或标签。它通 - [Card 卡片](/zh/components/ui/card) - [Button 按钮](/zh/components/ui/button) -- [Avatar 头像](/zh/components/ui/avatar) +- Avatar 头像 diff --git a/docs/zh/components/ui/button.md b/docs/zh/components/ui/button.md index 7369f7e8..448f5d9c 100644 --- a/docs/zh/components/ui/button.md +++ b/docs/zh/components/ui/button.md @@ -77,4 +77,4 @@ sidebar: auto - [Input 输入框](/zh/components/ui/input) - [Select 选择器](/zh/components/ui/select) -- [Dialog 对话框](/zh/components/ui/dialog) +- Dialog 对话框 diff --git a/docs/zh/components/ui/card.md b/docs/zh/components/ui/card.md index 1301c345..748d0d83 100644 --- a/docs/zh/components/ui/card.md +++ b/docs/zh/components/ui/card.md @@ -104,4 +104,4 @@ sidebar: auto - [Button 按钮](/zh/components/ui/button) - [Badge 徽章](/zh/components/ui/badge) -- [Separator 分隔线](/zh/components/ui/separator) +- Separator 分隔线 diff --git a/docs/zh/components/ui/checkbox.md b/docs/zh/components/ui/checkbox.md index c95b7cbd..7a556d6e 100644 --- a/docs/zh/components/ui/checkbox.md +++ b/docs/zh/components/ui/checkbox.md @@ -117,4 +117,4 @@ const state = ref('indeterminate') - [Input 输入框](/zh/components/ui/input) - [Select 
选择器](/zh/components/ui/select) -- [Radio Group 单选框组](/zh/components/ui/radio-group) +- Radio Group 单选框组 diff --git a/docs/zh/guide/getting-started.md b/docs/zh/guide/getting-started.md index 9f49c710..937468fb 100644 --- a/docs/zh/guide/getting-started.md +++ b/docs/zh/guide/getting-started.md @@ -43,7 +43,7 @@ ccw cli -p "添加用户认证" --mode write - [安装指南](./installation.md) - 详细安装说明 - [第一个工作流](./first-workflow.md) - 30 分钟快速入门教程 -- [配置指南](./configuration.md) - 自定义 CCW 设置 +- [配置指南](/guide/cli-tools) - 自定义 CCW 设置 ::: tip 需要帮助? 查看我们的 [GitHub Discussions](https://github.com/your-repo/ccw/discussions) 或加入 [Discord 社区](https://discord.gg/ccw)。 diff --git a/docs/zh/skills/claude-index.md b/docs/zh/skills/claude-index.md index 055ce597..54e60e24 100644 --- a/docs/zh/skills/claude-index.md +++ b/docs/zh/skills/claude-index.md @@ -253,7 +253,7 @@ memory/ - [Claude Commands](../commands/claude/) - [Codex Skills](./codex-index.md) -- [功能文档](../features/) +- [功能文档](../features/spec) ## 统计数据 diff --git a/docs/zh/skills/claude-meta.md b/docs/zh/skills/claude-meta.md index 0e22f7fe..e38cfb7d 100644 --- a/docs/zh/skills/claude-meta.md +++ b/docs/zh/skills/claude-meta.md @@ -73,16 +73,16 @@ Phase 6: Readiness Check -> readiness-report.md + spec-summary.md **规范文档**(必读): | 文档 | 用途 | 优先级 | |------|------|--------| -| [specs/document-standards.md](specs/document-standards.md) | 文档格式、frontmatter、命名约定 | **P0 - 执行前必读** | -| [specs/quality-gates.md](specs/quality-gates.md) | 每阶段质量关卡标准和评分 | **P0 - 执行前必读** | +| [specs/document-standards.md](/skills/specs/document-standards) | 文档格式、frontmatter、命名约定 | **P0 - 执行前必读** | +| [specs/quality-gates.md](/skills/specs/quality-gates) | 每阶段质量关卡标准和评分 | **P0 - 执行前必读** | **模板文件**(生成前必读): | 文档 | 用途 | |------|------| -| [templates/product-brief.md](templates/product-brief.md) | 产品简报文档模板 | -| [templates/requirements-prd.md](templates/requirements-prd.md) | PRD 文档模板 | -| [templates/architecture-doc.md](templates/architecture-doc.md) | 架构文档模板 | -| 
[templates/epics-template.md](templates/epics-template.md) | Epic/Story 文档模板 | +| [templates/product-brief.md](/skills/templates/product-brief) | 产品简报文档模板 | +| [templates/requirements-prd.md](/skills/templates/requirements-prd) | PRD 文档模板 | +| [templates/architecture-doc.md](/skills/templates/architecture-doc) | 架构文档模板 | +| [templates/epics-template.md](/skills/templates/epics-template) | Epic/Story 文档模板 | **输出结构**: ```plaintext @@ -230,16 +230,16 @@ Artifacts N×Role Synthesis 1×Role **核心规范**(必读): | 文档 | 用途 | 优先级 | |------|------|--------| -| [../_shared/SKILL-DESIGN-SPEC.md](../_shared/SKILL-DESIGN-SPEC.md) | 通用设计规范 — 定义所有 Skills 的结构、命名、质量标准 | **P0 - 关键** | -| [specs/reference-docs-spec.md](specs/reference-docs-spec.md) | 参考文档生成规范 — 确保生成的 Skills 有适当的基于阶段的参考文档 | **P0 - 关键** | +| [_shared/SKILL-DESIGN-SPEC.md](/skills/_shared/SKILL-DESIGN-SPEC) | 通用设计规范 — 定义所有 Skills 的结构、命名、质量标准 | **P0 - 关键** | +| [specs/reference-docs-spec.md](/skills/specs/reference-docs-spec) | 参考文档生成规范 — 确保生成的 Skills 有适当的基于阶段的参考文档 | **P0 - 关键** | **模板文件**(生成前必读): | 文档 | 用途 | |------|------| -| [templates/skill-md.md](templates/skill-md.md) | SKILL.md 入口文件模板 | -| [templates/sequential-phase.md](templates/sequential-phase.md) | 顺序阶段模板 | -| [templates/autonomous-orchestrator.md](templates/autonomous-orchestrator.md) | 自治编排器模板 | -| [templates/autonomous-action.md](templates/autonomous-action.md) | 自治动作模板 | +| [templates/skill-md.md](/skills/templates/skill-md) | SKILL.md 入口文件模板 | +| [templates/sequential-phase.md](/skills/templates/sequential-phase) | 顺序阶段模板 | +| [templates/autonomous-orchestrator.md](/skills/templates/autonomous-orchestrator) | 自治编排器模板 | +| [templates/autonomous-action.md](/skills/templates/autonomous-action) | 自治动作模板 | **执行流程**: ```plaintext diff --git a/docs/zh/skills/claude-review.md b/docs/zh/skills/claude-review.md index 9c219a6d..a48ee4cb 100644 --- a/docs/zh/skills/claude-review.md +++ b/docs/zh/skills/claude-review.md @@ -77,15 +77,15 @@ **规范文档**(必读): | 文档 | 用途 | 优先级 | 
|------|------|--------| -| [specs/review-dimensions.md](specs/review-dimensions.md) | 审查维度定义和检查点 | **P0 - 最高** | -| [specs/issue-classification.md](specs/issue-classification.md) | 问题分类和严重程度标准 | **P0 - 最高** | -| [specs/quality-standards.md](specs/quality-standards.md) | 审查质量标准 | P1 | +| [specs/review-dimensions.md](/skills/specs/review-dimensions) | 审查维度定义和检查点 | **P0 - 最高** | +| [specs/issue-classification.md](/skills/specs/issue-classification) | 问题分类和严重程度标准 | **P0 - 最高** | +| [specs/quality-standards.md](/skills/specs/quality-standards) | 审查质量标准 | P1 | **模板文件**(生成前必读): | 文档 | 用途 | |------|------| -| [templates/review-report.md](templates/review-report.md) | 审查报告模板 | -| [templates/issue-template.md](templates/issue-template.md) | 问题记录模板 | +| [templates/review-report.md](/skills/templates/review-report) | 审查报告模板 | +| [templates/issue-template.md](/skills/templates/issue-template) | 问题记录模板 | **执行流程**: ``` @@ -234,5 +234,5 @@ const query = 'SELECT * FROM users WHERE username = ?'; await db.query(query, [username]); ``` -**Reference**: [specs/review-dimensions.md](specs/review-dimensions.md) - Security section +**Reference**: [specs/review-dimensions.md](/skills/specs/review-dimensions) - Security section ``` diff --git a/docs/zh/skills/codex-index.md b/docs/zh/skills/codex-index.md index afe66efb..fa787c68 100644 --- a/docs/zh/skills/codex-index.md +++ b/docs/zh/skills/codex-index.md @@ -440,7 +440,7 @@ Explore → Document → Log → Analyze → Correct Understanding → Fix → V ## 相关文档 - [Claude Skills](./claude-index.md) -- [功能文档](../features/) +- [功能文档](../features/spec) ## 最佳实践 diff --git a/docs/zh/workflows/comparison-table.md b/docs/zh/workflows/comparison-table.md index 453d2a43..c12c9b13 100644 --- a/docs/zh/workflows/comparison-table.md +++ b/docs/zh/workflows/comparison-table.md @@ -171,5 +171,5 @@ - [4级系统](./4-level.md) - 详细工作流说明 - [最佳实践](./best-practices.md) - 工作流优化技巧 -- [示例](./examples.md) - 工作流使用示例 +- [示例](/workflows/examples) - 工作流使用示例 - [团队](./teams.md) - 
团队工作流协调