mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-10 17:11:04 +08:00
Add unit tests for various components and stores in the terminal dashboard
- Implement tests for AssociationHighlight, DashboardToolbar, QueuePanel, SessionGroupTree, and TerminalDashboardPage to ensure proper functionality and state management.
- Create tests for cliSessionStore, issueQueueIntegrationStore, queueExecutionStore, queueSchedulerStore, sessionManagerStore, and terminalGridStore to validate state resets and workspace scoping.
- Mock necessary dependencies and state management hooks to isolate tests and ensure accurate behavior.
`.codex/skills/team-edict/SKILL.md` (new file, 742 lines)

---
name: team-edict
description: |
  Three Departments and Six Ministries (三省六部) multi-agent collaboration framework. Imperial edict workflow:
  Crown Prince receives edict -> Zhongshu (Planning) -> Menxia (Multi-dimensional Review) ->
  Shangshu (Dispatch) -> Six Ministries parallel execution.
  Mandatory kanban state reporting, Blocked as a first-class state, full observability.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"task description / edict\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.

# Team Edict -- Three Departments Six Ministries

## Usage

```bash
$team-edict "Implement user authentication module with JWT tokens"
$team-edict -c 4 "Refactor the data pipeline for better performance"
$team-edict -y "Add comprehensive test coverage for auth module"
$team-edict --continue "EDT-20260308-143022"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 4)
- `--continue`: Resume an existing session

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---

## Overview

Imperial edict-inspired multi-agent collaboration framework with a **strict cascading approval pipeline** and **parallel ministry execution**. The Three Departments (zhongshu/menxia/shangshu) perform serial planning, review, and dispatch. The Six Ministries (gongbu/bingbu/hubu/libu/libu-hr/xingbu) execute tasks in dependency-ordered waves.

**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)

```
TEAM EDICT WORKFLOW

Phase 0: Pre-Wave Interactive (Three Departments Serial Pipeline)
+-- Stage 1: Zhongshu (Planning) -- drafts execution plan
+-- Stage 2: Menxia (Review) -- multi-dimensional review
|   +-- Reject -> loop back to Zhongshu (max 3 rounds)
+-- Stage 3: Shangshu (Dispatch) -- routes to Six Ministries
+-- Output: tasks.csv with ministry assignments + dependency waves

Phase 1: Requirement -> CSV + Classification
+-- Parse Shangshu dispatch plan into tasks.csv
+-- Classify tasks: csv-wave (ministry work) | interactive (QA loop)
+-- Compute dependency waves (topological sort)
+-- Generate tasks.csv with wave + exec_mode columns
+-- User validates task breakdown (skip if -y)

Phase 2: Wave Execution Engine (Extended)
+-- For each wave (1..N):
|   +-- Build wave CSV (filter csv-wave tasks for this wave)
|   +-- Inject previous findings into prev_context column
|   +-- spawn_agents_on_csv(wave CSV)
|   +-- Execute post-wave interactive tasks (if any)
|   +-- Merge all results into master tasks.csv
|   +-- Check: any failed? -> skip dependents
+-- discoveries.ndjson shared across all modes (append-only)

Phase 3: Post-Wave Interactive (Quality Aggregation)
+-- Aggregation Agent: collects all ministry outputs
+-- Generates final edict completion report
+-- Quality gate validation against specs/quality-gates.md

Phase 4: Results Aggregation
+-- Export final results.csv
+-- Generate context.md with all findings
+-- Display summary: completed/failed/skipped per wave
+-- Offer: view results | retry failed | done
```

---

## Task Classification Rules

Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, clarification, inline utility |

**Classification Decision**:

| Task Property | Classification |
|---------------|---------------|
| Ministry implementation (IMPL/OPS/DATA/DOC/HR) | `csv-wave` |
| Quality assurance with test-fix loop (QA) | `interactive` |
| Single-department self-contained work | `csv-wave` |
| Cross-department coordination needed | `interactive` |
| Requires iterative feedback (test -> fix -> retest) | `interactive` |
| Standalone analysis or generation | `csv-wave` |
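
Taken together, the decision table reduces to a small predicate. A minimal sketch, assuming hypothetical `needsIteration` and `crossDepartment` flags on the task object (the skill does not define these fields -- only `task_prefix` is a real column):

```javascript
// Sketch of the classification decision table. task_prefix comes from tasks.csv;
// needsIteration and crossDepartment are ASSUMED helper flags, not real columns.
function classify(task) {
  if (task.task_prefix === "QA") return "interactive"       // always: test-fix loop
  if (task.needsIteration || task.crossDepartment) return "interactive"
  return "csv-wave"                                         // default for ministry work
}
```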

---

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,deps,context_from,exec_mode,department,task_prefix,priority,dispatch_batch,acceptance_criteria,wave,status,findings,artifact_path,error
IMPL-001,"Implement JWT auth","Create JWT authentication middleware with token validation","","","csv-wave","gongbu","IMPL","P0","1","All auth endpoints return valid JWT tokens","1","pending","","",""
DOC-001,"Write API docs","Generate OpenAPI documentation for auth endpoints","IMPL-001","IMPL-001","csv-wave","libu","DOC","P1","2","API docs cover all auth endpoints","2","pending","","",""
QA-001,"Test auth module","Execute test suite and validate coverage >= 95%","IMPL-001","IMPL-001","interactive","xingbu","QA","P1","2","Test pass rate >= 95%, no Critical bugs","2","pending","","",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (DEPT-NNN format) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description (self-contained for agent execution) |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `department` | Input | Target ministry: gongbu/bingbu/hubu/libu/libu-hr/xingbu |
| `task_prefix` | Input | Task type prefix: IMPL/OPS/DATA/DOC/HR/QA |
| `priority` | Input | Priority level: P0 (highest) to P3 (lowest) |
| `dispatch_batch` | Input | Batch number from Shangshu dispatch plan (1-based) |
| `acceptance_criteria` | Input | Specific, measurable acceptance criteria from dispatch plan |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `artifact_path` | Output | Path to output artifact file relative to session dir |
| `error` | Output | Error message if failed (empty if success) |
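
Rows are worth validating against this schema before execution (the Error Handling section calls for exactly this). A minimal sketch covering only a few columns:

```javascript
// Sketch: validate one parsed tasks.csv row against the schema above.
// Returns a list of error strings (empty means the row passed these checks).
function validateRow(row) {
  const errors = []
  if (!/^[A-Z]+-\d{3}$/.test(row.id)) errors.push(`bad id: ${row.id}`)   // DEPT-NNN format
  if (!["csv-wave", "interactive"].includes(row.exec_mode)) errors.push(`bad exec_mode: ${row.exec_mode}`)
  if (!["gongbu", "bingbu", "hubu", "libu", "libu-hr", "xingbu"].includes(row.department)) errors.push(`bad department: ${row.department}`)
  if (!/^P[0-3]$/.test(row.priority)) errors.push(`bad priority: ${row.priority}`)
  return errors
}
```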

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).

---

## Agent Registry (Interactive Agents)

| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| zhongshu-planner | agents/zhongshu-planner.md | 2.3 (sequential pipeline) | Draft structured execution plan from edict requirements | standalone (Phase 0, Stage 1) |
| menxia-reviewer | agents/menxia-reviewer.md | 2.4 (multi-perspective analysis) | Multi-dimensional review with 4 CLI analyses | standalone (Phase 0, Stage 2) |
| shangshu-dispatcher | agents/shangshu-dispatcher.md | 2.3 (sequential pipeline) | Parse approved plan and generate ministry task assignments | standalone (Phase 0, Stage 3) |
| qa-verifier | agents/qa-verifier.md | 2.5 (iterative refinement) | Quality assurance with test-fix loop (max 3 rounds) | post-wave |
| aggregator | agents/aggregator.md | 2.3 (sequential pipeline) | Collect all ministry outputs and generate final report | standalone (Phase 3) |

> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent .md file** to reload your role definition.

---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `plan/zhongshu-plan.md` | Zhongshu execution plan | Created in Phase 0 Stage 1 |
| `review/menxia-review.md` | Menxia review report with 4-dimensional analysis | Created in Phase 0 Stage 2 |
| `plan/dispatch-plan.md` | Shangshu dispatch plan with ministry assignments | Created in Phase 0 Stage 3 |
| `artifacts/{dept}-output.md` | Per-ministry output artifact | Created during wave execution |
| `interactive/{id}-result.json` | Results from interactive tasks (QA loops) | Created per interactive task |
| `agents/registry.json` | Active interactive agent tracking | Updated on spawn/close |

---

## Session Structure

```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv               # Master state (all tasks, both modes)
+-- results.csv             # Final results export
+-- discoveries.ndjson      # Shared discovery board (all agents)
+-- context.md              # Human-readable report
+-- wave-{N}.csv            # Temporary per-wave input (csv-wave only)
+-- plan/
|   +-- zhongshu-plan.md    # Zhongshu execution plan
|   +-- dispatch-plan.md    # Shangshu dispatch plan
+-- review/
|   +-- menxia-review.md    # Menxia review report
+-- artifacts/
|   +-- gongbu-output.md    # Ministry outputs
|   +-- bingbu-output.md
|   +-- hubu-output.md
|   +-- libu-output.md
|   +-- libu-hr-output.md
|   +-- xingbu-report.md
+-- interactive/            # Interactive task artifacts
|   +-- {id}-result.json    # Per-task results
+-- agents/
    +-- registry.json       # Active interactive agent tracking
```

---

## Implementation

### Session Initialization

```
1. Parse $ARGUMENTS for task description (the "edict")
2. Generate session ID: EDT-{slug}-{YYYYMMDD-HHmmss}
3. Create session directory: .workflow/.csv-wave/{session-id}/
4. Create subdirectories: plan/, review/, artifacts/, interactive/, agents/
5. Initialize registry.json: { "active": [], "closed": [] }
6. Initialize discoveries.ndjson (empty file)
7. Read specs: .codex/skills/team-edict/specs/team-config.json
8. Read quality gates: .codex/skills/team-edict/specs/quality-gates.md
9. Log session start to context.md
```

---

### Phase 0: Pre-Wave Interactive (Three Departments Serial Pipeline)

**Objective**: Execute the serial approval pipeline (zhongshu -> menxia -> shangshu) to produce a validated, reviewed dispatch plan that decomposes the edict into ministry-level tasks.

#### Stage 1: Zhongshu Planning

```javascript
const zhongshu = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-edict/agents/zhongshu-planner.md (MUST read first)
2. Read: ${sessionDir}/discoveries.ndjson (shared discoveries, skip if not exists)
3. Read: .codex/skills/team-edict/specs/team-config.json (routing rules)

---

Goal: Draft a structured execution plan for the following edict
Scope: Analyze codebase, decompose into ministry-level subtasks, define acceptance criteria
Deliverables: ${sessionDir}/plan/zhongshu-plan.md

### Edict (Original Requirement)
${edictText}
`
})

const zhongshuResult = wait({ ids: [zhongshu], timeout_ms: 600000 })

if (zhongshuResult.timed_out) {
  send_input({ id: zhongshu, message: "Please finalize your execution plan immediately and output current findings." })
  const retry = wait({ ids: [zhongshu], timeout_ms: 120000 })
}

// Store result
Write(`${sessionDir}/interactive/zhongshu-result.json`, JSON.stringify({
  task_id: "PLAN-001",
  status: "completed",
  findings: parseFindings(zhongshuResult),
  timestamp: new Date().toISOString()
}))

close_agent({ id: zhongshu })
```

#### Stage 2: Menxia Multi-Dimensional Review

**Rejection Loop**: If menxia rejects (approved=false), respawn zhongshu with feedback. Max 3 rounds.

```javascript
let reviewRound = 0
let approved = false

while (!approved && reviewRound < 3) {
  reviewRound++

  const menxia = spawn_agent({
    message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-edict/agents/menxia-reviewer.md (MUST read first)
2. Read: ${sessionDir}/plan/zhongshu-plan.md (plan to review)
3. Read: ${sessionDir}/discoveries.ndjson (shared discoveries)

---

Goal: Multi-dimensional review of Zhongshu plan (Round ${reviewRound}/3)
Scope: Feasibility, completeness, risk, resource allocation
Deliverables: ${sessionDir}/review/menxia-review.md

### Original Edict
${edictText}

### Previous Review (if rejection round > 1)
${reviewRound > 1 ? readPreviousReview() : "First review round"}
`
  })

  const menxiaResult = wait({ ids: [menxia], timeout_ms: 600000 })

  if (menxiaResult.timed_out) {
    send_input({ id: menxia, message: "Please finalize review and output verdict (approved/rejected)." })
    const retry = wait({ ids: [menxia], timeout_ms: 120000 })
  }

  close_agent({ id: menxia })

  // Parse verdict from review report. Match the explicit "approved: true" marker
  // only -- a bare substring check on "approved" would also match "approved: false".
  const reviewReport = Read(`${sessionDir}/review/menxia-review.md`)
  approved = reviewReport.includes("approved: true")

  if (!approved && reviewRound < 3) {
    // Respawn zhongshu with rejection feedback (Stage 1 again)
    // ... spawn zhongshu with rejection_feedback = reviewReport ...
  }
}

if (!approved && reviewRound >= 3) {
  // Max rounds reached, ask user
  AskUserQuestion("Menxia rejected the plan 3 times. Please review and decide: approve, reject, or provide guidance.")
}
```

#### Stage 3: Shangshu Dispatch

```javascript
const shangshu = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-edict/agents/shangshu-dispatcher.md (MUST read first)
2. Read: ${sessionDir}/plan/zhongshu-plan.md (approved plan)
3. Read: ${sessionDir}/review/menxia-review.md (review conditions)
4. Read: .codex/skills/team-edict/specs/team-config.json (routing rules)

---

Goal: Parse approved plan and generate Six Ministries dispatch plan
Scope: Route subtasks to departments, define execution batches, set dependencies
Deliverables: ${sessionDir}/plan/dispatch-plan.md
`
})

const shangshuResult = wait({ ids: [shangshu], timeout_ms: 300000 })
close_agent({ id: shangshu })

// Parse dispatch-plan.md to generate tasks.csv (Phase 1 input)
```

**Success Criteria**:
- zhongshu-plan.md written with structured subtask list
- menxia-review.md written with 4-dimensional analysis verdict
- dispatch-plan.md written with ministry assignments and batch ordering
- Interactive agents closed, results stored

---

### Phase 1: Requirement -> CSV + Classification

**Objective**: Parse the Shangshu dispatch plan into a tasks.csv with proper wave computation and exec_mode classification.

**Decomposition Rules**:

1. Read `${sessionDir}/plan/dispatch-plan.md`
2. For each ministry task in the dispatch plan:
   - Extract: task ID, title, description, department, priority, batch number, acceptance criteria
   - Determine dependencies from the dispatch plan's batch ordering and explicit blockedBy
   - Set `context_from` for tasks that need predecessor findings
3. Apply classification rules (see Task Classification Rules above)
4. Compute waves via topological sort (Kahn's BFS with depth tracking)
5. Generate `tasks.csv` with all columns

**Classification Rules**:

| Department | Default exec_mode | Override Condition |
|------------|-------------------|-------------------|
| gongbu (IMPL) | csv-wave | Interactive if requires iterative codebase exploration |
| bingbu (OPS) | csv-wave | - |
| hubu (DATA) | csv-wave | - |
| libu (DOC) | csv-wave | - |
| libu-hr (HR) | csv-wave | - |
| xingbu (QA) | interactive | Always interactive (test-fix loop) |

**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).

**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).

**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)

---

### Phase 2: Wave Execution Engine (Extended)

**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.

```
For each wave W in 1..max_wave:

1. FILTER csv-wave tasks where wave == W and status == "pending"
2. CHECK dependencies: if any dep has status == "failed", mark task as "skipped"
3. BUILD prev_context for each task from context_from references:
   - For csv-wave predecessors: read findings from master tasks.csv
   - For interactive predecessors: read from interactive/{id}-result.json
4. GENERATE wave-{W}.csv with prev_context column added
5. EXECUTE csv-wave tasks:
   spawn_agents_on_csv({
     task_csv_path: "${sessionDir}/wave-{W}.csv",
     instruction_path: ".codex/skills/team-edict/instructions/agent-instruction.md",
     schema_path: ".codex/skills/team-edict/schemas/tasks-schema.md",
     additional_instructions: "Session directory: ${sessionDir}. Department: {department}. Priority: {priority}.",
     concurrency: CONCURRENCY
   })
6. MERGE results back into master tasks.csv (update status, findings, artifact_path, error)
7. EXECUTE interactive tasks for this wave (post-wave):
   For each interactive task in wave W:
     Read agents/qa-verifier.md
     Spawn QA verifier agent with task context + wave results
     Handle test-fix loop via send_input
     Store result in interactive/{id}-result.json
     Close agent, update registry.json
8. CLEANUP: delete wave-{W}.csv
9. LOG wave completion to context.md and discoveries.ndjson

Wave completion check:
- All tasks completed or skipped -> proceed to next wave
- Any failed non-skippable task -> log error, continue (dependents will be skipped)
```

**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- Interactive agent lifecycle tracked in registry.json

---

### Phase 3: Post-Wave Interactive (Quality Aggregation)

**Objective**: Collect all ministry outputs, validate against quality gates, and generate the final edict completion report.

```javascript
const aggregator = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-edict/agents/aggregator.md (MUST read first)
2. Read: ${sessionDir}/tasks.csv (master state)
3. Read: ${sessionDir}/discoveries.ndjson (all discoveries)
4. Read: .codex/skills/team-edict/specs/quality-gates.md (quality standards)

---

Goal: Aggregate all ministry outputs into final edict completion report
Scope: All artifacts in ${sessionDir}/artifacts/, all interactive results
Deliverables: ${sessionDir}/context.md (final report)

### Ministry Artifacts to Collect
${listAllArtifacts()}

### Quality Gate Standards
Read from: .codex/skills/team-edict/specs/quality-gates.md
`
})

const aggResult = wait({ ids: [aggregator], timeout_ms: 300000 })
close_agent({ id: aggregator })
```

**Success Criteria**:
- Post-wave interactive processing complete
- Interactive agents closed, results stored

---

### Phase 4: Results Aggregation

**Objective**: Generate final results and human-readable report.

```
1. READ master tasks.csv
2. EXPORT results.csv with final status for all tasks
3. GENERATE context.md (if not already done by aggregator):
   - Edict summary
   - Pipeline stages: Planning -> Review -> Dispatch -> Execution
   - Per-department output summaries
   - Quality gate results
   - Discoveries summary
4. DISPLAY summary to user:
   - Total tasks: N (completed: X, failed: Y, skipped: Z)
   - Per-wave breakdown
   - Key findings
5. CLEANUP:
   - Close any remaining interactive agents (registry.json)
   - Remove temporary wave CSV files
6. OFFER: view full report | retry failed tasks | done
```

**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed (registry.json cleanup)
- Summary displayed to user

---

## Shared Discovery Board Protocol

All agents (both csv-wave and interactive) share a single `discoveries.ndjson` file for cross-agent knowledge propagation.

### Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `codebase_pattern` | `pattern_name` | `{pattern_name, files, description}` | Identified codebase patterns and conventions |
| `dependency_found` | `dep_name` | `{dep_name, version, used_by}` | External dependency discoveries |
| `risk_identified` | `risk_id` | `{risk_id, severity, description, mitigation}` | Risk findings from any agent |
| `implementation_note` | `file_path` | `{file_path, note, line_range}` | Implementation decisions and notes |
| `test_result` | `test_suite` | `{test_suite, pass_rate, failures}` | Test execution results |
| `quality_issue` | `issue_id` | `{issue_id, severity, file, description}` | Quality issues found during review |
| `routing_note` | `task_id` | `{task_id, department, reason}` | Dispatch routing decisions |

### Protocol

```bash
# Append discovery (any agent, any mode)
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> ${sessionDir}/discoveries.ndjson

# Read discoveries (any agent, any mode)
# Read ${sessionDir}/discoveries.ndjson, parse each line as JSON
# Deduplicate by type + dedup_key
```

### Rules
- **Append-only**: Never modify or delete existing entries
- **Deduplicate on read**: When reading, use type + dedup_key to skip duplicates
- **Both mechanisms share**: csv-wave agents and interactive agents use the same file
- **Carry across waves**: Discoveries persist across all waves

---

## Six Ministries Routing Rules

Shangshu dispatcher uses these rules to assign tasks to ministries:

| Keyword Signals | Target Ministry | Role ID | Task Prefix |
|----------------|-----------------|---------|-------------|
| Feature dev, architecture, code, refactor, implement, API | Engineering | gongbu | IMPL |
| Deploy, CI/CD, infrastructure, container, monitoring, security ops | Operations | bingbu | OPS |
| Data analysis, statistics, cost, reports, resource mgmt | Data & Resources | hubu | DATA |
| Documentation, README, UI copy, specs, API docs, comms | Documentation | libu | DOC |
| Testing, QA, bug, code review, compliance audit | Quality Assurance | xingbu | QA |
| Agent management, training, skill optimization, evaluation | Personnel | libu-hr | HR |
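
As a sketch, the table reduces to a first-match keyword scan with gongbu as the fallback. Keyword lists are abbreviated here and the match order is an assumption -- the authoritative rules live in specs/team-config.json:

```javascript
// First matching keyword set wins; engineering (gongbu) is the default route.
const routes = [
  { keywords: ["deploy", "ci/cd", "infrastructure", "container", "monitoring"], ministry: "bingbu", prefix: "OPS" },
  { keywords: ["test", "qa", "bug", "code review", "audit"], ministry: "xingbu", prefix: "QA" },
  { keywords: ["doc", "readme", "spec", "comms"], ministry: "libu", prefix: "DOC" },
  { keywords: ["data analysis", "statistics", "cost", "report"], ministry: "hubu", prefix: "DATA" },
  { keywords: ["agent management", "training", "skill optimization"], ministry: "libu-hr", prefix: "HR" },
]

function routeTask(description) {
  const text = description.toLowerCase()
  const hit = routes.find(r => r.keywords.some(k => text.includes(k)))
  return hit ? { ministry: hit.ministry, prefix: hit.prefix } : { ministry: "gongbu", prefix: "IMPL" }
}
```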

---

## Kanban State Protocol

All agents must report state transitions. In Codex context, agents write state to discoveries.ndjson:

### State Machine

```
Pending -> Doing -> Done
        |
     Blocked (can enter at any time, must report reason)
```

### State Reporting via Discoveries

```bash
# Task start
echo '{"ts":"<ISO8601>","worker":"{id}","type":"state_update","data":{"state":"Doing","task_id":"{id}","department":"{department}","step":"Starting execution"}}' >> ${sessionDir}/discoveries.ndjson

# Progress update
echo '{"ts":"<ISO8601>","worker":"{id}","type":"progress","data":{"task_id":"{id}","current":"Step 2: Implementing API","plan":"Step1 done|Step2 in progress|Step3 pending"}}' >> ${sessionDir}/discoveries.ndjson

# Completion
echo '{"ts":"<ISO8601>","worker":"{id}","type":"state_update","data":{"state":"Done","task_id":"{id}","remark":"Completed: implementation summary"}}' >> ${sessionDir}/discoveries.ndjson

# Blocked
echo '{"ts":"<ISO8601>","worker":"{id}","type":"state_update","data":{"state":"Blocked","task_id":"{id}","reason":"Cannot proceed: missing dependency"}}' >> ${sessionDir}/discoveries.ndjson
```

---

## Interactive Task Execution

For interactive tasks within a wave (primarily QA test-fix loops):

**Spawn Protocol**:

```javascript
const agent = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: .codex/skills/team-edict/agents/qa-verifier.md (MUST read first)
2. Read: ${sessionDir}/discoveries.ndjson (shared discoveries)
3. Read: .codex/skills/team-edict/specs/quality-gates.md (quality standards)

---

Goal: Execute QA verification for task ${taskId}
Scope: ${taskDescription}
Deliverables: Test report + pass/fail verdict

### Previous Context
${prevContextFromCompletedTasks}

### Acceptance Criteria
${acceptanceCriteria}
`
})
```

**Wait + Process**:

```javascript
const result = wait({ ids: [agent], timeout_ms: 600000 })

if (result.timed_out) {
  send_input({ id: agent, message: "Please finalize and output current findings." })
  const retry = wait({ ids: [agent], timeout_ms: 120000 })
}

// Store result
Write(`${sessionDir}/interactive/${taskId}-result.json`, JSON.stringify({
  task_id: taskId,
  status: "completed",
  findings: parseFindings(result),
  timestamp: new Date().toISOString()
}))
```

**Lifecycle Tracking**:

```javascript
// On spawn: register
registry.active.push({ id: agent, task_id: taskId, pattern: "qa-verifier", spawned_at: now })

// On close: move to closed
close_agent({ id: agent })
registry.active = registry.active.filter(a => a.id !== agent)
registry.closed.push({ id: agent, task_id: taskId, closed_at: now })
```

---

## Cross-Mechanism Context Bridging

### Interactive Result -> CSV Task

When a pre-wave interactive task produces results needed by csv-wave tasks:

```javascript
// 1. Interactive result stored in file
const resultFile = `${sessionDir}/interactive/${taskId}-result.json`

// 2. Wave engine reads it when building prev_context for csv-wave tasks:
// if a csv-wave task has context_from referencing an interactive task,
// read the interactive result file and include it in prev_context
```

### CSV Result -> Interactive Task

When a post-wave interactive task needs CSV wave results:

```javascript
// Include in spawn message
const csvFindings = readMasterCSV().filter(t => t.wave === currentWave && t.exec_mode === 'csv-wave')
const context = csvFindings.map(t => `## Task ${t.id}: ${t.title}\n${t.findings}`).join('\n\n')

spawn_agent({
  message: `...\n### Wave ${currentWave} Results\n${context}\n...`
})
```

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input; close the agent if it still fails to finish |
| Interactive agent failed | Mark as failed, skip dependents |
| Pre-wave interactive failed | Skip dependent csv-wave tasks in same wave |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Lifecycle leak | Clean up all active agents via registry.json at end |
| Continue mode: no session found | List available sessions, prompt user to select |
| Menxia rejection loop >= 3 rounds | AskUserQuestion for user decision |
| Zhongshu plan file missing | Abort Phase 0, report error |
| Shangshu dispatch plan parse failure | Abort, ask user to review dispatch-plan.md |
| Ministry artifact not written | Mark task as failed, include in QA report |
| Test-fix loop exceeds 3 rounds | Mark QA as failed, report to aggregator |
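The "ignore malformed lines" rule for a corrupt discoveries.ndjson reduces to a tolerant line-by-line parse (a minimal sketch):

```javascript
// Tolerant NDJSON reader: skip lines that fail to parse instead of aborting.
function readDiscoveries(text) {
  return text.split("\n")
    .filter(line => line.trim().length > 0)
    .flatMap(line => {
      try { return [JSON.parse(line)]; }
      catch { return []; }                // malformed line: ignore, keep going
    });
}
```

Since the board is append-only and shared by concurrent writers, a torn partial line at the tail is expected occasionally; skipping it loses one entry at most.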

---

## Specs Reference

| File | Content | Used By |
|------|---------|---------|
| [specs/team-config.json](specs/team-config.json) | Role registry, routing rules, pipeline definition, session structure, artifact paths | Orchestrator (session init), Shangshu (routing), all agents (artifact paths) |
| [specs/quality-gates.md](specs/quality-gates.md) | Per-phase quality gate standards, cross-phase consistency checks | Aggregator (Phase 3), QA verifier (test validation) |

---

## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 0
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and its results are merged
3. **CSV is Source of Truth**: The master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when the interaction pattern requires it
5. **Context Propagation**: prev_context is built from the master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson -- both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent (tracked in registry.json)
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped
11. **Three Departments are Serial**: Zhongshu -> Menxia -> Shangshu must execute in strict order
12. **Rejection Loop Max 3**: Menxia can reject at most 3 times before escalating to the user
13. **Kanban is Mandatory**: All agents must report state transitions via discoveries.ndjson
14. **Quality Gates Apply**: The Phase 3 aggregator validates all outputs against specs/quality-gates.md
246
.codex/skills/team-edict/agents/aggregator.md
Normal file
@@ -0,0 +1,246 @@

# Aggregator Agent

Post-wave aggregation agent -- collects all ministry outputs, validates them against quality gates, and generates the final edict completion report.

## Identity

- **Type**: `interactive`
- **Role**: aggregator (Final Report Generator)
- **Responsibility**: Collect all ministry artifacts, validate quality gates, generate final completion report

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read ALL ministry artifacts from the session artifacts directory
- Read the master tasks.csv for completion status
- Read quality-gates.md and validate each phase
- Read all discoveries from discoveries.ndjson
- Generate a comprehensive final report (context.md)
- Include per-department output summaries
- Include quality gate validation results
- Highlight any failures, skipped tasks, or open issues

### MUST NOT

- Skip reading any existing artifact
- Ignore failed or skipped tasks in the report
- Modify any ministry artifacts
- Skip quality gate validation

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | file | Read artifacts, tasks.csv, specs, discoveries |
| `Write` | file | Write final context.md report |
| `Glob` | search | Find all artifact files |
| `Bash` | exec | Parse CSV, count stats |

---

## Execution

### Phase 1: Artifact Collection

**Objective**: Gather all ministry outputs and task status

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| tasks.csv | Yes | Master state with all task statuses |
| artifacts/ directory | Yes | All ministry output files |
| interactive/ directory | No | Interactive task results (QA) |
| discoveries.ndjson | Yes | All shared discoveries |
| quality-gates.md | Yes | Quality standards |

**Steps**:

1. Read `<session>/tasks.csv` and parse all task records
2. Use Glob to find all files in `<session>/artifacts/`
3. Read each artifact file
4. Use Glob to find all files in `<session>/interactive/`
5. Read each interactive result file
6. Read `<session>/discoveries.ndjson` (all entries)
7. Read `.codex/skills/team-edict/specs/quality-gates.md`

**Output**: All artifacts and status data collected

---

### Phase 2: Quality Gate Validation

**Objective**: Validate each phase against quality gate standards

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Collected artifacts | Yes | From Phase 1 |
| quality-gates.md | Yes | Quality standards |

**Steps**:

1. Validate Phase 0 (Three Departments):
   - zhongshu-plan.md exists and has required sections
   - menxia-review.md exists with clear verdict
   - dispatch-plan.md exists with ministry assignments
2. Validate Phase 2 (Ministry Execution):
   - Each department's artifact file exists
   - Acceptance criteria verified (from tasks.csv findings)
   - State reporting present in discoveries.ndjson
3. Validate QA results (if xingbu report exists):
   - Test pass rate meets threshold (>= 95%)
   - No unresolved Critical issues
   - Code review completed
4. Score each quality gate:

   | Score | Status | Action |
   |-------|--------|--------|
   | >= 80% | PASS | No action needed |
   | 60-79% | WARNING | Log warning in report |
   | < 60% | FAIL | Highlight in report |
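The scoring table maps directly onto a small threshold function (a sketch; how the 0-100 score is derived from checklist items is up to the aggregator):

```javascript
// Map a quality-gate score (0-100) to its status per the table above.
function gateStatus(score) {
  if (score >= 80) return "PASS";
  if (score >= 60) return "WARNING";
  return "FAIL";
}
```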

**Output**: Quality gate validation results

---

### Phase 3: Report Generation

**Objective**: Generate comprehensive final report

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Task data | Yes | From Phase 1 |
| Quality gate results | Yes | From Phase 2 |

**Steps**:

1. Compute summary statistics:
   - Total tasks, completed, failed, skipped
   - Per-wave breakdown
   - Per-department breakdown
2. Extract key findings from discoveries.ndjson
3. Compile per-department summaries from artifacts
4. Generate context.md following template
5. Write to `<session>/context.md`
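Step 1's summary statistics are a straightforward reduction over the parsed task rows (a sketch; it assumes each row exposes `status`, `wave`, and `department` fields, matching the master CSV columns):

```javascript
// Count tasks per status, per wave, and per department from parsed CSV rows.
function summarize(tasks) {
  const countBy = (key) => tasks.reduce((acc, t) => {
    acc[t[key]] = (acc[t[key]] || 0) + 1;
    return acc;
  }, {});
  return {
    total: tasks.length,
    byStatus: countBy("status"),
    byWave: countBy("wave"),
    byDepartment: countBy("department")
  };
}
```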

**Output**: context.md written

---

## Final Report Template (context.md)

```markdown
# Edict Completion Report

## Edict Summary
<Original edict text>

## Pipeline Execution Summary
| Stage | Department | Status | Duration |
|-------|-----------|--------|----------|
| Planning | zhongshu | Completed | - |
| Review | menxia | Approved (Round N/3) | - |
| Dispatch | shangshu | Completed | - |
| Execution | Six Ministries | N/M completed | - |

## Task Status Overview
- Total tasks: N
- Completed: X
- Failed: Y
- Skipped: Z

### Per-Wave Breakdown
| Wave | Total | Completed | Failed | Skipped |
|------|-------|-----------|--------|---------|
| 1 | N | X | Y | Z |
| 2 | N | X | Y | Z |

### Per-Department Breakdown
| Department | Tasks | Completed | Artifacts |
|------------|-------|-----------|-----------|
| gongbu | N | X | artifacts/gongbu-output.md |
| bingbu | N | X | artifacts/bingbu-output.md |
| hubu | N | X | artifacts/hubu-output.md |
| libu | N | X | artifacts/libu-output.md |
| libu-hr | N | X | artifacts/libu-hr-output.md |
| xingbu | N | X | artifacts/xingbu-report.md |

## Department Output Summaries

### gongbu (Engineering)
<Summary from gongbu-output.md>

### bingbu (Operations)
<Summary from bingbu-output.md>

### hubu (Data & Resources)
<Summary from hubu-output.md>

### libu (Documentation)
<Summary from libu-output.md>

### libu-hr (Personnel)
<Summary from libu-hr-output.md>

### xingbu (Quality Assurance)
<Summary from xingbu-report.md>

## Quality Gate Results
| Gate | Phase | Score | Status |
|------|-------|-------|--------|
| Planning quality | zhongshu | XX% | PASS/WARN/FAIL |
| Review thoroughness | menxia | XX% | PASS/WARN/FAIL |
| Dispatch completeness | shangshu | XX% | PASS/WARN/FAIL |
| Execution quality | ministries | XX% | PASS/WARN/FAIL |
| QA verification | xingbu | XX% | PASS/WARN/FAIL |

## Key Discoveries
<Top N discoveries from discoveries.ndjson, grouped by type>

## Failures and Issues
<Any failed tasks, unresolved issues, or quality gate failures>

## Open Items
<Remaining work, if any>
```

---

## Structured Output Template

```
## Summary
- Edict completion report generated: N/M tasks completed, quality gates: X PASS, Y WARN, Z FAIL

## Findings
- Per-department completion rates
- Quality gate scores
- Key discoveries count

## Deliverables
- File: <session>/context.md

## Open Questions
1. (any unresolved issues requiring user attention)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Artifact file missing for a department | Note as "Not produced" in report, mark quality gate as FAIL |
| tasks.csv parse error | Attempt line-by-line parsing, skip malformed rows |
| discoveries.ndjson has malformed lines | Skip malformed lines, continue with valid entries |
| Quality gate data insufficient | Score as "Insufficient data", mark WARNING |
| No QA report (xingbu not assigned) | Skip QA quality gate, note in report |
229
.codex/skills/team-edict/agents/menxia-reviewer.md
Normal file
@@ -0,0 +1,229 @@

# Menxia Reviewer Agent

Menxia (Chancellery / Review Department) -- performs a multi-dimensional review of the Zhongshu plan from four perspectives: feasibility, completeness, risk, and resource allocation. Outputs an approve/reject verdict.

## Identity

- **Type**: `interactive`
- **Role**: menxia (Chancellery / Multi-Dimensional Review)
- **Responsibility**: Four-dimensional parallel review, approve/reject verdict with detailed feedback

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read the Zhongshu plan completely before starting review
- Analyze from ALL four dimensions (feasibility, completeness, risk, resource)
- Produce a clear verdict: approved or rejected
- If rejecting, provide specific, actionable feedback for each rejection point
- Write the review report to `<session>/review/menxia-review.md`
- Report state transitions via discoveries.ndjson
- Apply weighted scoring: feasibility 30%, completeness 30%, risk 25%, resource 15%

### MUST NOT

- Approve a plan with unaddressed critical feasibility issues
- Reject without providing specific, actionable feedback
- Skip any of the four review dimensions
- Modify the Zhongshu plan (review only)
- Stray beyond review scope (no detailed implementation suggestions)

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | file | Read plan, specs, codebase files for verification |
| `Write` | file | Write review report to session directory |
| `Glob` | search | Find files to verify feasibility claims |
| `Grep` | search | Search codebase to validate technical assertions |
| `Bash` | exec | Run verification commands |

---

## Execution

### Phase 1: Plan Loading

**Objective**: Load the Zhongshu plan and all review context

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| zhongshu-plan.md | Yes | Plan to review |
| Original edict | Yes | From spawn message |
| team-config.json | No | For routing rule validation |
| Previous review (if round > 1) | No | Previous rejection feedback |

**Steps**:

1. Read `<session>/plan/zhongshu-plan.md` (the plan under review)
2. Parse edict text from the spawn message for requirement cross-reference
3. Read `<session>/discoveries.ndjson` for codebase pattern context
4. Report state "Doing":

   ```bash
   echo '{"ts":"<ISO8601>","worker":"REVIEW-001","type":"state_update","data":{"state":"Doing","task_id":"REVIEW-001","department":"menxia","step":"Loading plan for review"}}' >> <session>/discoveries.ndjson
   ```

**Output**: Plan loaded, review context assembled

---

### Phase 2: Four-Dimensional Analysis

**Objective**: Evaluate the plan from four independent perspectives

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Loaded plan | Yes | From Phase 1 |
| Codebase | Yes | For feasibility verification |
| Original edict | Yes | For completeness check |

**Steps**:

#### Dimension 1: Feasibility Review (Weight: 30%)
1. Verify each technical path is achievable with the current codebase
2. Check that required dependencies exist or can be added
3. Validate that proposed file structures make sense
4. Result: PASS / CONDITIONAL / FAIL

#### Dimension 2: Completeness Review (Weight: 30%)
1. Cross-reference every requirement in the edict against the subtask list
2. Identify any requirements not covered by subtasks
3. Check that acceptance criteria are measurable and cover all requirements
4. Result: COMPLETE / HAS GAPS

#### Dimension 3: Risk Assessment (Weight: 25%)
1. Identify potential failure points in the plan
2. Check that each high-risk item has a mitigation strategy
3. Evaluate rollback feasibility
4. Result: ACCEPTABLE / HIGH RISK (unmitigated)

#### Dimension 4: Resource Allocation (Weight: 15%)
1. Verify task-to-department mapping follows routing rules
2. Check workload balance across departments
3. Identify overloaded or idle departments
4. Result: BALANCED / NEEDS ADJUSTMENT

For each dimension, record discoveries:

```bash
echo '{"ts":"<ISO8601>","worker":"REVIEW-001","type":"quality_issue","data":{"issue_id":"MX-<N>","severity":"<level>","file":"plan/zhongshu-plan.md","description":"<finding>"}}' >> <session>/discoveries.ndjson
```

**Output**: Four-dimensional analysis results

---

### Phase 3: Verdict Synthesis

**Objective**: Combine dimension results into a final verdict

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Dimension results | Yes | From Phase 2 |

**Steps**:

1. Apply scoring weights:
   - Feasibility: 30%
   - Completeness: 30%
   - Risk: 25%
   - Resource: 15%
2. Apply veto rules (immediate rejection):
   - Feasibility = FAIL -> reject
   - Completeness has critical gaps (core requirement uncovered) -> reject
   - Risk has HIGH unmitigated items -> reject
3. Resource issues alone do not trigger rejection (conditional approval with notes)
4. Determine the final verdict: approved or rejected
5. Write the review report to `<session>/review/menxia-review.md`
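The weighting and veto rules above can be sketched as a single function. The input shape is illustrative; the labels match the Phase 2 dimension results, and veto rules are checked before any weighted scoring so that resource issues alone can never cause a rejection:

```javascript
// Combine the four dimension results into a verdict.
function synthesizeVerdict(d) {
  // Veto rules: any one of these rejects immediately.
  if (d.feasibility === "FAIL") return { verdict: "rejected", reason: "feasibility veto" };
  if (d.completenessCriticalGaps) return { verdict: "rejected", reason: "completeness veto" };
  if (d.unmitigatedHighRisk) return { verdict: "rejected", reason: "risk veto" };

  // Weighted score over per-dimension 0-100 sub-scores.
  const score =
    0.30 * d.feasibilityScore +
    0.30 * d.completenessScore +
    0.25 * d.riskScore +
    0.15 * d.resourceScore;
  return { verdict: "approved", score };
}
```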

**Output**: Review report with verdict

---

## Review Report Template (menxia-review.md)

```markdown
# Menxia Review Report

## Review Verdict: [Approved / Rejected]
Round: N/3

## Four-Dimensional Analysis Summary
| Dimension | Weight | Result | Key Findings |
|-----------|--------|--------|-------------|
| Feasibility | 30% | PASS/CONDITIONAL/FAIL | <findings> |
| Completeness | 30% | COMPLETE/HAS GAPS | <gaps if any> |
| Risk | 25% | ACCEPTABLE/HIGH RISK | <risk items> |
| Resource | 15% | BALANCED/NEEDS ADJUSTMENT | <notes> |

## Detailed Findings

### Feasibility
- <finding 1 with file:line reference>
- <finding 2>

### Completeness
- <requirement coverage analysis>
- <gaps identified>

### Risk
| Risk Item | Severity | Has Mitigation | Notes |
|-----------|----------|---------------|-------|
| <risk> | High/Med/Low | Yes/No | <notes> |

### Resource Allocation
- <department workload analysis>
- <adjustment suggestions>

## Rejection Feedback (if rejected)
1. <Specific issue 1>: What must be changed and why
2. <Specific issue 2>: What must be changed and why

## Conditions (if conditionally approved)
- <condition 1>: What to watch during execution
- <condition 2>: Suggested adjustments
```

---

## Structured Output Template

```
## Summary
- Review completed: [Approved/Rejected] (Round N/3)

## Findings
- Feasibility: [result] - [key finding]
- Completeness: [result] - [key finding]
- Risk: [result] - [key finding]
- Resource: [result] - [key finding]

## Deliverables
- File: <session>/review/menxia-review.md
- Verdict: approved=<true/false>, round=<N>

## Open Questions
1. (if any ambiguities remain)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Plan file not found | Report error, cannot proceed with review |
| Plan structure malformed | Note structural issues as a feasibility finding, continue review |
| Cannot verify technical claims | Mark as "Unverified" in feasibility, do not auto-reject |
| Edict text not provided | Review plan on its own merits, note missing context |
| Timeout approaching | Output partial results with "PARTIAL" status on incomplete dimensions |
274
.codex/skills/team-edict/agents/qa-verifier.md
Normal file
@@ -0,0 +1,274 @@

# QA Verifier Agent

Xingbu (Ministry of Justice / Quality Assurance) -- executes quality verification with iterative test-fix loops. Runs as an interactive agent to support multi-round feedback cycles with implementation agents.

## Identity

- **Type**: `interactive`
- **Role**: xingbu (Ministry of Justice / QA Verifier)
- **Responsibility**: Code review, test execution, compliance audit, test-fix loop coordination

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read quality-gates.md for quality standards
- Read the implementation artifacts before testing
- Execute comprehensive verification: code review + test execution + compliance
- Classify findings by severity: Critical / High / Medium / Low
- Support the test-fix loop: report failures, wait for fixes, re-verify (max 3 rounds)
- Write the QA report to `<session>/artifacts/xingbu-report.md`
- Report state transitions via discoveries.ndjson
- Report test results as discoveries for cross-agent visibility

### MUST NOT

- Skip reading quality-gates.md
- Skip any verification dimension (review, test, compliance)
- Run more than 3 test-fix loop rounds
- Approve with unresolved Critical severity issues
- Modify implementation code (verification only; report issues for others to fix)

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | file | Read implementation artifacts, test files, quality standards |
| `Write` | file | Write QA report |
| `Glob` | search | Find test files, implementation files |
| `Grep` | search | Search for patterns, known issues, test markers |
| `Bash` | exec | Run test suites, linters, build commands |

---

## Execution

### Phase 1: Context Loading

**Objective**: Load all verification context

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Task description | Yes | QA task details from spawn message |
| quality-gates.md | Yes | Quality standards |
| Implementation artifacts | Yes | Ministry outputs to verify |
| dispatch-plan.md | Yes | Acceptance criteria reference |
| discoveries.ndjson | No | Previous findings |

**Steps**:

1. Read `.codex/skills/team-edict/specs/quality-gates.md`
2. Read `<session>/plan/dispatch-plan.md` for acceptance criteria
3. Read implementation artifacts from `<session>/artifacts/`
4. Read `<session>/discoveries.ndjson` for implementation notes
5. Report state "Doing":

   ```bash
   echo '{"ts":"<ISO8601>","worker":"QA-001","type":"state_update","data":{"state":"Doing","task_id":"QA-001","department":"xingbu","step":"Loading context for QA verification"}}' >> <session>/discoveries.ndjson
   ```

**Output**: All verification context loaded

---

### Phase 2: Code Review

**Objective**: Review implementation code for quality issues

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Implementation files | Yes | Files modified/created by implementation tasks |
| Codebase conventions | Yes | From discoveries and existing code |

**Steps**:

1. Identify all files modified/created (from implementation artifacts and discoveries)
2. Read each file and review for:
   - Code style consistency with the existing codebase
   - Error handling completeness
   - Edge case coverage
   - Security concerns (input validation, auth checks)
   - Performance implications
3. Classify each finding by severity:

   | Severity | Criteria | Blocks Approval |
   |----------|----------|----------------|
   | Critical | Security vulnerability, data loss risk, crash | Yes |
   | High | Incorrect behavior, missing error handling | Yes |
   | Medium | Code smell, minor inefficiency, style issue | No |
   | Low | Suggestion, nitpick, documentation gap | No |
4. Record quality issues as discoveries:

   ```bash
   echo '{"ts":"<ISO8601>","worker":"QA-001","type":"quality_issue","data":{"issue_id":"QI-<N>","severity":"High","file":"src/auth/jwt.ts:23","description":"Missing input validation for refresh token"}}' >> <session>/discoveries.ndjson
   ```

**Output**: Code review findings with severity classifications

---

### Phase 3: Test Execution

**Objective**: Run tests and verify acceptance criteria

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Test files | If exist | Existing or generated test files |
| Acceptance criteria | Yes | From dispatch plan |

**Steps**:

1. Detect the test framework:

   ```bash
   # Check for common test frameworks
   [ -f package.json ] && grep -E '"jest"|"vitest"|"mocha"' package.json
   ls pytest.ini setup.cfg pyproject.toml 2>/dev/null
   ```
2. Run the relevant test suites:

   ```bash
   # Example: npm test, pytest, etc.
   npm test 2>&1 || true
   ```
3. Parse test results:
   - Total tests, passed, failed, skipped
   - Calculate pass rate
4. Verify acceptance criteria from the dispatch plan:
   - Check each criterion against actual results
   - Mark as Pass/Fail with evidence
5. Record test results:

   ```bash
   echo '{"ts":"<ISO8601>","worker":"QA-001","type":"test_result","data":{"test_suite":"<suite>","pass_rate":"<rate>%","failures":["<test1>","<test2>"]}}' >> <session>/discoveries.ndjson
   ```
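Steps 3-4 reduce to a small pass-rate computation against the 95% threshold used later by the loop (a sketch; the raw counts come from whatever the detected framework reports):

```javascript
// Compute pass rate and apply the >= 95% gate, which also requires
// zero unresolved Critical issues from code review.
function evaluateTests({ total, passed, criticalIssues }) {
  const passRate = total === 0 ? 0 : (passed / total) * 100;
  const pass = passRate >= 95 && criticalIssues === 0;
  return { passRate: Math.round(passRate * 10) / 10, pass };
}
```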

**Output**: Test results with pass rate and acceptance criteria status

---

### Phase 4: Test-Fix Loop (if failures found)

**Objective**: Iterative fix cycle for test failures (max 3 rounds)

This phase uses interactive send_input to report issues and receive fix confirmations.

**Decision Table**:

| Condition | Action |
|-----------|--------|
| Pass rate >= 95% AND no Critical issues | Exit loop, PASS |
| Pass rate < 95% AND round < 3 | Report failures, request fixes |
| Critical issues found AND round < 3 | Report Critical issues, request fixes |
| Round >= 3 AND still failing | Exit loop, FAIL with details |

**Loop Protocol**:

Round N (N = 1, 2, 3):
1. Report failures in structured format (findings written to discoveries.ndjson)
2. The orchestrator may send_input with fix confirmation
3. If fixes received: re-run tests (go to Phase 3)
4. If no fixes / timeout: proceed with current results
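The loop protocol above can be sketched as follows, where `runTests` stands in for a Phase 3 run and `requestFixes` for the send_input exchange (both illustrative):

```javascript
// Test-fix loop: up to 3 rounds, exiting early once the gate passes.
function testFixLoop(runTests, requestFixes) {
  let result = null;
  for (let round = 1; round <= 3; round++) {
    result = runTests();
    if (result.passRate >= 95 && result.criticalIssues === 0) {
      return { verdict: "PASS", rounds: round };
    }
    if (round < 3) requestFixes(result);   // report failures, wait for fixes
  }
  return { verdict: "FAIL", rounds: 3, lastResult: result };
}
```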

**Output**: Final test results after fix loop

---

### Phase 5: QA Report Generation

**Objective**: Generate comprehensive QA report

**Steps**:

1. Compile all findings from Phases 2-4
2. Write the report to `<session>/artifacts/xingbu-report.md`
3. Report completion state

---

## QA Report Template (xingbu-report.md)

```markdown
# Xingbu Quality Report

## Overall Verdict: [PASS / FAIL]
- Test-fix rounds: N/3

## Code Review Summary
| Severity | Count | Blocking |
|----------|-------|----------|
| Critical | N | Yes |
| High | N | Yes |
| Medium | N | No |
| Low | N | No |

### Critical/High Issues
- [C-001] file:line - description
- [H-001] file:line - description

### Medium/Low Issues
- [M-001] file:line - description

## Test Results
- Total tests: N
- Passed: N (XX%)
- Failed: N
- Skipped: N

### Failed Tests
| Test | Failure Reason | Fix Status |
|------|---------------|------------|
| <test_name> | <reason> | Fixed/Open |

## Acceptance Criteria Verification
| Criterion | Status | Evidence |
|-----------|--------|----------|
| <criterion> | Pass/Fail | <evidence> |

## Compliance Status
- Security: [Clean / Issues Found]
- Error Handling: [Complete / Gaps]
- Code Style: [Consistent / Inconsistent]

## Recommendations
- <recommendation 1>
- <recommendation 2>
```

---

## Structured Output Template

```
## Summary
- QA verification [PASSED/FAILED] (test-fix rounds: N/3)

## Findings
- Code review: N Critical, N High, N Medium, N Low issues
- Tests: XX% pass rate (N/M passed)
- Acceptance criteria: N/M met

## Deliverables
- File: <session>/artifacts/xingbu-report.md

## Open Questions
1. (if any verification gaps)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No test framework detected | Run manual verification, note in report |
| Test suite crashes (not failures) | Report as Critical issue, attempt partial run |
| Implementation artifacts missing | Report as FAIL, cannot verify |
| Fix timeout in test-fix loop | Continue with current results, note unfixed items |
| Acceptance criteria ambiguous | Interpret conservatively, note assumptions |
| Timeout approaching | Output partial results with "PARTIAL" status |
247
.codex/skills/team-edict/agents/shangshu-dispatcher.md
Normal file
@@ -0,0 +1,247 @@

# Shangshu Dispatcher Agent

Shangshu (Department of State Affairs / Dispatch) -- parses the approved plan, routes subtasks to the Six Ministries based on routing rules, and generates a structured dispatch plan with dependency batches.

## Identity

- **Type**: `interactive`
- **Role**: shangshu (Department of State Affairs / Dispatch)
- **Responsibility**: Parse approved plan, route tasks to ministries, generate dispatch plan with dependency ordering

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read both the Zhongshu plan and the Menxia review (for conditions)
- Apply routing rules from team-config.json strictly
- Split cross-department tasks into separate ministry-level tasks
- Define clear dependency ordering between batches
- Write the dispatch plan to `<session>/plan/dispatch-plan.md`
- Ensure every subtask has: department assignment, task ID (DEPT-NNN), dependencies, acceptance criteria
- Report state transitions via discoveries.ndjson

### MUST NOT

- Route tasks to the wrong departments (must follow keyword-signal rules)
- Leave any subtask unassigned to a department
- Create circular dependencies between batches
- Modify the plan content (dispatch only)
- Ignore conditions from the Menxia review

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | file | Read plan, review, team-config |
| `Write` | file | Write dispatch plan to session directory |
| `Glob` | search | Verify file references in plan |
| `Grep` | search | Search for keywords for routing decisions |

---

## Execution

### Phase 1: Context Loading

**Objective**: Load the approved plan, review conditions, and routing rules

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| zhongshu-plan.md | Yes | Approved execution plan |
| menxia-review.md | Yes | Review conditions to carry forward |
| team-config.json | Yes | Routing rules for department assignment |

**Steps**:

1. Read `<session>/plan/zhongshu-plan.md`
2. Read `<session>/review/menxia-review.md`
3. Read `.codex/skills/team-edict/specs/team-config.json`
4. Extract the subtask list from the plan
5. Extract conditions from the review
6. Report state "Doing":

   ```bash
   echo '{"ts":"<ISO8601>","worker":"DISPATCH-001","type":"state_update","data":{"state":"Doing","task_id":"DISPATCH-001","department":"shangshu","step":"Loading approved plan for dispatch"}}' >> <session>/discoveries.ndjson
   ```

**Output**: Plan parsed, routing rules loaded
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Routing Analysis
|
||||
|
||||
**Objective**: Assign each subtask to the correct ministry
|
||||
|
||||
**Input**:
|
||||
|
||||
| Source | Required | Description |
|
||||
|--------|----------|-------------|
|
||||
| Subtask list | Yes | From Phase 1 |
|
||||
| Routing rules | Yes | From team-config.json |
|
||||
|
||||
**Steps**:
|
||||
|
||||
1. For each subtask, extract keywords and match against routing rules:
|
||||
| Keyword Signals | Target Ministry | Task Prefix |
|
||||
|----------------|-----------------|-------------|
|
||||
| Feature, architecture, code, refactor, implement, API | gongbu | IMPL |
|
||||
| Deploy, CI/CD, infrastructure, container, monitoring, security ops | bingbu | OPS |
|
||||
| Data analysis, statistics, cost, reports, resource mgmt | hubu | DATA |
|
||||
| Documentation, README, UI copy, specs, API docs | libu | DOC |
|
||||
| Testing, QA, bug, code review, compliance | xingbu | QA |
|
||||
| Agent management, training, skill optimization | libu-hr | HR |
|
||||
|
||||
2. If a subtask spans multiple departments (e.g., "implement + test"), split into separate tasks
|
||||
3. Assign task IDs: DEPT-NNN (e.g., IMPL-001, QA-001)
|
||||
4. Record routing decisions as discoveries:
|
||||
```bash
|
||||
echo '{"ts":"<ISO8601>","worker":"DISPATCH-001","type":"routing_note","data":{"task_id":"IMPL-001","department":"gongbu","reason":"Keywords: implement, API endpoint"}}' >> <session>/discoveries.ndjson
|
||||
```
|
||||
|
||||
**Output**: All subtasks assigned to departments with task IDs
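The keyword matching in step 1 can be sketched as follows. This is a minimal illustration only: the keyword sets and the first-match-wins policy are assumptions, since the authoritative rules live in team-config.json.

```python
# Minimal keyword-based router. The keyword table mirrors the one above;
# a real dispatcher loads routing rules from team-config.json instead.
ROUTING_RULES = [
    ({"feature", "architecture", "code", "refactor", "implement", "api"}, "gongbu", "IMPL"),
    ({"deploy", "ci/cd", "infrastructure", "container", "monitoring"}, "bingbu", "OPS"),
    ({"data", "statistics", "cost", "reports"}, "hubu", "DATA"),
    ({"documentation", "readme", "specs"}, "libu", "DOC"),
    ({"testing", "qa", "bug", "review", "compliance"}, "xingbu", "QA"),
    ({"agent", "training", "skill"}, "libu-hr", "HR"),
]

def route_subtask(description: str) -> tuple[str, str]:
    """Return (department, task_prefix) for a subtask description.

    The first rule whose keywords intersect the description wins;
    unmatched tasks fall back to gongbu, as the error-handling
    table prescribes (recorded as a routing_note in practice).
    """
    words = set(description.lower().split())
    for keywords, department, prefix in ROUTING_RULES:
        if keywords & words:
            return department, prefix
    return "gongbu", "IMPL"  # default route

def assign_ids(task_descriptions: list[str]) -> list[dict]:
    """Route each subtask and assign sequential DEPT-NNN task IDs."""
    counters: dict[str, int] = {}
    routed = []
    for desc in task_descriptions:
        dept, prefix = route_subtask(desc)
        counters[prefix] = counters.get(prefix, 0) + 1
        routed.append({"id": f"{prefix}-{counters[prefix]:03d}",
                       "department": dept, "description": desc})
    return routed
```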

---

### Phase 3: Dependency Analysis and Batch Ordering

**Objective**: Organize tasks into execution batches based on dependencies

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Routed task list | Yes | From Phase 2 |

**Steps**:

1. Analyze dependencies between tasks:
   - Implementation before testing (IMPL before QA)
   - Implementation before documentation (IMPL before DOC)
   - Infrastructure can run in parallel with implementation (OPS parallel with IMPL)
   - Data tasks may depend on implementation (DATA after IMPL if needed)
2. Group into batches:
   - Batch 1: No-dependency tasks (parallel)
   - Batch 2: Tasks depending on Batch 1 (parallel within batch)
   - Batch N: Tasks depending on Batch N-1
3. Validate no circular dependencies
4. Determine exec_mode for each task:
   - xingbu (QA) tasks with test-fix loops -> `interactive`
   - All others -> `csv-wave`

**Output**: Batched task list with dependencies
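Steps 2 and 3 amount to a layered topological sort: each batch is the set of tasks whose dependencies are all satisfied by earlier batches, and a leftover task after the loop signals a cycle. A minimal sketch, assuming tasks are dicts with `id` and `deps` fields:

```python
def group_into_batches(tasks: list[dict]) -> list[list[str]]:
    """Layered topological sort: batch N holds tasks whose deps all
    appear in batches 1..N-1. Raises ValueError on a dependency cycle."""
    deps = {t["id"]: set(t.get("deps", [])) for t in tasks}
    placed: set[str] = set()
    batches: list[list[str]] = []
    while len(placed) < len(deps):
        ready = sorted(tid for tid, d in deps.items()
                       if tid not in placed and d <= placed)
        if not ready:  # nothing can run -> a cycle remains
            remaining = sorted(set(deps) - placed)
            raise ValueError(f"Circular dependency involving: {remaining}")
        batches.append(ready)
        placed.update(ready)
    return batches
```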

---

### Phase 4: Dispatch Plan Generation

**Objective**: Write the structured dispatch plan

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Batched task list | Yes | From Phase 3 |
| Menxia conditions | No | From Phase 1 |

**Steps**:

1. Generate dispatch-plan.md following the template below
2. Write to `<session>/plan/dispatch-plan.md`
3. Report completion state

**Output**: dispatch-plan.md written

---

## Dispatch Plan Template (dispatch-plan.md)

```markdown
# Shangshu Dispatch Plan

## Dispatch Overview
- Total subtasks: N
- Departments involved: <department list>
- Execution batches: M batches

## Task Assignments

### Batch 1 (No dependencies, parallel execution)

#### IMPL-001: <task title>
- **Department**: gongbu (Engineering)
- **Description**: <detailed, self-contained task description>
- **Priority**: P0
- **Dependencies**: None
- **Acceptance Criteria**: <specific, measurable criteria>
- **exec_mode**: csv-wave

#### OPS-001: <task title>
- **Department**: bingbu (Operations)
- **Description**: <detailed, self-contained task description>
- **Priority**: P0
- **Dependencies**: None
- **Acceptance Criteria**: <specific, measurable criteria>
- **exec_mode**: csv-wave

### Batch 2 (Depends on Batch 1)

#### DOC-001: <task title>
- **Department**: libu (Documentation)
- **Description**: <detailed, self-contained task description>
- **Priority**: P1
- **Dependencies**: IMPL-001
- **Acceptance Criteria**: <specific, measurable criteria>
- **exec_mode**: csv-wave

#### QA-001: <task title>
- **Department**: xingbu (Quality Assurance)
- **Description**: <detailed, self-contained task description>
- **Priority**: P1
- **Dependencies**: IMPL-001
- **Acceptance Criteria**: <specific, measurable criteria>
- **exec_mode**: interactive (test-fix loop)

## Overall Acceptance Criteria
<Combined acceptance criteria from all tasks>

## Menxia Review Conditions (carry forward)
<Conditions from menxia-review.md that departments should observe>
```

---

## Structured Output Template

```
## Summary
- Dispatch plan generated: N tasks across M departments in B batches

## Findings
- Routing: N tasks assigned (IMPL: X, OPS: Y, DOC: Z, QA: W, ...)
- Dependencies: B execution batches identified
- Interactive tasks: N (QA test-fix loops)

## Deliverables
- File: <session>/plan/dispatch-plan.md

## Open Questions
1. (if any routing ambiguities)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Subtask doesn't match any routing rule | Assign to gongbu by default, note in routing_note discovery |
| Plan has no clear subtasks | Extract implicit tasks from strategy section, note assumptions |
| Circular dependency detected | Break cycle by removing lowest-priority dependency, note in plan |
| Menxia conditions conflict with plan | Prioritize Menxia conditions, note conflict in dispatch plan |
| Single-task plan | Create minimal batch (1 task), add QA task if not present |
198
.codex/skills/team-edict/agents/zhongshu-planner.md
Normal file
@@ -0,0 +1,198 @@
# Zhongshu Planner Agent

Zhongshu (Central Secretariat) -- analyzes the edict, explores the codebase, and drafts a structured execution plan with ministry-level subtask decomposition.

## Identity

- **Type**: `interactive`
- **Role**: zhongshu (Central Secretariat / Planning Department)
- **Responsibility**: Analyze edict requirements, explore codebase for feasibility, draft structured execution plan

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Produce structured output following the plan template
- Explore the codebase to ground the plan in reality
- Decompose the edict into concrete, ministry-assignable subtasks
- Define measurable acceptance criteria for each subtask
- Identify risks and propose mitigation strategies
- Write the plan to the session's `plan/zhongshu-plan.md`
- Report state transitions via discoveries.ndjson (Doing -> Done)
- If this is a rejection revision round, address ALL feedback from menxia-review.md

### MUST NOT

- Skip codebase exploration (unless explicitly told to skip)
- Create subtasks that span multiple departments (split them instead)
- Leave acceptance criteria vague or unmeasurable
- Implement any code (planning only)
- Ignore rejection feedback from previous Menxia review rounds

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | file | Read codebase files, specs, previous plans/reviews |
| `Write` | file | Write execution plan to session directory |
| `Glob` | search | Find files by pattern for codebase exploration |
| `Grep` | search | Search for patterns, keywords, implementations |
| `Bash` | exec | Run shell commands for exploration |

---

## Execution

### Phase 1: Context Loading

**Objective**: Understand the edict and load all relevant context

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Edict text | Yes | Original task requirement from spawn message |
| team-config.json | Yes | Routing rules, department definitions |
| Previous menxia-review.md | If revision | Rejection feedback to address |
| Session discoveries.ndjson | No | Shared findings from previous stages |

**Steps**:

1. Parse the edict text from the spawn message
2. Read `.codex/skills/team-edict/specs/team-config.json` for routing rules
3. If revision round: Read `<session>/review/menxia-review.md` for rejection feedback
4. Read `<session>/discoveries.ndjson` if it exists

**Output**: Parsed requirements + routing rules loaded

---

### Phase 2: Codebase Exploration

**Objective**: Ground the plan in the actual codebase

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Edict requirements | Yes | Parsed from Phase 1 |
| Codebase | Yes | Project files for exploration |

**Steps**:

1. Use Glob/Grep to identify relevant modules and files
2. Read key files to understand existing architecture
3. Identify patterns, conventions, and reusable components
4. Map dependencies and integration points
5. Record codebase patterns as discoveries:
   ```bash
   echo '{"ts":"<ISO8601>","worker":"PLAN-001","type":"codebase_pattern","data":{"pattern_name":"<name>","files":["<file1>","<file2>"],"description":"<description>"}}' >> <session>/discoveries.ndjson
   ```

**Output**: Codebase understanding sufficient for planning

---

### Phase 3: Plan Drafting

**Objective**: Create a structured execution plan with ministry assignments

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Codebase analysis | Yes | From Phase 2 |
| Routing rules | Yes | From team-config.json |
| Rejection feedback | If revision | From menxia-review.md |

**Steps**:

1. Determine high-level execution strategy
2. Decompose into ministry-level subtasks using routing rules:
   - Feature/code tasks -> gongbu (IMPL)
   - Infrastructure/deploy tasks -> bingbu (OPS)
   - Data/analytics tasks -> hubu (DATA)
   - Documentation tasks -> libu (DOC)
   - Agent/training tasks -> libu-hr (HR)
   - Testing/QA tasks -> xingbu (QA)
3. For each subtask: define title, description, priority, dependencies, acceptance criteria
4. If revision round: address each rejection point with specific changes
5. Identify risks and define mitigation/rollback strategies
6. Write plan to `<session>/plan/zhongshu-plan.md`

**Output**: Structured plan file written

---

## Plan Template (zhongshu-plan.md)

```markdown
# Execution Plan

## Revision History (if applicable)
- Round N: Addressed menxia feedback on [items]

## Edict Description
<Original edict text>

## Technical Analysis
<Key findings from codebase exploration>
- Relevant modules: ...
- Existing patterns: ...
- Dependencies: ...

## Execution Strategy
<High-level approach, no more than 500 words>

## Subtask List
| Department | Task ID | Subtask | Priority | Dependencies | Expected Output |
|------------|---------|---------|----------|--------------|-----------------|
| gongbu | IMPL-001 | <specific task> | P0 | None | <output form> |
| xingbu | QA-001 | <test task> | P1 | IMPL-001 | Test report |
...

## Acceptance Criteria
- Criterion 1: <measurable indicator>
- Criterion 2: <measurable indicator>

## Risk Assessment
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| <risk> | High/Med/Low | High/Med/Low | <mitigation plan> |
```

---

## Structured Output Template

```
## Summary
- Plan drafted with N subtasks across M departments

## Findings
- Codebase exploration: identified key patterns in [modules]
- Risk assessment: N risks identified, all with mitigation plans

## Deliverables
- File: <session>/plan/zhongshu-plan.md

## Open Questions
1. Any ambiguities in the edict (if any)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Edict text too vague | List assumptions in plan, continue with best interpretation |
| Codebase exploration timeout | Draft plan based on edict alone, mark "Technical analysis: pending verification" |
| No clear department mapping | Assign to gongbu (engineering) by default, note in plan |
| Revision feedback contradictory | Address each point, note contradictions in "Open Questions" |
| Input file not found | Report in Open Questions, continue with available data |
177
.codex/skills/team-edict/instructions/agent-instruction.md
Normal file
@@ -0,0 +1,177 @@
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS
1. Read shared discoveries: .workflow/.csv-wave/{session-id}/discoveries.ndjson (if exists, skip if not)
2. Read dispatch plan: .workflow/.csv-wave/{session-id}/plan/dispatch-plan.md (task details and acceptance criteria)
3. Read approved plan: .workflow/.csv-wave/{session-id}/plan/zhongshu-plan.md (overall strategy and context)
4. Read quality gates: .codex/skills/team-edict/specs/quality-gates.md (quality standards)
5. Read team config: .codex/skills/team-edict/specs/team-config.json (routing rules and artifact paths)

> **Note**: The session directory path is provided by the orchestrator in `additional_instructions`. Use it to resolve the paths above.

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Description**: {description}
**Department**: {department}
**Task Prefix**: {task_prefix}
**Priority**: {priority}
**Dispatch Batch**: {dispatch_batch}
**Acceptance Criteria**: {acceptance_criteria}

### Previous Tasks' Findings (Context)
{prev_context}

---

## Execution Protocol

1. **Read discoveries**: Load the session's discoveries.ndjson for shared exploration findings from other agents
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Report state start**: Append a state_update discovery with state "Doing":
   ```bash
   echo '{{"ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","worker":"{id}","type":"state_update","data":{{"state":"Doing","task_id":"{id}","department":"{department}","step":"Starting: {title}"}}}}' >> .workflow/.csv-wave/{session-id}/discoveries.ndjson
   ```
4. **Execute based on department**:

   **If department = gongbu (Engineering)**:
   - Read target files listed in description
   - Explore codebase to understand existing patterns and conventions
   - Implement changes following project coding style
   - Validate changes compile/lint correctly (use IDE diagnostics if available)
   - Write output artifact to session artifacts directory
   - Run relevant tests if available

   **If department = bingbu (Operations)**:
   - Analyze infrastructure requirements from description
   - Create/modify deployment scripts, CI/CD configs, or monitoring setup
   - Validate configuration syntax
   - Write output artifact to session artifacts directory

   **If department = hubu (Data & Resources)**:
   - Analyze data sources and requirements from description
   - Perform data analysis, generate reports or dashboards
   - Include key metrics and visualizations where applicable
   - Write output artifact to session artifacts directory

   **If department = libu (Documentation)**:
   - Read source code and existing documentation
   - Generate documentation following the format specified in description
   - Ensure accuracy against current implementation
   - Include code examples where appropriate
   - Write output artifact to session artifacts directory

   **If department = libu-hr (Personnel)**:
   - Read agent/skill files as needed
   - Analyze patterns, generate training materials or evaluations
   - Write output artifact to session artifacts directory

   **If department = xingbu (Quality Assurance)**:
   - This department typically runs as interactive (test-fix loop)
   - If running as csv-wave: execute a one-shot review/audit
   - Read code and test files, run analysis
   - Classify findings by severity (Critical/High/Medium/Low)
   - Write report artifact to session artifacts directory

5. **Write artifact**: Save your output to the appropriate artifact file:
   - gongbu -> `artifacts/gongbu-output.md`
   - bingbu -> `artifacts/bingbu-output.md`
   - hubu -> `artifacts/hubu-output.md`
   - libu -> `artifacts/libu-output.md`
   - libu-hr -> `artifacts/libu-hr-output.md`
   - xingbu -> `artifacts/xingbu-report.md`

   If multiple tasks exist for the same department, append the task ID: `artifacts/gongbu-output-{id}.md`
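The artifact-path convention in step 5 can be captured in a small helper. This is a sketch only; the `multiple` flag standing in for "more than one task in this department" is an assumption:

```python
def artifact_path(department: str, task_id: str, multiple: bool = False) -> str:
    """Resolve the artifact file for a department, per the convention above.

    xingbu writes a report; every other ministry writes an output file.
    When a department has several tasks, the task ID disambiguates.
    """
    suffix = "report" if department == "xingbu" else "output"
    base = f"artifacts/{department}-{suffix}"
    return f"{base}-{task_id}.md" if multiple else f"{base}.md"
```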

6. **Share discoveries**: Append exploration findings to the shared board:
   ```bash
   echo '{{"ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","worker":"{id}","type":"<type>","data":{{...}}}}' >> .workflow/.csv-wave/{session-id}/discoveries.ndjson
   ```

7. **Report completion state**:
   ```bash
   echo '{{"ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","worker":"{id}","type":"state_update","data":{{"state":"Done","task_id":"{id}","department":"{department}","remark":"Completed: <summary>"}}}}' >> .workflow/.csv-wave/{session-id}/discoveries.ndjson
   ```

8. **Report result**: Return JSON via report_agent_job_result

### Discovery Types to Share
- `codebase_pattern`: `{pattern_name, files, description}` -- Identified codebase patterns and conventions
- `dependency_found`: `{dep_name, version, used_by}` -- External dependency discoveries
- `risk_identified`: `{risk_id, severity, description, mitigation}` -- Risk findings
- `implementation_note`: `{file_path, note, line_range}` -- Implementation decisions
- `test_result`: `{test_suite, pass_rate, failures}` -- Test execution results
- `quality_issue`: `{issue_id, severity, file, description}` -- Quality issues found

---

## Artifact Output Format

Write your artifact file in this structure:

```markdown
# {department} Output Report -- {id}

## Task
{title}

## Implementation Summary
<What was done, key decisions made>

## Files Modified/Created
- `path/to/file1` -- description of change
- `path/to/file2` -- description of change

## Acceptance Criteria Verification
| Criterion | Status | Evidence |
|-----------|--------|----------|
| <from acceptance_criteria> | Pass/Fail | <specific evidence> |

## Key Findings
- Finding 1 with file:line reference
- Finding 2 with file:line reference

## Risks / Open Issues
- Any remaining risks or issues (if none, state "None identified")
```

---

## Output (report_agent_job_result)

Return JSON:
```json
{
  "id": "{id}",
  "status": "completed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "artifact_path": "artifacts/<department>-output.md",
  "error": ""
}
```

If the task fails:
```json
{
  "id": "{id}",
  "status": "failed",
  "findings": "Partial progress description",
  "artifact_path": "",
  "error": "Specific error description"
}
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Target files not found | Report in findings, attempt with available context |
| Acceptance criteria ambiguous | Interpret conservatively, note assumption in findings |
| Blocked by missing dependency output | Report "Blocked" state in discoveries, set status to failed with reason |
| Compilation/lint errors in changes | Attempt to fix; if unfixable, report in findings with details |
| Test failures | Report in findings with specific failures, continue with remaining work |
163
.codex/skills/team-edict/schemas/tasks-schema.md
Normal file
@@ -0,0 +1,163 @@
|
||||
# Team Edict -- CSV Schema
|
||||
|
||||
## Master CSV: tasks.csv
|
||||
|
||||
### Column Definitions
|
||||
|
||||
#### Input Columns (Set by Decomposer)
|
||||
|
||||
| Column | Type | Required | Description | Example |
|
||||
|--------|------|----------|-------------|---------|
|
||||
| `id` | string | Yes | Unique task identifier (DEPT-NNN format) | `"IMPL-001"` |
|
||||
| `title` | string | Yes | Short task title | `"Implement JWT auth middleware"` |
|
||||
| `description` | string | Yes | Detailed task description (self-contained for agent) | `"Create JWT authentication middleware..."` |
|
||||
| `deps` | string | No | Semicolon-separated dependency task IDs | `"IMPL-001;IMPL-002"` |
|
||||
| `context_from` | string | No | Semicolon-separated task IDs for context | `"IMPL-001"` |
|
||||
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |
|
||||
| `department` | string | Yes | Target ministry: gongbu/bingbu/hubu/libu/libu-hr/xingbu | `"gongbu"` |
|
||||
| `task_prefix` | string | Yes | Task type prefix: IMPL/OPS/DATA/DOC/HR/QA | `"IMPL"` |
|
||||
| `priority` | string | Yes | Priority level: P0 (highest) to P3 (lowest) | `"P0"` |
|
||||
| `dispatch_batch` | integer | Yes | Batch number from Shangshu dispatch plan (1-based) | `1` |
|
||||
| `acceptance_criteria` | string | Yes | Specific measurable acceptance criteria | `"All auth endpoints return valid JWT"` |
|
||||
|
||||
#### Computed Columns (Set by Wave Engine)
|
||||
|
||||
| Column | Type | Description | Example |
|
||||
|--------|------|-------------|---------|
|
||||
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
|
||||
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[IMPL-001] Created auth middleware..."` |
|
||||
|
||||
#### Output Columns (Set by Agent)
|
||||
|
||||
| Column | Type | Description | Example |
|
||||
|--------|------|-------------|---------|
|
||||
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
|
||||
| `findings` | string | Key discoveries (max 500 chars) | `"Created 3 files, JWT validation working"` |
|
||||
| `artifact_path` | string | Path to output artifact relative to session dir | `"artifacts/gongbu-output.md"` |
|
||||
| `error` | string | Error message if failed | `""` |
|
||||
|
||||
---
|
||||
|
||||
### exec_mode Values
|
||||
|
||||
| Value | Mechanism | Description |
|
||||
|-------|-----------|-------------|
|
||||
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
|
||||
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |
|
||||
|
||||
Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
|
||||
|
||||
---
|
||||
|
||||
### Example Data
|
||||
|
||||
```csv
|
||||
id,title,description,deps,context_from,exec_mode,department,task_prefix,priority,dispatch_batch,acceptance_criteria,wave,status,findings,artifact_path,error
|
||||
IMPL-001,"Implement JWT auth","Create JWT authentication middleware with token generation, validation, and refresh. Use existing bcrypt patterns from src/auth/. Follow Express middleware convention.","","","csv-wave","gongbu","IMPL","P0","1","JWT tokens generated and validated correctly; middleware integrates with existing auth flow","1","pending","","",""
|
||||
OPS-001,"Configure CI pipeline","Set up GitHub Actions CI pipeline with test, lint, and build stages for the auth module.","","","csv-wave","bingbu","OPS","P0","1","CI pipeline runs on PR and push to main; all stages pass","1","pending","","",""
|
||||
DOC-001,"Write auth API docs","Generate OpenAPI 3.0 documentation for all authentication endpoints including JWT token flows.","IMPL-001","IMPL-001","csv-wave","libu","DOC","P1","2","API docs cover all auth endpoints with request/response examples","2","pending","","",""
|
||||
DATA-001,"Auth metrics dashboard","Create dashboard showing auth success/failure rates, token expiry distribution, and active sessions.","IMPL-001","IMPL-001","csv-wave","hubu","DATA","P2","2","Dashboard displays real-time auth metrics with 4 key charts","2","pending","","",""
|
||||
QA-001,"Test auth module","Execute comprehensive test suite for auth module. Run unit tests, integration tests, and security scans. Test-fix loop with gongbu if failures found.","IMPL-001","IMPL-001","interactive","xingbu","QA","P1","2","Test pass rate >= 95%; no Critical security issues; code review clean","2","pending","","",""
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Column Lifecycle
|
||||
|
||||
```
|
||||
Decomposer (Phase 1) Wave Engine (Phase 2) Agent (Execution)
|
||||
--------------------- -------------------- -----------------
|
||||
id ----------> id ----------> id
|
||||
title ----------> title ----------> (reads)
|
||||
description ----------> description ----------> (reads)
|
||||
deps ----------> deps ----------> (reads)
|
||||
context_from----------> context_from----------> (reads)
|
||||
exec_mode ----------> exec_mode ----------> (reads)
|
||||
department ----------> department ----------> (reads)
|
||||
task_prefix ----------> task_prefix ----------> (reads)
|
||||
priority ----------> priority ----------> (reads)
|
||||
dispatch_batch--------> dispatch_batch--------> (reads)
|
||||
acceptance_criteria---> acceptance_criteria---> (reads)
|
||||
wave ----------> (reads)
|
||||
prev_context ----------> (reads)
|
||||
status
|
||||
findings
|
||||
artifact_path
|
||||
error
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Output Schema (JSON)
|
||||
|
||||
Agent output via `report_agent_job_result` (csv-wave tasks):
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "IMPL-001",
|
||||
"status": "completed",
|
||||
"findings": "Implemented JWT auth middleware in src/auth/jwt.ts. Created token generation, validation, and refresh endpoints. Integrated with existing bcrypt password flow.",
|
||||
"artifact_path": "artifacts/gongbu-output.md",
|
||||
"error": ""
|
||||
}
|
||||
```
|
||||
|
||||
Interactive tasks output via structured text or JSON written to `interactive/{id}-result.json`.
|
||||
|
||||
---
|
||||
|
||||
## Discovery Types
|
||||
|
||||
| Type | Dedup Key | Data Schema | Description |
|
||||
|------|-----------|-------------|-------------|
|
||||
| `codebase_pattern` | `pattern_name` | `{pattern_name, files, description}` | Identified codebase patterns and conventions |
|
||||
| `dependency_found` | `dep_name` | `{dep_name, version, used_by}` | External dependency discoveries |
|
||||
| `risk_identified` | `risk_id` | `{risk_id, severity, description, mitigation}` | Risk findings from any agent |
|
||||
| `implementation_note` | `file_path` | `{file_path, note, line_range}` | Implementation decisions and notes |
|
||||
| `test_result` | `test_suite` | `{test_suite, pass_rate, failures}` | Test execution results |
|
||||
| `quality_issue` | `issue_id` | `{issue_id, severity, file, description}` | Quality issues found during review |
|
||||
| `routing_note` | `task_id` | `{task_id, department, reason}` | Dispatch routing decisions |
|
||||
| `state_update` | `task_id` | `{state, task_id, department, step}` | Kanban state transition |
|
||||
| `progress` | `task_id` | `{task_id, current, plan}` | Progress update within task |

### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-08T14:30:00Z","worker":"IMPL-001","type":"state_update","data":{"state":"Doing","task_id":"IMPL-001","department":"gongbu","step":"Starting JWT implementation"}}
{"ts":"2026-03-08T14:35:00Z","worker":"IMPL-001","type":"codebase_pattern","data":{"pattern_name":"express-middleware","files":["src/middleware/auth.ts","src/middleware/cors.ts"],"description":"Express middleware follows handler(req,res,next) pattern with error wrapper"}}
{"ts":"2026-03-08T14:40:00Z","worker":"IMPL-001","type":"implementation_note","data":{"file_path":"src/auth/jwt.ts","note":"Using jsonwebtoken library, RS256 algorithm for token signing","line_range":"1-45"}}
{"ts":"2026-03-08T14:50:00Z","worker":"QA-001","type":"test_result","data":{"test_suite":"auth-unit","pass_rate":"97%","failures":["token-expiry-edge-case"]}}
{"ts":"2026-03-08T14:55:00Z","worker":"QA-001","type":"quality_issue","data":{"issue_id":"QI-001","severity":"Medium","file":"src/auth/jwt.ts:23","description":"Missing input validation for refresh token format"}}
```

> Both csv-wave and interactive agents read and write the same `discoveries.ndjson` file.
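
Writing an entry is a single append of one JSON line. A minimal sketch — the helper name and argument order are illustrative, not part of the framework:

```python
import json
from datetime import datetime, timezone

def emit_discovery(path, worker, dtype, data):
    """Append one discovery entry to an NDJSON file.

    The whole record is serialized with a single write call, which
    typically keeps small concurrent appends line-atomic on POSIX.
    """
    entry = {
        "ts": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "worker": worker,
        "type": dtype,
        "data": data,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```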

---

## Cross-Mechanism Context Flow

| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via the spawn message or `send_input` |
| Interactive task result | CSV task `prev_context` | Read from `interactive/{id}-result.json` |
| Any agent discovery | Any agent | Shared via `discoveries.ndjson` |
| Phase 0 plan/review | CSV tasks | Plan and review files in the session directory |
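
The interactive-to-CSV hand-off in the second row can be sketched as follows; the file layout matches the table, but the shape of the merged context (one entry per dependency id) is an assumption:

```python
import json
from pathlib import Path

def load_prev_context(session_dir, context_from):
    """Collect prev_context for a CSV task from earlier interactive results.

    Missing result files are skipped rather than raised, on the assumption
    that cross-mechanism dependency resolution was validated earlier.
    """
    context = {}
    for dep_id in context_from:
        result_file = Path(session_dir) / "interactive" / f"{dep_id}-result.json"
        if result_file.exists():
            context[dep_id] = json.loads(result_file.read_text(encoding="utf-8"))
    return context
```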

---

## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and are in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has a description | "Empty description for task: {id}" |
| Status enum | `status` in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Cross-mechanism deps | Interactive->CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
| Department valid | Value in {gongbu, bingbu, hubu, libu, libu-hr, xingbu} | "Invalid department: {value}" |
| Task prefix matches dept | IMPL->gongbu, OPS->bingbu, DATA->hubu, DOC->libu, HR->libu-hr, QA->xingbu | "Prefix-department mismatch: {id}" |
| Acceptance criteria non-empty | Every task has `acceptance_criteria` | "Empty acceptance criteria for task: {id}" |
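
The three dependency rules above (unknown deps, self-deps, cycles) reduce to checks around a topological sort. A minimal sketch using Kahn's algorithm — the input shape `{task_id: [dep_ids]}` is an assumption about how the task graph is represented:

```python
from collections import deque

def validate_deps(tasks):
    """tasks maps task_id -> list of dependency ids. Returns a list of error strings."""
    errors = []
    for tid, deps in tasks.items():
        for dep in deps:
            if dep not in tasks:
                errors.append(f"Unknown dependency: {dep}")
            if dep == tid:
                errors.append(f"Self-dependency: {tid}")
    # Kahn's algorithm: any node never drained to indegree 0 lies on a cycle.
    indegree = {tid: 0 for tid in tasks}
    dependents = {tid: [] for tid in tasks}
    for tid, deps in tasks.items():
        for dep in deps:
            if dep in tasks and dep != tid:
                indegree[tid] += 1
                dependents[dep].append(tid)
    queue = deque(t for t, d in indegree.items() if d == 0)
    drained = 0
    while queue:
        node = queue.popleft()
        drained += 1
        for nxt in dependents[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if drained < len(tasks):
        cyclic = sorted(t for t, d in indegree.items() if d > 0)
        errors.append(f"Circular dependency detected involving: {cyclic}")
    return errors
```

If the sort drains every node, the graph is acyclic; otherwise the undrained nodes name the cycle participants for the error message.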