feat: Add Shared Discovery Board protocol to parallel-dev-cycle

Embed a real-time shared context mechanism (coordination/discoveries.ndjson)
into all 4 agent roles, eliminating redundant codebase exploration. Each agent
reads the board on start, skips covered areas, and appends new findings as
NDJSON entries for other agents to consume.

Key additions per file:
- SKILL.md: Design principle #8, discovery type table with dedup keys and
  required data fields, board lifecycle rules
- 02-agent-execution.md: Self-sufficient discovery protocol snippet in each
  spawn prompt with write method, required fields, and dedup keys
- 4 role files: Full Shared Discovery Protocol section with board location,
  lifecycle, physical write method (Bash echo >>), reads/writes tables with
  dedup keys and required data schemas, and embedded Read/Write steps in
  execution process
catlog22, 2026-02-09 14:55:01 +08:00
parent 113bee5ef9, commit 752d98ba5a
6 changed files with 438 additions and 17 deletions


@@ -59,6 +59,7 @@ Each agent **maintains one main document** (e.g., requirements.md, plan.json, im
5. **Parallel Coordination**: Four agents launched simultaneously; coordination via shared state and orchestrator
6. **File References**: Use short file paths instead of content passing
7. **Self-Enhancement**: RA agent proactively extends requirements based on context
8. **Shared Discovery Board**: All agents share exploration findings via `discoveries.ndjson` — read on start, write as you discover, eliminating redundant codebase exploration
## Arguments
@@ -141,11 +142,15 @@ Phase 1: Session Initialization
↓ cycleId, state, progressDir (initialized/resumed)
Phase 2: Agent Execution
├─ All agents read coordination/discoveries.ndjson on start
├─ Each agent explores → writes new discoveries to board
├─ Later-finishing agents benefit from earlier agents' findings
↓ agentOutputs {ra, ep, cd, vas} + shared discoveries.ndjson
Phase 3: Result Aggregation
↓ parsedResults, hasIssues, iteration count
↓ [Loop back to Phase 2 if issues and iteration < max]
↓ (discoveries.ndjson carries over across iterations)
Phase 4: Completion & Summary
↓ finalState, summaryReport
@@ -179,6 +184,7 @@ Return: cycle_id, iterations, final_state
│ ├── changes.log # NDJSON complete history
│ └── history/
└── coordination/
├── discoveries.ndjson # Shared discovery board (all agents append)
├── timeline.md # Execution timeline
└── decisions.log # Decision log
```
@@ -271,7 +277,45 @@ Append: changes.log ← {"timestamp","version":"1.1.0","action":"update","descr
**Execution Order**: RA → EP → CD → VAS (dependency chain, all spawned in parallel but block on dependencies)
### Shared Discovery Board
All agents share a real-time discovery board at `coordination/discoveries.ndjson`. Each agent reads it on start and appends findings during work. This eliminates redundant codebase exploration.
**Lifecycle**:
- Created by the first agent to write a discovery (file may not exist initially)
- Carries over across iterations — never cleared or recreated
- Agents use Bash `echo '...' >> discoveries.ndjson` to append entries
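A minimal Bash sketch of the append step, assuming the orchestrator has already created `coordination/` during Phase 1 (the `>>` redirection creates the board file on first write, but not its parent directory); `PROGRESS_DIR` is an illustrative stand-in for the cycle's `{progressDir}`:
```bash
# Append one discovery entry; the board file is created on first write.
BOARD="$PROGRESS_DIR/coordination/discoveries.ndjson"
mkdir -p "$(dirname "$BOARD")"   # defensive: ensure coordination/ exists
echo '{"ts":"2026-01-22T10:00:00+08:00","agent":"ra","type":"tech_stack","data":{"language":"TypeScript","framework":"Express","test":"Jest","build":"tsup"}}' >> "$BOARD"
```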
**Format**: NDJSON, each line a self-contained JSON object with the required top-level fields `ts`, `agent`, `type`, `data`:
```jsonl
{"ts":"2026-01-22T10:00:00+08:00","agent":"ra","type":"tech_stack","data":{"language":"TypeScript","framework":"Express","test":"Jest","build":"tsup"}}
```
**Discovery Types**:
| type | Dedup Key | Writers | Readers | Required `data` Fields |
|------|-----------|---------|---------|----------------------|
| `tech_stack` | singleton | RA | EP, CD, VAS | `language`, `framework`, `test`, `build` |
| `project_config` | `data.path` | RA | EP, CD | `path`, `key_deps[]`, `scripts{}` |
| `existing_feature` | `data.name` | RA, EP | CD | `name`, `files[]`, `summary` |
| `architecture` | singleton | EP | CD, VAS | `pattern`, `layers[]`, `entry` |
| `code_pattern` | `data.name` | EP, CD | CD, VAS | `name`, `description`, `example_file` |
| `integration_point` | `data.file` | EP | CD | `file`, `description`, `exports[]` |
| `similar_impl` | `data.feature` | EP | CD | `feature`, `files[]`, `relevance` |
| `code_convention` | singleton | CD | VAS | `naming`, `imports`, `formatting` |
| `utility` | `data.name` | CD | VAS | `name`, `file`, `usage` |
| `test_command` | singleton | CD, VAS | VAS, CD | `unit`, `integration`(opt), `coverage`(opt) |
| `test_baseline` | singleton | VAS | CD | `total`, `passing`, `coverage_pct`, `framework`, `config` |
| `test_pattern` | singleton | VAS | CD | `style`, `naming`, `fixtures` |
| `blocker` | `data.issue` | any | all | `issue`, `severity`, `impact` |
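To make the dedup keys concrete, a hypothetical board excerpt (values are illustrative): `tech_stack` appears once because it is a singleton, while two `code_pattern` entries coexist because their `data.name` values differ:
```jsonl
{"ts":"2026-01-22T10:00:00+08:00","agent":"ra","type":"tech_stack","data":{"language":"TypeScript","framework":"Express","test":"Jest","build":"tsup"}}
{"ts":"2026-01-22T10:35:00+08:00","agent":"ep","type":"code_pattern","data":{"name":"error-handling","description":"central error middleware with typed errors","example_file":"src/middleware/error.ts"}}
{"ts":"2026-01-22T10:37:00+08:00","agent":"ep","type":"code_pattern","data":{"name":"input-validation","description":"schema validation at route boundaries","example_file":"src/routes/users.ts"}}
```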
**Protocol Rules**:
1. Read board before own exploration → skip covered areas (if file doesn't exist, skip)
2. Write discoveries immediately via Bash `echo >>` → don't batch
3. Deduplicate — check existing entries; skip if same `type` + dedup key value already exists
4. Append-only — never modify or delete existing lines
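A minimal Bash sketch of rules 1-3, reusing the `PROGRESS_DIR` stand-in from the sketch above; it assumes a plain `grep` on the raw NDJSON is enough to test a dedup key, and the `utility` entry values are hypothetical:
```bash
BOARD="$PROGRESS_DIR/coordination/discoveries.ndjson"

# Rule 1: read the board first; skip silently if it does not exist yet
[ -f "$BOARD" ] && cat "$BOARD"

# Rule 3: dedup check; `utility` is keyed by data.name, so append only if that name is absent
ENTRY='{"ts":"2026-01-22T11:05:00+08:00","agent":"cd","type":"utility","data":{"name":"formatDate","file":"src/utils/date.ts","usage":"format ISO dates for logs"}}'
grep -q '"type":"utility".*"name":"formatDate"' "$BOARD" 2>/dev/null \
  || echo "$ENTRY" >> "$BOARD"   # Rule 2: write immediately; Rule 4: append-only
```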
### Agent → Orchestrator Communication
Each agent outputs a `PHASE_RESULT:` block:
```
PHASE_RESULT:
- phase: ra | ep | cd | vas
@@ -281,7 +325,9 @@ PHASE_RESULT:
- issues: []
```
### Orchestrator → Agent Communication
Feedback via `send_input` (file refs + issue summary, never full content):
```
## FEEDBACK FROM [Source]
[Issue summary with file:line references]


@@ -5,11 +5,38 @@ Spawn four specialized agents in parallel and wait for all to complete with time
## Objective
- Spawn RA, EP, CD, VAS agents simultaneously using Codex subagent pattern
- Pass cycle context and role references to each agent
- Pass cycle context, role references, and **discovery protocol** to each agent
- Wait for all agents with configurable timeout
- Handle timeout with convergence request
- Output: agentOutputs from all 4 agents
## Shared Discovery Board
All agents share a discovery board at `{progressDir}/coordination/discoveries.ndjson`. Each agent reads it on start and writes discoveries during execution. This eliminates redundant codebase exploration across agents.
**Agent reads board → skips covered areas → explores unknowns → writes new findings → other agents benefit**
### Discovery Protocol Snippet (injected into every agent prompt)
```
## SHARED DISCOVERY PROTOCOL
Board: ${progressDir}/coordination/discoveries.ndjson
**On Start**: Read board (if exists; if not, skip — you'll be the first writer).
Skip exploration for areas already covered.
**During Work**: Append discoveries as NDJSON entries via Bash `echo '...' >> discoveries.ndjson`.
**Format**: {"ts":"<ISO8601>","agent":"<role>","type":"<type>","data":{<required fields>}}
**Cross-iteration**: Board persists across iterations. Never clear it.
**You Write** (dedup key in parentheses):
- `<type>` (<dedup key>) → required data: <field1>, <field2>, ...
**You Read**: <comma-separated list of types from other agents>
**Rules**: Read before explore. Write via `echo >>`. Dedup by type+key. Append-only.
```
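As one possible implementation of the snippet's **On Start** step (a sketch only, assuming `jq` is available in the agent's shell; a plain `grep` on `"type":"..."` works as a fallback), an agent can summarize what the board already covers before exploring:
```bash
BOARD="${progressDir}/coordination/discoveries.ndjson"
if [ -f "$BOARD" ]; then
  # Print one "type <dedup key>" pair per line; anything listed here needs no re-exploration
  jq -r '[.type, (.data.name // .data.path // .data.file // .data.feature // .data.issue // "singleton")] | @tsv' \
    "$BOARD" | sort -u
fi
```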
## Agent Role References
Each agent reads its detailed role definition at execution time:
@@ -64,6 +91,26 @@ Cross-reference the task description against these documents for completeness.
---
## SHARED DISCOVERY PROTOCOL
Board: ${progressDir}/coordination/discoveries.ndjson
**On Start**: Read board (if exists; if not, skip — you'll be the first writer). Skip exploration for areas already covered.
**During Work**: Append discoveries as NDJSON entries via Bash \`echo '...' >> discoveries.ndjson\`.
**Format**: {"ts":"<ISO8601>","agent":"ra","type":"<type>","data":{<see required fields>}}
**Cross-iteration**: Board persists across iterations. Never clear it.
**You Write** (dedup key in parentheses):
- \`tech_stack\` (singleton) → required data: language, framework, test, build
- \`project_config\` (data.path) → required data: path, key_deps[], scripts{}
- \`existing_feature\` (data.name) → required data: name, files[], summary
**You Read**: architecture, similar_impl, test_baseline, blocker
**Rules**: Read before explore. Write via \`echo >>\`. Dedup by type+key. Append-only.
---
${sourceRefsSection}
## CYCLE CONTEXT
@@ -87,6 +134,7 @@ Requirements Analyst - Analyze and refine requirements throughout the cycle.
3. Identify edge cases and implicit requirements
4. Track requirement changes across iterations
5. Maintain requirements.md and changes.log
6. **Share discoveries** to coordination/discoveries.ndjson
${focusDirective}
## DELIVERABLES
@@ -126,6 +174,27 @@ function spawnEPAgent(cycleId, state, progressDir) {
---
## SHARED DISCOVERY PROTOCOL
Board: ${progressDir}/coordination/discoveries.ndjson
**On Start**: Read board (if exists; if not, skip — you'll be the first writer). Skip exploration for areas already covered.
**During Work**: Append discoveries as NDJSON entries via Bash \`echo '...' >> discoveries.ndjson\`.
**Format**: {"ts":"<ISO8601>","agent":"ep","type":"<type>","data":{<see required fields>}}
**Cross-iteration**: Board persists across iterations. Never clear it.
**You Write** (dedup key in parentheses):
- \`architecture\` (singleton) → required data: pattern, layers[], entry
- \`code_pattern\` (data.name) → required data: name, description, example_file
- \`integration_point\` (data.file) → required data: file, description, exports[]
- \`similar_impl\` (data.feature) → required data: feature, files[], relevance
**You Read**: tech_stack, project_config, existing_feature, test_command, test_baseline
**Rules**: Read before explore. Write via \`echo >>\`. Dedup by type+key. Append-only.
---
## CYCLE CONTEXT
- **Cycle ID**: ${cycleId}
@@ -144,6 +213,7 @@ Exploration & Planning Agent - Explore architecture and generate implementation
3. Design implementation approach
4. Generate plan.json with task breakdown
5. Update or iterate on existing plan
6. **Share discoveries** to coordination/discoveries.ndjson
## DELIVERABLES
@@ -182,6 +252,27 @@ function spawnCDAgent(cycleId, state, progressDir) {
---
## SHARED DISCOVERY PROTOCOL
Board: ${progressDir}/coordination/discoveries.ndjson
**On Start**: Read board (if exists; if not, skip — you'll be the first writer). Skip exploration for areas already covered.
**During Work**: Append discoveries as NDJSON entries via Bash \`echo '...' >> discoveries.ndjson\`.
**Format**: {"ts":"<ISO8601>","agent":"cd","type":"<type>","data":{<see required fields>}}
**Cross-iteration**: Board persists across iterations. Never clear it.
**You Write** (dedup key in parentheses):
- \`code_convention\` (singleton) → required data: naming, imports, formatting
- \`utility\` (data.name) → required data: name, file, usage
- \`test_command\` (singleton) → required data: unit, integration(opt), coverage(opt)
- \`blocker\` (data.issue) → required data: issue, severity, impact
**You Read**: tech_stack, architecture, code_pattern, integration_point, similar_impl, test_baseline, test_command
**Rules**: Read before explore. Write via \`echo >>\`. Dedup by type+key. Append-only.
---
## CYCLE CONTEXT
- **Cycle ID**: ${cycleId}
@@ -200,6 +291,7 @@ Code Developer - Implement features based on plan and requirements.
3. Handle integration issues
4. Maintain code quality
5. Report implementation progress and issues
6. **Share discoveries** to coordination/discoveries.ndjson
## DELIVERABLES
@@ -237,6 +329,27 @@ function spawnVASAgent(cycleId, state, progressDir) {
---
## SHARED DISCOVERY PROTOCOL
Board: ${progressDir}/coordination/discoveries.ndjson
**On Start**: Read board (if exists; if not, skip — you'll be the first writer). Skip exploration for areas already covered.
**During Work**: Append discoveries as NDJSON entries via Bash \`echo '...' >> discoveries.ndjson\`.
**Format**: {"ts":"<ISO8601>","agent":"vas","type":"<type>","data":{<see required fields>}}
**Cross-iteration**: Board persists across iterations. Never clear it.
**You Write** (dedup key in parentheses):
- \`test_baseline\` (singleton) → required data: total, passing, coverage_pct, framework, config
- \`test_pattern\` (singleton) → required data: style, naming, fixtures
- \`test_command\` (singleton) → required data: unit, e2e(opt), coverage(opt)
- \`blocker\` (data.issue) → required data: issue, severity, impact
**You Read**: tech_stack, architecture, code_pattern, code_convention, test_command, utility, integration_point
**Rules**: Read before explore. Write via \`echo >>\`. Dedup by type+key. Append-only.
---
## CYCLE CONTEXT
- **Cycle ID**: ${cycleId}
@@ -255,6 +368,7 @@ Validation & Archival Specialist - Validate quality and create documentation.
3. Create archival documentation
4. Summarize cycle results
5. Generate version history
6. **Share discoveries** to coordination/discoveries.ndjson
## DELIVERABLES


@@ -55,6 +55,61 @@ The Code Developer is responsible for implementing features according to the pla
- Leave TODO comments without context
- Implement features not in the plan
## Shared Discovery Protocol
The CD agent participates in the **Shared Discovery Board** (`coordination/discoveries.ndjson`), an append-only NDJSON file through which all agents share exploration findings in real time, eliminating redundant codebase exploration.
### Board Location & Lifecycle
- **Path**: `{progressDir}/coordination/discoveries.ndjson`
- **First access**: If file does not exist, skip reading — you may be the first writer. Create it on first write.
- **Cross-iteration**: Board carries over across iterations. Do NOT clear or recreate it. New iterations append to existing entries.
### Physical Write Method
Append one NDJSON line using Bash:
```bash
echo '{"ts":"2026-01-22T11:00:00+08:00","agent":"cd","type":"code_convention","data":{"naming":"camelCase functions, PascalCase classes","imports":"absolute paths via @/ alias","formatting":"prettier with default config"}}' >> {progressDir}/coordination/discoveries.ndjson
```
### CD Reads (from other agents)
| type | Dedup Key | Use |
|------|-----------|-----|
| `tech_stack` | (singleton) | Know language/framework without detection — skip project scanning |
| `architecture` | (singleton) | Understand system layout (layers, entry point) before coding |
| `code_pattern` | `data.name` | Follow existing conventions (error handling, validation, etc.) immediately |
| `integration_point` | `data.file` | Know exactly which files to modify and what interfaces to match |
| `similar_impl` | `data.feature` | Read reference implementations for consistency |
| `test_baseline` | (singleton) | Know current test count/coverage before making changes |
| `test_command` | (singleton) | Run tests directly without figuring out commands |
### CD Writes (for other agents)
| type | Dedup Key | Required `data` Fields | When |
|------|-----------|----------------------|------|
| `code_convention` | (singleton — only 1 entry) | `naming`, `imports`, `formatting` | After observing naming/import/formatting patterns |
| `utility` | `data.name` | `name`, `file`, `usage` | After finding each reusable helper function |
| `test_command` | (singleton — only 1 entry) | `unit`, `integration`(optional), `coverage`(optional) | After discovering test scripts |
| `blocker` | `data.issue` | `issue`, `severity` (high\|medium\|low), `impact` | When hitting any blocking issue |
### Discovery Entry Format
Each line is a self-contained JSON object with exactly these top-level fields:
```jsonl
{"ts":"<ISO8601>","agent":"cd","type":"<type>","data":{<required fields per type>}}
```
### Protocol Rules
1. **Read board first** — before own exploration, read `discoveries.ndjson` (if exists) and skip already-covered areas
2. **Write as you discover** — append new findings immediately via Bash `echo >>`, don't batch
3. **Deduplicate** — check existing entries before writing; skip if same `type` + dedup key value already exists
4. **Never modify existing lines** — append-only, no edits, no deletions
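For example, a hedged sketch of rule 3 for CD's singleton `code_convention` type, reusing the entry from the write-method example above:
```bash
BOARD="${progressDir}/coordination/discoveries.ndjson"
# code_convention is a singleton: append only if no entry of this type exists yet
if ! grep -q '"type":"code_convention"' "$BOARD" 2>/dev/null; then
  echo '{"ts":"2026-01-22T11:00:00+08:00","agent":"cd","type":"code_convention","data":{"naming":"camelCase functions, PascalCase classes","imports":"absolute paths via @/ alias","formatting":"prettier with default config"}}' >> "$BOARD"
fi
```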
---
## Execution Process
### Phase 1: Planning & Setup
@@ -64,12 +119,22 @@ The Code Developer is responsible for implementing features according to the pla
- Requirements from requirements-analyst.md
- Project tech stack and guidelines
2. **Read Discovery Board**
- Read `{progressDir}/coordination/discoveries.ndjson` (if exists)
- Parse entries by type — note what's already discovered
- If `tech_stack` / `architecture` exist → skip project structure exploration
- If `code_pattern` / `code_convention` exist → adopt conventions directly
- If `integration_point` entries exist → know target files without searching
- If `similar_impl` entries exist → read reference files for consistency
- If `test_command` exists → use known commands for testing
3. **Understand Project Structure** (skip areas covered by board)
- Review similar existing implementations
- Understand coding conventions
- Check for relevant utilities/libraries
- **Write discoveries**: append `code_convention`, `utility` entries for new findings
4. **Prepare Environment**
- Create feature branch (if using git)
- Set up development environment
- Prepare test environment


@@ -53,6 +53,59 @@ The Exploration & Planning Agent is responsible for understanding the codebase a
- Skip dependency analysis
- Forget to document risks
## Shared Discovery Protocol
The EP agent participates in the **Shared Discovery Board** (`coordination/discoveries.ndjson`), an append-only NDJSON file through which all agents share exploration findings in real time, eliminating redundant codebase exploration.
### Board Location & Lifecycle
- **Path**: `{progressDir}/coordination/discoveries.ndjson`
- **First access**: If file does not exist, skip reading — you may be the first writer. Create it on first write.
- **Cross-iteration**: Board carries over across iterations. Do NOT clear or recreate it. New iterations append to existing entries.
### Physical Write Method
Append one NDJSON line using Bash:
```bash
echo '{"ts":"2026-01-22T10:30:00+08:00","agent":"ep","type":"architecture","data":{"pattern":"layered","layers":["routes","services","models"],"entry":"src/index.ts"}}' >> {progressDir}/coordination/discoveries.ndjson
```
### EP Reads (from other agents)
| type | Dedup Key | Use |
|------|-----------|-----|
| `tech_stack` | (singleton) | Skip tech stack detection, jump directly to architecture analysis |
| `project_config` | `data.path` | Know dependencies and scripts without re-scanning config files |
| `existing_feature` | `data.name` | Understand existing functionality as exploration starting points |
| `test_command` | (singleton) | Know how to verify architectural assumptions |
| `test_baseline` | (singleton) | Calibrate plan effort estimates based on current test coverage and pass rate |
### EP Writes (for other agents)
| type | Dedup Key | Required `data` Fields | When |
|------|-----------|----------------------|------|
| `architecture` | (singleton — only 1 entry) | `pattern`, `layers[]`, `entry` | After mapping overall system structure |
| `code_pattern` | `data.name` | `name`, `description`, `example_file` | After identifying each coding convention |
| `integration_point` | `data.file` | `file`, `description`, `exports[]` | After locating each integration target |
| `similar_impl` | `data.feature` | `feature`, `files[]`, `relevance` (high\|medium\|low) | After finding each reference implementation |
### Discovery Entry Format
Each line is a self-contained JSON object with exactly these top-level fields:
```jsonl
{"ts":"<ISO8601>","agent":"ep","type":"<type>","data":{<required fields per type>}}
```
### Protocol Rules
1. **Read board first** — before own exploration, read `discoveries.ndjson` (if exists) and skip already-covered areas
2. **Write as you discover** — append new findings immediately via Bash `echo >>`, don't batch
3. **Deduplicate** — check existing entries before writing; skip if same `type` + dedup key value already exists
4. **Never modify existing lines** — append-only, no edits, no deletions
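As a sketch of the read side (assuming `jq` is available; filtering with `grep` on the `type` field is an equally valid fallback), EP can pull just the entry types it consumes before exploring:
```bash
BOARD="${progressDir}/coordination/discoveries.ndjson"
if [ -f "$BOARD" ]; then
  # Keep only the types EP reads; each match is printed as one compact JSON line
  jq -c 'select(.type == "tech_stack" or .type == "project_config"
             or .type == "existing_feature" or .type == "test_command"
             or .type == "test_baseline")' "$BOARD"
fi
```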
---
## Execution Process
### Phase 1: Codebase Exploration
@@ -62,19 +115,28 @@ The Exploration & Planning Agent is responsible for understanding the codebase a
- Requirements from RA
- Project tech stack and guidelines
2. **Read Discovery Board**
- Read `{progressDir}/coordination/discoveries.ndjson` (if exists)
- Parse entries by type — note what's already discovered
- If `tech_stack` exists → skip tech stack scanning, use shared data
- If `project_config` exists → skip package.json/tsconfig reading
- If `existing_feature` entries exist → use as exploration starting points
3. **Explore Architecture** (skip areas covered by board)
- Identify existing patterns and conventions
- Find similar feature implementations
- Map module boundaries
- Document current architecture
- **Write discoveries**: append `architecture`, `code_pattern`, `integration_point`, `similar_impl` entries to board
4. **Analyze Integration Points**
- Where will new code integrate?
- What interfaces need to match?
- What data models exist?
- What dependencies exist?
- **Write discoveries**: append `integration_point` entries for each finding
5. **Generate Exploration Report**
- Write `exploration.md` documenting findings
- Include architecture overview
- Document identified patterns
@@ -82,7 +144,12 @@ The Exploration & Planning Agent is responsible for understanding the codebase a
### Phase 2: Planning
1. **Re-read Discovery Board**
- Check for newly appeared entries since Phase 1 (other agents may have written)
- If `test_baseline` exists → calibrate effort estimates based on current coverage/pass rate
- If `blocker` entries exist → factor into risk assessment and task dependencies
2. **Decompose Requirements**
- Convert each requirement to one or more tasks
- Identify logical grouping
- Determine task sequencing


@@ -50,6 +50,57 @@ The Requirements Analyst maintains **a single file** (`requirements.md`) contain
- Forget to increment version number
- Skip documenting edge cases
## Shared Discovery Protocol
The RA agent participates in the **Shared Discovery Board** (`coordination/discoveries.ndjson`), an append-only NDJSON file through which all agents share exploration findings in real time, eliminating redundant codebase exploration.
### Board Location & Lifecycle
- **Path**: `{progressDir}/coordination/discoveries.ndjson`
- **First access**: If file does not exist, skip reading — you are the first writer. Create it on first write.
- **Cross-iteration**: Board carries over across iterations. Do NOT clear or recreate it. New iterations append to existing entries.
### Physical Write Method
Append one NDJSON line using Bash:
```bash
echo '{"ts":"2026-01-22T10:00:00+08:00","agent":"ra","type":"tech_stack","data":{"language":"TypeScript","framework":"Express","test":"Jest","build":"tsup"}}' >> {progressDir}/coordination/discoveries.ndjson
```
### RA Reads (from other agents)
| type | Dedup Key | Use |
|------|-----------|-----|
| `architecture` | (singleton) | Understand system structure for requirements scoping |
| `similar_impl` | `data.feature` | Identify existing features to avoid duplicate requirements |
| `test_baseline` | (singleton) | Calibrate NFR targets (coverage, pass rate) based on current state |
| `blocker` | `data.issue` | Incorporate known constraints into requirements |
### RA Writes (for other agents)
| type | Dedup Key | Required `data` Fields | When |
|------|-----------|----------------------|------|
| `tech_stack` | (singleton — only 1 entry) | `language`, `framework`, `test`, `build` | After reading package.json / project config |
| `project_config` | `data.path` | `path`, `key_deps[]`, `scripts{}` | After scanning each project config file |
| `existing_feature` | `data.name` | `name`, `files[]`, `summary` | After identifying each existing capability |
### Discovery Entry Format
Each line is a self-contained JSON object with exactly these top-level fields:
```jsonl
{"ts":"<ISO8601>","agent":"ra","type":"<type>","data":{<required fields per type>}}
```
### Protocol Rules
1. **Read board first** — before own exploration, read `discoveries.ndjson` (if exists) and skip already-covered areas
2. **Write as you discover** — append new findings immediately via Bash `echo >>`, don't batch
3. **Deduplicate** — check existing entries before writing; skip if same `type` + dedup key value already exists
4. **Never modify existing lines** — append-only, no edits, no deletions
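For example, a hedged sketch of the `project_config` write, deduplicated by `data.path` (the dependency and script values are illustrative):
```bash
BOARD="${progressDir}/coordination/discoveries.ndjson"
CONFIG_PATH="package.json"   # one project_config entry per config file scanned
if ! grep -q "\"type\":\"project_config\".*\"path\":\"$CONFIG_PATH\"" "$BOARD" 2>/dev/null; then
  echo '{"ts":"2026-01-22T10:05:00+08:00","agent":"ra","type":"project_config","data":{"path":"package.json","key_deps":["express","zod"],"scripts":{"test":"jest","build":"tsup"}}}' >> "$BOARD"
fi
```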
---
## Execution Process
### Phase 1: Initial Analysis (v1.0.0)
@@ -59,24 +110,36 @@ The Requirements Analyst maintains **a single file** (`requirements.md`) contain
- Task description from state
- Project tech stack and guidelines
2. **Read Discovery Board**
- Read `{progressDir}/coordination/discoveries.ndjson` (if exists)
- Parse entries by type — note what's already discovered
- If `tech_stack` exists → skip tech stack detection
- If `existing_feature` entries exist → incorporate into requirements baseline
3. **Analyze Explicit Requirements**
- Functional requirements from user task
- Non-functional requirements (explicit)
- Constraints and assumptions
- Edge cases
4. **Write Discoveries**
- Append `tech_stack` entry if not already on board (from package.json, tsconfig, etc.)
- Append `project_config` entry with key deps and scripts
- Append `existing_feature` entries for each existing capability found during analysis
5. **Proactive Enhancement** (Self-Enhancement Phase)
- Execute enhancement strategies based on triggers
- Scan codebase for implied requirements
- **Read Discovery Board again** — check for `architecture`, `integration_point`, `blocker` from EP/CD/VAS (may have appeared since step 2)
- Analyze peer agent outputs (EP, CD, VAS from previous iteration)
- Suggest associated features and NFR scaffolding
6. **Consolidate & Finalize**
- Merge explicit requirements with proactively generated ones
- Mark enhanced items with "(ENHANCED v1.0.0 by RA)"
- Add optional "## Proactive Enhancements" section with justification
7. **Generate Single File**
- Write `requirements.md` v1.0.0
- Include all sections in one document
- Add version header


@@ -55,6 +55,61 @@ The Validation & Archival Agent is responsible for verifying implementation qual
- Forget to document breaking changes
- Skip regression testing
## Shared Discovery Protocol
The VAS agent participates in the **Shared Discovery Board** (`coordination/discoveries.ndjson`), an append-only NDJSON file through which all agents share exploration findings in real time, eliminating redundant codebase exploration.
### Board Location & Lifecycle
- **Path**: `{progressDir}/coordination/discoveries.ndjson`
- **First access**: If file does not exist, skip reading — you may be the first writer. Create it on first write.
- **Cross-iteration**: Board carries over across iterations. Do NOT clear or recreate it. New iterations append to existing entries.
### Physical Write Method
Append one NDJSON line using Bash:
```bash
echo '{"ts":"2026-01-22T12:00:00+08:00","agent":"vas","type":"test_baseline","data":{"total":120,"passing":118,"coverage_pct":82,"framework":"jest","config":"jest.config.ts"}}' >> {progressDir}/coordination/discoveries.ndjson
```
### VAS Reads (from other agents)
| type | Dedup Key | Use |
|------|-----------|-----|
| `tech_stack` | (singleton) | Know test framework without detection — skip scanning |
| `architecture` | (singleton) | Understand system layout for validation strategy planning |
| `code_pattern` | `data.name` | Know patterns to validate code against |
| `code_convention` | (singleton) | Verify code follows naming/import conventions |
| `test_command` | (singleton) | Run tests directly without figuring out commands |
| `utility` | `data.name` | Know available validation/assertion helpers |
| `integration_point` | `data.file` | Focus integration tests on known integration points |
### VAS Writes (for other agents)
| type | Dedup Key | Required `data` Fields | When |
|------|-----------|----------------------|------|
| `test_baseline` | (singleton — only 1 logical entry; supersede by appending a newer one, never editing) | `total`, `passing`, `coverage_pct`, `framework`, `config` | After running initial test suite |
| `test_pattern` | (singleton — only 1 entry) | `style`, `naming`, `fixtures` | After observing test file organization |
| `test_command` | (singleton — only 1 entry) | `unit`, `e2e`(optional), `coverage`(optional) | After discovering test scripts (if CD hasn't written it already) |
| `blocker` | `data.issue` | `issue`, `severity` (high\|medium\|low), `impact` | When tests reveal blocking issues |
### Discovery Entry Format
Each line is a self-contained JSON object with exactly these top-level fields:
```jsonl
{"ts":"<ISO8601>","agent":"vas","type":"<type>","data":{<required fields per type>}}
```
### Protocol Rules
1. **Read board first** — before own exploration, read `discoveries.ndjson` (if exists) and skip already-covered areas
2. **Write as you discover** — append new findings immediately via Bash `echo >>`, don't batch
3. **Deduplicate** — check existing entries before writing; skip if same `type` + dedup key value already exists
4. **Never modify existing lines** — append-only, no edits, no deletions
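A hedged sketch of the `test_baseline` write after the first full test run, assuming `jq` is available for safe JSON construction and that `date -Iseconds` works on the host; the counts mirror the example above:
```bash
BOARD="${progressDir}/coordination/discoveries.ndjson"
TOTAL=120; PASSING=118; COVERAGE=82   # taken from the test runner's summary
# --argjson keeps the counts as JSON numbers rather than quoted strings;
# a newer baseline is simply appended later and supersedes this one.
jq -nc --arg ts "$(date -Iseconds)" \
       --argjson total "$TOTAL" --argjson passing "$PASSING" --argjson cov "$COVERAGE" \
       '{ts:$ts,agent:"vas",type:"test_baseline",
         data:{total:$total,passing:$passing,coverage_pct:$cov,framework:"jest",config:"jest.config.ts"}}' \
  >> "$BOARD"
```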
---
## Execution Process
### Phase 1: Test Execution
@@ -64,22 +119,33 @@ The Validation & Archival Agent is responsible for verifying implementation qual
- Requirements from RA agent
- Project tech stack and guidelines
2. **Read Discovery Board**
- Read `{progressDir}/coordination/discoveries.ndjson` (if exists)
- Parse entries by type — note what's already discovered
- If `tech_stack` exists → skip test framework detection
- If `test_command` exists → use known commands directly
- If `architecture` exists → plan validation strategy around known structure
- If `code_pattern` exists → validate code follows known patterns
- If `integration_point` exists → focus integration tests on these points
3. **Prepare Test Environment**
- Set up test databases (clean state)
- Configure test fixtures
- Initialize test data
4. **Run Test Suites** (use `test_command` from board if available)
- Execute unit tests
- Execute integration tests
- Execute end-to-end tests
- Run security tests if applicable
- **Write discoveries**: append `test_baseline` with initial results
5. **Collect Results**
- Test pass/fail status
- Execution time
- Error messages and stack traces
- Coverage metrics
- **Write discoveries**: append `test_pattern` if test organization discovered
### Phase 2: Analysis & Validation