mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-03-10 17:11:04 +08:00
Compare commits (9 commits):

- 3341a2e772
- 61ea9d47a6
- f3ae78f95e
- 334f82eaad
- 1c1a4afd23
- c014c0568a
- 62d8aa3623
- 9aa07e8d01
- 4254eeeaa7
@@ -8,12 +8,12 @@ description: |
 Examples:
 - Context: Coordinator spawns analyst worker
-user: "role: analyst\nrole_spec: .claude/skills/team-lifecycle/role-specs/analyst.md\nsession: .workflow/.team/TLS-xxx"
+user: "role: analyst\nrole_spec: ~ or <project>/.claude/skills/team-lifecycle/role-specs/analyst.md\nsession: .workflow/.team/TLS-xxx"
 assistant: "Loading role spec, discovering RESEARCH-* tasks, executing Phase 2-4 domain logic"
 commentary: Agent parses prompt, loads role spec, runs built-in Phase 1 then role-specific Phase 2-4 then built-in Phase 5
 - Context: Coordinator spawns writer worker with inner loop
-user: "role: writer\nrole_spec: .claude/skills/team-lifecycle/role-specs/writer.md\ninner_loop: true"
+user: "role: writer\nrole_spec: ~ or <project>/.claude/skills/team-lifecycle/role-specs/writer.md\ninner_loop: true"
 assistant: "Loading role spec, processing all DRAFT-* tasks in inner loop"
 commentary: Agent detects inner_loop=true, loops Phase 1-5 for each same-prefix task
 color: green

@@ -66,7 +66,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-arch-opt/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-arch-opt/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: arch-opt

@@ -95,7 +95,7 @@ Find ready tasks, spawn workers, STOP.
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-arch-opt/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-arch-opt/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: arch-opt
@@ -3,7 +3,7 @@
 "team_name": "arch-opt",
 "team_display_name": "Architecture Optimization",
 "skill_name": "team-arch-opt",
-"skill_path": ".claude/skills/team-arch-opt/",
+"skill_path": "~ or <project>/.claude/skills/team-arch-opt/",
 "pipeline_type": "Linear with Review-Fix Cycle (Parallel-Capable)",
 "completion_action": "interactive",
 "has_inline_discuss": true,

@@ -65,7 +65,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-brainstorm/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-brainstorm/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: brainstorm

@@ -89,7 +89,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: ideator
-role_spec: .claude/skills/team-brainstorm/roles/ideator/role.md
+role_spec: ~ or <project>/.claude/skills/team-brainstorm/roles/ideator/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: brainstorm

@@ -91,7 +91,7 @@ Find ready tasks, spawn workers, STOP.
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-brainstorm/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-brainstorm/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: brainstorm
@@ -32,7 +32,7 @@ Generate complete team skills following the team-lifecycle-v4 architecture: SKIL
 ## Key Design Principles
 
 1. **v4 Architecture Compliance**: Generated skills follow team-lifecycle-v4 pattern — SKILL.md = pure router, beat model = coordinator-only, unified structure (roles/ + specs/ + templates/)
-2. **Golden Sample Reference**: Uses `team-lifecycle-v4` as reference implementation at `.claude/skills/team-lifecycle-v4/`
+2. **Golden Sample Reference**: Uses `team-lifecycle-v4` as reference implementation at `~ or <project>/.claude/skills/team-lifecycle-v4/`
 3. **Intelligent Commands Distribution**: Auto-determines which roles need `commands/` (2+ commands) vs inline logic (1 command)
 4. **team-worker Compatibility**: Role.md files include correct YAML frontmatter for team-worker agent parsing

@@ -76,7 +76,7 @@ Return:
 
 ## Golden Sample
 
-Generated skills follow the architecture of `.claude/skills/team-lifecycle-v4/`:
+Generated skills follow the architecture of `~ or <project>/.claude/skills/team-lifecycle-v4/`:
 
 ```
 .claude/skills/<skill-name>/

@@ -12,7 +12,7 @@ Generate all role files, specs, and templates based on `teamConfig` and the gene
 
 ## Golden Sample Reference
 
-Read the golden sample at `.claude/skills/team-lifecycle-v4/` for each file type before generating. This ensures pattern fidelity.
+Read the golden sample at `~ or <project>/.claude/skills/team-lifecycle-v4/` for each file type before generating. This ensures pattern fidelity.
 
 ## Step 3.1: Generate Coordinator

@@ -305,7 +305,7 @@ For each additional spec in `teamConfig.specs` (beyond pipelines), generate doma
 
 For each template in `teamConfig.templates`:
 
-1. Check if golden sample has matching template at `.claude/skills/team-lifecycle-v4/templates/`
+1. Check if golden sample has matching template at `~ or <project>/.claude/skills/team-lifecycle-v4/templates/`
 2. If exists: copy and adapt for new domain
 3. If not: generate domain-appropriate template structure
@@ -193,7 +193,7 @@ Agent({
 name: "<role>",
 team_name: "<team_name>",
 prompt: `role: <role>
-role_spec: .claude/skills/team-edict/role-specs/<role>.md
+role_spec: ~ or <project>/.claude/skills/team-edict/role-specs/<role>.md
 session: <session_path>
 session_id: <session_id>
 team_name: <team_name>

@@ -24,7 +24,7 @@ team_msg(operation="log", session_id=<session_id>, from="xingbu",
 
 1. Read the current task (the QA-* task description)
 2. Read `<session_path>/plan/dispatch-plan.md` for the acceptance criteria
-3. Read `.claude/skills/team-edict/specs/quality-gates.md` for the quality-gate standards
+3. Read `~ or <project>/.claude/skills/team-edict/specs/quality-gates.md` for the quality-gate standards
 4. Read the output report of the department under review (usually gongbu)
 
 ## Phase 3: Quality Review

@@ -18,7 +18,7 @@
 
 ```javascript
 // Executed at Phase 0/1 startup
-Read(".claude/skills/team-edict/specs/team-config.json") // load routing rules and artifact paths
+Read("~ or <project>/.claude/skills/team-edict/specs/team-config.json") // load routing rules and artifact paths
 ```
 
 ---
@@ -106,7 +106,7 @@ Read(".claude/skills/team-edict/specs/team-config.json") // load routing rules
 name: "zhongshu",
 team_name: <team_name>,
 prompt: `role: zhongshu
-role_spec: .claude/skills/team-edict/role-specs/zhongshu.md
+role_spec: ~ or <project>/.claude/skills/team-edict/role-specs/zhongshu.md
 session: <session_path>
 session_id: <session_id>
 team_name: <team_name>

@@ -138,7 +138,7 @@ inner_loop: false`,
 name: "menxia",
 team_name: <team_name>,
 prompt: `role: menxia
-role_spec: .claude/skills/team-edict/role-specs/menxia.md
+role_spec: ~ or <project>/.claude/skills/team-edict/role-specs/menxia.md
 session: <session_path>
 session_id: <session_id>
 team_name: <team_name>

@@ -177,7 +177,7 @@ inner_loop: false`,
 name: "shangshu",
 team_name: <team_name>,
 prompt: `role: shangshu
-role_spec: .claude/skills/team-edict/role-specs/shangshu.md
+role_spec: ~ or <project>/.claude/skills/team-edict/role-specs/shangshu.md
 session: <session_path>
 session_id: <session_id>
 team_name: <team_name>
@@ -99,7 +99,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-frontend-debug/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-frontend-debug/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: <team-name>

@@ -29,7 +29,7 @@ EXPECTED: <artifact path> + <quality criteria>
 CONSTRAINTS: <scope limits>
 ---
 InnerLoop: <true|false>
-RoleSpec: .claude/skills/team-frontend-debug/roles/<role>/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-frontend-debug/roles/<role>/role.md
 ```
 
 ---

@@ -55,7 +55,7 @@ EXPECTED: <session>/artifacts/TEST-001-report.md + <session>/artifacts/TEST-001-
 CONSTRAINTS: Use Chrome DevTools MCP only | Do not modify any code | Test all listed features
 ---
 InnerLoop: true
-RoleSpec: .claude/skills/team-frontend-debug/roles/tester/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-frontend-debug/roles/tester/role.md
 ```
 
 ### ANALYZE-001 (Test Mode): Analyze Discovered Issues

@@ -75,7 +75,7 @@ EXPECTED: <session>/artifacts/ANALYZE-001-rca.md with root causes for all issues
 CONSTRAINTS: Read-only analysis | Skip low-severity warnings unless user requests
 ---
 InnerLoop: false
-RoleSpec: .claude/skills/team-frontend-debug/roles/analyzer/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-frontend-debug/roles/analyzer/role.md
 ```
 
 **Conditional**: If TEST-001 reports zero issues → skip ANALYZE-001, FIX-001, VERIFY-001. Pipeline completes.

@@ -96,7 +96,7 @@ EXPECTED: Modified source files + <session>/artifacts/FIX-001-changes.md
 CONSTRAINTS: Minimal changes per issue | Follow existing code style
 ---
 InnerLoop: true
-RoleSpec: .claude/skills/team-frontend-debug/roles/fixer/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-frontend-debug/roles/fixer/role.md
 ```
 
 ### VERIFY-001 (Test Mode): Re-Test After Fix

@@ -117,7 +117,7 @@ EXPECTED: <session>/artifacts/VERIFY-001-report.md with pass/fail per previously
 CONSTRAINTS: Only re-test failed scenarios | Use Chrome DevTools MCP only
 ---
 InnerLoop: false
-RoleSpec: .claude/skills/team-frontend-debug/roles/verifier/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-frontend-debug/roles/verifier/role.md
 ```
 
 ---
@@ -143,7 +143,7 @@ EXPECTED: <session>/evidence/ directory with all captures + reproduction report
 CONSTRAINTS: Use Chrome DevTools MCP only | Do not modify any code
 ---
 InnerLoop: false
-RoleSpec: .claude/skills/team-frontend-debug/roles/reproducer/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-frontend-debug/roles/reproducer/role.md
 ```
 
 ### ANALYZE-001 (Debug Mode): Root Cause Analysis

@@ -164,7 +164,7 @@ EXPECTED: <session>/artifacts/ANALYZE-001-rca.md with root cause, file:line, fix
 CONSTRAINTS: Read-only analysis | Request more evidence if inconclusive
 ---
 InnerLoop: false
-RoleSpec: .claude/skills/team-frontend-debug/roles/analyzer/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-frontend-debug/roles/analyzer/role.md
 ```
 
 ### FIX-001 (Debug Mode): Code Fix

@@ -183,7 +183,7 @@ EXPECTED: Modified source files + <session>/artifacts/FIX-001-changes.md
 CONSTRAINTS: Minimal changes | Follow existing code style | No breaking changes
 ---
 InnerLoop: true
-RoleSpec: .claude/skills/team-frontend-debug/roles/fixer/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-frontend-debug/roles/fixer/role.md
 ```
 
 ### VERIFY-001 (Debug Mode): Fix Verification

@@ -203,7 +203,7 @@ EXPECTED: <session>/artifacts/VERIFY-001-report.md with pass/fail verdict
 CONSTRAINTS: Use Chrome DevTools MCP only | Same steps as reproduction
 ---
 InnerLoop: false
-RoleSpec: .claude/skills/team-frontend-debug/roles/verifier/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-frontend-debug/roles/verifier/role.md
 ```
 
 ---

@@ -219,7 +219,7 @@ TASK: <specific evidence requests from Analyzer>
 CONTEXT: Session + Analyzer request
 ---
 InnerLoop: false
-RoleSpec: .claude/skills/team-frontend-debug/roles/reproducer/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-frontend-debug/roles/reproducer/role.md
 ```
 
 ### FIX-002 (Either Mode): Re-Fix After Failed Verification

@@ -231,7 +231,7 @@ TASK: Review VERIFY-001 failure details, apply corrective fix
 CONTEXT: Session + VERIFY-001-report.md
 ---
 InnerLoop: true
-RoleSpec: .claude/skills/team-frontend-debug/roles/fixer/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-frontend-debug/roles/fixer/role.md
 ```
 
 ## Conditional Skip Rules
@@ -66,7 +66,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-frontend/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-frontend/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: frontend

@@ -129,7 +129,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-frontend/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-frontend/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: frontend

@@ -67,7 +67,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-issue/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-issue/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: issue

@@ -89,7 +89,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-issue/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-issue/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: issue

@@ -101,7 +101,7 @@ Find ready tasks, spawn workers, STOP.
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-issue/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-issue/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: issue

@@ -133,7 +133,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-issue/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-issue/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: issue
@@ -66,7 +66,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-iterdev/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-iterdev/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: iterdev

@@ -101,7 +101,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-iterdev/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-iterdev/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: iterdev

@@ -71,7 +71,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-lifecycle-v4/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-lifecycle-v4/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: <team-name>

@@ -98,7 +98,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: supervisor
-role_spec: .claude/skills/team-lifecycle-v4/roles/supervisor/role.md
+role_spec: ~ or <project>/.claude/skills/team-lifecycle-v4/roles/supervisor/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: <team-name>
@@ -29,7 +29,7 @@ EXPECTED: <artifact path> + <quality criteria>
 CONSTRAINTS: <scope limits>
 ---
 InnerLoop: <true|false>
-RoleSpec: .claude/skills/team-lifecycle-v4/roles/<role>/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-lifecycle-v4/roles/<role>/role.md
 ```
 
 ## InnerLoop Flag Rules

@@ -45,7 +45,7 @@ CHECKPOINT tasks are dispatched like regular tasks but handled differently at sp
 - Owner: supervisor
 - **NOT spawned as team-worker** — coordinator wakes the resident supervisor via SendMessage
 - If `supervision: false` in team-session.json, skip creating CHECKPOINT tasks entirely
-- RoleSpec in description: `.claude/skills/team-lifecycle-v4/roles/supervisor/role.md`
+- RoleSpec in description: `~ or <project>/.claude/skills/team-lifecycle-v4/roles/supervisor/role.md`
 
 ## Dependency Validation

@@ -78,7 +78,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-perf-opt/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-perf-opt/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: perf-opt

@@ -73,7 +73,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-perf-opt/role-specs/<role>.md
+role_spec: ~ or <project>/.claude/skills/team-perf-opt/role-specs/<role>.md
 session: <session-folder>
 session_id: <session-id>
 team_name: perf-opt

@@ -112,7 +112,7 @@ Execute `commands/dispatch.md` inline (Command Execution Protocol).
 ### Initial Spawn
 
 Find first unblocked task and spawn its worker using SKILL.md Worker Spawn Template with:
-- `role_spec: .claude/skills/team-perf-opt/roles/<role>/role.md`
+- `role_spec: ~ or <project>/.claude/skills/team-perf-opt/roles/<role>/role.md`
 - `team_name: perf-opt`
 
 **STOP** after spawning. Wait for worker callback.
@@ -3,7 +3,7 @@
 "team_name": "perf-opt",
 "team_display_name": "Performance Optimization",
 "skill_name": "team-perf-opt",
-"skill_path": ".claude/skills/team-perf-opt/",
+"skill_path": "~ or <project>/.claude/skills/team-perf-opt/",
 "worker_agent": "team-worker",
 "pipeline_type": "Linear with Review-Fix Cycle (Parallel-Capable)",
 "completion_action": "interactive",

@@ -65,7 +65,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-planex/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-planex/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: planex

@@ -125,7 +125,7 @@ Collect task states from TaskList()
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-planex/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-planex/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: <team-name>

@@ -68,7 +68,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-quality-assurance/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-quality-assurance/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: quality-assurance

@@ -30,7 +30,7 @@ EXPECTED: <artifact path> + <quality criteria>
 CONSTRAINTS: <scope limits>
 ---
 InnerLoop: <true|false>
-RoleSpec: .claude/skills/team-quality-assurance/roles/<role>/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-quality-assurance/roles/<role>/role.md
 ```
 
 ## Pipeline Task Registry
@@ -59,7 +59,7 @@ EXPECTED: Fixed test files | Improved coverage
 CONSTRAINTS: Only modify test files | No source changes
 ---
 InnerLoop: false
-RoleSpec: .claude/skills/team-quality-assurance/roles/generator/role.md"
+RoleSpec: ~ or <project>/.claude/skills/team-quality-assurance/roles/generator/role.md"
 })
 TaskCreate({
 subject: "QARUN-gc-<round>: Re-execute <layer> (GC #<round>)",

@@ -72,7 +72,7 @@ EXPECTED: <session>/results/run-<layer>-gc-<round>.json
 CONSTRAINTS: Read-only execution
 ---
 InnerLoop: false
-RoleSpec: .claude/skills/team-quality-assurance/roles/executor/role.md",
+RoleSpec: ~ or <project>/.claude/skills/team-quality-assurance/roles/executor/role.md",
 blockedBy: ["QAGEN-fix-<round>"]
 })
 ```

@@ -149,7 +149,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-quality-assurance/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-quality-assurance/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: quality-assurance

@@ -66,7 +66,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-review/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-review/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: review

@@ -30,7 +30,7 @@ EXPECTED: <artifact path> + <quality criteria>
 CONSTRAINTS: <scope limits>
 ---
 InnerLoop: <true|false>
-RoleSpec: .claude/skills/team-review/roles/<role>/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-review/roles/<role>/role.md
 ```
 
 ## Pipeline Task Registry
@@ -24,9 +24,9 @@ Event-driven pipeline coordination. Beat model: coordinator wake -> process -> s
 
 | Prefix | Role | Role Spec | inner_loop |
 |--------|------|-----------|------------|
-| SCAN-* | scanner | `.claude/skills/team-review/roles/scanner/role.md` | false |
-| REV-* | reviewer | `.claude/skills/team-review/roles/reviewer/role.md` | false |
-| FIX-* | fixer | `.claude/skills/team-review/roles/fixer/role.md` | true |
+| SCAN-* | scanner | `~ or <project>/.claude/skills/team-review/roles/scanner/role.md` | false |
+| REV-* | reviewer | `~ or <project>/.claude/skills/team-review/roles/reviewer/role.md` | false |
+| FIX-* | fixer | `~ or <project>/.claude/skills/team-review/roles/fixer/role.md` | true |
 
 ## handleCallback

@@ -123,7 +123,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-review/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-review/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: review

@@ -72,7 +72,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-roadmap-dev/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-roadmap-dev/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: roadmap-dev

@@ -15,9 +15,9 @@ Handle all coordinator monitoring events for the roadmap-dev pipeline using the
 
 | Prefix | Role | Role Spec | inner_loop |
 |--------|------|-----------|------------|
-| PLAN | planner | `.claude/skills/team-roadmap-dev/roles/planner/role.md` | true (cli_tools: gemini --mode analysis) |
-| EXEC | executor | `.claude/skills/team-roadmap-dev/roles/executor/role.md` | true (cli_tools: gemini --mode write) |
-| VERIFY | verifier | `.claude/skills/team-roadmap-dev/roles/verifier/role.md` | true |
+| PLAN | planner | `~ or <project>/.claude/skills/team-roadmap-dev/roles/planner/role.md` | true (cli_tools: gemini --mode analysis) |
+| EXEC | executor | `~ or <project>/.claude/skills/team-roadmap-dev/roles/executor/role.md` | true (cli_tools: gemini --mode write) |
+| VERIFY | verifier | `~ or <project>/.claude/skills/team-roadmap-dev/roles/verifier/role.md` | true |
 
 ### Pipeline Structure
@@ -247,7 +247,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-roadmap-dev/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-roadmap-dev/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: roadmap-dev

@@ -88,6 +88,6 @@ Phase N: PLAN-N01 --> EXEC-N01 --> VERIFY-N01
 
 | Prefix | Role | Role Spec | Inner Loop |
 |--------|------|-----------|------------|
-| PLAN | planner | `.claude/skills/team-roadmap-dev/roles/planner/role.md` | true |
-| EXEC | executor | `.claude/skills/team-roadmap-dev/roles/executor/role.md` | true |
-| VERIFY | verifier | `.claude/skills/team-roadmap-dev/roles/verifier/role.md` | true |
+| PLAN | planner | `~ or <project>/.claude/skills/team-roadmap-dev/roles/planner/role.md` | true |
+| EXEC | executor | `~ or <project>/.claude/skills/team-roadmap-dev/roles/executor/role.md` | true |
+| VERIFY | verifier | `~ or <project>/.claude/skills/team-roadmap-dev/roles/verifier/role.md` | true |

@@ -2,7 +2,7 @@
 "team_name": "roadmap-dev",
 "team_display_name": "Roadmap Dev",
 "skill_name": "team-roadmap-dev",
-"skill_path": ".claude/skills/team-roadmap-dev/",
+"skill_path": "~ or <project>/.claude/skills/team-roadmap-dev/",
 "design_source": "roadmap-driven development workflow design (2026-02-24)",
 "pipeline_type": "Phased",
 "pipeline": {

@@ -68,7 +68,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-tech-debt/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-tech-debt/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: tech-debt

@@ -123,7 +123,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-tech-debt/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-tech-debt/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: tech-debt
@@ -67,7 +67,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-testing/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-testing/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: testing

@@ -31,7 +31,7 @@ EXPECTED: <deliverable path> + <quality criteria>
 CONSTRAINTS: <scope limits, focus areas>
 ---
 InnerLoop: <true|false>
-RoleSpec: .claude/skills/team-testing/roles/<role>/role.md
+RoleSpec: ~ or <project>/.claude/skills/team-testing/roles/<role>/role.md
 ```
 
 ## Pipeline Task Registry

@@ -25,10 +25,10 @@ Event-driven pipeline coordination. Beat model: coordinator wake -> process -> s
 
 | Prefix | Role | Role Spec | inner_loop |
 |--------|------|-----------|------------|
-| STRATEGY-* | strategist | `.claude/skills/team-testing/roles/strategist/role.md` | false |
-| TESTGEN-* | generator | `.claude/skills/team-testing/roles/generator/role.md` | true |
-| TESTRUN-* | executor | `.claude/skills/team-testing/roles/executor/role.md` | true |
-| TESTANA-* | analyst | `.claude/skills/team-testing/roles/analyst/role.md` | false |
+| STRATEGY-* | strategist | `~ or <project>/.claude/skills/team-testing/roles/strategist/role.md` | false |
+| TESTGEN-* | generator | `~ or <project>/.claude/skills/team-testing/roles/generator/role.md` | true |
+| TESTRUN-* | executor | `~ or <project>/.claude/skills/team-testing/roles/executor/role.md` | true |
+| TESTANA-* | analyst | `~ or <project>/.claude/skills/team-testing/roles/analyst/role.md` | false |
 
 ## handleCallback

@@ -68,7 +68,7 @@ EXPECTED: Revised test files in <session>/tests/<layer>/
 CONSTRAINTS: Only modify test files
 ---
 InnerLoop: true
-RoleSpec: .claude/skills/team-testing/roles/generator/role.md"
+RoleSpec: ~ or <project>/.claude/skills/team-testing/roles/generator/role.md"
 })
 TaskCreate({
 subject: "TESTRUN-<layer>-fix-<round>: Re-execute <layer> (GC #<round>)",

@@ -80,7 +80,7 @@ CONTEXT:
 EXPECTED: <session>/results/run-<N>-gc.json
 ---
 InnerLoop: true
-RoleSpec: .claude/skills/team-testing/roles/executor/role.md",
+RoleSpec: ~ or <project>/.claude/skills/team-testing/roles/executor/role.md",
 blockedBy: ["TESTGEN-<layer>-fix-<round>"]
 })
 ```
@@ -150,7 +150,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-testing/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-testing/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: testing

@@ -67,7 +67,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-uidesign/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-uidesign/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: uidesign

@@ -127,7 +127,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-uidesign/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-uidesign/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: uidesign

@@ -75,7 +75,7 @@ Agent({
 run_in_background: true,
 prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-ultra-analyze/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-ultra-analyze/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: ultra-analyze
@@ -211,10 +211,10 @@ Find and spawn the next ready tasks.
 
 | Task Prefix | Role | Role Spec |
 |-------------|------|-----------|
-| `EXPLORE-*` | explorer | `.claude/skills/team-ultra-analyze/role-specs/explorer.md` |
-| `ANALYZE-*` | analyst | `.claude/skills/team-ultra-analyze/role-specs/analyst.md` |
-| `DISCUSS-*` | discussant | `.claude/skills/team-ultra-analyze/role-specs/discussant.md` |
-| `SYNTH-*` | synthesizer | `.claude/skills/team-ultra-analyze/role-specs/synthesizer.md` |
+| `EXPLORE-*` | explorer | `~ or <project>/.claude/skills/team-ultra-analyze/role-specs/explorer.md` |
+| `ANALYZE-*` | analyst | `~ or <project>/.claude/skills/team-ultra-analyze/role-specs/analyst.md` |
+| `DISCUSS-*` | discussant | `~ or <project>/.claude/skills/team-ultra-analyze/role-specs/discussant.md` |
+| `SYNTH-*` | synthesizer | `~ or <project>/.claude/skills/team-ultra-analyze/role-specs/synthesizer.md` |
 
 3. Spawn team-worker for each ready task:
 
@@ -227,7 +227,7 @@ Agent({
   run_in_background: true,
   prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-ultra-analyze/role-specs/<role>.md
+role_spec: ~ or <project>/.claude/skills/team-ultra-analyze/role-specs/<role>.md
 session: <session-folder>
 session_id: <session-id>
 team_name: ultra-analyze
@@ -152,7 +152,7 @@ Execute `commands/dispatch.md` inline (Command Execution Protocol):
 ### Initial Spawn
 
 Find first unblocked tasks and spawn their workers. Use SKILL.md Worker Spawn Template with:
-- `role_spec: .claude/skills/team-ultra-analyze/roles/<role>/role.md`
+- `role_spec: ~ or <project>/.claude/skills/team-ultra-analyze/roles/<role>/role.md`
 - `team_name: ultra-analyze`
 - `inner_loop: false`
 
@@ -76,7 +76,7 @@ Agent({
   run_in_background: true,
   prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-ux-improve/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-ux-improve/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: ux-improve
@@ -102,7 +102,7 @@ Agent({
   run_in_background: true,
   prompt: `## Role Assignment
 role: <role>
-role_spec: .claude/skills/team-ux-improve/roles/<role>/role.md
+role_spec: ~ or <project>/.claude/skills/team-ux-improve/roles/<role>/role.md
 session: <session-folder>
 session_id: <session-id>
 team_name: ux-improve
@@ -76,7 +76,7 @@ TEXT-LEVEL ONLY. No source code reading.
 ├── explorations/
 └── wisdom/contributions/
 ```
-3. **Wisdom Initialization**: Copy `.claude/skills/team-ux-improve/wisdom/` to `<session>/wisdom/`
+3. **Wisdom Initialization**: Copy `~ or <project>/.claude/skills/team-ux-improve/wisdom/` to `<session>/wisdom/`
 4. Initialize `.msg/meta.json` via team_msg state_update with pipeline metadata
 5. TeamCreate(team_name="ux-improve")
 6. Do NOT spawn workers yet - deferred to Phase 4
@@ -110,7 +110,7 @@ Delegate to `commands/monitor.md#handleSpawnNext`:
 
 3. **Wisdom Consolidation**: Check `<session>/wisdom/contributions/` for worker contributions
    - If contributions exist -> AskUserQuestion to merge to permanent wisdom
-   - If approved -> copy to `.claude/skills/team-ux-improve/wisdom/`
+   - If approved -> copy to `~ or <project>/.claude/skills/team-ux-improve/wisdom/`
 
 4. Calculate: completed_tasks, total_issues_found, issues_fixed, test_pass_rate
 5. Output pipeline summary with [coordinator] prefix
@@ -44,7 +44,7 @@ UX improvement pipeline modes and task registry.
 
 ## Wisdom System
 
-Workers contribute learnings to `<session>/wisdom/contributions/`. On pipeline completion, coordinator asks user to merge approved contributions to permanent wisdom at `.claude/skills/team-ux-improve/wisdom/`.
+Workers contribute learnings to `<session>/wisdom/contributions/`. On pipeline completion, coordinator asks user to merge approved contributions to permanent wisdom at `~ or <project>/.claude/skills/team-ux-improve/wisdom/`.
 
 | Directory | Purpose |
 |-----------|---------|
 
@@ -4,7 +4,7 @@
   "team_display_name": "UX Improve",
   "team_purpose": "Systematically discover and fix UI/UX interaction issues including unresponsive buttons, missing feedback, and state refresh problems",
   "skill_name": "team-ux-improve",
-  "skill_path": ".claude/skills/team-ux-improve/",
+  "skill_path": "~ or <project>/.claude/skills/team-ux-improve/",
   "worker_agent": "team-worker",
   "pipeline_type": "Standard",
   "completion_action": "interactive",
.codex/skills/spec-generator/README.md (new file, 110 lines)
@@ -0,0 +1,110 @@
# Spec Generator

Structured specification document generator producing a complete document chain (Product Brief -> PRD -> Architecture -> Epics).

## Usage

```bash
# Via workflow command
/workflow:spec "Build a task management system"
/workflow:spec -y "User auth with OAuth2"   # Auto mode
/workflow:spec -c "task management"         # Resume session
```

## Architecture

```
spec-generator/
|- SKILL.md                  # Entry point: metadata + architecture + flow
|- phases/
|  |- 01-discovery.md        # Seed analysis + codebase exploration + spec type selection
|  |- 01-5-requirement-clarification.md  # Interactive requirement expansion
|  |- 02-product-brief.md    # Multi-CLI product brief + glossary generation
|  |- 03-requirements.md     # PRD with MoSCoW priorities + RFC 2119 constraints
|  |- 04-architecture.md     # Architecture + state machine + config model + observability
|  |- 05-epics-stories.md    # Epic/Story decomposition
|  |- 06-readiness-check.md  # Quality validation + handoff + iterate option
|  |- 06-5-auto-fix.md       # Auto-fix loop for readiness issues (max 2 iterations)
|  |- 07-issue-export.md     # Issue creation from Epics + export report
|- specs/
|  |- document-standards.md  # Format, frontmatter, naming rules
|  |- quality-gates.md       # Per-phase quality criteria + iteration tracking
|  |- glossary-template.json # Terminology glossary schema
|- templates/
|  |- product-brief.md       # Product brief template (+ Concepts & Non-Goals)
|  |- requirements-prd.md    # PRD template
|  |- architecture-doc.md    # Architecture template (+ state machine, config, observability)
|  |- epics-template.md      # Epic/Story template (+ versioning)
|  |- profiles/              # Spec type specialization profiles
|     |- service-profile.md  # Service spec: lifecycle, observability, trust
|     |- api-profile.md      # API spec: endpoints, auth, rate limiting
|     |- library-profile.md  # Library spec: public API, examples, compatibility
|- README.md                 # This file
```

## 7-Phase Pipeline

| Phase | Name | Output | CLI Tools | Key Features |
|-------|------|--------|-----------|-------------|
| 1 | Discovery | spec-config.json | Gemini (analysis) | Spec type selection |
| 1.5 | Req Expansion | refined-requirements.json | Gemini (analysis) | Multi-round interactive |
| 2 | Product Brief *(Agent)* | product-brief.md, glossary.json | Gemini + Codex + Claude (parallel) | Terminology glossary |
| 3 | Requirements *(Agent)* | requirements/ | Gemini + **Codex review** | RFC 2119, data model |
| 4 | Architecture *(Agent)* | architecture/ | Gemini + Codex (sequential) | State machine, config, observability |
| 5 | Epics & Stories *(Agent)* | epics/ | Gemini + **Codex review** | Glossary consistency |
| 6 | Readiness Check | readiness-report.md, spec-summary.md | Gemini + **Codex** (parallel) | Per-requirement verification |
| 6.5 | Auto-Fix *(Agent)* | Updated phase docs | Gemini (analysis) | Max 2 iterations |
| 7 | Issue Export | issue-export-report.md | ccw issue create | Epic→Issue mapping, wave assignment |

## Runtime Output

```
.workflow/.spec/SPEC-{slug}-{YYYY-MM-DD}/
|- spec-config.json           # Session state
|- discovery-context.json     # Codebase context (optional)
|- refined-requirements.json  # Phase 1.5 (requirement expansion)
|- glossary.json              # Phase 2 (terminology)
|- product-brief.md           # Phase 2
|- requirements/              # Phase 3 (directory)
|  |- _index.md
|  |- REQ-*.md
|  └── NFR-*.md
|- architecture/              # Phase 4 (directory)
|  |- _index.md
|  └── ADR-*.md
|- epics/                     # Phase 5 (directory)
|  |- _index.md
|  └── EPIC-*.md
|- readiness-report.md        # Phase 6
|- spec-summary.md            # Phase 6
└── issue-export-report.md    # Phase 7 (issue export)
```

## Flags

- `-y|--yes`: Auto mode - skip all interactive confirmations
- `-c|--continue`: Resume from last completed phase

Spec type is selected interactively in Phase 1 (defaults to `service` in auto mode).
Available types: `service`, `api`, `library`, `platform`.

## Handoff

After Phase 6, choose execution path:
- `Export Issues (Phase 7)` - Create issues per Epic with spec links → team-planex
- `workflow-lite-plan` - Execute per Epic
- `workflow:req-plan-with-file` - Roadmap decomposition
- `workflow-plan` - Full planning
- `Iterate & improve` - Re-run failed phases (max 2 iterations)

## Design Principles

- **Document chain**: Each phase builds on previous outputs
- **Multi-perspective**: Gemini/Codex/Claude provide different viewpoints
- **Template-driven**: Consistent format via templates + frontmatter
- **Resumable**: spec-config.json tracks completed phases
- **Pure documentation**: No code generation - clean handoff to execution workflows
- **Type-specialized**: Profiles adapt templates to service/api/library/platform requirements
- **Iterative quality**: Phase 6.5 auto-fix repairs issues, max 2 iterations before handoff
- **Terminology-first**: glossary.json ensures consistent terminology across all documents
- **Agent-delegated**: Heavy document phases (2-5, 6.5) run in doc-generator agents to minimize main context usage
.codex/skills/spec-generator/SKILL.md (new file, 425 lines)
@@ -0,0 +1,425 @@
---
name: spec-generator
description: Specification generator - 7 phase document chain producing product brief, PRD, architecture, epics, and issues. Agent-delegated heavy phases (2-5, 6.5) with Codex review gates. Triggers on "generate spec", "create specification", "spec generator", "workflow:spec".
allowed-tools: Agent, AskUserQuestion, TaskCreate, TaskUpdate, TaskList, Read, Write, Edit, Bash, Glob, Grep, Skill
---

# Spec Generator

Structured specification document generator producing a complete specification package (Product Brief, PRD, Architecture, Epics, Issues) through 7 sequential phases with multi-CLI analysis, Codex review gates, and interactive refinement. Heavy document phases are delegated to `doc-generator` agents to minimize main context usage. **Document generation only** - execution handoff via issue export to team-planex or existing workflows.

## Architecture Overview

```
Phase 0: Specification Study (Read specs/ + templates/ - mandatory prerequisite) [Inline]
    |
Phase 1: Discovery -> spec-config.json + discovery-context.json [Inline]
    | (includes spec_type selection)
Phase 1.5: Req Expansion -> refined-requirements.json [Inline]
    | (interactive discussion + CLI gap analysis)
Phase 2: Product Brief -> product-brief.md + glossary.json [Agent]
    | (3-CLI parallel + synthesis)
Phase 3: Requirements (PRD) -> requirements/ (_index.md + REQ-*.md + NFR-*.md) [Agent]
    | (Gemini + Codex review)
Phase 4: Architecture -> architecture/ (_index.md + ADR-*.md) [Agent]
    | (Gemini + Codex review)
Phase 5: Epics & Stories -> epics/ (_index.md + EPIC-*.md) [Agent]
    | (Gemini + Codex review)
Phase 6: Readiness Check -> readiness-report.md + spec-summary.md [Inline]
    | (Gemini + Codex dual validation + per-req verification)
    ├── Pass (>=80%): Handoff or Phase 7
    ├── Review (60-79%): Handoff with caveats or Phase 7
    └── Fail (<60%): Phase 6.5 Auto-Fix (max 2 iterations)
            |
Phase 6.5: Auto-Fix -> Updated Phase 2-5 documents [Agent]
            |
            └── Re-run Phase 6 validation
    |
Phase 7: Issue Export -> issue-export-report.md [Inline]
    (Epic→Issue mapping, ccw issue create, wave assignment)
```
## Key Design Principles

1. **Document Chain**: Each phase builds on previous outputs, creating a traceable specification chain from idea to executable issues
2. **Agent-Delegated**: Heavy document phases (2-5, 6.5) run in `doc-generator` agents, keeping main context lean (summaries only)
3. **Multi-Perspective Analysis**: CLI tools (Gemini/Codex/Claude) provide product, technical, and user perspectives in parallel
4. **Codex Review Gates**: Phases 3, 5, 6 include Codex CLI review for quality validation before output
5. **Interactive by Default**: Each phase offers user confirmation points; `-y` flag enables full auto mode
6. **Resumable Sessions**: `spec-config.json` tracks completed phases; `-c` flag resumes from last checkpoint
7. **Template-Driven**: All documents generated from standardized templates with YAML frontmatter
8. **Pure Documentation**: No code generation or execution - clean handoff via issue export to execution workflows
9. **Spec Type Specialization**: Templates adapt to spec type (service/api/library/platform) via profiles for domain-specific depth
10. **Iterative Quality**: Phase 6.5 auto-fix loop repairs issues found in readiness check (max 2 iterations)
11. **Terminology Consistency**: glossary.json generated in Phase 2, injected into all subsequent phases

---
## Mandatory Prerequisites

> **Do NOT skip**: Before performing any operations, you **must** completely read the following documents. Proceeding without reading the specifications will result in outputs that do not meet quality standards.

### Specification Documents (Required Reading)

| Document | Purpose | Priority |
|----------|---------|----------|
| [specs/document-standards.md](specs/document-standards.md) | Document format, frontmatter, naming conventions | **P0 - Must read before execution** |
| [specs/quality-gates.md](specs/quality-gates.md) | Per-phase quality gate criteria and scoring | **P0 - Must read before execution** |

### Template Files (Must read before generation)

| Document | Purpose |
|----------|---------|
| [templates/product-brief.md](templates/product-brief.md) | Product brief document template |
| [templates/requirements-prd.md](templates/requirements-prd.md) | PRD document template |
| [templates/architecture-doc.md](templates/architecture-doc.md) | Architecture document template |
| [templates/epics-template.md](templates/epics-template.md) | Epic/Story document template |

---
## Execution Flow

```
Input Parsing:
|- Parse $ARGUMENTS: extract idea/topic, flags (-y, -c, -m)
|- Detect mode: new | continue
|- If continue: read spec-config.json, resume from first incomplete phase
|- If new: proceed to Phase 1

Phase 1: Discovery & Seed Analysis
|- Ref: phases/01-discovery.md
|- Generate session ID: SPEC-{slug}-{YYYY-MM-DD}
|- Parse input (text or file reference)
|- Gemini CLI seed analysis (problem, users, domain, dimensions)
|- Codebase exploration (conditional, if project detected)
|- Spec type selection: service|api|library|platform (interactive, -y defaults to service)
|- User confirmation (interactive, -y skips)
|- Output: spec-config.json, discovery-context.json (optional)

Phase 1.5: Requirement Expansion & Clarification
|- Ref: phases/01-5-requirement-clarification.md
|- CLI gap analysis: completeness scoring, missing dimensions detection
|- Multi-round interactive discussion (max 5 rounds)
|  |- Round 1: present gap analysis + expansion suggestions
|  |- Round N: follow-up refinement based on user responses
|- User final confirmation of requirements
|- Auto mode (-y): CLI auto-expansion without interaction
|- Output: refined-requirements.json

Phase 2: Product Brief [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- Agent reads: phases/02-product-brief.md
|- Agent executes: 3 parallel CLI analyses + synthesis + glossary generation
|- Agent writes: product-brief.md, glossary.json
|- Agent returns: JSON summary {files_created, quality_notes, key_decisions}
|- Orchestrator validates: files exist, spec-config.json updated

Phase 3: Requirements / PRD [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- Agent reads: phases/03-requirements.md
|- Agent executes: Gemini expansion + Codex review (Step 2.5) + priority sorting
|- Agent writes: requirements/ directory (_index.md + REQ-*.md + NFR-*.md)
|- Agent returns: JSON summary {files_created, codex_review_integrated, key_decisions}
|- Orchestrator validates: directory exists, file count matches

Phase 4: Architecture [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- Agent reads: phases/04-architecture.md
|- Agent executes: Gemini analysis + Codex review + codebase mapping
|- Agent writes: architecture/ directory (_index.md + ADR-*.md)
|- Agent returns: JSON summary {files_created, codex_review_rating, key_decisions}
|- Orchestrator validates: directory exists, ADR files present

Phase 5: Epics & Stories [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- Agent reads: phases/05-epics-stories.md
|- Agent executes: Gemini decomposition + Codex review (Step 2.5) + validation
|- Agent writes: epics/ directory (_index.md + EPIC-*.md)
|- Agent returns: JSON summary {files_created, codex_review_integrated, mvp_epic_count}
|- Orchestrator validates: directory exists, MVP epics present

Phase 6: Readiness Check [INLINE + ENHANCED]
|- Ref: phases/06-readiness-check.md
|- Gemini CLI: cross-document validation (completeness, consistency, traceability)
|- Codex CLI: technical depth review (ADR quality, data model, security, observability)
|- Per-requirement verification: iterate all REQ-*.md / NFR-*.md
|  |- Check: AC exists + testable, Brief trace, Story coverage, Arch coverage
|  |- Generate: Per-Requirement Verification table
|- Merge dual CLI scores into quality report
|- Output: readiness-report.md, spec-summary.md
|- Handoff options: Phase 7 (issue export), lite-plan, req-plan, plan, iterate

Phase 6.5: Auto-Fix (conditional) [AGENT: doc-generator]
|- Delegate to Task(subagent_type="doc-generator")
|- Agent reads: phases/06-5-auto-fix.md + readiness-report.md
|- Agent executes: fix affected Phase 2-5 documents
|- Agent returns: JSON summary {files_modified, issues_fixed, phases_touched}
|- Re-run Phase 6 validation
|- Max 2 iterations, then force handoff

Phase 7: Issue Export [INLINE]
|- Ref: phases/07-issue-export.md
|- Read EPIC-*.md files, assign waves (MVP→wave-1, others→wave-2)
|- Create issues via ccw issue create (one per Epic)
|- Map Epic dependencies to issue dependencies
|- Generate issue-export-report.md
|- Update spec-config.json with issue_ids
|- Handoff: team-planex, wave-1 only, view issues, done

Complete: Full specification package with issues ready for execution

Phase 6/7 → Handoff Bridge (conditional, based on user selection):
├─ team-planex: Execute issues via coordinated team workflow
├─ lite-plan: Extract first MVP Epic description → direct text input
├─ plan / req-plan: Create WFS session + .brainstorming/ bridge files
│  ├─ guidance-specification.md (synthesized from spec outputs)
│  ├─ feature-specs/feature-index.json (Epic → Feature mapping)
│  └─ feature-specs/F-{num}-{slug}.md (one per Epic)
└─ context-search-agent auto-discovers .brainstorming/
   → context-package.json.brainstorm_artifacts populated
   → action-planning-agent consumes: guidance_spec (P1) → feature_index (P2)
```
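Phase 7's wave assignment rule (MVP epics go to wave-1, everything else to wave-2) can be sketched as follows; the `{ id, mvp }` epic shape is an illustrative assumption, not the confirmed EPIC-*.md schema:

```javascript
// Sketch of Phase 7 wave assignment: MVP epics -> wave-1, the rest -> wave-2.
// The { id, mvp } shape is hypothetical; the real epic frontmatter may differ.
function assignWaves(epics) {
  return epics.map(epic => ({
    ...epic,
    wave: epic.mvp ? 'wave-1' : 'wave-2',
  }));
}

const waves = assignWaves([
  { id: 'EPIC-001', mvp: true },
  { id: 'EPIC-002', mvp: false },
]);
console.log(waves.map(e => `${e.id}:${e.wave}`).join(' '));
```

Epic-to-epic dependencies would then be mapped onto the created issues in a second pass.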
## Directory Setup

```javascript
// Session ID generation
const slug = topic.toLowerCase().replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-').slice(0, 40);
const date = new Date().toISOString().slice(0, 10);
const sessionId = `SPEC-${slug}-${date}`;
const workDir = `.workflow/.spec/${sessionId}`;

Bash(`mkdir -p "${workDir}"`);
```
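For example, the slug rule above turns a topic like "Build a task management system" into a stable session ID (date pinned here so the result is reproducible):

```javascript
// Same slug rule as the Directory Setup snippet, with a fixed date for illustration.
const topic = 'Build a task management system';
const slug = topic.toLowerCase().replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-').slice(0, 40);
const sessionId = `SPEC-${slug}-2026-02-11`;
console.log(sessionId); // SPEC-build-a-task-management-system-2026-02-11
```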
## Output Structure

```
.workflow/.spec/SPEC-{slug}-{YYYY-MM-DD}/
├── spec-config.json              # Session configuration + phase state
├── discovery-context.json        # Codebase exploration results (optional)
├── refined-requirements.json     # Phase 1.5: Confirmed requirements after discussion
├── glossary.json                 # Phase 2: Terminology glossary for cross-doc consistency
├── product-brief.md              # Phase 2: Product brief
├── requirements/                 # Phase 3: Detailed PRD (directory)
│   ├── _index.md                 # Summary, MoSCoW table, traceability, links
│   ├── REQ-NNN-{slug}.md         # Individual functional requirement
│   └── NFR-{type}-NNN-{slug}.md  # Individual non-functional requirement
├── architecture/                 # Phase 4: Architecture decisions (directory)
│   ├── _index.md                 # Overview, components, tech stack, links
│   └── ADR-NNN-{slug}.md         # Individual Architecture Decision Record
├── epics/                        # Phase 5: Epic/Story breakdown (directory)
│   ├── _index.md                 # Epic table, dependency map, MVP scope
│   └── EPIC-NNN-{slug}.md        # Individual Epic with Stories
├── readiness-report.md           # Phase 6: Quality report (+ per-req verification table)
├── spec-summary.md               # Phase 6: One-page executive summary
└── issue-export-report.md        # Phase 7: Issue mapping table + spec links
```
## State Management

**spec-config.json** serves as the core state file:

```json
{
  "session_id": "SPEC-xxx-2026-02-11",
  "seed_input": "User input text",
  "input_type": "text",
  "timestamp": "ISO8601",
  "mode": "interactive",
  "complexity": "moderate",
  "depth": "standard",
  "focus_areas": [],
  "spec_type": "service",
  "iteration_count": 0,
  "iteration_history": [],
  "seed_analysis": {
    "problem_statement": "...",
    "target_users": [],
    "domain": "...",
    "constraints": [],
    "dimensions": []
  },
  "has_codebase": false,
  "refined_requirements_file": "refined-requirements.json",
  "issue_ids": [],
  "issues_created": 0,
  "phasesCompleted": [
    { "phase": 1, "name": "discovery", "output_file": "spec-config.json", "completed_at": "ISO8601" },
    { "phase": 1.5, "name": "requirement-clarification", "output_file": "refined-requirements.json", "discussion_rounds": 2, "completed_at": "ISO8601" },
    { "phase": 3, "name": "requirements", "output_dir": "requirements/", "output_index": "requirements/_index.md", "file_count": 8, "completed_at": "ISO8601" }
  ]
}
```

**Resume mechanism**: the `-c|--continue` flag reads `spec-config.json.phasesCompleted` and resumes from the first incomplete phase.
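A minimal sketch of that resume check, assuming the pipeline order below and using the sample `phasesCompleted` above (the helper name is illustrative):

```javascript
// Find the first phase in pipeline order with no phasesCompleted entry.
// With phases 1, 1.5 and 3 recorded (as in the sample), the resume point is 2.
const phaseOrder = [1, 1.5, 2, 3, 4, 5, 6, 7];

function firstIncompletePhase(config) {
  return phaseOrder.find(p => !config.phasesCompleted.some(c => c.phase === p));
}

const config = { phasesCompleted: [{ phase: 1 }, { phase: 1.5 }, { phase: 3 }] };
console.log(firstIncompletePhase(config)); // 2
```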
## Core Rules

1. **Start Immediately**: First action is TaskCreate initialization, then Phase 0 (spec study), then Phase 1
2. **Progressive Phase Loading**: Read phase docs ONLY when that phase is about to execute
3. **Auto-Continue**: All phases run autonomously; check TaskList to execute next pending phase
4. **Parse Every Output**: Extract required data from each phase for next phase context
5. **DO NOT STOP**: Continuous 7-phase pipeline until all phases complete or user exits
6. **Respect -y Flag**: When auto mode, skip all AskUserQuestion calls, use recommended defaults
7. **Respect -c Flag**: When continue mode, load spec-config.json and resume from checkpoint
8. **Inject Glossary**: From Phase 3 onward, inject glossary.json terms into every CLI prompt
9. **Load Profile**: Read templates/profiles/{spec_type}-profile.md and inject requirements into Phase 2-5 prompts
10. **Iterate on Failure**: When Phase 6 score < 60%, auto-trigger Phase 6.5 (max 2 iterations)
11. **Agent Delegation**: Phase 2-5 and 6.5 MUST be delegated to `doc-generator` agents via the Task tool; never execute inline
12. **Lean Context**: Orchestrator only sees agent return summaries (JSON), never the full document content
13. **Validate Agent Output**: After each agent returns, verify files exist on disk and spec-config.json was updated
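Rule 8's glossary injection could look like the sketch below; the `{ term, definition }` entry shape is an assumption about glossary.json (see specs/glossary-template.json for the actual schema):

```javascript
// Append glossary terms to a CLI prompt so downstream phases use consistent terminology.
// The { term, definition } entry shape is a hypothetical reading of glossary.json.
function injectGlossary(prompt, glossary) {
  const lines = glossary.map(g => `- ${g.term}: ${g.definition}`);
  return `${prompt}\n\n## Glossary (use these terms consistently)\n${lines.join('\n')}`;
}

const out = injectGlossary('Draft the PRD.', [
  { term: 'Epic', definition: 'A deliverable slice of scope spanning multiple stories' },
]);
console.log(out);
```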
## Agent Delegation Protocol

For Phase 2-5 and 6.5, the orchestrator delegates to a `doc-generator` agent via the Task tool. The orchestrator builds a lean context envelope, passing only paths, never file content.

### Context Envelope Template

```javascript
Task({
  subagent_type: "doc-generator",
  run_in_background: false,
  description: `Spec Phase ${N}: ${phaseName}`,
  prompt: `
## Spec Generator - Phase ${N}: ${phaseName}

### Session
- ID: ${sessionId}
- Work Dir: ${workDir}
- Auto Mode: ${autoMode}
- Spec Type: ${specType}

### Input (read from disk)
${inputFilesList} // Only file paths - agent reads content itself

### Instructions
Read: ${skillDir}/phases/${phaseFile} // Agent reads the phase doc for full instructions
Apply template: ${skillDir}/templates/${templateFile}

### Glossary (Phase 3+ only)
Read: ${workDir}/glossary.json

### Output
Write files to: ${workDir}/${outputPath}
Update: ${workDir}/spec-config.json (phasesCompleted)
Return: JSON summary { files_created, quality_notes, key_decisions }
`
});
```
### Orchestrator Post-Agent Validation

After each agent returns:

```javascript
// 1. Parse agent return summary
const summary = JSON.parse(agentResult);

// 2. Validate files exist
summary.files_created.forEach(file => {
  const exists = Glob(`${workDir}/${file}`);
  if (!exists.length) throw new Error(`Agent claimed to create ${file} but file not found`);
});

// 3. Verify spec-config.json updated
const config = JSON.parse(Read(`${workDir}/spec-config.json`));
const phaseComplete = config.phasesCompleted.some(p => p.phase === N);
if (!phaseComplete) throw new Error(`Agent did not update phasesCompleted for Phase ${N}`);

// 4. Store summary for downstream context (do NOT read full documents)
phasesSummaries[N] = summary;
```

---
## Reference Documents by Phase

### Phase 1: Discovery
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/01-discovery.md](phases/01-discovery.md) | Seed analysis and session setup | Phase start |
| [templates/profiles/](templates/profiles/) | Spec type profiles | Spec type selection |
| [specs/document-standards.md](specs/document-standards.md) | Frontmatter format for spec-config.json | Config generation |

### Phase 1.5: Requirement Expansion & Clarification
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/01-5-requirement-clarification.md](phases/01-5-requirement-clarification.md) | Interactive requirement discussion workflow | Phase start |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality criteria for refined requirements | Validation |

### Phase 2: Product Brief
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/02-product-brief.md](phases/02-product-brief.md) | Multi-CLI analysis orchestration | Phase start |
| [templates/product-brief.md](templates/product-brief.md) | Document template | Document generation |
| [specs/glossary-template.json](specs/glossary-template.json) | Glossary schema | Glossary generation |

### Phase 3: Requirements
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/03-requirements.md](phases/03-requirements.md) | PRD generation workflow | Phase start |
| [templates/requirements-prd.md](templates/requirements-prd.md) | Document template | Document generation |

### Phase 4: Architecture
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/04-architecture.md](phases/04-architecture.md) | Architecture decision workflow | Phase start |
| [templates/architecture-doc.md](templates/architecture-doc.md) | Document template | Document generation |

### Phase 5: Epics & Stories
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/05-epics-stories.md](phases/05-epics-stories.md) | Epic/Story decomposition | Phase start |
| [templates/epics-template.md](templates/epics-template.md) | Document template | Document generation |

### Phase 6: Readiness Check
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/06-readiness-check.md](phases/06-readiness-check.md) | Cross-document validation | Phase start |
| [specs/quality-gates.md](specs/quality-gates.md) | Quality scoring criteria | Validation |

### Phase 6.5: Auto-Fix
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/06-5-auto-fix.md](phases/06-5-auto-fix.md) | Auto-fix workflow for readiness issues | When Phase 6 score < 60% |
| [specs/quality-gates.md](specs/quality-gates.md) | Iteration exit criteria | Validation |

### Phase 7: Issue Export
| Document | Purpose | When to Use |
|----------|---------|-------------|
| [phases/07-issue-export.md](phases/07-issue-export.md) | Epic→Issue mapping and export | Phase start |
| [specs/quality-gates.md](specs/quality-gates.md) | Issue export quality criteria | Validation |

### Debugging & Troubleshooting
| Issue | Solution Document |
|-------|-------------------|
| Phase execution failed | Refer to the relevant Phase documentation |
| Output does not meet expectations | [specs/quality-gates.md](specs/quality-gates.md) |
| Document format issues | [specs/document-standards.md](specs/document-standards.md) |
## Error Handling

| Phase | Error | Blocking? | Action |
|-------|-------|-----------|--------|
| Phase 1 | Empty input | Yes | Error and exit |
| Phase 1 | CLI seed analysis fails | No | Use basic parsing fallback |
| Phase 1.5 | Gap analysis CLI fails | No | Skip to user questions with basic prompts |
| Phase 1.5 | User skips discussion | No | Proceed with seed_analysis as-is |
| Phase 1.5 | Max rounds reached (5) | No | Force confirmation with current state |
| Phase 2 | Single CLI perspective fails | No | Continue with available perspectives |
| Phase 2 | All CLI calls fail | No | Generate basic brief from seed analysis |
| Phase 3 | Gemini CLI fails | No | Use codex fallback |
| Phase 4 | Architecture review fails | No | Skip review, proceed with initial analysis |
| Phase 5 | Story generation fails | No | Generate epics without detailed stories |
| Phase 6 | Validation CLI fails | No | Generate partial report with available data |
| Phase 6.5 | Auto-fix CLI fails | No | Log failure, proceed to handoff with Review status |
| Phase 6.5 | Max iterations reached | No | Force handoff, report remaining issues |
| Phase 7 | ccw issue create fails for one Epic | No | Log error, continue with remaining Epics |
| Phase 7 | No EPIC files found | Yes | Error and return to Phase 5 |
| Phase 7 | All issue creations fail | Yes | Error with CLI diagnostic, suggest manual creation |
| Phase 2-5 | Agent fails to return | Yes | Retry once, then fall back to inline execution |
| Phase 2-5 | Agent returns incomplete files | No | Log gaps, attempt inline completion for missing files |

### CLI Fallback Chain

Gemini -> Codex -> Claude -> degraded mode (local analysis only)
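The fallback chain above can be sketched as a small helper. This is an illustrative assumption, not part of the `ccw` API: `runCli` is a hypothetical runner standing in for the real `ccw cli --tool <name>` invocation.

```javascript
// Sketch of the CLI fallback chain: try each tool in order; a failure is
// non-blocking and simply advances to the next tool. If every tool fails,
// return a "degraded" marker so the caller can fall back to local analysis.
async function analyzeWithFallback(prompt, runCli) {
  const chain = ['gemini', 'codex', 'claude'];
  for (const tool of chain) {
    try {
      return { tool, result: await runCli(tool, prompt) };
    } catch (e) {
      // Non-blocking failure: log and try the next tool in the chain
      console.error(`${tool} failed: ${e.message}`);
    }
  }
  // Degraded mode: no CLI succeeded; caller performs local analysis only
  return { tool: 'degraded', result: null };
}
```

The key design point is that only total exhaustion of the chain degrades the workflow; any single tool failure is absorbed silently.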
@@ -0,0 +1,404 @@
# Phase 1.5: Requirement Expansion & Clarification

Before formal document generation, deepen, expand, and confirm the original requirement through multi-round interactive discussion.

## Objective

- Identify ambiguities, omissions, and potential risks in the original requirement
- Use CLI-assisted completeness analysis to generate deep probing questions
- Support multi-round interactive discussion to progressively refine the requirement
- Produce a user-confirmed `refined-requirements.json` as high-quality input for subsequent phases

## Input

- Dependency: `{workDir}/spec-config.json` (Phase 1 output)
- Optional: `{workDir}/discovery-context.json` (codebase context)

## Execution Steps

### Step 1: Load Phase 1 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const { seed_analysis, seed_input, focus_areas, has_codebase, depth } = specConfig;

let discoveryContext = null;
if (has_codebase) {
  try {
    discoveryContext = JSON.parse(Read(`${workDir}/discovery-context.json`));
  } catch (e) { /* proceed without */ }
}
```

### Step 2: CLI Gap Analysis & Question Generation

Call the Gemini CLI to analyze the completeness of the original requirement, identify ambiguities, and generate probing questions.

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Deeply analyze the user's initial requirement; identify ambiguities, omissions, and areas needing clarification.
Success: Generate 3-5 high-quality probing questions covering functional scope, boundary conditions, non-functional requirements, user scenarios, etc.

ORIGINAL SEED INPUT:
${seed_input}

SEED ANALYSIS:
${JSON.stringify(seed_analysis, null, 2)}

FOCUS AREAS: ${focus_areas.join(', ')}
${discoveryContext ? `
CODEBASE CONTEXT:
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join(', ') || 'none'}
- Tech stack: ${JSON.stringify(discoveryContext.tech_stack || {})}
` : ''}

TASK:
1. Rate the completeness of the current requirement description (1-10, listing missing dimensions)
2. Identify 3-5 key ambiguous areas, each with:
   - A description of the ambiguity (why it is unclear)
   - 1-2 open-ended probing questions
   - 1-2 expansion suggestions (based on domain best practices)
3. Check the following dimensions for omissions:
   - Functional scope boundary (what is in/out of scope?)
   - Core user scenarios and flows
   - Non-functional requirements (performance, security, usability, scalability)
   - Integration points and external dependencies
   - Data model and storage needs
   - Error handling and exception scenarios
4. Provide requirement expansion suggestions based on domain experience

MODE: analysis
EXPECTED: JSON output:
{
  \"completeness_score\": 7,
  \"missing_dimensions\": [\"Performance requirements\", \"Error handling\"],
  \"clarification_areas\": [
    {
      \"area\": \"Scope boundary\",
      \"rationale\": \"Input does not clarify...\",
      \"questions\": [\"Question 1?\", \"Question 2?\"],
      \"suggestions\": [\"Suggestion 1\", \"Suggestion 2\"]
    }
  ],
  \"expansion_recommendations\": [
    {
      \"category\": \"Non-functional\",
      \"recommendation\": \"Consider adding...\",
      \"priority\": \"high|medium|low\"
    }
  ]
}
CONSTRAINTS: Questions must be open-ended, suggestions must be concrete and actionable, respond in the language of the user's input
" --tool gemini --mode analysis`,
  run_in_background: true
});
// Wait for CLI result before continuing
```

Parse the CLI output into structured data:

```javascript
const gapAnalysis = {
  completeness_score: 0,
  missing_dimensions: [],
  clarification_areas: [],
  expansion_recommendations: []
};
// Parse from CLI output
```
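The "parse from CLI output" step can be sketched as a tolerant helper that pulls the first JSON object out of raw CLI text, which may be wrapped in prose or markdown fences. `parseCliJson` is a hypothetical name, not part of the `ccw` API:

```javascript
// Hypothetical helper: extract and parse the outermost JSON object embedded
// in raw CLI output, tolerating surrounding prose and markdown code fences.
// Returns `fallback` when no parseable JSON is found.
function parseCliJson(rawOutput, fallback = {}) {
  const start = rawOutput.indexOf('{');
  const end = rawOutput.lastIndexOf('}');
  if (start === -1 || end <= start) return fallback;
  try {
    return JSON.parse(rawOutput.slice(start, end + 1));
  } catch (e) {
    return fallback; // malformed JSON: degrade to the caller's default shape
  }
}
```

Passing the empty `gapAnalysis` shape as `fallback` keeps the rest of the phase working when the CLI output is unusable.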

### Step 3: Interactive Discussion Loop

The core multi-round interaction loop. Each round: present analysis results → user responds → update requirement state → decide whether to continue.

```javascript
// Initialize requirement state
let requirementState = {
  problem_statement: seed_analysis.problem_statement,
  target_users: seed_analysis.target_users,
  domain: seed_analysis.domain,
  constraints: seed_analysis.constraints,
  confirmed_features: [],
  non_functional_requirements: [],
  boundary_conditions: [],
  integration_points: [],
  key_assumptions: [],
  discussion_rounds: 0
};

let discussionLog = [];
let userSatisfied = false;

// === Round 1: Present gap analysis results ===
// Display completeness_score, clarification_areas, expansion_recommendations
// Then ask user to respond

while (!userSatisfied && requirementState.discussion_rounds < 5) {
  requirementState.discussion_rounds++;

  if (requirementState.discussion_rounds === 1) {
    // --- First round: present initial gap analysis ---
    // Format questions and suggestions from gapAnalysis for display
    // Present as a structured summary to the user

    AskUserQuestion({
      questions: [
        {
          question: buildDiscussionPrompt(gapAnalysis, requirementState),
          header: "Req Expand",
          multiSelect: false,
          options: [
            { label: "I'll answer", description: "I have answers/feedback to provide (type in 'Other')" },
            { label: "Accept all suggestions", description: "Accept all expansion recommendations as-is" },
            { label: "Skip to generation", description: "Requirements are clear enough, proceed directly" }
          ]
        }
      ]
    });
  } else {
    // --- Subsequent rounds: refine based on user feedback ---
    // Call CLI with accumulated context for follow-up analysis
    Bash({
      command: `ccw cli -p "PURPOSE: Based on the user's latest response, update the requirement understanding and identify remaining ambiguities.

CURRENT REQUIREMENT STATE:
${JSON.stringify(requirementState, null, 2)}

DISCUSSION HISTORY:
${JSON.stringify(discussionLog, null, 2)}

USER'S LATEST RESPONSE:
${lastUserResponse}

TASK:
1. Integrate the user's response into the requirement state
2. Identify 1-3 areas that still need clarification or expansion
3. Generate follow-up questions (if necessary)
4. If the requirement is sufficiently complete, output a final requirement summary

MODE: analysis
EXPECTED: JSON output:
{
  \"updated_fields\": { /* fields to merge into requirementState */ },
  \"status\": \"need_more_discussion\" | \"ready_for_confirmation\",
  \"follow_up\": {
    \"remaining_areas\": [{\"area\": \"...\", \"questions\": [\"...\"]}],
    \"summary\": \"...\"
  }
}
CONSTRAINTS: Avoid repeating questions that have already been answered; focus on uncovered areas
" --tool gemini --mode analysis`,
      run_in_background: true
    });
    // Wait for CLI result, parse and continue

    // If status === "ready_for_confirmation", break to confirmation step
    // If status === "need_more_discussion", present follow-up questions

    AskUserQuestion({
      questions: [
        {
          question: buildFollowUpPrompt(followUpAnalysis, requirementState),
          header: "Follow-up",
          multiSelect: false,
          options: [
            { label: "I'll answer", description: "I have more feedback (type in 'Other')" },
            { label: "Looks good", description: "Requirements are sufficiently clear now" },
            { label: "Accept suggestions", description: "Accept remaining suggestions" }
          ]
        }
      ]
    });
  }

  // Process user response
  // - "Skip to generation" / "Looks good" → userSatisfied = true
  // - "Accept all suggestions" → merge suggestions into requirementState, userSatisfied = true
  // - "I'll answer" (with Other text) → record in discussionLog, continue loop
  // - User selects Other with custom text → parse and record

  discussionLog.push({
    round: requirementState.discussion_rounds,
    agent_prompt: currentPrompt,
    user_response: userResponse,
    timestamp: new Date().toISOString()
  });
}
```
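The `updated_fields` merge implied by the loop can be sketched as a pure reducer. The merge policy here is an assumption (the spec only says the fields are merged): array fields append with deduplication, everything else overwrites.

```javascript
// Hypothetical merge policy for CLI `updated_fields`: arrays append (deduped
// by JSON value), scalars and objects overwrite, keys outside the known
// requirementState schema are ignored. Returns a new state object.
function mergeUpdatedFields(state, updatedFields) {
  const next = { ...state };
  for (const [key, value] of Object.entries(updatedFields || {})) {
    if (!(key in state)) continue; // ignore keys outside the known schema
    if (Array.isArray(state[key]) && Array.isArray(value)) {
      const seen = new Set(state[key].map(v => JSON.stringify(v)));
      next[key] = state[key].concat(value.filter(v => !seen.has(JSON.stringify(v))));
    } else {
      next[key] = value;
    }
  }
  return next;
}
```

Keeping the merge pure makes each discussion round replayable from `discussionLog`.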

#### Helper: Build Discussion Prompt

```javascript
function buildDiscussionPrompt(gapAnalysis, state) {
  let prompt = `## Requirement Analysis Results\n\n`;
  prompt += `**Completeness Score**: ${gapAnalysis.completeness_score}/10\n`;

  if (gapAnalysis.missing_dimensions.length > 0) {
    prompt += `**Missing Dimensions**: ${gapAnalysis.missing_dimensions.join(', ')}\n\n`;
  }

  prompt += `### Key Questions\n\n`;
  gapAnalysis.clarification_areas.forEach((area, i) => {
    prompt += `**${i+1}. ${area.area}**\n`;
    prompt += `   ${area.rationale}\n`;
    area.questions.forEach(q => { prompt += `   - ${q}\n`; });
    if (area.suggestions.length > 0) {
      prompt += `   Suggestions: ${area.suggestions.join('; ')}\n`;
    }
    prompt += `\n`;
  });

  if (gapAnalysis.expansion_recommendations.length > 0) {
    prompt += `### Expansion Recommendations\n\n`;
    gapAnalysis.expansion_recommendations.forEach(rec => {
      prompt += `- [${rec.priority}] **${rec.category}**: ${rec.recommendation}\n`;
    });
  }

  prompt += `\nPlease answer the questions above, or choose an option below.`;
  return prompt;
}
```

### Step 4: Auto Mode Handling

```javascript
if (autoMode) {
  // Skip interactive discussion
  // CLI generates default requirement expansion based on seed_analysis
  Bash({
    command: `ccw cli -p "PURPOSE: Automatically generate a requirement expansion from the seed analysis, without user interaction.

SEED ANALYSIS:
${JSON.stringify(seed_analysis, null, 2)}

SEED INPUT: ${seed_input}
DEPTH: ${depth}
${discoveryContext ? `CODEBASE: ${JSON.stringify(discoveryContext.tech_stack || {})}` : ''}

TASK:
1. Automatically expand the functional requirement list based on domain best practices
2. Infer reasonable non-functional requirements
3. Identify obvious boundary conditions
4. List key assumptions

MODE: analysis
EXPECTED: JSON output matching refined-requirements.json schema
CONSTRAINTS: Infer conservatively; only add high-confidence expansions
" --tool gemini --mode analysis`,
    run_in_background: true
  });
  // Parse output directly into refined-requirements.json
}
```

### Step 5: Generate Requirement Confirmation Summary

Before writing the file, present the final requirement confirmation summary to the user (non-auto mode).

```javascript
if (!autoMode) {
  // Build confirmation summary from requirementState
  const summary = buildConfirmationSummary(requirementState);

  AskUserQuestion({
    questions: [
      {
        question: `## Requirement Confirmation\n\n${summary}\n\nConfirm and proceed to specification generation?`,
        header: "Confirm",
        multiSelect: false,
        options: [
          { label: "Confirm & proceed", description: "Requirements confirmed, start spec generation" },
          { label: "Need adjustments", description: "Go back and refine further" }
        ]
      }
    ]
  });

  // If "Need adjustments" → loop back to Step 3
  // If "Confirm & proceed" → continue to Step 6
}
```
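`buildConfirmationSummary` is referenced above but not defined in this phase document; a minimal sketch, assuming it renders the key `requirementState` fields as compact markdown:

```javascript
// Hypothetical implementation of buildConfirmationSummary: renders the key
// requirementState fields as a compact markdown summary for user review.
function buildConfirmationSummary(state) {
  const lines = [];
  lines.push(`**Problem**: ${state.problem_statement}`);
  lines.push(`**Users**: ${state.target_users.join(', ')}`);
  lines.push(`**Domain**: ${state.domain}`);
  if (state.confirmed_features.length > 0) {
    lines.push(`**Features** (${state.confirmed_features.length}):`);
    state.confirmed_features.forEach(f => lines.push(`- ${f.name}: ${f.description}`));
  }
  if (state.non_functional_requirements.length > 0) {
    lines.push(`**NFRs**: ${state.non_functional_requirements.map(n => n.type).join(', ')}`);
  }
  if (state.key_assumptions.length > 0) {
    lines.push(`**Assumptions**: ${state.key_assumptions.join('; ')}`);
  }
  lines.push(`_Refined over ${state.discussion_rounds} discussion round(s)._`);
  return lines.join('\n');
}
```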

### Step 6: Write refined-requirements.json

```javascript
const refinedRequirements = {
  session_id: specConfig.session_id,
  phase: "1.5",
  generated_at: new Date().toISOString(),
  source: autoMode ? "auto-expansion" : "interactive-discussion",
  discussion_rounds: requirementState.discussion_rounds,

  // Core requirement content
  clarified_problem_statement: requirementState.problem_statement,
  confirmed_target_users: requirementState.target_users.map(u =>
    typeof u === 'string' ? { name: u, needs: [], pain_points: [] } : u
  ),
  confirmed_domain: requirementState.domain,

  confirmed_features: requirementState.confirmed_features.map(f => ({
    name: f.name,
    description: f.description,
    acceptance_criteria: f.acceptance_criteria || [],
    edge_cases: f.edge_cases || [],
    priority: f.priority || "unset"
  })),

  non_functional_requirements: requirementState.non_functional_requirements.map(nfr => ({
    type: nfr.type, // Performance, Security, Usability, Scalability, etc.
    details: nfr.details,
    measurable_criteria: nfr.measurable_criteria || ""
  })),

  boundary_conditions: {
    in_scope: requirementState.boundary_conditions.filter(b => b.scope === 'in'),
    out_of_scope: requirementState.boundary_conditions.filter(b => b.scope === 'out'),
    constraints: requirementState.constraints
  },

  integration_points: requirementState.integration_points,
  key_assumptions: requirementState.key_assumptions,

  // Traceability
  discussion_log: autoMode ? [] : discussionLog
};

Write(`${workDir}/refined-requirements.json`, JSON.stringify(refinedRequirements, null, 2));
```

### Step 7: Update spec-config.json

```javascript
specConfig.refined_requirements_file = "refined-requirements.json";
specConfig.phasesCompleted.push({
  phase: 1.5,
  name: "requirement-clarification",
  output_file: "refined-requirements.json",
  discussion_rounds: requirementState.discussion_rounds,
  completed_at: new Date().toISOString()
});

Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

## Output

- **File**: `refined-requirements.json`
- **Format**: JSON
- **Updated**: `spec-config.json` (added `refined_requirements_file` field and phase 1.5 to `phasesCompleted`)

## Quality Checklist

- [ ] Problem statement refined (>= 30 characters, more specific than seed)
- [ ] At least 2 confirmed features with descriptions
- [ ] At least 1 non-functional requirement identified
- [ ] Boundary conditions defined (in-scope + out-of-scope)
- [ ] Key assumptions listed (>= 1)
- [ ] Discussion rounds recorded (>= 1 in interactive mode)
- [ ] User explicitly confirmed requirements (non-auto mode)
- [ ] `refined-requirements.json` written with valid JSON
- [ ] `spec-config.json` updated with phase 1.5 completion
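The checklist items that concern the output file can be checked programmatically. A minimal sketch, assuming the `refined-requirements.json` schema produced in Step 6 (`checkRefinedRequirements` is an illustrative name, not an existing helper):

```javascript
// Sketch: validate a refined-requirements object against the checklist above.
// Returns the list of failed checks (empty array means all checks pass).
function checkRefinedRequirements(req) {
  const failures = [];
  if (!req.clarified_problem_statement || req.clarified_problem_statement.length < 30)
    failures.push('problem statement < 30 characters');
  if ((req.confirmed_features || []).length < 2)
    failures.push('fewer than 2 confirmed features');
  if ((req.non_functional_requirements || []).length < 1)
    failures.push('no non-functional requirements');
  if (!req.boundary_conditions || (req.boundary_conditions.in_scope || []).length === 0)
    failures.push('no in-scope boundary conditions');
  if ((req.key_assumptions || []).length < 1)
    failures.push('no key assumptions');
  if (req.source !== 'auto-expansion' && (req.discussion_rounds || 0) < 1)
    failures.push('no discussion rounds recorded in interactive mode');
  return failures;
}
```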

## Next Phase

Proceed to [Phase 2: Product Brief](02-product-brief.md). Phase 2 should load `refined-requirements.json` as primary input instead of relying solely on `spec-config.json.seed_analysis`.
.codex/skills/spec-generator/phases/01-discovery.md (new file)
@@ -0,0 +1,257 @@

# Phase 1: Discovery

Parse input, analyze the seed idea, optionally explore the codebase, and establish session configuration.

## Objective

- Generate session ID and create output directory
- Parse user input (text description or file reference)
- Analyze seed via Gemini CLI to extract problem space dimensions
- Conditionally explore codebase for existing patterns and constraints
- Gather user preferences (depth, focus areas) via interactive confirmation
- Write `spec-config.json` as the session state file

## Input

- Dependency: `$ARGUMENTS` (user input from command)
- Flags: `-y` (auto mode), `-c` (continue mode)

## Execution Steps

### Step 1: Session Initialization

```javascript
// Parse arguments
const args = $ARGUMENTS;
const autoMode = args.includes('-y') || args.includes('--yes');
const continueMode = args.includes('-c') || args.includes('--continue');

// Extract the idea/topic (remove flags; long flags are listed before short
// ones so the `-y` alternative cannot match inside `--yes`)
const idea = args.replace(/(--yes|--continue|-y|-c)\s*/g, '').trim();

// Generate session ID
const slug = idea.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-')
  .replace(/^-|-$/g, '')
  .slice(0, 40);
const date = new Date().toISOString().slice(0, 10);
const sessionId = `SPEC-${slug}-${date}`;
const workDir = `.workflow/.spec/${sessionId}`;

// Check for continue mode
if (continueMode) {
  // Find existing session
  const existingSessions = Glob('.workflow/.spec/SPEC-*/spec-config.json');
  // If slug matches an existing session, load it and resume
  // Read spec-config.json, find first incomplete phase, jump to that phase
  return; // Resume logic handled by orchestrator
}

// Create output directory
Bash(`mkdir -p "${workDir}"`);
```
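The session-ID derivation above, pulled into a pure helper for illustration (the date is injected as `YYYY-MM-DD` so the result is deterministic):

```javascript
// Sketch of the session-ID derivation as a pure function.
function makeSessionId(idea, date) {
  const slug = idea.toLowerCase()
    .replace(/[^a-z0-9\u4e00-\u9fff]+/g, '-')  // keep ASCII alphanumerics + CJK
    .replace(/^-|-$/g, '')                      // trim one dash at each edge
    .slice(0, 40);
  return `SPEC-${slug}-${date}`;
}
```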

### Step 2: Input Parsing

```javascript
// Determine input type
if (idea.startsWith('@') || idea.endsWith('.md') || idea.endsWith('.txt')) {
  // File reference - read and extract content
  const filePath = idea.replace(/^@/, '');
  const fileContent = Read(filePath);
  // Use file content as the seed
  inputType = 'file';
  seedInput = fileContent;
} else {
  // Direct text description
  inputType = 'text';
  seedInput = idea;
}
```
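The same branching as a small testable classifier (`classifySeedInput` is an illustrative name; reading the file stays the caller's job):

```javascript
// Sketch: classify the raw idea argument as a file reference or direct text,
// mirroring the branch above. Returns the input type and the path/text to use.
function classifySeedInput(idea) {
  if (idea.startsWith('@') || idea.endsWith('.md') || idea.endsWith('.txt')) {
    return { inputType: 'file', source: idea.replace(/^@/, '') };
  }
  return { inputType: 'text', source: idea };
}
```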

### Step 3: Seed Analysis via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Analyze this seed idea/requirement to extract structured problem space dimensions.
Success: Clear problem statement, target users, domain identification, 3-5 exploration dimensions.

SEED INPUT:
${seedInput}

TASK:
- Extract a clear problem statement (what problem does this solve?)
- Identify target users (who benefits?)
- Determine the domain (technical, business, consumer, etc.)
- List constraints (budget, time, technical, regulatory)
- Generate 3-5 exploration dimensions (key areas to investigate)
- Assess complexity: simple (1-2 components), moderate (3-5 components), complex (6+ components)

MODE: analysis
EXPECTED: JSON output with fields: problem_statement, target_users[], domain, constraints[], dimensions[], complexity
CONSTRAINTS: Be specific and actionable, not vague
" --tool gemini --mode analysis`,
  run_in_background: true
});
// Wait for CLI result before continuing
```

Parse the CLI output into structured `seedAnalysis`:

```javascript
const seedAnalysis = {
  problem_statement: "...",
  target_users: ["..."],
  domain: "...",
  constraints: ["..."],
  dimensions: ["..."]
};
const complexity = "moderate"; // from CLI output
```

### Step 4: Codebase Exploration (Conditional)

```javascript
// Detect if running inside a project with code
const hasCodebase = Glob('**/*.{ts,js,py,java,go,rs}').length > 0
  || Glob('package.json').length > 0
  || Glob('Cargo.toml').length > 0;

if (hasCodebase) {
  Agent({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Explore codebase for spec: ${slug}`,
    prompt: `
## Spec Generator Context
Topic: ${seedInput}
Dimensions: ${seedAnalysis.dimensions.join(', ')}
Session: ${workDir}

## MANDATORY FIRST STEPS
1. Search for code related to topic keywords
2. Read project config files (package.json, pyproject.toml, etc.) if they exist

## Exploration Focus
- Identify existing implementations related to the topic
- Find patterns that could inform architecture decisions
- Map current architecture constraints
- Locate integration points and dependencies

## Output
Write findings to: ${workDir}/discovery-context.json

Schema:
{
  "relevant_files": [{"path": "...", "relevance": "high|medium|low", "rationale": "..."}],
  "existing_patterns": ["pattern descriptions"],
  "architecture_constraints": ["constraint descriptions"],
  "integration_points": ["integration point descriptions"],
  "tech_stack": {"languages": [], "frameworks": [], "databases": []},
  "_metadata": { "exploration_type": "spec-discovery", "timestamp": "ISO8601" }
}
`
  });
}
```

### Step 5: User Confirmation (Interactive)

```javascript
if (!autoMode) {
  // Confirm problem statement and select depth
  AskUserQuestion({
    questions: [
      {
        question: `Problem statement: "${seedAnalysis.problem_statement}" - Is this accurate?`,
        header: "Problem",
        multiSelect: false,
        options: [
          { label: "Accurate", description: "Proceed with this problem statement" },
          { label: "Needs adjustment", description: "I'll refine the problem statement" }
        ]
      },
      {
        question: "What specification depth do you need?",
        header: "Depth",
        multiSelect: false,
        options: [
          { label: "Light", description: "Quick overview - key decisions only" },
          { label: "Standard (Recommended)", description: "Balanced detail for most projects" },
          { label: "Comprehensive", description: "Maximum detail for complex/critical projects" }
        ]
      },
      {
        question: "Which areas should we focus on?",
        header: "Focus",
        multiSelect: true,
        options: seedAnalysis.dimensions.map(d => ({ label: d, description: `Explore ${d} in depth` }))
      },
      {
        question: "What type of specification is this?",
        header: "Spec Type",
        multiSelect: false,
        options: [
          { label: "Service (Recommended)", description: "Long-running service with lifecycle, state machine, observability" },
          { label: "API", description: "REST/GraphQL API with endpoints, auth, rate limiting" },
          { label: "Library/SDK", description: "Reusable package with public API surface, examples" },
          { label: "Platform", description: "Multi-component system, uses Service profile" }
        ]
      }
    ]
  });
} else {
  // Auto mode defaults
  depth = "standard";
  focusAreas = seedAnalysis.dimensions;
  specType = "service"; // default for auto mode
}
```

### Step 6: Write spec-config.json

```javascript
const specConfig = {
  session_id: sessionId,
  seed_input: seedInput,
  input_type: inputType,
  timestamp: new Date().toISOString(),
  mode: autoMode ? "auto" : "interactive",
  complexity: complexity,
  depth: depth,
  focus_areas: focusAreas,
  seed_analysis: seedAnalysis,
  has_codebase: hasCodebase,
  spec_type: specType, // "service" | "api" | "library" | "platform"
  iteration_count: 0,
  iteration_history: [],
  phasesCompleted: [
    {
      phase: 1,
      name: "discovery",
      output_file: "spec-config.json",
      completed_at: new Date().toISOString()
    }
  ]
};

Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

## Output

- **File**: `spec-config.json`
- **File**: `discovery-context.json` (optional, if codebase detected)
- **Format**: JSON

## Quality Checklist

- [ ] Session ID matches `SPEC-{slug}-{date}` format
- [ ] Problem statement exists and is >= 20 characters
- [ ] Target users identified (>= 1)
- [ ] 3-5 exploration dimensions generated
- [ ] spec-config.json written with all required fields
- [ ] Output directory created

## Next Phase

Proceed to [Phase 2: Product Brief](02-product-brief.md) with the generated spec-config.json.
.codex/skills/spec-generator/phases/02-product-brief.md (new file)
@@ -0,0 +1,298 @@

# Phase 2: Product Brief

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.

Generate a product brief through multi-perspective CLI analysis, establishing "what" and "why".

## Objective

- Read Phase 1 outputs (spec-config.json, discovery-context.json)
- Launch 3 parallel CLI analyses from product, technical, and user perspectives
- Synthesize convergent themes and conflicting views
- Optionally refine with user input
- Generate product-brief.md using template

## Input

- Dependency: `{workDir}/spec-config.json`
- Primary: `{workDir}/refined-requirements.json` (Phase 1.5 output, preferred over raw seed_analysis)
- Optional: `{workDir}/discovery-context.json`
- Config: `{workDir}/spec-config.json`
- Template: `templates/product-brief.md`

## Execution Steps

### Step 1: Load Phase 1 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const { seed_analysis, seed_input, has_codebase, depth, focus_areas } = specConfig;

// Load refined requirements (Phase 1.5 output) - preferred over raw seed_analysis
let refinedReqs = null;
try {
  refinedReqs = JSON.parse(Read(`${workDir}/refined-requirements.json`));
} catch (e) {
  // No refined requirements, fall back to seed_analysis
}

let discoveryContext = null;
if (has_codebase) {
  try {
    discoveryContext = JSON.parse(Read(`${workDir}/discovery-context.json`));
  } catch (e) {
    // No discovery context available, proceed without
  }
}

// Build shared context string for CLI prompts
// Prefer refined requirements over raw seed_analysis
const problem = refinedReqs?.clarified_problem_statement || seed_analysis.problem_statement;
const users = refinedReqs?.confirmed_target_users?.map(u => u.name || u).join(', ')
  || seed_analysis.target_users.join(', ');
const domain = refinedReqs?.confirmed_domain || seed_analysis.domain;
const constraints = refinedReqs?.boundary_conditions?.constraints?.join(', ')
  || seed_analysis.constraints.join(', ');
const features = refinedReqs?.confirmed_features?.map(f => f.name).join(', ') || '';
const nfrs = refinedReqs?.non_functional_requirements?.map(n => `${n.type}: ${n.details}`).join('; ') || '';

const sharedContext = `
SEED: ${seed_input}
PROBLEM: ${problem}
TARGET USERS: ${users}
DOMAIN: ${domain}
CONSTRAINTS: ${constraints}
FOCUS AREAS: ${focus_areas.join(', ')}
${features ? `CONFIRMED FEATURES: ${features}` : ''}
${nfrs ? `NON-FUNCTIONAL REQUIREMENTS: ${nfrs}` : ''}
${discoveryContext ? `
CODEBASE CONTEXT:
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join(', ') || 'none'}
- Architecture constraints: ${discoveryContext.architecture_constraints?.slice(0,3).join(', ') || 'none'}
- Tech stack: ${JSON.stringify(discoveryContext.tech_stack || {})}
` : ''}`;
```

### Step 2: Multi-CLI Parallel Analysis (3 perspectives)

Launch 3 CLI calls in parallel:

**Product Perspective (Gemini)**:
```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Product analysis for specification - identify market fit, user value, and success criteria.
Success: Clear vision, measurable goals, competitive positioning.

${sharedContext}

TASK:
- Define product vision (1-3 sentences, aspirational)
- Analyze market/competitive landscape
- Define 3-5 measurable success metrics
- Identify scope boundaries (in-scope vs out-of-scope)
- Assess user value proposition
- List assumptions that need validation

MODE: analysis
EXPECTED: Structured product analysis with: vision, goals with metrics, scope, competitive positioning, assumptions
CONSTRAINTS: Focus on 'what' and 'why', not 'how'
" --tool gemini --mode analysis`,
  run_in_background: true
});
```

**Technical Perspective (Codex)**:
```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Technical feasibility analysis for specification - assess implementation viability and constraints.
Success: Clear technical constraints, integration complexity, technology recommendations.

${sharedContext}

TASK:
- Assess technical feasibility of the core concept
- Identify technical constraints and blockers
- Evaluate integration complexity with existing systems
- Recommend technology approach (high-level)
- Identify technical risks and dependencies
- Estimate complexity: simple/moderate/complex

MODE: analysis
EXPECTED: Technical analysis with: feasibility assessment, constraints, integration complexity, tech recommendations, risks
CONSTRAINTS: Focus on feasibility and constraints, not detailed architecture
" --tool codex --mode analysis`,
  run_in_background: true
});
```

**User Perspective (Claude)**:
```javascript
Bash({
  command: `ccw cli -p "PURPOSE: User experience analysis for specification - understand user journeys, pain points, and UX considerations.
Success: Clear user personas, journey maps, UX requirements.

${sharedContext}

TASK:
- Elaborate user personas with goals and frustrations
- Map primary user journey (happy path)
- Identify key pain points in current experience
- Define UX success criteria
- List accessibility and usability considerations
- Suggest interaction patterns

MODE: analysis
EXPECTED: User analysis with: personas, journey map, pain points, UX criteria, interaction recommendations
CONSTRAINTS: Focus on user needs and experience, not implementation
" --tool claude --mode analysis`,
  run_in_background: true
});

// STOP: Wait for all 3 CLI results before continuing
```
|
||||
|
||||
### Step 3: Synthesize Perspectives

```javascript
// After receiving all 3 CLI results:
// Extract convergent themes (all agree)
// Identify conflicting views (need resolution)
// Note unique contributions from each perspective

const synthesis = {
  convergent_themes: [],  // themes all 3 perspectives agree on
  conflicts: [],          // areas where perspectives differ
  product_insights: [],   // unique from product perspective
  technical_insights: [], // unique from technical perspective
  user_insights: []       // unique from user perspective
};
```

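A minimal sketch of how the synthesis object could be populated, assuming each CLI result has already been reduced to a flat list of theme strings (the `synthesize` helper and that reduction step are assumptions, not part of the skill's defined API). Themes seen by all three perspectives are treated as convergent; themes seen by exactly two are flagged for conflict review, which is a rough heuristic rather than true conflict detection:

```javascript
// Hypothetical helper: classify themes by how many perspectives mention them.
// Assumes each CLI result was reduced to an array of theme strings upstream.
function synthesize(product, technical, user) {
  const sources = { product, technical, user };
  const counts = new Map();
  for (const themes of Object.values(sources)) {
    for (const t of new Set(themes)) {
      counts.set(t, (counts.get(t) || 0) + 1);
    }
  }
  const uniqueTo = (name) => sources[name].filter((t) => counts.get(t) === 1);
  return {
    convergent_themes: [...counts.keys()].filter((t) => counts.get(t) === 3),
    // Heuristic: themes raised by exactly two perspectives get manual review.
    conflicts: [...counts.keys()].filter((t) => counts.get(t) === 2),
    product_insights: uniqueTo('product'),
    technical_insights: uniqueTo('technical'),
    user_insights: uniqueTo('user')
  };
}
```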
### Step 4: Interactive Refinement (Optional)

```javascript
if (!autoMode) {
  // Present synthesis summary to user
  // AskUserQuestion with:
  // - Confirm vision statement
  // - Resolve any conflicts between perspectives
  // - Adjust scope if needed
  AskUserQuestion({
    questions: [
      {
        question: "Review the synthesized product brief. Any adjustments needed?",
        header: "Review",
        multiSelect: false,
        options: [
          { label: "Looks good", description: "Proceed to PRD generation" },
          { label: "Adjust scope", description: "Narrow or expand the scope" },
          { label: "Revise vision", description: "Refine the vision statement" }
        ]
      }
    ]
  });
}
```

### Step 5: Generate product-brief.md

```javascript
// Read template
const template = Read('templates/product-brief.md');

// Fill template with synthesized content
// Apply document-standards.md formatting rules
// Write with YAML frontmatter

const frontmatter = `---
session_id: ${specConfig.session_id}
phase: 2
document_type: product-brief
status: ${autoMode ? 'complete' : 'draft'}
generated_at: ${new Date().toISOString()}
stepsCompleted: ["load-context", "multi-cli-analysis", "synthesis", "generation"]
version: 1
dependencies:
  - spec-config.json
---`;

// Combine frontmatter + filled template content
Write(`${workDir}/product-brief.md`, `${frontmatter}\n\n${filledContent}`);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 2,
  name: "product-brief",
  output_file: "product-brief.md",
  completed_at: new Date().toISOString()
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

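The quality checklist below requires valid YAML frontmatter on the written file. A minimal sketch of such a check, using a naive `key:` scan rather than a full YAML parser (the `checkFrontmatter` helper is an assumption for illustration, not part of the skill):

```javascript
// Naive frontmatter check (not a full YAML parser): verifies the document
// starts with a `---` block and that the required top-level keys are present.
function checkFrontmatter(doc, requiredKeys) {
  const m = doc.match(/^---\n([\s\S]*?)\n---/);
  if (!m) return { valid: false, missing: requiredKeys };
  const keys = m[1]
    .split('\n')
    .map((line) => (line.match(/^(\w[\w-]*):/) || [])[1])
    .filter(Boolean);
  const missing = requiredKeys.filter((k) => !keys.includes(k));
  return { valid: missing.length === 0, missing };
}
```

A real implementation would parse the YAML properly; this only guards against a missing block or dropped keys.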
### Step 5.5: Generate glossary.json

```javascript
// Extract terminology from product brief and CLI analysis
// Generate structured glossary for cross-document consistency

const glossary = {
  session_id: specConfig.session_id,
  terms: [
    // Extract from product brief content:
    // - Key domain nouns from problem statement
    // - User persona names
    // - Technical terms from multi-perspective synthesis
    // Each term should have:
    // { term: "...", definition: "...", aliases: [], first_defined_in: "product-brief.md", category: "core|technical|business" }
  ]
};

Write(`${workDir}/glossary.json`, JSON.stringify(glossary, null, 2));
```

**Glossary Injection**: In all subsequent phase prompts, inject the following into the CONTEXT section:
```
TERMINOLOGY GLOSSARY (use these terms consistently):
${JSON.stringify(glossary.terms, null, 2)}
```

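A small constructor matching the term shape described in the comment above can keep entries uniform; `makeTerm` is a hypothetical helper, and the defaults (`product-brief.md` origin, `core` category fallback) are assumptions drawn from that comment:

```javascript
// Hypothetical helper producing entries in the documented term shape.
function makeTerm(term, definition, opts = {}) {
  return {
    term,
    definition,
    aliases: opts.aliases || [],
    first_defined_in: opts.first_defined_in || 'product-brief.md',
    // Fall back to 'core' when the category is absent or unrecognized.
    category: ['core', 'technical', 'business'].includes(opts.category)
      ? opts.category
      : 'core'
  };
}
```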
## Output

- **File**: `product-brief.md`
- **Format**: Markdown with YAML frontmatter

## Quality Checklist

- [ ] Vision statement: clear, 1-3 sentences
- [ ] Problem statement: specific and measurable
- [ ] Target users: >= 1 persona with needs
- [ ] Goals: >= 2 with measurable metrics
- [ ] Scope: in-scope and out-of-scope defined
- [ ] Multi-perspective synthesis included
- [ ] YAML frontmatter valid

## Next Phase

Proceed to [Phase 3: Requirements](03-requirements.md) with the generated product-brief.md.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 2,
  "status": "complete",
  "files_created": ["product-brief.md", "glossary.json"],
  "quality_notes": ["list of any quality concerns or deviations"],
  "key_decisions": ["list of significant synthesis decisions made"]
}
```

The orchestrator will:
1. Validate that listed files exist on disk
2. Read `spec-config.json` to confirm `phasesCompleted` was updated
3. Store the summary for downstream phase context

248 .codex/skills/spec-generator/phases/03-requirements.md Normal file
@@ -0,0 +1,248 @@

# Phase 3: Requirements (PRD)

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.

Generate a detailed Product Requirements Document with functional/non-functional requirements, acceptance criteria, and MoSCoW prioritization.

## Objective

- Read product-brief.md and extract goals, scope, constraints
- Expand each goal into functional requirements with acceptance criteria
- Generate non-functional requirements
- Apply MoSCoW priority labels (user input or auto)
- Generate the `requirements/` directory using the template

## Input

- Dependency: `{workDir}/product-brief.md`
- Config: `{workDir}/spec-config.json`
- Template: `templates/requirements-prd.md` (directory structure: `_index.md` + `REQ-*.md` + `NFR-*.md`)

## Execution Steps

### Step 1: Load Phase 2 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);

// Load glossary for terminology consistency (generated in Phase 2, Step 5.5)
let glossary = null;
try {
  glossary = JSON.parse(Read(`${workDir}/glossary.json`));
} catch (e) { /* proceed without */ }

// Extract key sections from product brief
// - Goals & Success Metrics table
// - Scope (in-scope items)
// - Target Users (personas)
// - Constraints
// - Technical perspective insights
```

### Step 2: Requirements Expansion via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Generate detailed functional and non-functional requirements from product brief.
Success: Complete PRD with testable acceptance criteria for every requirement.

PRODUCT BRIEF CONTEXT:
${productBrief}

TASK:
- For each goal in the product brief, generate 3-7 functional requirements
- Each requirement must have:
  - Unique ID: REQ-NNN (zero-padded)
  - Clear title
  - Detailed description
  - User story: As a [persona], I want [action] so that [benefit]
  - 2-4 specific, testable acceptance criteria
- Generate non-functional requirements:
  - Performance (response times, throughput)
  - Security (authentication, authorization, data protection)
  - Scalability (user load, data volume)
  - Usability (accessibility, learnability)
- Assign initial MoSCoW priority based on:
  - Must: Core functionality, cannot launch without
  - Should: Important but has workaround
  - Could: Nice-to-have, enhances experience
  - Won't: Explicitly deferred
- Use RFC 2119 keywords (MUST, SHOULD, MAY, MUST NOT, SHOULD NOT) to define behavioral constraints for each requirement. Example: 'The system MUST return a 401 response within 100ms for invalid tokens.'
- For each core domain entity referenced in requirements, define its data model: fields, types, constraints, and relationships to other entities
- Maintain terminology consistency with the glossary below:
TERMINOLOGY GLOSSARY:
${glossary ? JSON.stringify(glossary.terms, null, 2) : 'N/A - generate terms inline'}

MODE: analysis
EXPECTED: Structured requirements with: ID, title, description, user story, acceptance criteria, priority, traceability to goals
CONSTRAINTS: Every requirement must be specific enough to estimate and test. No vague requirements like 'system should be fast'.
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```

### Step 2.5: Codex Requirements Review

After receiving Gemini expansion results, validate requirements quality via Codex CLI before proceeding:

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Critical review of generated requirements - validate quality, testability, and scope alignment.
Success: Actionable feedback on requirement quality with specific issues identified.

GENERATED REQUIREMENTS:
${geminiRequirementsOutput.slice(0, 5000)}

PRODUCT BRIEF SCOPE:
${productBrief.slice(0, 2000)}

TASK:
- Verify every acceptance criterion is specific, measurable, and testable (not vague like 'should be fast')
- Validate RFC 2119 keyword usage: MUST/SHOULD/MAY used correctly per RFC 2119 semantics
- Check scope containment: no requirement exceeds the product brief's defined scope boundaries
- Assess data model completeness: all referenced entities have field-level definitions
- Identify duplicate or overlapping requirements
- Rate overall requirements quality: 1-5 with justification

MODE: analysis
EXPECTED: Requirements review with: per-requirement feedback, testability assessment, scope violations, data model gaps, quality rating
CONSTRAINTS: Be genuinely critical. Focus on requirements that would block implementation if left vague.
" --tool codex --mode analysis`,
  run_in_background: true
});

// Wait for Codex review result
// Integrate feedback into requirements before writing files:
// - Fix vague acceptance criteria flagged by Codex
// - Correct RFC 2119 keyword misuse
// - Remove or flag requirements that exceed brief scope
// - Fill data model gaps identified by Codex
```

### Step 3: User Priority Sorting (Interactive)

```javascript
if (!autoMode) {
  // Present requirements grouped by initial priority
  // Allow user to adjust MoSCoW labels
  AskUserQuestion({
    questions: [
      {
        question: "Review the Must-Have requirements. Any that should be reprioritized?",
        header: "Must-Have",
        multiSelect: false,
        options: [
          { label: "All correct", description: "Must-have requirements are accurate" },
          { label: "Too many", description: "Some should be Should/Could" },
          { label: "Missing items", description: "Some Should requirements should be Must" }
        ]
      },
      {
        question: "What is the target MVP scope?",
        header: "MVP Scope",
        multiSelect: false,
        options: [
          { label: "Must-Have only (Recommended)", description: "MVP includes only Must requirements" },
          { label: "Must + key Should", description: "Include critical Should items in MVP" },
          { label: "Comprehensive", description: "Include all Must and Should" }
        ]
      }
    ]
  });
  // Apply user adjustments to priorities
} else {
  // Auto mode: accept CLI-suggested priorities as-is
}
```

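Presenting requirements grouped by initial priority can be sketched as below, assuming each parsed requirement object carries `id` and MoSCoW `priority` fields (the `groupByPriority` helper is illustrative, not part of the skill's defined API):

```javascript
// Group requirement IDs under their MoSCoW label for presentation.
// Assumes each requirement has { id, priority } with priority in
// Must/Should/Could/Won't.
function groupByPriority(requirements) {
  const groups = { Must: [], Should: [], Could: [], "Won't": [] };
  for (const r of requirements) {
    // Unknown labels get their own bucket rather than being dropped.
    (groups[r.priority] || (groups[r.priority] = [])).push(r.id);
  }
  return groups;
}
```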
### Step 4: Generate requirements/ directory

```javascript
// Read template
const template = Read('templates/requirements-prd.md');

// Create requirements directory
Bash(`mkdir -p "${workDir}/requirements"`);

const status = autoMode ? 'complete' : 'draft';
const timestamp = new Date().toISOString();

// Parse CLI output into structured requirements
const funcReqs = parseFunctionalRequirements(cliOutput); // [{id, slug, title, priority, ...}]
const nfReqs = parseNonFunctionalRequirements(cliOutput); // [{id, type, slug, title, ...}]

// Step 4a: Write individual REQ-*.md files (one per functional requirement)
funcReqs.forEach(req => {
  // Use REQ-NNN-{slug}.md template from templates/requirements-prd.md
  // Fill: id, title, priority, description, user_story, acceptance_criteria, traces
  Write(`${workDir}/requirements/REQ-${req.id}-${req.slug}.md`, reqContent);
});

// Step 4b: Write individual NFR-*.md files (one per non-functional requirement)
nfReqs.forEach(nfr => {
  // Use NFR-{type}-NNN-{slug}.md template from templates/requirements-prd.md
  // Fill: id, type, category, title, requirement, metric, target, traces
  Write(`${workDir}/requirements/NFR-${nfr.type}-${nfr.id}-${nfr.slug}.md`, nfrContent);
});

// Step 4c: Write _index.md (summary + links to all individual files)
// Use _index.md template from templates/requirements-prd.md
// Fill: summary table, functional req links table, NFR links tables,
//       data requirements, integration requirements, traceability matrix
Write(`${workDir}/requirements/_index.md`, indexContent);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 3,
  name: "requirements",
  output_dir: "requirements/",
  output_index: "requirements/_index.md",
  file_count: funcReqs.length + nfReqs.length + 1,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

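The file-naming convention used in Step 4a (zero-padded `REQ-NNN` IDs plus a slug derived from the title) can be sketched as a small helper; `reqFilename` itself is an assumption for illustration:

```javascript
// Build a REQ-NNN-{slug}.md filename: zero-pad the numeric ID to three
// digits and slugify the title (lowercase, non-alphanumerics collapsed to '-').
function reqFilename(n, title) {
  const id = String(n).padStart(3, '0');
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
  return `REQ-${id}-${slug}.md`;
}
```

The same slug rule applies to `NFR-*` and `ADR-*` filenames in later phases.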
## Output

- **Directory**: `requirements/`
  - `_index.md` — Summary, MoSCoW table, traceability matrix, links
  - `REQ-NNN-{slug}.md` — Individual functional requirement (per requirement)
  - `NFR-{type}-NNN-{slug}.md` — Individual non-functional requirement (per NFR)
- **Format**: Markdown with YAML frontmatter, cross-linked via relative paths

## Quality Checklist

- [ ] Functional requirements: >= 3 with REQ-NNN IDs, each in own file
- [ ] Every requirement file has >= 1 acceptance criterion
- [ ] Every requirement has MoSCoW priority tag in frontmatter
- [ ] Non-functional requirements: >= 1, each in own file
- [ ] User stories present for Must-have requirements
- [ ] `_index.md` links to all individual requirement files
- [ ] Traceability links to product-brief.md goals
- [ ] All files have valid YAML frontmatter

## Next Phase

Proceed to [Phase 4: Architecture](04-architecture.md) with the generated `requirements/` directory.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 3,
  "status": "complete",
  "files_created": ["requirements/_index.md", "requirements/REQ-001-*.md", "..."],
  "file_count": 0,
  "codex_review_integrated": true,
  "quality_notes": ["list of quality concerns or Codex feedback items addressed"],
  "key_decisions": ["MoSCoW priority rationale", "scope adjustments from Codex review"]
}
```

The orchestrator will:
1. Validate that `requirements/` directory exists with `_index.md` and individual files
2. Read `spec-config.json` to confirm `phasesCompleted` was updated
3. Store the summary for downstream phase context

274 .codex/skills/spec-generator/phases/04-architecture.md Normal file
@@ -0,0 +1,274 @@

# Phase 4: Architecture

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.

Generate technical architecture decisions, component design, and technology selections based on requirements.

## Objective

- Analyze requirements to identify core components and system architecture
- Generate Architecture Decision Records (ADRs) with alternatives
- Map architecture to existing codebase (if applicable)
- Challenge architecture via Codex CLI review
- Generate the `architecture/` directory using the template

## Input

- Dependency: `{workDir}/requirements/_index.md` (and individual `REQ-*.md` files)
- Reference: `{workDir}/product-brief.md`
- Optional: `{workDir}/discovery-context.json`
- Config: `{workDir}/spec-config.json`
- Template: `templates/architecture-doc.md`

## Execution Steps

### Step 1: Load Phase 2-3 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);
const requirements = Read(`${workDir}/requirements/_index.md`);

let discoveryContext = null;
if (specConfig.has_codebase) {
  try {
    discoveryContext = JSON.parse(Read(`${workDir}/discovery-context.json`));
  } catch (e) { /* no context */ }
}

// Load glossary for terminology consistency
let glossary = null;
try {
  glossary = JSON.parse(Read(`${workDir}/glossary.json`));
} catch (e) { /* proceed without */ }

// Load spec type profile for specialized sections
const specType = specConfig.spec_type || 'service';
let profile = null;
try {
  profile = Read(`templates/profiles/${specType}-profile.md`);
} catch (e) { /* use base template only */ }
```

### Step 2: Architecture Analysis via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Generate technical architecture for the specified requirements.
Success: Complete component architecture, tech stack, and ADRs with justified decisions.

PRODUCT BRIEF (summary):
${productBrief.slice(0, 3000)}

REQUIREMENTS:
${requirements.slice(0, 5000)}

${discoveryContext ? `EXISTING CODEBASE:
- Tech stack: ${JSON.stringify(discoveryContext.tech_stack || {})}
- Existing patterns: ${discoveryContext.existing_patterns?.slice(0,5).join('; ') || 'none'}
- Architecture constraints: ${discoveryContext.architecture_constraints?.slice(0,3).join('; ') || 'none'}
` : ''}

TASK:
- Define system architecture style (monolith, microservices, serverless, etc.) with justification
- Identify core components and their responsibilities
- Create component interaction diagram (Mermaid graph TD format)
- Specify technology stack: languages, frameworks, databases, infrastructure
- Generate 2-4 Architecture Decision Records (ADRs):
  - Each ADR: context, decision, 2-3 alternatives with pros/cons, consequences
  - Focus on: data storage, API design, authentication, key technical choices
- Define data model: key entities and relationships (Mermaid erDiagram format)
- Identify security architecture: auth, authorization, data protection
- List API endpoints (high-level)
${discoveryContext ? '- Map new components to existing codebase modules' : ''}
- For each core entity with a lifecycle, create an ASCII state machine diagram showing:
  - All states and transitions
  - Trigger events for each transition
  - Side effects of transitions
  - Error states and recovery paths
- Define a Configuration Model: list all configurable fields with name, type, default value, constraint, and description
- Define Error Handling strategy:
  - Classify errors (transient/permanent/degraded)
  - Per-component error behavior using RFC 2119 keywords
  - Recovery mechanisms
- Define Observability requirements:
  - Key metrics (name, type: counter/gauge/histogram, labels)
  - Structured log format and key log events
  - Health check endpoints
${profile ? `
SPEC TYPE PROFILE REQUIREMENTS (${specType}):
${profile}
` : ''}
${glossary ? `
TERMINOLOGY GLOSSARY (use consistently):
${JSON.stringify(glossary.terms, null, 2)}
` : ''}

MODE: analysis
EXPECTED: Complete architecture with: style justification, component diagram, tech stack table, ADRs, data model, security controls, API overview
CONSTRAINTS: Architecture must support all Must-have requirements. Prefer proven technologies over cutting-edge.
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```

### Step 3: Architecture Review via Codex CLI

```javascript
// After receiving Gemini analysis, challenge it with Codex
Bash({
  command: `ccw cli -p "PURPOSE: Critical review of proposed architecture - identify weaknesses and risks.
Success: Actionable feedback with specific concerns and improvement suggestions.

PROPOSED ARCHITECTURE:
${geminiArchitectureOutput.slice(0, 5000)}

REQUIREMENTS CONTEXT:
${requirements.slice(0, 2000)}

TASK:
- Challenge each ADR: are the alternatives truly the best options?
- Identify scalability bottlenecks in the component design
- Assess security gaps: authentication, authorization, data protection
- Evaluate technology choices: maturity, community support, fit
- Check for over-engineering or under-engineering
- Verify architecture covers all Must-have requirements
- Rate overall architecture quality: 1-5 with justification

MODE: analysis
EXPECTED: Architecture review with: per-ADR feedback, scalability concerns, security gaps, technology risks, quality rating
CONSTRAINTS: Be genuinely critical, not just validating. Focus on actionable improvements.
" --tool codex --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```

### Step 4: Interactive ADR Decisions (Optional)

```javascript
if (!autoMode) {
  // Present ADRs with review feedback to user
  // For each ADR where review raised concerns:
  AskUserQuestion({
    questions: [
      {
        question: "Architecture review raised concerns. How should we proceed?",
        header: "ADR Review",
        multiSelect: false,
        options: [
          { label: "Accept as-is", description: "Architecture is sound, proceed" },
          { label: "Incorporate feedback", description: "Adjust ADRs based on review" },
          { label: "Simplify", description: "Reduce complexity, fewer components" }
        ]
      }
    ]
  });
  // Apply user decisions to architecture
}
```

### Step 5: Codebase Integration Mapping (Conditional)

```javascript
if (specConfig.has_codebase && discoveryContext) {
  // Map new architecture components to existing code
  const integrationMapping = discoveryContext.relevant_files.map(f => ({
    new_component: "...", // matched from architecture
    existing_module: f.path,
    integration_type: "Extend|Replace|New",
    notes: f.rationale
  }));
  // Include in architecture document
}
```

### Step 6: Generate architecture/ directory

```javascript
const template = Read('templates/architecture-doc.md');

// Create architecture directory
Bash(`mkdir -p "${workDir}/architecture"`);

const status = autoMode ? 'complete' : 'draft';
const timestamp = new Date().toISOString();

// Parse CLI outputs into structured ADRs
const adrs = parseADRs(geminiArchitectureOutput, codexReviewOutput); // [{id, slug, title, ...}]

// Step 6a: Write individual ADR-*.md files (one per decision)
adrs.forEach(adr => {
  // Use ADR-NNN-{slug}.md template from templates/architecture-doc.md
  // Fill: id, title, status, context, decision, alternatives, consequences, traces
  Write(`${workDir}/architecture/ADR-${adr.id}-${adr.slug}.md`, adrContent);
});

// Step 6b: Write _index.md (overview + components + tech stack + links to ADRs)
// Use _index.md template from templates/architecture-doc.md
// Fill: system overview, component diagram, tech stack, ADR links table,
//       data model, API design, security controls, infrastructure, codebase integration
Write(`${workDir}/architecture/_index.md`, indexContent);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 4,
  name: "architecture",
  output_dir: "architecture/",
  output_index: "architecture/_index.md",
  file_count: adrs.length + 1,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

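`parseADRs` above is left abstract. A minimal sketch, assuming the CLI output marks each decision with an `## ADR-NNN: Title` heading (that heading format, and the helper itself, are assumptions — real CLI output may need a different delimiter):

```javascript
// Split CLI markdown into ADR records. Assumes each decision starts with
// an "## ADR-NNN: Title" heading; everything up to the next such heading
// (or end of text) becomes that ADR's body.
function splitAdrs(text) {
  const re = /^## ADR-(\d{3}): (.+)$/gm;
  const marks = [];
  let match;
  while ((match = re.exec(text)) !== null) {
    marks.push({ id: match[1], title: match[2], start: match.index });
  }
  return marks.map((m, i) => ({
    id: m.id,
    title: m.title,
    slug: m.title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, ''),
    body: text
      .slice(m.start, i + 1 < marks.length ? marks[i + 1].start : undefined)
      .trim()
  }));
}
```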
## Output

- **Directory**: `architecture/`
  - `_index.md` — Overview, component diagram, tech stack, data model, security, links
  - `ADR-NNN-{slug}.md` — Individual Architecture Decision Record (per ADR)
- **Format**: Markdown with YAML frontmatter, cross-linked to requirements via relative paths

## Quality Checklist

- [ ] Component diagram present in `_index.md` (Mermaid or ASCII)
- [ ] Tech stack specified (languages, frameworks, key libraries)
- [ ] >= 1 ADR file with alternatives considered
- [ ] Each ADR file lists >= 2 options
- [ ] `_index.md` ADR table links to all individual ADR files
- [ ] Integration points identified
- [ ] Data model described
- [ ] Codebase mapping present (if has_codebase)
- [ ] All files have valid YAML frontmatter
- [ ] ADR files link back to requirement files

## Next Phase

Proceed to [Phase 5: Epics & Stories](05-epics-stories.md) with the generated `architecture/` directory.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 4,
  "status": "complete",
  "files_created": ["architecture/_index.md", "architecture/ADR-001-*.md", "..."],
  "file_count": 0,
  "codex_review_rating": 0,
  "quality_notes": ["list of quality concerns or review feedback addressed"],
  "key_decisions": ["architecture style choice", "key ADR decisions"]
}
```

The orchestrator will:
1. Validate that `architecture/` directory exists with `_index.md` and ADR files
2. Read `spec-config.json` to confirm `phasesCompleted` was updated
3. Store the summary for downstream phase context

241 .codex/skills/spec-generator/phases/05-epics-stories.md Normal file
@@ -0,0 +1,241 @@

# Phase 5: Epics & Stories

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent. The orchestrator (SKILL.md) passes session context via the Task tool. The agent reads this file for instructions, executes all steps, writes output files, and returns a JSON summary.

Decompose the specification into executable Epics and Stories with dependency mapping.

## Objective

- Group requirements into 3-7 logical Epics
- Tag MVP subset of Epics
- Generate 2-5 Stories per Epic in standard user story format
- Map cross-Epic dependencies (Mermaid diagram)
- Generate the epics directory using the template

## Input

- Dependency: `{workDir}/requirements/_index.md`, `{workDir}/architecture/_index.md` (and individual files)
- Reference: `{workDir}/product-brief.md`
- Config: `{workDir}/spec-config.json`
- Template: `templates/epics-template.md` (directory structure: `_index.md` + `EPIC-*.md`)

## Execution Steps

### Step 1: Load Phase 2-4 Context

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);
const requirements = Read(`${workDir}/requirements/_index.md`);
const architecture = Read(`${workDir}/architecture/_index.md`);

let glossary = null;
try {
  glossary = JSON.parse(Read(`${workDir}/glossary.json`));
} catch (e) { /* proceed without */ }
```

### Step 2: Epic Decomposition via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Decompose requirements into executable Epics and Stories for implementation planning.
Success: 3-7 Epics with prioritized Stories, dependency map, and MVP subset clearly defined.

PRODUCT BRIEF (summary):
${productBrief.slice(0, 2000)}

REQUIREMENTS:
${requirements.slice(0, 5000)}

ARCHITECTURE (summary):
${architecture.slice(0, 3000)}

TASK:
- Group requirements into 3-7 logical Epics:
  - Each Epic: EPIC-NNN ID, title, description, priority (Must/Should/Could)
  - Group by functional domain or user journey stage
  - Tag MVP Epics (minimum set for initial release)

- For each Epic, generate 2-5 Stories:
  - Each Story: STORY-{EPIC}-NNN ID, title
  - User story format: As a [persona], I want [action] so that [benefit]
  - 2-4 acceptance criteria per story (testable)
  - Relative size estimate: S/M/L/XL
  - Trace to source requirement(s): REQ-NNN

- Create dependency map:
  - Cross-Epic dependencies (which Epics block others)
  - Mermaid graph LR format
  - Recommended execution order with rationale

- Define MVP:
  - Which Epics are in MVP
  - MVP definition of done (3-5 criteria)
  - What is explicitly deferred post-MVP

MODE: analysis
EXPECTED: Structured output with: Epic list (ID, title, priority, MVP flag), Stories per Epic (ID, user story, AC, size, trace), dependency Mermaid diagram, execution order, MVP definition
CONSTRAINTS:
- Every Must-have requirement must appear in at least one Story
- Stories must be small enough to implement independently (no XL stories in MVP)
- Dependencies should be minimized across Epics
${glossary ? `- Maintain terminology consistency with glossary: ${glossary.terms.map(t => t.term).join(', ')}` : ''}
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```

### Step 2.5: Codex Epics Review
|
||||
|
||||
After receiving Gemini decomposition results, validate epic/story quality via Codex CLI:
|
||||
|
||||
```javascript
|
||||
Bash({
|
||||
command: `ccw cli -p "PURPOSE: Critical review of epic/story decomposition - validate coverage, sizing, and dependency structure.
|
||||
Success: Actionable feedback on epic quality with specific issues identified.
|
||||
|
||||
GENERATED EPICS AND STORIES:
|
||||
${geminiEpicsOutput.slice(0, 5000)}
|
||||
|
||||
REQUIREMENTS (Must-Have):
|
||||
${mustHaveRequirements.slice(0, 2000)}
|
||||
|
||||
TASK:
|
||||
- Verify Must-Have requirement coverage: every Must requirement appears in at least one Story
|
||||
- Check MVP story sizing: no XL stories in MVP epics (too large to implement independently)
|
||||
- Validate dependency graph: no circular dependencies between Epics
|
||||
- Assess acceptance criteria: every Story AC is specific and testable
|
||||
- Verify traceability: Stories trace back to specific REQ-NNN IDs
|
||||
- Check Epic granularity: 3-7 epics (not too few/many), 2-5 stories each
|
||||
- Rate overall decomposition quality: 1-5 with justification
|
||||
|
||||
MODE: analysis
|
||||
EXPECTED: Epic review with: coverage gaps, oversized stories, dependency issues, traceability gaps, quality rating
|
||||
CONSTRAINTS: Focus on issues that would block execution planning. Be specific about which Story/Epic has problems.
|
||||
" --tool codex --mode analysis`,
|
||||
run_in_background: true
|
||||
});
|
||||
|
||||
// Wait for Codex review result
|
||||
// Integrate feedback into epics before writing files:
|
||||
// - Add missing Stories for uncovered Must requirements
|
||||
// - Split XL stories in MVP epics into smaller units
|
||||
// - Fix dependency cycles identified by Codex
|
||||
// - Improve vague acceptance criteria
|
||||
```
|
||||
|
||||
### Step 3: Interactive Validation (Optional)

```javascript
if (!autoMode) {
  // Present Epic overview table and dependency diagram
  AskUserQuestion({
    questions: [
      {
        question: "Review the Epic breakdown. Any adjustments needed?",
        header: "Epics",
        multiSelect: false,
        options: [
          { label: "Looks good", description: "Epic structure is appropriate" },
          { label: "Merge epics", description: "Some epics should be combined" },
          { label: "Split epic", description: "An epic is too large, needs splitting" },
          { label: "Adjust MVP", description: "Change which epics are in MVP" }
        ]
      }
    ]
  });
  // Apply user adjustments
}
```
### Step 4: Generate epics/ directory

```javascript
const template = Read('templates/epics-template.md');

// Create epics directory
Bash(`mkdir -p "${workDir}/epics"`);

const status = autoMode ? 'complete' : 'draft';
const timestamp = new Date().toISOString();

// Parse CLI output into structured Epics
const epicsList = parseEpics(cliOutput); // [{id, slug, title, priority, mvp, size, stories[], reqs[], adrs[], deps[]}]

// Step 4a: Write individual EPIC-*.md files (one per Epic, stories included)
epicsList.forEach(epic => {
  // Use EPIC-NNN-{slug}.md template from templates/epics-template.md
  // Fill: id, title, priority, mvp, size, description, requirements links,
  // architecture links, dependency links, stories with user stories + AC
  Write(`${workDir}/epics/EPIC-${epic.id}-${epic.slug}.md`, epicContent);
});

// Step 4b: Write _index.md (overview + dependency map + MVP scope + traceability)
// Use _index.md template from templates/epics-template.md
// Fill: epic overview table (with links), dependency Mermaid diagram,
// execution order, MVP scope, traceability matrix, estimation summary
Write(`${workDir}/epics/_index.md`, indexContent);

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 5,
  name: "epics-stories",
  output_dir: "epics/",
  output_index: "epics/_index.md",
  file_count: epicsList.length + 1,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```
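
The `epicContent` passed to `Write` above is assumed to be assembled from each parsed Epic object. The helper below is a hypothetical minimal sketch of that assembly (the name `buildEpicContent` and the story field names are illustrative, mirroring the `parseEpics` shape shown in the comment):

```javascript
// Hypothetical sketch: assemble an EPIC-*.md body with YAML frontmatter from a
// parsed Epic object. Field names mirror the parseEpics output shape above.
function buildEpicContent(epic, status, timestamp) {
  const frontmatter = [
    '---',
    `id: EPIC-${epic.id}`,
    `title: ${epic.title}`,
    `priority: ${epic.priority}`,
    `mvp: ${epic.mvp}`,
    `size: ${epic.size}`,
    `status: ${status}`,
    `generated_at: ${timestamp}`,
    '---'
  ].join('\n');
  // One "### STORY-..." section per story, user-story text as the body
  const stories = epic.stories
    .map(s => `### ${s.id}: ${s.title}\n\n${s.userStory}`)
    .join('\n\n');
  return `${frontmatter}\n\n# EPIC-${epic.id}: ${epic.title}\n\n## Stories\n\n${stories}\n`;
}
```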

## Output

- **Directory**: `epics/`
  - `_index.md` — Overview table, dependency map, MVP scope, traceability matrix, links
  - `EPIC-NNN-{slug}.md` — Individual Epic with Stories (per Epic)
- **Format**: Markdown with YAML frontmatter, cross-linked to requirements and architecture via relative paths

## Quality Checklist

- [ ] 3-7 Epic files with EPIC-NNN IDs
- [ ] >= 1 Epic tagged as MVP in frontmatter
- [ ] 2-5 Stories per Epic file
- [ ] Stories use "As a...I want...So that..." format
- [ ] `_index.md` has cross-Epic dependency map (Mermaid)
- [ ] `_index.md` links to all individual Epic files
- [ ] Relative sizing (S/M/L/XL) per Story
- [ ] Epic files link to requirement files and ADR files
- [ ] All files have valid YAML frontmatter

## Next Phase

Proceed to [Phase 6: Readiness Check](06-readiness-check.md) to validate the complete specification package.

---
## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 5,
  "status": "complete",
  "files_created": ["epics/_index.md", "epics/EPIC-001-*.md", "..."],
  "file_count": 0,
  "codex_review_integrated": true,
  "mvp_epic_count": 0,
  "total_story_count": 0,
  "quality_notes": ["list of quality concerns or Codex feedback items addressed"],
  "key_decisions": ["MVP scope decisions", "dependency resolution choices"]
}
```

The orchestrator will:

1. Validate that `epics/` directory exists with `_index.md` and EPIC files
2. Read `spec-config.json` to confirm `phasesCompleted` was updated
3. Store the summary for downstream phase context
172
.codex/skills/spec-generator/phases/06-5-auto-fix.md
Normal file
@@ -0,0 +1,172 @@

# Phase 6.5: Auto-Fix

> **Execution Mode: Agent Delegated**
> This phase is executed by a `doc-generator` agent when triggered by the orchestrator after Phase 6 identifies issues. The agent reads this file for instructions, applies fixes to affected documents, and returns a JSON summary.

Automatically repair specification issues identified in the Phase 6 Readiness Check.

## Objective

- Parse readiness-report.md to extract Error and Warning items
- Group issues by originating Phase (2-5)
- Re-generate affected sections with error context injected into CLI prompts
- Re-run Phase 6 validation after fixes

## Input

- Dependency: `{workDir}/readiness-report.md` (Phase 6 output)
- Config: `{workDir}/spec-config.json` (with iteration_count)
- All Phase 2-5 outputs

## Execution Steps

### Step 1: Parse Readiness Report

```javascript
const readinessReport = Read(`${workDir}/readiness-report.md`);
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));

// Load glossary for terminology consistency during fixes
let glossary = null;
try {
  glossary = JSON.parse(Read(`${workDir}/glossary.json`));
} catch (e) { /* proceed without */ }

// Extract issues from readiness report
// Parse Error and Warning severity items
// Group by originating phase:
//   Phase 2 issues: vision, problem statement, scope, personas
//   Phase 3 issues: requirements, acceptance criteria, priority, traceability
//   Phase 4 issues: architecture, ADRs, tech stack, data model, state machine
//   Phase 5 issues: epics, stories, dependencies, MVP scope

const issuesByPhase = {
  2: [], // product brief issues
  3: [], // requirements issues
  4: [], // architecture issues
  5: []  // epics issues
};

// Parse structured issues from report
// Each issue: { severity: "Error"|"Warning", description: "...", location: "file:section" }

// Map phase numbers to output files
const phaseOutputFile = {
  2: 'product-brief.md',
  3: 'requirements/_index.md',
  4: 'architecture/_index.md',
  5: 'epics/_index.md'
};
```
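
The issue-extraction step above is left as comments; a minimal sketch follows, assuming issue lines in the report look like `- [Error] description (at file:section)`. The actual report format is produced by Phase 6 and may differ, so the pattern is an assumption:

```javascript
// Hypothetical sketch: extract structured issues from readiness-report.md text,
// assuming lines of the form "- [Error] description (at file:section)".
function parseIssues(report) {
  const issues = [];
  const pattern = /^- \[(Error|Warning)\] (.+?) \(at (.+?)\)$/gm;
  let match;
  while ((match = pattern.exec(report)) !== null) {
    issues.push({ severity: match[1], description: match[2], location: match[3] });
  }
  return issues;
}
```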

### Step 2: Fix Affected Phases (Sequential)

For each phase with issues (in order 2 -> 3 -> 4 -> 5):

```javascript
for (const [phase, issues] of Object.entries(issuesByPhase)) {
  if (issues.length === 0) continue;

  const errorContext = issues.map(i => `[${i.severity}] ${i.description} (at ${i.location})`).join('\n');

  // Read current phase output
  const currentOutput = Read(`${workDir}/${phaseOutputFile[phase]}`);

  Bash({
    command: `ccw cli -p "PURPOSE: Fix specification issues identified in readiness check for Phase ${phase}.
Success: All listed issues resolved while maintaining consistency with other documents.

CURRENT DOCUMENT:
${currentOutput.slice(0, 5000)}

ISSUES TO FIX:
${errorContext}

${glossary ? `GLOSSARY (maintain consistency):
${JSON.stringify(glossary.terms, null, 2)}` : ''}

TASK:
- Address each listed issue specifically
- Maintain all existing content that is not flagged
- Ensure terminology consistency with glossary
- Preserve YAML frontmatter and cross-references
- Use RFC 2119 keywords for behavioral requirements
- Increment document version number

MODE: analysis
EXPECTED: Corrected document content addressing all listed issues
CONSTRAINTS: Minimal changes - only fix flagged issues, do not restructure unflagged sections
" --tool gemini --mode analysis`,
    run_in_background: true
  });

  // Wait for result, apply fixes to document
  // Update document version in frontmatter
}
```

### Step 3: Update State

```javascript
specConfig.phasesCompleted.push({
  phase: 6.5,
  name: "auto-fix",
  iteration: specConfig.iteration_count,
  phases_fixed: Object.keys(issuesByPhase).filter(p => issuesByPhase[p].length > 0),
  completed_at: new Date().toISOString()
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

### Step 4: Re-run Phase 6 Validation

```javascript
// Re-execute Phase 6: Readiness Check
// This creates a new readiness-report.md
// If still Fail and iteration_count < 2: loop back to Step 1
// If Pass or iteration_count >= 2: proceed to handoff
```
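
The loop-or-handoff rule in the comments above can be pinned down as a small helper. This is a sketch under the stated rule only (the name `shouldIterate` is hypothetical; the actual control flow lives with the orchestrator):

```javascript
// Sketch of the iteration rule above: fix again only when the gate is still
// Fail and fewer than 2 iterations have run; otherwise proceed to handoff.
function shouldIterate(gate, iterationCount, maxIterations = 2) {
  if (gate !== 'Fail') return 'handoff';
  return iterationCount < maxIterations ? 'fix' : 'handoff';
}
```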

## Output

- **Updated**: Phase 2-5 documents (only affected ones)
- **Updated**: `spec-config.json` (iteration tracking)
- **Triggers**: Phase 6 re-validation

## Quality Checklist

- [ ] All Error-severity issues addressed
- [ ] Warning-severity issues attempted (best effort)
- [ ] Document versions incremented for modified files
- [ ] Terminology consistency maintained
- [ ] Cross-references still valid after fixes
- [ ] Iteration count not exceeded (max 2)

## Next Phase

Re-run [Phase 6: Readiness Check](06-readiness-check.md) to validate fixes.

---

## Agent Return Summary

When executed as a delegated agent, return the following JSON summary to the orchestrator:

```json
{
  "phase": 6.5,
  "status": "complete",
  "files_modified": ["list of files that were updated"],
  "issues_fixed": {
    "errors": 0,
    "warnings": 0
  },
  "quality_notes": ["list of fix decisions and remaining concerns"],
  "phases_touched": [2, 3, 4, 5]
}
```

The orchestrator will:

1. Validate that listed files were actually modified (check version increment)
2. Update `spec-config.json` iteration tracking
3. Re-trigger Phase 6 validation
581
.codex/skills/spec-generator/phases/06-readiness-check.md
Normal file
@@ -0,0 +1,581 @@

# Phase 6: Readiness Check

Validate the complete specification package, generate a quality report and an executive summary, and present execution handoff options.

## Objective

- Cross-document validation: completeness, consistency, traceability, depth
- Generate quality scores per dimension
- Produce readiness-report.md with issue list and traceability matrix
- Produce spec-summary.md as a one-page executive summary
- Update all document frontmatter to `status: complete`
- Present handoff options to execution workflows

## Input

- All Phase 2-5 outputs: `product-brief.md`, `requirements/_index.md` (+ `REQ-*.md`, `NFR-*.md`), `architecture/_index.md` (+ `ADR-*.md`), `epics/_index.md` (+ `EPIC-*.md`)
- Config: `{workDir}/spec-config.json`
- Reference: `specs/quality-gates.md`

## Execution Steps

### Step 1: Load All Documents

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const productBrief = Read(`${workDir}/product-brief.md`);
const requirementsIndex = Read(`${workDir}/requirements/_index.md`);
const architectureIndex = Read(`${workDir}/architecture/_index.md`);
const epicsIndex = Read(`${workDir}/epics/_index.md`);
const qualityGates = Read('specs/quality-gates.md');

// Load individual files for deep validation
const reqFiles = Glob(`${workDir}/requirements/REQ-*.md`);
const nfrFiles = Glob(`${workDir}/requirements/NFR-*.md`);
const adrFiles = Glob(`${workDir}/architecture/ADR-*.md`);
const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
```

### Step 2: Cross-Document Validation via Gemini CLI

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Validate specification package for completeness, consistency, traceability, and depth.
Success: Comprehensive quality report with scores, issues, and traceability matrix.

DOCUMENTS TO VALIDATE:

=== PRODUCT BRIEF ===
${productBrief.slice(0, 3000)}

=== REQUIREMENTS INDEX (${reqFiles.length} REQ + ${nfrFiles.length} NFR files) ===
${requirementsIndex.slice(0, 3000)}

=== ARCHITECTURE INDEX (${adrFiles.length} ADR files) ===
${architectureIndex.slice(0, 2500)}

=== EPICS INDEX (${epicFiles.length} EPIC files) ===
${epicsIndex.slice(0, 2500)}

QUALITY CRITERIA (from quality-gates.md):
${qualityGates.slice(0, 2000)}

TASK:
Perform 4-dimension validation:

1. COMPLETENESS (25%):
   - All required sections present in each document?
   - All template fields filled with substantive content?
   - Score 0-100 with specific gaps listed

2. CONSISTENCY (25%):
   - Terminology uniform across documents?
   - Terminology glossary compliance: all core terms used consistently per glossary.json definitions?
   - No synonym drift (e.g., "user" vs "client" vs "consumer" for same concept)?
   - User personas consistent?
   - Scope consistent (PRD does not exceed brief)?
   - Scope containment: PRD requirements do not exceed product brief's defined scope?
   - Non-Goals respected: no requirement or story contradicts explicit Non-Goals?
   - Tech stack references match between architecture and epics?
   - Score 0-100 with inconsistencies listed

3. TRACEABILITY (25%):
   - Every goal has >= 1 requirement?
   - Every Must requirement has architecture coverage?
   - Every Must requirement appears in >= 1 story?
   - ADR choices reflected in epics?
   - Build traceability matrix: Goal -> Requirement -> Architecture -> Epic/Story
   - Score 0-100 with orphan items listed

4. DEPTH (25%):
   - Acceptance criteria specific and testable?
   - Architecture decisions justified with alternatives?
   - Stories estimable by dev team?
   - Score 0-100 with vague areas listed

ALSO:
- List all issues found, classified as Error/Warning/Info
- Generate overall weighted score
- Determine gate: Pass (>=80) / Review (60-79) / Fail (<60)

MODE: analysis
EXPECTED: JSON-compatible output with: dimension scores, overall score, gate, issues list (severity + description + location), traceability matrix
CONSTRAINTS: Be thorough but fair. Focus on actionable issues.
" --tool gemini --mode analysis`,
  run_in_background: true
});

// Wait for CLI result
```
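
The weighting and gate thresholds described in the ALSO block reduce to a simple rule. A sketch of that arithmetic (the helper name is illustrative; the scoring itself is performed by the CLI):

```javascript
// Sketch of the scoring rule above: four dimensions weighted 25% each,
// with gate thresholds Pass >= 80, Review 60-79, Fail < 60.
function overallGate(scores) {
  const { completeness, consistency, traceability, depth } = scores;
  const overall = 0.25 * (completeness + consistency + traceability + depth);
  const gate = overall >= 80 ? 'Pass' : overall >= 60 ? 'Review' : 'Fail';
  return { overall, gate };
}
```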

### Step 2b: Codex Technical Depth Review

Launch the Codex review in parallel with the Gemini validation for a deeper technical assessment:

```javascript
Bash({
  command: `ccw cli -p "PURPOSE: Deep technical quality review of specification package - assess architectural rigor and implementation readiness.
Success: Technical quality assessment with specific actionable feedback on ADR quality, data model, security, and observability.

ARCHITECTURE INDEX:
${architectureIndex.slice(0, 3000)}

ADR FILES (summaries):
${adrFiles.map(f => Read(f).slice(0, 500)).join('\n---\n')}

REQUIREMENTS INDEX:
${requirementsIndex.slice(0, 2000)}

TASK:
- ADR Alternative Quality: Each ADR has >= 2 genuine alternatives with substantive pros/cons (not strawman options)
- Data Model Completeness: All entities referenced in requirements have field-level definitions with types and constraints
- Security Coverage: Authentication, authorization, data protection, and input validation addressed for all external interfaces
- Observability Specification: Metrics, logging, and health checks defined for service/platform types
- Error Handling: Error classification and recovery strategies defined per component
- Configuration Model: All configurable parameters documented with types, defaults, and constraints
- Rate each dimension 1-5 with specific gaps identified

MODE: analysis
EXPECTED: Technical depth review with: per-dimension scores (1-5), specific gaps, improvement recommendations, overall technical readiness assessment
CONSTRAINTS: Focus on gaps that would cause implementation ambiguity. Ignore cosmetic issues.
" --tool codex --mode analysis`,
  run_in_background: true
});

// Codex result merged with Gemini result in Step 3
```

### Step 2c: Per-Requirement Verification

Iterate through all individual requirement files for fine-grained verification:

```javascript
// Load all requirement files
const reqFiles = Glob(`${workDir}/requirements/REQ-*.md`);
const nfrFiles = Glob(`${workDir}/requirements/NFR-*.md`);
const allReqFiles = [...reqFiles, ...nfrFiles];

// Load reference documents for cross-checking
const productBrief = Read(`${workDir}/product-brief.md`);
const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
const adrFiles = Glob(`${workDir}/architecture/ADR-*.md`);

// Read all epic content for coverage check
const epicContents = epicFiles.map(f => ({ path: f, content: Read(f) }));
const adrContents = adrFiles.map(f => ({ path: f, content: Read(f) }));

// Per-requirement verification
const verificationResults = allReqFiles.map(reqFile => {
  const content = Read(reqFile);
  const reqId = extractReqId(content); // e.g., REQ-001 or NFR-PERF-001
  const priority = extractPriority(content); // Must/Should/Could/Won't

  // Check 1: AC exists and is testable
  const hasAC = content.includes('- [ ]') || content.includes('Acceptance Criteria');
  const acTestable = !content.match(/should be (fast|good|reliable|secure)/i); // No vague AC

  // Check 2: Traces back to Brief goal
  const tracesLinks = content.match(/product-brief\.md/);

  // Check 3: Must requirements have Story coverage (search EPIC files)
  const storyCoverage = priority !== 'Must' ? 'N/A' :
    epicContents.some(e => e.content.includes(reqId)) ? 'Covered' : 'MISSING';

  // Check 4: Must requirements have architecture coverage (search ADR files)
  const archCoverage = priority !== 'Must' ? 'N/A' :
    adrContents.some(a => a.content.includes(reqId)) ||
    Read(`${workDir}/architecture/_index.md`).includes(reqId) ? 'Covered' : 'MISSING';

  return {
    req_id: reqId,
    priority,
    ac_exists: hasAC ? 'Yes' : 'MISSING',
    ac_testable: acTestable ? 'Yes' : 'VAGUE',
    brief_trace: tracesLinks ? 'Yes' : 'MISSING',
    story_coverage: storyCoverage,
    arch_coverage: archCoverage,
    pass: hasAC && acTestable && tracesLinks &&
      (priority !== 'Must' || (storyCoverage === 'Covered' && archCoverage === 'Covered'))
  };
});

// Generate Per-Requirement Verification table for readiness-report.md
const verificationTable = `
## Per-Requirement Verification

| Req ID | Priority | AC Exists | AC Testable | Brief Trace | Story Coverage | Arch Coverage | Status |
|--------|----------|-----------|-------------|-------------|----------------|---------------|--------|
${verificationResults.map(r =>
  `| ${r.req_id} | ${r.priority} | ${r.ac_exists} | ${r.ac_testable} | ${r.brief_trace} | ${r.story_coverage} | ${r.arch_coverage} | ${r.pass ? 'PASS' : 'FAIL'} |`
).join('\n')}

**Summary**: ${verificationResults.filter(r => r.pass).length}/${verificationResults.length} requirements pass all checks.
`;
```
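
`extractReqId` and `extractPriority` are assumed helpers in the loop above. A minimal sketch, under the assumption that each requirement file carries `id:` and `priority:` keys in its YAML frontmatter (real files may warrant a full YAML parser):

```javascript
// Hypothetical sketch: pull the requirement ID and priority from frontmatter
// lines like "id: REQ-001" / "id: NFR-PERF-001" and "priority: Must".
function extractReqId(content) {
  const m = content.match(/^id:\s*([A-Z]+(?:-[A-Z0-9]+)+)/m);
  return m ? m[1] : null;
}

function extractPriority(content) {
  const m = content.match(/^priority:\s*(Must|Should|Could|Won't)/m);
  return m ? m[1] : null;
}
```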

### Step 3: Generate readiness-report.md

```javascript
const frontmatterReport = `---
session_id: ${specConfig.session_id}
phase: 6
document_type: readiness-report
status: complete
generated_at: ${new Date().toISOString()}
stepsCompleted: ["load-all", "cross-validation", "codex-technical-review", "per-req-verification", "scoring", "report-generation"]
version: 1
dependencies:
  - product-brief.md
  - requirements/_index.md
  - architecture/_index.md
  - epics/_index.md
---`;

// Report content from CLI validation output:
// - Quality Score Summary (4 dimensions + overall)
// - Gate Decision (Pass/Review/Fail)
// - Issue List (grouped by severity: Error, Warning, Info)
// - Traceability Matrix (Goal -> Req -> Arch -> Epic/Story)
// - Codex Technical Depth Review (per-dimension scores from Step 2b)
// - Per-Requirement Verification Table (from Step 2c)
// - Recommendations for improvement

Write(`${workDir}/readiness-report.md`, `${frontmatterReport}\n\n${reportContent}`);
```

### Step 4: Generate spec-summary.md

```javascript
const frontmatterSummary = `---
session_id: ${specConfig.session_id}
phase: 6
document_type: spec-summary
status: complete
generated_at: ${new Date().toISOString()}
stepsCompleted: ["synthesis"]
version: 1
dependencies:
  - product-brief.md
  - requirements/_index.md
  - architecture/_index.md
  - epics/_index.md
  - readiness-report.md
---`;

// One-page executive summary:
// - Product Name & Vision (from product-brief.md)
// - Problem & Target Users (from product-brief.md)
// - Key Requirements count (Must/Should/Could from requirements/_index.md)
// - Architecture Style & Tech Stack (from architecture/_index.md)
// - Epic Overview (count, MVP scope from epics/_index.md)
// - Quality Score (from readiness-report.md)
// - Recommended Next Step
// - File manifest with links

Write(`${workDir}/spec-summary.md`, `${frontmatterSummary}\n\n${summaryContent}`);
```

### Step 5: Update All Document Status

```javascript
// Update frontmatter status to 'complete' in all documents (directories + single files)
// product-brief.md is a single file
const singleFiles = ['product-brief.md'];
singleFiles.forEach(doc => {
  const content = Read(`${workDir}/${doc}`);
  Write(`${workDir}/${doc}`, content.replace(/status: draft/, 'status: complete'));
});

// Update all files in directories (index + individual files)
const dirFiles = [
  ...Glob(`${workDir}/requirements/*.md`),
  ...Glob(`${workDir}/architecture/*.md`),
  ...Glob(`${workDir}/epics/*.md`)
];
dirFiles.forEach(filePath => {
  const content = Read(filePath);
  if (content.includes('status: draft')) {
    Write(filePath, content.replace(/status: draft/, 'status: complete'));
  }
});

// Update spec-config.json
specConfig.phasesCompleted.push({
  phase: 6,
  name: "readiness-check",
  output_file: "readiness-report.md",
  completed_at: new Date().toISOString()
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```
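
Note that `content.replace(/status: draft/, …)` rewrites the first match anywhere in the file, which could hit a literal `status: draft` inside a document body. A safer variant, sketched under the assumption that frontmatter is the leading `---`-delimited block, scopes the substitution to that block:

```javascript
// Sketch: restrict the status flip to the leading YAML frontmatter block so a
// literal "status: draft" elsewhere in the document body is left untouched.
function markComplete(content) {
  const frontmatterBlock = /^(---\n[\s\S]*?\n---)/;
  return content.replace(frontmatterBlock, block =>
    block.replace(/status: draft/, 'status: complete'));
}
```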

### Step 6: Handoff Options

```javascript
AskUserQuestion({
  questions: [
    {
      question: "Specification package is complete. What would you like to do next?",
      header: "Next Step",
      multiSelect: false,
      options: [
        {
          label: "Execute via lite-plan",
          description: "Start implementing with /workflow-lite-plan, one Epic at a time"
        },
        {
          label: "Create roadmap",
          description: "Generate execution roadmap with /workflow:req-plan-with-file"
        },
        {
          label: "Full planning",
          description: "Detailed planning with /workflow-plan for the full scope"
        },
        {
          label: "Export Issues (Phase 7)",
          description: "Create issues per Epic with spec links and wave assignment"
        },
        {
          label: "Iterate & improve",
          description: "Re-run failed phases based on readiness report issues (max 2 iterations)"
        }
      ]
    }
  ]
});

// Based on user selection, execute the corresponding handoff:

if (selection === "Execute via lite-plan") {
  // lite-plan accepts a text description directly
  // Read the first MVP Epic from individual EPIC-*.md files
  const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
  const firstMvpFile = epicFiles.find(f => {
    const content = Read(f);
    return content.includes('mvp: true');
  });
  const epicContent = Read(firstMvpFile);
  const title = extractTitle(epicContent); // First # heading
  const description = extractSection(epicContent, "Description");
  Skill(skill="workflow-lite-plan", args=`"${title}: ${description}"`)
}

if (selection === "Full planning" || selection === "Create roadmap") {
  // === Bridge: Build brainstorm_artifacts compatible structure ===
  // Reads from directory-based outputs (individual files), maps to .brainstorming/ format
  // for context-search-agent auto-discovery → action-planning-agent consumption.

  // Step A: Read spec documents from directories
  const specSummary = Read(`${workDir}/spec-summary.md`);
  const productBrief = Read(`${workDir}/product-brief.md`);
  const requirementsIndex = Read(`${workDir}/requirements/_index.md`);
  const architectureIndex = Read(`${workDir}/architecture/_index.md`);
  const epicsIndex = Read(`${workDir}/epics/_index.md`);

  // Read individual EPIC files (already split — direct mapping to feature-specs)
  const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);

  // Step B: Build structured description from spec-summary
  const structuredDesc = `GOAL: ${extractGoal(specSummary)}
SCOPE: ${extractScope(specSummary)}
CONTEXT: Generated from spec session ${specConfig.session_id}. Source: ${workDir}/`;

  // Step C: Create WFS session (provides session directory + .brainstorming/)
  Skill(skill="workflow:session:start", args=`--auto "${structuredDesc}"`)
  // → Produces sessionId (WFS-xxx) and session directory at .workflow/active/{sessionId}/

  // Step D: Create .brainstorming/ bridge files
  const brainstormDir = `.workflow/active/${sessionId}/.brainstorming`;
  Bash(`mkdir -p "${brainstormDir}/feature-specs"`);

  // D.1: guidance-specification.md (highest priority — action-planning-agent reads first)
  // Synthesized from spec-summary + product-brief + architecture/requirements indexes
  Write(`${brainstormDir}/guidance-specification.md`, `
# ${specConfig.seed_analysis.problem_statement} - Confirmed Guidance Specification

**Source**: spec-generator session ${specConfig.session_id}
**Generated**: ${new Date().toISOString()}
**Spec Directory**: ${workDir}

## 1. Project Positioning & Goals
${extractSection(productBrief, "Vision")}
${extractSection(productBrief, "Goals")}

## 2. Requirements Summary
${extractSection(requirementsIndex, "Functional Requirements")}

## 3. Architecture Decisions
${extractSection(architectureIndex, "Architecture Decision Records")}
${extractSection(architectureIndex, "Technology Stack")}

## 4. Implementation Scope
${extractSection(epicsIndex, "Epic Overview")}
${extractSection(epicsIndex, "MVP Scope")}

## Feature Decomposition
${extractSection(epicsIndex, "Traceability Matrix")}

## Appendix: Source Documents
| Document | Path | Description |
|----------|------|-------------|
| Product Brief | ${workDir}/product-brief.md | Vision, goals, scope |
| Requirements | ${workDir}/requirements/ | _index.md + REQ-*.md + NFR-*.md |
| Architecture | ${workDir}/architecture/ | _index.md + ADR-*.md |
| Epics | ${workDir}/epics/ | _index.md + EPIC-*.md |
| Readiness Report | ${workDir}/readiness-report.md | Quality validation |
`);

  // D.2: feature-index.json (each EPIC file mapped to a Feature)
  // Path: feature-specs/feature-index.json (matches context-search-agent discovery)
  // Directly read from individual EPIC-*.md files (no monolithic parsing needed)
  const features = epicFiles.map(epicFile => {
    const content = Read(epicFile);
    const fm = parseFrontmatter(content); // Extract YAML frontmatter
    const basename = path.basename(epicFile, '.md'); // EPIC-001-slug
    const epicNum = fm.id.replace('EPIC-', ''); // 001
    const slug = basename.replace(/^EPIC-\d+-/, ''); // slug
    return {
      id: `F-${epicNum}`,
      slug: slug,
      name: extractTitle(content),
      description: extractSection(content, "Description"),
      priority: fm.mvp ? "High" : "Medium",
      spec_path: `${brainstormDir}/feature-specs/F-${epicNum}-${slug}.md`,
      source_epic: fm.id,
      source_file: epicFile
    };
  });
  Write(`${brainstormDir}/feature-specs/feature-index.json`, JSON.stringify({
    version: "1.0",
    source: "spec-generator",
    spec_session: specConfig.session_id,
    features,
    cross_cutting_specs: []
  }, null, 2));

  // D.3: Feature-spec files — directly adapt from individual EPIC-*.md files
  // Since Epics are already individual documents, transform format directly
  // Filename pattern: F-{num}-{slug}.md (matches context-search-agent glob F-*-*.md)
  features.forEach(feature => {
    const epicContent = Read(feature.source_file);
    Write(feature.spec_path, `
# Feature Spec: ${feature.source_epic} - ${feature.name}

**Source**: ${feature.source_file}
**Priority**: ${feature.priority === "High" ? "MVP" : "Post-MVP"}

## Description
${extractSection(epicContent, "Description")}

## Stories
${extractSection(epicContent, "Stories")}

## Requirements
${extractSection(epicContent, "Requirements")}

## Architecture
${extractSection(epicContent, "Architecture")}
`);
  });

  // Step E: Invoke downstream workflow
|
||||
// context-search-agent will auto-discover .brainstorming/ files
|
||||
// → context-package.json.brainstorm_artifacts populated
|
||||
// → action-planning-agent loads guidance_specification (P1) + feature_index (P2)
|
||||
if (selection === "Full planning") {
|
||||
Skill(skill="workflow-plan", args=`"${structuredDesc}"`)
|
||||
} else {
|
||||
Skill(skill="workflow:req-plan-with-file", args=`"${extractGoal(specSummary)}"`)
|
||||
}
|
||||
}
|
||||
|
||||
if (selection === "Export Issues (Phase 7)") {
|
||||
// Proceed to Phase 7: Issue Export
|
||||
// Read phases/07-issue-export.md and execute
|
||||
}
|
||||
|
||||
// If user selects "Other": Export only or return to specific phase
|
||||
|
||||
if (selection === "Iterate & improve") {
|
||||
// Check iteration count
|
||||
if (specConfig.iteration_count >= 2) {
|
||||
// Max iterations reached, force handoff
|
||||
// Present handoff options again without iterate
|
||||
return;
|
||||
}
|
||||
|
||||
// Update iteration tracking
|
||||
specConfig.iteration_count = (specConfig.iteration_count || 0) + 1;
|
||||
specConfig.iteration_history.push({
|
||||
iteration: specConfig.iteration_count,
|
||||
timestamp: new Date().toISOString(),
|
||||
readiness_score: overallScore,
|
||||
errors_found: errorCount,
|
||||
phases_to_fix: affectedPhases
|
||||
});
|
||||
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
|
||||
|
||||
// Proceed to Phase 6.5: Auto-Fix
|
||||
// Read phases/06-5-auto-fix.md and execute
|
||||
}
|
||||
```
|
||||
|
||||
#### Helper Functions Reference (pseudocode)

The following helper functions are used in the handoff bridge. They operate on markdown content from individual spec files:

```javascript
// Extract title from a markdown document (first # heading)
function extractTitle(markdown) {
  // Return the text after the first # heading (e.g., "# EPIC-001: Title" → "Title")
}

// Parse YAML frontmatter from markdown (between --- markers)
function parseFrontmatter(markdown) {
  // Return object with: id, priority, mvp, size, requirements, architecture, dependencies
}

// Extract GOAL/SCOPE from spec-summary frontmatter or ## sections
function extractGoal(specSummary) { /* Return the Vision/Goal line */ }
function extractScope(specSummary) { /* Return the Scope/MVP boundary */ }

// Extract a named ## section from a markdown document
function extractSection(markdown, sectionName) {
  // Return content between ## {sectionName} and next ## heading
}
```

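For orientation, the two most heavily used helpers, `extractSection` and `parseFrontmatter`, could be sketched as below. This is a minimal sketch assuming well-formed input (single-line `key: value` frontmatter and `##` section headings); a production implementation would likely need a real YAML parser:

```javascript
// Sketch: return the content between "## {sectionName}" and the next "## " heading (or EOF).
function extractSection(markdown, sectionName) {
  const lines = markdown.split('\n');
  const start = lines.findIndex(l => l.trim() === `## ${sectionName}`);
  if (start === -1) return '';
  let end = lines.length;
  for (let i = start + 1; i < lines.length; i++) {
    if (lines[i].startsWith('## ')) { end = i; break; }
  }
  return lines.slice(start + 1, end).join('\n').trim();
}

// Sketch: parse simple "key: value" YAML frontmatter between leading --- markers.
function parseFrontmatter(markdown) {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return {};
  const fm = {};
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':');
    if (idx === -1) continue;
    const key = line.slice(0, idx).trim();
    let value = line.slice(idx + 1).trim();
    if (value === 'true') value = true;
    else if (value === 'false') value = false;
    fm[key] = value;
  }
  return fm;
}
```

These handle only the flat scalar keys shown in the EPIC frontmatter examples; list-valued keys such as `dependencies` would need additional parsing.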
## Output

- **File**: `readiness-report.md` - Quality validation report
- **File**: `spec-summary.md` - One-page executive summary
- **Format**: Markdown with YAML frontmatter

## Quality Checklist

- [ ] All document directories validated (product-brief, requirements/, architecture/, epics/)
- [ ] All frontmatter parseable and valid (index + individual files)
- [ ] Cross-references checked (relative links between directories)
- [ ] Overall quality score calculated
- [ ] No unresolved Error-severity issues
- [ ] Traceability matrix generated
- [ ] spec-summary.md created
- [ ] All document statuses updated to 'complete' (all files in all directories)
- [ ] Handoff options presented

## Completion

This is the final phase. The specification package is ready for execution handoff.

### Output Files Manifest

| Path | Phase | Description |
|------|-------|-------------|
| `spec-config.json` | 1 | Session configuration and state |
| `discovery-context.json` | 1 | Codebase exploration (optional) |
| `product-brief.md` | 2 | Product brief with multi-perspective synthesis |
| `requirements/` | 3 | Directory: `_index.md` + `REQ-*.md` + `NFR-*.md` |
| `architecture/` | 4 | Directory: `_index.md` + `ADR-*.md` |
| `epics/` | 5 | Directory: `_index.md` + `EPIC-*.md` |
| `readiness-report.md` | 6 | Quality validation report |
| `spec-summary.md` | 6 | One-page executive summary |
329
.codex/skills/spec-generator/phases/07-issue-export.md
Normal file
@@ -0,0 +1,329 @@
# Phase 7: Issue Export

Map specification Epics to issues, create them via `ccw issue create`, and generate an export report with spec document links.

> **Execution Mode: Inline**
> This phase runs in the main orchestrator context (not delegated to an agent) for direct access to the `ccw issue create` CLI and interactive handoff options.

## Objective

- Read all EPIC-*.md files from Phase 5 output
- Assign waves: MVP epics → wave-1, non-MVP → wave-2
- Create one issue per Epic via `ccw issue create`
- Map Epic dependencies to issue dependencies
- Generate issue-export-report.md with mapping table and spec links
- Present handoff options for execution

## Input

- Dependency: `{workDir}/epics/_index.md` (and individual `EPIC-*.md` files)
- Reference: `{workDir}/readiness-report.md`, `{workDir}/spec-config.json`
- Reference: `{workDir}/product-brief.md`, `{workDir}/requirements/_index.md`, `{workDir}/architecture/_index.md`

## Execution Steps

### Step 1: Load Epic Files

```javascript
const specConfig = JSON.parse(Read(`${workDir}/spec-config.json`));
const epicFiles = Glob(`${workDir}/epics/EPIC-*.md`);
const epicsIndex = Read(`${workDir}/epics/_index.md`);

// Parse each Epic file
const epics = epicFiles.map(epicFile => {
  const content = Read(epicFile);
  const fm = parseFrontmatter(content);
  const title = extractTitle(content);
  const description = extractSection(content, "Description");
  const stories = extractSection(content, "Stories");
  const reqRefs = extractSection(content, "Requirements");
  const adrRefs = extractSection(content, "Architecture");
  const deps = fm.dependencies || [];

  return {
    file: epicFile,
    id: fm.id,            // e.g., EPIC-001
    title,
    description,
    stories,
    reqRefs,
    adrRefs,
    priority: fm.priority,
    mvp: fm.mvp || false,
    dependencies: deps,   // other EPIC IDs this depends on
    size: fm.size
  };
});
```

### Step 2: Wave Assignment

```javascript
const epicWaves = epics.map(epic => ({
  ...epic,
  wave: epic.mvp ? 1 : 2
}));

// Log wave assignment
const wave1 = epicWaves.filter(e => e.wave === 1);
const wave2 = epicWaves.filter(e => e.wave === 2);
// wave-1: MVP epics (must-have, core functionality)
// wave-2: Post-MVP epics (should-have, enhancements)
```

### Step 3: Issue Creation Loop

```javascript
const createdIssues = [];
const epicToIssue = {}; // EPIC-ID -> Issue ID mapping

for (const epic of epicWaves) {
  // Build issue JSON matching roadmap-with-file schema
  const issueData = {
    title: `[${specConfig.session_id}] ${epic.title}`,
    status: "pending",
    priority: epic.wave === 1 ? 2 : 3, // wave-1 = higher priority
    context: `## ${epic.title}

${epic.description}

## Stories
${epic.stories}

## Spec References
- Epic: ${epic.file}
- Requirements: ${epic.reqRefs}
- Architecture: ${epic.adrRefs}
- Product Brief: ${workDir}/product-brief.md
- Full Spec: ${workDir}/`,
    source: "text",
    tags: [
      "spec-generated",
      `spec:${specConfig.session_id}`,
      `wave-${epic.wave}`,
      epic.mvp ? "mvp" : "post-mvp",
      `epic:${epic.id}`
    ],
    extended_context: {
      notes: {
        session: specConfig.session_id,
        spec_dir: workDir,
        source_epic: epic.id,
        wave: epic.wave,
        depends_on_issues: [], // Filled in Step 4
        spec_documents: {
          product_brief: `${workDir}/product-brief.md`,
          requirements: `${workDir}/requirements/_index.md`,
          architecture: `${workDir}/architecture/_index.md`,
          epic: epic.file
        }
      }
    },
    lifecycle_requirements: {
      test_strategy: "acceptance",
      regression_scope: "affected",
      acceptance_type: "manual",
      commit_strategy: "per-epic"
    }
  };

  // Create issue via ccw issue create (pipe JSON to avoid shell escaping)
  const result = Bash(`echo '${JSON.stringify(issueData)}' | ccw issue create`);

  // Parse returned issue ID
  const issueId = JSON.parse(result).id; // e.g., ISS-20260308-001
  epicToIssue[epic.id] = issueId;

  createdIssues.push({
    epic_id: epic.id,
    epic_title: epic.title,
    issue_id: issueId,
    wave: epic.wave,
    priority: issueData.priority,
    mvp: epic.mvp
  });
}
```

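Note that piping the payload through `echo '...'` breaks if the serialized issue contains single quotes (likely, given free-form Epic descriptions). A more robust variant of the creation call, sketched with the same `Write`/`Bash` pseudo-tools and an illustrative temp path, writes the JSON to a file and feeds it to the CLI via stdin redirection:

```javascript
// Sketch: avoid shell-quoting pitfalls by writing the payload to a file first.
// The temp path is illustrative; any writable scratch location works.
const payloadPath = `${workDir}/.tmp-issue-payload.json`;
Write(payloadPath, JSON.stringify(issueData, null, 2));
const result = Bash(`ccw issue create < "${payloadPath}"`);
```

This keeps the JSON byte-for-byte intact regardless of quotes, backticks, or newlines in the Epic content.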
### Step 4: Epic Dependency → Issue Dependency Mapping

```javascript
// Map EPIC dependencies to Issue dependencies
for (const epic of epicWaves) {
  if (epic.dependencies.length === 0) continue;

  const issueId = epicToIssue[epic.id];
  const depIssueIds = epic.dependencies
    .map(depEpicId => epicToIssue[depEpicId])
    .filter(Boolean);

  if (depIssueIds.length > 0) {
    // Update issue's extended_context.notes.depends_on_issues
    // This is informational — actual dependency enforcement is in execution phase
    // Note: ccw issue create already created the issue; dependency info is in the context
  }
}
```

### Step 5: Generate issue-export-report.md

```javascript
const timestamp = new Date().toISOString();

const reportContent = `---
session_id: ${specConfig.session_id}
phase: 7
document_type: issue-export-report
status: complete
generated_at: ${timestamp}
stepsCompleted: ["load-epics", "wave-assignment", "issue-creation", "dependency-mapping", "report-generation"]
version: 1
dependencies:
  - epics/_index.md
  - readiness-report.md
---

# Issue Export Report

## Summary

- **Session**: ${specConfig.session_id}
- **Issues Created**: ${createdIssues.length}
- **Wave 1 (MVP)**: ${wave1.length} issues
- **Wave 2 (Post-MVP)**: ${wave2.length} issues
- **Export Date**: ${timestamp}

## Issue Mapping

| Epic ID | Epic Title | Issue ID | Wave | Priority | MVP |
|---------|-----------|----------|------|----------|-----|
${createdIssues.map(i =>
  `| ${i.epic_id} | ${i.epic_title} | ${i.issue_id} | ${i.wave} | ${i.priority} | ${i.mvp ? 'Yes' : 'No'} |`
).join('\n')}

## Spec Document Links

| Document | Path | Description |
|----------|------|-------------|
| Product Brief | ${workDir}/product-brief.md | Vision, goals, scope |
| Requirements | ${workDir}/requirements/_index.md | Functional + non-functional requirements |
| Architecture | ${workDir}/architecture/_index.md | Components, ADRs, tech stack |
| Epics | ${workDir}/epics/_index.md | Epic/Story breakdown |
| Readiness Report | ${workDir}/readiness-report.md | Quality validation |
| Spec Summary | ${workDir}/spec-summary.md | Executive summary |

## Dependency Map

| Issue ID | Depends On |
|----------|-----------|
${createdIssues.map(i => {
  const epic = epicWaves.find(e => e.id === i.epic_id);
  const deps = (epic.dependencies || []).map(d => epicToIssue[d]).filter(Boolean);
  return `| ${i.issue_id} | ${deps.length > 0 ? deps.join(', ') : 'None'} |`;
}).join('\n')}

## Next Steps

1. **team-planex**: Execute all issues via coordinated team workflow
2. **Wave 1 only**: Execute MVP issues first (${wave1.length} issues)
3. **View issues**: Browse created issues via \`ccw issue list --tag spec:${specConfig.session_id}\`
4. **Manual review**: Review individual issues before execution
`;

Write(`${workDir}/issue-export-report.md`, reportContent);
```

### Step 6: Update spec-config.json

```javascript
specConfig.issue_ids = createdIssues.map(i => i.issue_id);
specConfig.issues_created = createdIssues.length;
specConfig.phasesCompleted.push({
  phase: 7,
  name: "issue-export",
  output_file: "issue-export-report.md",
  issues_created: createdIssues.length,
  wave_1_count: wave1.length,
  wave_2_count: wave2.length,
  completed_at: timestamp
});
Write(`${workDir}/spec-config.json`, JSON.stringify(specConfig, null, 2));
```

### Step 7: Handoff Options

```javascript
AskUserQuestion({
  questions: [
    {
      question: `${createdIssues.length} issues created from ${epicWaves.length} Epics. What would you like to do next?`,
      header: "Next Step",
      multiSelect: false,
      options: [
        {
          label: "Execute via team-planex",
          description: `Execute all ${createdIssues.length} issues with coordinated team workflow`
        },
        {
          label: "Wave 1 only",
          description: `Execute ${wave1.length} MVP issues first`
        },
        {
          label: "View issues",
          description: "Browse created issues before deciding"
        },
        {
          label: "Done",
          description: "Export complete, handle manually"
        }
      ]
    }
  ]
});

// Based on user selection:
if (selection === "Execute via team-planex") {
  const issueIds = createdIssues.map(i => i.issue_id).join(',');
  Skill({ skill: "team-planex", args: `--issues ${issueIds}` });
}

if (selection === "Wave 1 only") {
  const wave1Ids = createdIssues.filter(i => i.wave === 1).map(i => i.issue_id).join(',');
  Skill({ skill: "team-planex", args: `--issues ${wave1Ids}` });
}

if (selection === "View issues") {
  Bash(`ccw issue list --tag spec:${specConfig.session_id}`);
}
```

## Output

- **File**: `issue-export-report.md` — Issue mapping table + spec links + next steps
- **Updated**: `.workflow/issues/issues.jsonl` — New issue entries appended
- **Updated**: `spec-config.json` — Phase 7 completion + issue IDs

## Quality Checklist

- [ ] All MVP Epics have corresponding issues created
- [ ] All non-MVP Epics have corresponding issues created
- [ ] Issue tags include `spec-generated` and `spec:{session_id}`
- [ ] Issue `extended_context.notes.spec_documents` paths are correct
- [ ] Wave assignment matches MVP status (MVP → wave-1, non-MVP → wave-2)
- [ ] Epic dependencies mapped to issue dependency references
- [ ] `issue-export-report.md` generated with mapping table
- [ ] `spec-config.json` updated with `issue_ids` and `issues_created`
- [ ] Handoff options presented

## Error Handling

| Error | Blocking? | Action |
|-------|-----------|--------|
| `ccw issue create` fails for one Epic | No | Log error, continue with remaining Epics, report partial creation |
| No EPIC files found | Yes | Error and return to Phase 5 |
| All issue creations fail | Yes | Error with CLI diagnostic, suggest manual creation |
| Dependency EPIC not found in mapping | No | Skip dependency link, log warning |

## Completion

Phase 7 is the final phase. The specification package has been fully converted to executable issues, ready for team-planex or manual execution.
295
.codex/skills/spec-generator/specs/document-standards.md
Normal file
@@ -0,0 +1,295 @@
# Document Standards

Defines format conventions, YAML frontmatter schema, naming rules, and content structure for all spec-generator outputs.

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| All Phases | Frontmatter format | YAML Frontmatter Schema |
| All Phases | File naming | Naming Conventions |
| Phase 2-5 | Document structure | Content Structure |
| Phase 6 | Validation reference | All sections |

---

## YAML Frontmatter Schema

Every generated document MUST begin with YAML frontmatter:

```yaml
---
session_id: SPEC-{slug}-{YYYY-MM-DD}
phase: {1-6}
document_type: {product-brief|requirements|architecture|epics|readiness-report|spec-summary|issue-export-report}
status: draft|review|complete
generated_at: {ISO8601 timestamp}
stepsCompleted: []
version: 1
dependencies:
  - {list of input documents used}
---
```

### Field Definitions

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `session_id` | string | Yes | Session identifier matching spec-config.json |
| `phase` | number | Yes | Phase number that generated this document (1-6) |
| `document_type` | string | Yes | One of: product-brief, requirements, architecture, epics, readiness-report, spec-summary, issue-export-report |
| `status` | enum | Yes | draft (initial), review (user reviewed), complete (finalized) |
| `generated_at` | string | Yes | ISO8601 timestamp of generation |
| `stepsCompleted` | array | Yes | List of step IDs completed during generation |
| `version` | number | Yes | Document version, incremented on re-generation |
| `dependencies` | array | No | List of input files this document depends on |

### Status Transitions

```
draft -> review -> complete
  |                   ^
  +-------------------+  (direct promotion in auto mode)
```

- **draft**: Initial generation, not yet user-reviewed
- **review**: User has reviewed and provided feedback
- **complete**: Finalized, ready for downstream consumption

In auto mode (`-y`), documents are promoted directly from `draft` to `complete`.

---

## Naming Conventions

### Session ID Format

```
SPEC-{slug}-{YYYY-MM-DD}
```

- **slug**: Lowercase, alphanumeric + Chinese characters, hyphens as separators, max 40 chars
- **date**: UTC+8 date in YYYY-MM-DD format

Examples:
- `SPEC-task-management-system-2026-02-11`
- `SPEC-user-auth-oauth-2026-02-11`

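The slug and date rules above can be sketched as a small derivation function (illustrative name; the `\p{Script=Han}` class keeps Chinese characters per the slug rule):

```javascript
// Sketch: derive a session ID per the naming rules above.
// Keeps lowercase latin letters, digits, and Chinese characters; other runs
// collapse to a single hyphen; slug capped at 40 chars; date taken in UTC+8.
function makeSessionId(seedTitle, now = new Date()) {
  const slug = seedTitle
    .toLowerCase()
    .replace(/[^a-z0-9\p{Script=Han}]+/gu, '-')
    .replace(/^-+|-+$/g, '')
    .slice(0, 40);
  // Shift the clock by +8h so toISOString() yields the UTC+8 wall date
  const utc8 = new Date(now.getTime() + 8 * 60 * 60 * 1000);
  const date = utc8.toISOString().slice(0, 10); // YYYY-MM-DD
  return `SPEC-${slug}-${date}`;
}
```
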
### Output Files

| File | Phase | Description |
|------|-------|-------------|
| `spec-config.json` | 1 | Session configuration and state |
| `discovery-context.json` | 1 | Codebase exploration results (optional) |
| `refined-requirements.json` | 1.5 | Confirmed requirements after discussion |
| `glossary.json` | 2 | Terminology glossary for cross-document consistency |
| `product-brief.md` | 2 | Product brief document |
| `requirements.md` | 3 | PRD document |
| `architecture.md` | 4 | Architecture decisions document |
| `epics.md` | 5 | Epic/Story breakdown document |
| `readiness-report.md` | 6 | Quality validation report |
| `spec-summary.md` | 6 | One-page executive summary |
| `issue-export-report.md` | 7 | Issue export report with Epic→Issue mapping |

### Output Directory

```
.workflow/.spec/{session-id}/
```

---

## Content Structure

### Heading Hierarchy

- `#` (H1): Document title only (one per document)
- `##` (H2): Major sections
- `###` (H3): Subsections
- `####` (H4): Detail items (use sparingly)

Maximum depth: 4 levels. Prefer flat structures.

### Section Ordering

Every document follows this general pattern:

1. **YAML Frontmatter** (mandatory)
2. **Title** (H1)
3. **Executive Summary** (2-3 sentences)
4. **Core Content Sections** (H2, document-specific)
5. **Open Questions / Risks** (if applicable)
6. **References / Traceability** (links to upstream/downstream docs)

### Formatting Rules

| Element | Format | Example |
|---------|--------|---------|
| Requirements | `REQ-{NNN}` prefix | REQ-001: User login |
| Acceptance criteria | Checkbox list | `- [ ] User can log in with email` |
| Architecture decisions | `ADR-{NNN}` prefix | ADR-001: Use PostgreSQL |
| Epics | `EPIC-{NNN}` prefix | EPIC-001: Authentication |
| Stories | `STORY-{EPIC}-{NNN}` prefix | STORY-001-001: Login form |
| Priority tags | MoSCoW labels | `[Must]`, `[Should]`, `[Could]`, `[Won't]` |
| Mermaid diagrams | Fenced code blocks | ```` ```mermaid ... ``` ```` |
| Code examples | Language-tagged blocks | ```` ```typescript ... ``` ```` |

### Cross-Reference Format

Use relative references between documents:

```markdown
See [Product Brief](product-brief.md#section-name) for details.
Derived from [REQ-001](requirements.md#req-001).
```

### Language

- Document body: Follow user's input language (Chinese or English)
- Technical identifiers: Always English (REQ-001, ADR-001, EPIC-001)
- YAML frontmatter keys: Always English

---

## spec-config.json Schema

```json
{
  "session_id": "string (required)",
  "seed_input": "string (required) - original user input",
  "input_type": "text|file (required)",
  "timestamp": "ISO8601 (required)",
  "mode": "interactive|auto (required)",
  "complexity": "simple|moderate|complex (required)",
  "depth": "light|standard|comprehensive (required)",
  "focus_areas": ["string array"],
  "seed_analysis": {
    "problem_statement": "string",
    "target_users": ["string array"],
    "domain": "string",
    "constraints": ["string array"],
    "dimensions": ["string array - 3-5 exploration dimensions"]
  },
  "has_codebase": "boolean",
  "spec_type": "service|api|library|platform (required) - type of specification",
  "iteration_count": "number (required, default 0) - number of auto-fix iterations completed",
  "iteration_history": [
    {
      "iteration": "number",
      "timestamp": "ISO8601",
      "readiness_score": "number (0-100)",
      "errors_found": "number",
      "phases_fixed": ["number array - phase numbers that were re-generated"]
    }
  ],
  "refined_requirements_file": "string (optional) - path to refined-requirements.json",
  "phasesCompleted": [
    {
      "phase": "number (1-6)",
      "name": "string (phase name)",
      "output_file": "string (primary output file)",
      "completed_at": "ISO8601"
    }
  ],
  "issue_ids": ["string array (optional) - IDs of issues created in Phase 7"],
  "issues_created": "number (optional, default 0) - count of issues created in Phase 7"
}
```

---

## refined-requirements.json Schema

```json
{
  "session_id": "string (required) - matches spec-config.json",
  "phase": "1.5",
  "generated_at": "ISO8601 (required)",
  "source": "interactive-discussion|auto-expansion (required)",
  "discussion_rounds": "number (required) - 0 for auto mode",
  "clarified_problem_statement": "string (required) - refined problem statement",
  "confirmed_target_users": [
    {
      "name": "string",
      "needs": ["string array"],
      "pain_points": ["string array"]
    }
  ],
  "confirmed_domain": "string",
  "confirmed_features": [
    {
      "name": "string",
      "description": "string",
      "acceptance_criteria": ["string array"],
      "edge_cases": ["string array"],
      "priority": "must|should|could|unset"
    }
  ],
  "non_functional_requirements": [
    {
      "type": "Performance|Security|Usability|Scalability|Reliability|...",
      "details": "string",
      "measurable_criteria": "string (optional)"
    }
  ],
  "boundary_conditions": {
    "in_scope": ["string array"],
    "out_of_scope": ["string array"],
    "constraints": ["string array"]
  },
  "integration_points": ["string array"],
  "key_assumptions": ["string array"],
  "discussion_log": [
    {
      "round": "number",
      "agent_prompt": "string",
      "user_response": "string",
      "timestamp": "ISO8601"
    }
  ]
}
```

---

## glossary.json Schema

```json
{
  "session_id": "string (required) - matches spec-config.json",
  "generated_at": "ISO8601 (required)",
  "version": "number (required, default 1) - incremented on updates",
  "terms": [
    {
      "term": "string (required) - the canonical term",
      "definition": "string (required) - concise definition",
      "aliases": ["string array - acceptable alternative names"],
      "first_defined_in": "string (required) - source document path",
      "category": "core|technical|business (required)"
    }
  ]
}
```

### Glossary Usage Rules

- Terms MUST be defined before first use in any document
- All documents MUST use the canonical term from the glossary; aliases are for reference only
- Glossary is generated in Phase 2 and injected into all subsequent phase prompts
- Phase 6 validates glossary compliance across all documents

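The Phase 6 compliance rule above can be sketched as a simple scan (a hypothetical helper; assumes the glossary and document contents are already loaded as plain strings, and that a naive substring match is acceptable):

```javascript
// Sketch: flag documents that use an alias where the canonical term is expected.
// `glossary` follows the glossary.json schema above; `docs` maps paths to content.
function checkGlossaryCompliance(glossary, docs) {
  const violations = [];
  for (const { term, aliases = [] } of glossary.terms) {
    for (const alias of aliases) {
      for (const [path, content] of Object.entries(docs)) {
        if (content.includes(alias)) {
          violations.push({ path, alias, expected: term });
        }
      }
    }
  }
  return violations;
}
```

A real checker would likely need word-boundary matching to avoid false positives on aliases that appear inside longer identifiers.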
---

## Validation Checklist

- [ ] Every document starts with valid YAML frontmatter
- [ ] `session_id` matches across all documents in a session
- [ ] `status` field reflects current document state
- [ ] All cross-references resolve to valid targets
- [ ] Heading hierarchy is correct (no skipped levels)
- [ ] Technical identifiers use correct prefixes
- [ ] Output files are in the correct directory
- [ ] `glossary.json` created with >= 5 terms
- [ ] `spec_type` field set in spec-config.json
- [ ] All documents use glossary terms consistently
- [ ] Non-Goals section present in product brief (if applicable)
29
.codex/skills/spec-generator/specs/glossary-template.json
Normal file
@@ -0,0 +1,29 @@
{
  "$schema": "glossary-v1",
  "description": "Template for terminology glossary used across spec-generator documents",
  "session_id": "",
  "generated_at": "",
  "version": 1,
  "terms": [
    {
      "term": "",
      "definition": "",
      "aliases": [],
      "first_defined_in": "product-brief.md",
      "category": "core"
    }
  ],
  "_usage_notes": {
    "category_values": {
      "core": "Domain-specific terms central to the product (e.g., 'Workspace', 'Session')",
      "technical": "Technical terms specific to the architecture (e.g., 'gRPC', 'event bus')",
      "business": "Business/process terms (e.g., 'Sprint', 'SLA', 'stakeholder')"
    },
    "rules": [
      "Terms MUST be defined before first use in any document",
      "All documents MUST use the canonical 'term' field consistently",
      "Aliases are for reference only - prefer canonical term in all documents",
      "Phase 6 validates glossary compliance across all documents"
    ]
  }
}
270
.codex/skills/spec-generator/specs/quality-gates.md
Normal file
@@ -0,0 +1,270 @@
# Quality Gates

Per-phase quality gate criteria and scoring dimensions for spec-generator outputs.

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| Phase 2-5 | Post-generation self-check | Per-Phase Gates |
| Phase 6 | Cross-document validation | Cross-Document Validation |
| Phase 6 | Final scoring | Scoring Dimensions |

---

## Quality Thresholds

| Gate | Score | Action |
|------|-------|--------|
| **Pass** | >= 80% | Continue to next phase |
| **Review** | 60-79% | Log warnings, continue with caveats |
| **Fail** | < 60% | Must address issues before continuing |

In auto mode (`-y`), Review-level issues are logged but do not block progress.

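The threshold table maps directly to a small classifier (a sketch; the function name is illustrative):

```javascript
// Sketch: classify an overall score (0-100) against the gate thresholds above.
function classifyGate(score) {
  if (score >= 80) return 'pass';   // continue to next phase
  if (score >= 60) return 'review'; // log warnings, continue with caveats
  return 'fail';                    // must address issues before continuing
}
```
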
---
|
||||
|
||||
## Scoring Dimensions
|
||||
|
||||
### 1. Completeness (25%)
|
||||
|
||||
All required sections present with substantive content.
|
||||
|
||||
| Score | Criteria |
|
||||
|-------|----------|
|
||||
| 100% | All template sections filled with detailed content |
|
||||
| 75% | All sections present, some lack detail |
|
||||
| 50% | Major sections present but minor sections missing |
|
||||
| 25% | Multiple major sections missing or empty |
|
||||
| 0% | Document is a skeleton only |
|
||||
|
||||
### 2. Consistency (25%)
|
||||
|
||||
Terminology, formatting, and references are uniform across documents.
|
||||
|
||||
| Score | Criteria |
|
||||
|-------|----------|
|
||||
| 100% | All terms consistent, all references valid, formatting uniform |
|
||||
| 75% | Minor terminology variations, all references valid |
|
||||
| 50% | Some inconsistent terms, 1-2 broken references |
|
||||
| 25% | Frequent inconsistencies, multiple broken references |
|
||||
| 0% | Documents contradict each other |
|
||||
|
||||
### 3. Traceability (25%)
|
||||
|
||||
Requirements, architecture decisions, and stories trace back to goals.
|
||||
|
||||
| Score | Criteria |
|
||||
|-------|----------|
|
||||
| 100% | Every story traces to a requirement, every requirement traces to a goal |
|
||||
| 75% | Most items traceable, few orphans |
|
||||
| 50% | Partial traceability, some disconnected items |
|
||||
| 25% | Weak traceability, many orphan items |
|
||||
| 0% | No traceability between documents |
|
||||
|
||||
### 4. Depth (25%)
|
||||
|
||||
Content provides sufficient detail for execution teams.
|
||||
|
||||
| Score | Criteria |
|
||||
|-------|----------|
|
||||
| 100% | Acceptance criteria specific and testable, architecture decisions justified, stories estimable |
|
||||
| 75% | Most items detailed enough, few vague areas |
|
||||
| 50% | Mix of detailed and vague content |
|
||||
| 25% | Mostly high-level, lacking actionable detail |
|
||||
| 0% | Too abstract for execution |
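Since each dimension carries equal weight, the Phase 6 overall score reduces to a weighted average; a sketch:

```python
# Each of the four dimensions contributes 25% to the overall score.
WEIGHTS = {"completeness": 0.25, "consistency": 0.25,
           "traceability": 0.25, "depth": 0.25}

def overall_score(dimension_scores: dict) -> float:
    """Weighted average of per-dimension scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)
```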
---

## Per-Phase Quality Gates

### Phase 1: Discovery

| Check | Criteria | Severity |
|-------|----------|----------|
| Session ID valid | Matches `SPEC-{slug}-{date}` format | Error |
| Problem statement exists | Non-empty, >= 20 characters | Error |
| Target users identified | >= 1 user group | Error |
| Dimensions generated | 3-5 exploration dimensions | Warning |
| Constraints listed | >= 0 (can be empty with justification) | Info |
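The session-ID check can be sketched as a regex; the exact slug and date grammar assumed below (kebab-case slug, `YYYY-MM-DD` date) is an illustration and should follow the project's actual convention:

```python
import re

# Assumed grammar for SPEC-{slug}-{date}; adjust if the convention differs.
SESSION_ID_RE = re.compile(r"^SPEC-[a-z0-9]+(?:-[a-z0-9]+)*-\d{4}-\d{2}-\d{2}$")

def session_id_valid(session_id: str) -> bool:
    return bool(SESSION_ID_RE.match(session_id))
```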
### Phase 1.5: Requirement Expansion & Clarification

| Check | Criteria | Severity |
|-------|----------|----------|
| Problem statement refined | More specific than seed, >= 30 characters | Error |
| Confirmed features | >= 2 features with descriptions | Error |
| Non-functional requirements | >= 1 identified (performance, security, etc.) | Warning |
| Boundary conditions | In-scope and out-of-scope defined | Warning |
| Key assumptions | >= 1 assumption listed | Warning |
| User confirmation | Explicit user confirmation recorded (non-auto mode) | Info |
| Discussion rounds | >= 1 round of interaction (non-auto mode) | Info |

### Phase 2: Product Brief

| Check | Criteria | Severity |
|-------|----------|----------|
| Vision statement | Clear, 1-3 sentences | Error |
| Problem statement | Specific and measurable | Error |
| Target users | >= 1 persona with needs described | Error |
| Goals defined | >= 2 measurable goals | Error |
| Success metrics | >= 2 quantifiable metrics | Warning |
| Scope boundaries | In-scope and out-of-scope listed | Warning |
| Multi-perspective | >= 2 CLI perspectives synthesized | Info |
| Terminology glossary generated | glossary.json created with >= 5 terms | Warning |
| Non-Goals section present | At least 1 non-goal with rationale | Warning |
| Concepts section present | Terminology table in product brief | Warning |

### Phase 3: Requirements (PRD)

| Check | Criteria | Severity |
|-------|----------|----------|
| Functional requirements | >= 3 with REQ-NNN IDs | Error |
| Acceptance criteria | Every requirement has >= 1 criterion | Error |
| MoSCoW priority | Every requirement tagged | Error |
| Non-functional requirements | >= 1 (performance, security, etc.) | Warning |
| User stories | >= 1 per Must-have requirement | Warning |
| Traceability | Requirements trace to product brief goals | Warning |
| RFC 2119 keywords used | Behavioral requirements use MUST/SHOULD/MAY | Warning |
| Data model defined | Core entities have field-level definitions | Warning |

### Phase 4: Architecture

| Check | Criteria | Severity |
|-------|----------|----------|
| Component diagram | Present (Mermaid or ASCII) | Error |
| Tech stack specified | Languages, frameworks, key libraries | Error |
| ADR present | >= 1 Architecture Decision Record | Error |
| ADR has alternatives | Each ADR lists >= 2 options considered | Warning |
| Integration points | External systems/APIs identified | Warning |
| Data model | Key entities and relationships described | Warning |
| Codebase mapping | Mapped to existing code (if has_codebase) | Info |
| State machine defined | >= 1 lifecycle state diagram (if service/platform type) | Warning |
| Configuration model defined | All config fields with type/default/constraint (if service type) | Warning |
| Error handling strategy | Per-component error classification and recovery | Warning |
| Observability metrics | >= 3 metrics defined (if service/platform type) | Warning |
| Trust model defined | Trust levels documented (if service type) | Info |
| Implementation guidance | Key decisions for implementers listed | Info |

### Phase 5: Epics & Stories

| Check | Criteria | Severity |
|-------|----------|----------|
| Epics defined | 3-7 epics with EPIC-NNN IDs | Error |
| MVP subset | >= 1 epic tagged as MVP | Error |
| Stories per epic | 2-5 stories per epic | Error |
| Story format | "As a...I want...So that..." pattern | Warning |
| Dependency map | Cross-epic dependencies documented | Warning |
| Estimation hints | Relative sizing (S/M/L/XL) per story | Info |
| Traceability | Stories trace to requirements | Warning |

### Phase 6: Readiness Check

| Check | Criteria | Severity |
|-------|----------|----------|
| All documents exist | product-brief, requirements, architecture, epics | Error |
| Frontmatter valid | All YAML frontmatter parseable and correct | Error |
| Cross-references valid | All document links resolve | Error |
| Overall score >= 60% | Weighted average across 4 dimensions | Error |
| No unresolved Errors | All Error-severity issues addressed | Error |
| Summary generated | spec-summary.md created | Warning |
| Per-requirement verified | All Must requirements pass 4-check verification | Error |
| Codex technical review | Technical depth assessment completed | Warning |
| Dual-source validation | Both Gemini and Codex scores recorded | Warning |

### Phase 7: Issue Export

| Check | Criteria | Severity |
|-------|----------|----------|
| All MVP epics have issues | Every MVP-tagged Epic has a corresponding issue created | Error |
| Issue tags correct | Each issue has `spec-generated` and `spec:{session_id}` tags | Error |
| Export report generated | `issue-export-report.md` exists with mapping table | Error |
| Wave assignment correct | MVP epics → wave-1, non-MVP epics → wave-2 | Warning |
| Spec document links valid | `extended_context.notes.spec_documents` paths resolve | Warning |
| Epic dependencies mapped | Cross-epic dependencies reflected in issue dependency references | Warning |
| All epics covered | Non-MVP epics also have corresponding issues | Info |

---

## Cross-Document Validation

Checks performed during Phase 6 across all documents:

### Completeness Matrix

```
Product Brief goals -> Requirements (each goal has >= 1 requirement)
Requirements -> Architecture (each Must requirement has design coverage)
Requirements -> Epics (each Must requirement appears in >= 1 story)
Architecture ADRs -> Epics (tech choices reflected in implementation stories)
Glossary terms -> All Documents (core terms used consistently)
Non-Goals (Brief) -> Requirements + Epics (no contradictions)
```
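The first rule of the matrix above (each goal has >= 1 requirement) reduces to a set-difference check; a sketch, where the `traces_to` field name is an assumption about how requirements record their goal links:

```python
def uncovered_goals(goal_ids: list, requirements: list) -> list:
    """Return goal IDs that no requirement traces to.

    Each requirement is assumed to be a dict carrying a 'traces_to'
    list of goal IDs (e.g., ["G-001"]).
    """
    covered = {gid for req in requirements for gid in req.get("traces_to", [])}
    return [g for g in goal_ids if g not in covered]
```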
### Consistency Checks

| Check | Documents | Rule |
|-------|-----------|------|
| Terminology | All | Same term used consistently (no synonyms for same concept) |
| User personas | Brief + PRD + Epics | Same user names/roles throughout |
| Scope | Brief + PRD | PRD scope does not exceed brief scope |
| Tech stack | Architecture + Epics | Stories reference correct technologies |
| Glossary compliance | All | Core terms match glossary.json definitions, no synonym drift |
| Scope containment | Brief + PRD | PRD requirements do not introduce scope beyond brief boundaries |
| Non-Goals respected | Brief + PRD + Epics | No requirement/story contradicts explicit Non-Goals |

### Traceability Matrix Format

```markdown
| Goal | Requirements | Architecture | Epics |
|------|-------------|--------------|-------|
| G-001: ... | REQ-001, REQ-002 | ADR-001 | EPIC-001 |
| G-002: ... | REQ-003 | ADR-002 | EPIC-002, EPIC-003 |
```

---

## Issue Classification

### Error (Must Fix)

- Missing required document or section
- Broken cross-references
- Contradictory information between documents
- Empty acceptance criteria on Must-have requirements
- No MVP subset defined in epics

### Warning (Should Fix)

- Vague acceptance criteria
- Missing non-functional requirements
- No success metrics defined
- Incomplete traceability
- Missing architecture review notes

### Info (Nice to Have)

- Could add more detailed personas
- Consider additional ADR alternatives
- Story estimation hints missing
- Mermaid diagrams could be more detailed

---

## Iteration Quality Tracking

When Phase 6.5 (Auto-Fix) is triggered:

| Iteration | Expected Improvement | Max Iterations |
|-----------|---------------------|----------------|
| 1st | Fix all Error-severity issues | - |
| 2nd | Fix remaining Warnings, improve scores | Max reached |

### Iteration Exit Criteria

| Condition | Action |
|-----------|--------|
| Overall score >= 80% after fix | Pass, proceed to handoff |
| Overall score 60-79% after 2 iterations | Review, proceed with caveats |
| Overall score < 60% after 2 iterations | Fail, manual intervention required |
| No Error-severity issues remaining | Eligible for handoff regardless of score |
373
.codex/skills/spec-generator/templates/architecture-doc.md
Normal file
@@ -0,0 +1,373 @@
# Architecture Document Template (Directory Structure)

Template for generating architecture decision documents as a directory of individual ADR files in Phase 4.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 4 (Architecture) | Generate `architecture/` directory from requirements analysis |
| Output Location | `{workDir}/architecture/` |

## Output Structure

```
{workDir}/architecture/
├── _index.md           # Overview, components, tech stack, data model, security
├── ADR-001-{slug}.md   # Individual Architecture Decision Record
├── ADR-002-{slug}.md
└── ...
```

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 4
document_type: architecture-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
  - ../requirements/_index.md
---

# Architecture: {product_name}

{executive_summary - high-level architecture approach and key decisions}

## System Overview

### Architecture Style
{description of chosen architecture style: microservices, monolith, serverless, etc.}

### System Context Diagram

```mermaid
C4Context
    title System Context Diagram
    Person(user, "User", "Primary user")
    System(system, "{product_name}", "Core system")
    System_Ext(ext1, "{external_system}", "{description}")
    Rel(user, system, "Uses")
    Rel(system, ext1, "Integrates with")
```
## Component Architecture

### Component Diagram

```mermaid
graph TD
    subgraph "{product_name}"
        A[Component A] --> B[Component B]
        B --> C[Component C]
        A --> D[Component D]
    end
    B --> E[External Service]
```

### Component Descriptions

| Component | Responsibility | Technology | Dependencies |
|-----------|---------------|------------|--------------|
| {component_name} | {what it does} | {tech stack} | {depends on} |

## Technology Stack

### Core Technologies

| Layer | Technology | Version | Rationale |
|-------|-----------|---------|-----------|
| Frontend | {technology} | {version} | {why chosen} |
| Backend | {technology} | {version} | {why chosen} |
| Database | {technology} | {version} | {why chosen} |
| Infrastructure | {technology} | {version} | {why chosen} |

### Key Libraries & Frameworks

| Library | Purpose | License |
|---------|---------|---------|
| {library_name} | {purpose} | {license} |

## Architecture Decision Records

| ADR | Title | Status | Key Choice |
|-----|-------|--------|------------|
| [ADR-001](ADR-001-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-002](ADR-002-{slug}.md) | {title} | Accepted | {one-line summary} |
| [ADR-003](ADR-003-{slug}.md) | {title} | Proposed | {one-line summary} |

## Data Architecture

### Data Model

```mermaid
erDiagram
    ENTITY_A ||--o{ ENTITY_B : "has many"
    ENTITY_A {
        string id PK
        string name
        datetime created_at
    }
    ENTITY_B {
        string id PK
        string entity_a_id FK
        string value
    }
```
### Data Storage Strategy

| Data Type | Storage | Retention | Backup |
|-----------|---------|-----------|--------|
| {type} | {storage solution} | {retention policy} | {backup strategy} |

## API Design

### API Overview

| Endpoint | Method | Purpose | Auth |
|----------|--------|---------|------|
| {/api/resource} | {GET/POST/etc} | {purpose} | {auth type} |

## Security Architecture

### Security Controls

| Control | Implementation | Requirement |
|---------|---------------|-------------|
| Authentication | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Authorization | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |
| Data Protection | {approach} | [NFR-S-{NNN}](../requirements/NFR-S-{NNN}-{slug}.md) |

## Infrastructure & Deployment

### Deployment Architecture

{description of deployment model: containers, serverless, VMs, etc.}

### Environment Strategy

| Environment | Purpose | Configuration |
|-------------|---------|---------------|
| Development | Local development | {config} |
| Staging | Pre-production testing | {config} |
| Production | Live system | {config} |

## Codebase Integration

{if has_codebase is true:}

### Existing Code Mapping

| New Component | Existing Module | Integration Type | Notes |
|--------------|----------------|------------------|-------|
| {component} | {existing module path} | Extend/Replace/New | {notes} |

### Migration Notes
{any migration considerations for existing code}

## Quality Attributes

| Attribute | Target | Measurement | ADR Reference |
|-----------|--------|-------------|---------------|
| Performance | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Scalability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |
| Reliability | {target} | {how measured} | [ADR-{NNN}](ADR-{NNN}-{slug}.md) |

## State Machine

{For each core entity with a lifecycle (e.g., Order, Session, Task):}

### {Entity} Lifecycle

```
{ASCII state diagram showing all states, transitions, triggers, and error paths}

┌──────────┐
│ Created  │
└─────┬────┘
      │ start()
      ▼
┌──────────┐    error    ┌──────────┐
│ Running  │ ──────────▶ │  Failed  │
└─────┬────┘             └──────────┘
      │ complete()
      ▼
┌───────────┐
│ Completed │
└───────────┘
```
| From State | Event | To State | Side Effects | Error Handling |
|-----------|-------|----------|-------------|----------------|
| {from} | {event} | {to} | {side_effects} | {error_behavior} |

## Configuration Model

### Required Configuration

| Field | Type | Default | Constraint | Description |
|-------|------|---------|------------|-------------|
| {field_name} | {string/number/boolean/enum} | {default_value} | {validation rule} | {description} |

### Optional Configuration

| Field | Type | Default | Constraint | Description |
|-------|------|---------|------------|-------------|
| {field_name} | {type} | {default} | {constraint} | {description} |

### Environment Variables

| Variable | Maps To | Required |
|----------|---------|----------|
| {ENV_VAR} | {config_field} | {yes/no} |

## Error Handling

### Error Classification

| Category | Severity | Retry | Example |
|----------|----------|-------|---------|
| Transient | Low | Yes, with backoff | Network timeout, rate limit |
| Permanent | High | No | Invalid configuration, auth failure |
| Degraded | Medium | Partial | Dependency unavailable, fallback active |

### Per-Component Error Strategy

| Component | Error Scenario | Behavior | Recovery |
|-----------|---------------|----------|----------|
| {component} | {scenario} | {MUST/SHOULD behavior} | {recovery strategy} |

## Observability

### Metrics

| Metric Name | Type | Labels | Description |
|-------------|------|--------|-------------|
| {metric_name} | {counter/gauge/histogram} | {label1, label2} | {what it measures} |

### Logging

| Event | Level | Fields | Description |
|-------|-------|--------|-------------|
| {event_name} | {INFO/WARN/ERROR} | {structured fields} | {when logged} |

### Health Checks

| Check | Endpoint | Interval | Failure Action |
|-------|----------|----------|----------------|
| {check_name} | {/health/xxx} | {duration} | {action on failure} |

## Trust & Safety

### Trust Levels

| Level | Description | Approval Required | Allowed Operations |
|-------|-------------|-------------------|-------------------|
| High Trust | {description} | None | {operations} |
| Standard | {description} | {approval type} | {operations} |
| Low Trust | {description} | {approval type} | {operations} |

### Security Controls

{Detailed security controls beyond the basic auth covered in Security Architecture}

## Implementation Guidance

### Key Decisions for Implementers

| Decision | Options | Recommendation | Rationale |
|----------|---------|---------------|-----------|
| {decision_area} | {option_1, option_2} | {recommended} | {why} |

### Implementation Order

1. {component/module 1}: {why first}
2. {component/module 2}: {depends on #1}

### Testing Strategy

| Layer | Scope | Tools | Coverage Target |
|-------|-------|-------|-----------------|
| Unit | {scope} | {tools} | {target} |
| Integration | {scope} | {tools} | {target} |
| E2E | {scope} | {tools} | {target} |

## Risks & Mitigations

| Risk | Impact | Probability | Mitigation |
|------|--------|-------------|------------|
| {risk} | High/Medium/Low | High/Medium/Low | {mitigation approach} |

## Open Questions

- [ ] {architectural question 1}
- [ ] {architectural question 2}

## References

- Derived from: [Requirements](../requirements/_index.md), [Product Brief](../product-brief.md)
- Next: [Epics & Stories](../epics/_index.md)
```
---

## Template: ADR-NNN-{slug}.md (Individual Architecture Decision Record)

```markdown
---
id: ADR-{NNN}
status: Accepted
traces_to: [{REQ-NNN}, {NFR-X-NNN}]
date: {timestamp}
---

# ADR-{NNN}: {decision_title}

## Context

{what is the situation that motivates this decision}

## Decision

{what is the chosen approach}

## Alternatives Considered

| Option | Pros | Cons |
|--------|------|------|
| {option_1 - chosen} | {pros} | {cons} |
| {option_2} | {pros} | {cons} |
| {option_3} | {pros} | {cons} |

## Consequences

- **Positive**: {positive outcomes}
- **Negative**: {tradeoffs accepted}
- **Risks**: {risks to monitor}

## Traces

- **Requirements**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md), [NFR-X-{NNN}](../requirements/NFR-X-{NNN}-{slug}.md)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | ADR/requirement number |
| `{slug}` | Auto-generated | Kebab-case from decision title |
| `{has_codebase}` | spec-config.json | Whether existing codebase exists |
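The `{slug}` derivation (kebab-case from a decision title) and the resulting filename can be sketched as follows; the function names are illustrative:

```python
import re

def to_slug(title: str) -> str:
    """Kebab-case slug: lowercase, non-alphanumerics collapsed to hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def adr_filename(nnn: int, title: str) -> str:
    """Zero-padded ADR number plus slug, e.g. ADR-001-event-bus.md."""
    return f"ADR-{nnn:03d}-{to_slug(title)}.md"
```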
209
.codex/skills/spec-generator/templates/epics-template.md
Normal file
@@ -0,0 +1,209 @@
# Epics & Stories Template (Directory Structure)

Template for generating epic/story breakdown as a directory of individual Epic files in Phase 5.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 5 (Epics & Stories) | Generate `epics/` directory from requirements decomposition |
| Output Location | `{workDir}/epics/` |

## Output Structure

```
{workDir}/epics/
├── _index.md            # Overview table + dependency map + MVP scope + execution order
├── EPIC-001-{slug}.md   # Individual Epic with its Stories
├── EPIC-002-{slug}.md
└── ...
```

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 5
document_type: epics-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
  - ../requirements/_index.md
  - ../architecture/_index.md
---

# Epics & Stories: {product_name}

{executive_summary - overview of epic structure and MVP scope}

## Epic Overview

| Epic ID | Title | Priority | MVP | Stories | Est. Size |
|---------|-------|----------|-----|---------|-----------|
| [EPIC-001](EPIC-001-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-002](EPIC-002-{slug}.md) | {title} | Must | Yes | {n} | {S/M/L/XL} |
| [EPIC-003](EPIC-003-{slug}.md) | {title} | Should | No | {n} | {S/M/L/XL} |

## Dependency Map

```mermaid
graph LR
    EPIC-001 --> EPIC-002
    EPIC-001 --> EPIC-003
    EPIC-002 --> EPIC-004
    EPIC-003 --> EPIC-005
```

### Dependency Notes
{explanation of why these dependencies exist and suggested execution order}

### Recommended Execution Order
1. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - foundational}
2. [EPIC-{NNN}](EPIC-{NNN}-{slug}.md): {reason - depends on #1}
3. ...

## MVP Scope

### MVP Epics
{list of epics included in MVP with justification, linking to each}

### MVP Definition of Done
- [ ] {MVP completion criterion 1}
- [ ] {MVP completion criterion 2}
- [ ] {MVP completion criterion 3}

## Traceability Matrix

| Requirement | Epic | Stories | Architecture |
|-------------|------|---------|--------------|
| [REQ-001](../requirements/REQ-001-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-001, STORY-001-002 | [ADR-001](../architecture/ADR-001-{slug}.md) |
| [REQ-002](../requirements/REQ-002-{slug}.md) | [EPIC-001](EPIC-001-{slug}.md) | STORY-001-003 | Component B |
| [REQ-003](../requirements/REQ-003-{slug}.md) | [EPIC-002](EPIC-002-{slug}.md) | STORY-002-001 | [ADR-002](../architecture/ADR-002-{slug}.md) |

## Estimation Summary

| Size | Meaning | Count |
|------|---------|-------|
| S | Small - well-understood, minimal risk | {n} |
| M | Medium - some complexity, moderate risk | {n} |
| L | Large - significant complexity, should consider splitting | {n} |
| XL | Extra Large - high complexity, must split before implementation | {n} |

## Risks & Considerations

| Risk | Affected Epics | Mitigation |
|------|---------------|------------|
| {risk description} | [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) | {mitigation} |

## Versioning & Changelog

### Version Strategy
- **Versioning Scheme**: {semver/calver/custom}
- **Breaking Change Definition**: {what constitutes a breaking change}
- **Deprecation Policy**: {how deprecated features are handled}

### Changelog

| Version | Date | Type | Description |
|---------|------|------|-------------|
| {version} | {date} | {Added/Changed/Fixed/Removed} | {description} |

## Open Questions

- [ ] {question about scope or implementation 1}
- [ ] {question about scope or implementation 2}

## References

- Derived from: [Requirements](../requirements/_index.md), [Architecture](../architecture/_index.md)
- Handoff to: execution workflows (lite-plan, plan, req-plan)
```
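A valid execution order can be derived from the blocking dependencies recorded in each epic's frontmatter with a topological sort; a sketch using Python's standard library (the dict shape is an assumption: epic ID mapped to the IDs it depends on):

```python
from graphlib import TopologicalSorter

def execution_order(deps: dict) -> list:
    """Return epic IDs ordered so every epic comes after its dependencies."""
    return list(TopologicalSorter(deps).static_order())
```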
---

## Template: EPIC-NNN-{slug}.md (Individual Epic)

```markdown
---
id: EPIC-{NNN}
priority: {Must|Should|Could}
mvp: {true|false}
size: {S|M|L|XL}
requirements: [REQ-{NNN}]
architecture: [ADR-{NNN}]
dependencies: [EPIC-{NNN}]
status: draft
---

# EPIC-{NNN}: {epic_title}

**Priority**: {Must|Should|Could}
**MVP**: {Yes|No}
**Estimated Size**: {S|M|L|XL}

## Description

{detailed epic description}

## Requirements

- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}
- [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md): {title}

## Architecture

- [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md): {title}
- Component: {component_name}

## Dependencies

- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (blocking): {reason}
- [EPIC-{NNN}](EPIC-{NNN}-{slug}.md) (soft): {reason}

## Stories

### STORY-{EPIC}-001: {story_title}

**User Story**: As a {persona}, I want to {action} so that {benefit}.

**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}
- [ ] {criterion 3}

**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)

---

### STORY-{EPIC}-002: {story_title}

**User Story**: As a {persona}, I want to {action} so that {benefit}.

**Acceptance Criteria**:
- [ ] {criterion 1}
- [ ] {criterion 2}

**Size**: {S|M|L|XL}
**Traces to**: [REQ-{NNN}](../requirements/REQ-{NNN}-{slug}.md)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{EPIC}` | Auto-increment | Epic number (3 digits) |
| `{NNN}` | Auto-increment | Story/requirement number |
| `{slug}` | Auto-generated | Kebab-case from epic/story title |
| `{S\|M\|L\|XL}` | CLI analysis | Relative size estimate |
153
.codex/skills/spec-generator/templates/product-brief.md
Normal file
@@ -0,0 +1,153 @@
# Product Brief Template

Template for generating product brief documents in Phase 2.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 2 (Product Brief) | Generate product-brief.md from multi-CLI analysis |
| Output Location | `{workDir}/product-brief.md` |

---

## Template

```markdown
---
session_id: {session_id}
phase: 2
document_type: product-brief
status: draft
generated_at: {timestamp}
stepsCompleted: []
version: 1
dependencies:
  - spec-config.json
---

# Product Brief: {product_name}

{executive_summary - 2-3 sentences capturing the essence of the product/feature}

## Concepts & Terminology

| Term | Definition | Aliases |
|------|-----------|---------|
| {term_1} | {definition} | {comma-separated aliases if any} |
| {term_2} | {definition} | |

{Note: All documents in this specification MUST use these terms consistently.}

## Vision

{vision_statement - clear, aspirational 1-3 sentence statement of what success looks like}

## Problem Statement

### Current Situation
{description of the current state and pain points}

### Impact
{quantified impact of the problem - who is affected, how much, how often}

## Target Users

{for each user persona:}

### {Persona Name}
- **Role**: {user's role/context}
- **Needs**: {primary needs related to this product}
- **Pain Points**: {current frustrations}
- **Success Criteria**: {what success looks like for this user}

## Goals & Success Metrics

| Goal ID | Goal | Success Metric | Target |
|---------|------|----------------|--------|
| G-001 | {goal description} | {measurable metric} | {specific target} |
| G-002 | {goal description} | {measurable metric} | {specific target} |

## Scope

### In Scope
- {feature/capability 1}
- {feature/capability 2}
- {feature/capability 3}

### Out of Scope
- {explicitly excluded item 1}
- {explicitly excluded item 2}

### Non-Goals

{Explicit list of things this project will NOT do, with rationale for each:}

| Non-Goal | Rationale |
|----------|-----------|
| {non_goal_1} | {why this is explicitly excluded} |
| {non_goal_2} | {why this is explicitly excluded} |

### Assumptions
- {key assumption 1}
- {key assumption 2}

## Competitive Landscape

| Aspect | Current State | Proposed Solution | Advantage |
|--------|--------------|-------------------|-----------|
| {aspect} | {how it's done now} | {our approach} | {differentiator} |

## Constraints & Dependencies

### Technical Constraints
- {constraint 1}
- {constraint 2}

### Business Constraints
- {constraint 1}

### Dependencies
- {external dependency 1}
- {external dependency 2}

## Multi-Perspective Synthesis

### Product Perspective
{summary of product/market analysis findings}

### Technical Perspective
{summary of technical feasibility and constraints}

### User Perspective
{summary of user journey and UX considerations}

### Convergent Themes
{themes where all perspectives agree}

### Conflicting Views
{areas where perspectives differ, with notes on resolution approach}

## Open Questions

- [ ] {unresolved question 1}
- [ ] {unresolved question 2}

## References

- Derived from: [spec-config.json](spec-config.json)
- Next: [Requirements PRD](requirements.md)
```

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO 8601 generation timestamp |
| `{product_name}` | Seed analysis | Product/feature name |
| `{executive_summary}` | CLI synthesis | 2-3 sentence summary |
| `{vision_statement}` | CLI product perspective | Aspirational vision |
| `{term_1}`, `{term_2}` | CLI synthesis | Domain terms with definitions and optional aliases |
| `{non_goal_1}`, `{non_goal_2}` | CLI synthesis | Explicit exclusions with rationale |
| All `{...}` fields | CLI analysis outputs | Filled from multi-perspective analysis |
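A minimal sketch of how the `{...}` placeholders above might be substituted at generation time. The `fillTemplate` helper and its regex are illustrative assumptions, not part of the skill's API; note that only bare identifiers are replaced, so descriptive placeholders such as `{executive_summary - ...}` pass through untouched.

```javascript
// Replace {name} placeholders with known values; placeholders that are not
// plain identifiers (or have no value) are left intact for later passes.
function fillTemplate(template, values) {
  return template.replace(/\{([a-zA-Z_][a-zA-Z0-9_]*)\}/g, (match, name) =>
    Object.prototype.hasOwnProperty.call(values, name) ? values[name] : match
  )
}

const header = fillTemplate('# Product Brief: {product_name}\ngenerated_at: {timestamp}', {
  product_name: 'Acme Widgets',
  timestamp: '2026-03-08T12:00:00+08:00',
})
// header -> "# Product Brief: Acme Widgets\ngenerated_at: 2026-03-08T12:00:00+08:00"
```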
@@ -0,0 +1,27 @@
# API Spec Profile

Defines additional required sections for API-type specifications.

## Required Sections (in addition to base template)

### In Architecture Document
- **Endpoint Definition**: MUST list all endpoints with method, path, auth, request/response schema
- **Authentication Model**: MUST define auth mechanism (OAuth2/JWT/API Key), token lifecycle
- **Rate Limiting**: MUST define rate limits per tier/endpoint, throttling behavior
- **Error Codes**: MUST define error response format, standard error codes with descriptions
- **API Versioning**: MUST define versioning strategy (URL/header/query), deprecation policy
- **Pagination**: SHOULD define pagination strategy for list endpoints
- **Idempotency**: SHOULD define idempotency requirements for write operations

### In Requirements Document
- **Endpoint Acceptance Criteria**: Each requirement SHOULD map to specific endpoints
- **SLA Definitions**: MUST define response time, availability targets per endpoint tier

### Quality Gate Additions

| Check | Criteria | Severity |
|-------|----------|----------|
| Endpoints documented | All endpoints with method + path | Error |
| Auth model defined | Authentication mechanism specified | Error |
| Error codes defined | Standard error format + codes | Warning |
| Rate limits defined | Per-endpoint or per-tier limits | Warning |
| API versioning strategy | Versioning approach specified | Warning |
@@ -0,0 +1,25 @@
# Library Spec Profile

Defines additional required sections for library/SDK-type specifications.

## Required Sections (in addition to base template)

### In Architecture Document
- **Public API Surface**: MUST define all public interfaces with signatures, parameters, return types
- **Usage Examples**: MUST provide >= 3 code examples showing common usage patterns
- **Compatibility Matrix**: MUST define supported language versions, runtime environments
- **Dependency Policy**: MUST define transitive dependency policy, version constraints
- **Extension Points**: SHOULD define plugin/extension mechanisms if applicable
- **Bundle Size**: SHOULD define target bundle size and tree-shaking strategy

### In Requirements Document
- **API Ergonomics**: Requirements SHOULD address developer experience and API consistency
- **Error Reporting**: MUST define error types, messages, and recovery hints for consumers

### Quality Gate Additions

| Check | Criteria | Severity |
|-------|----------|----------|
| Public API documented | All public interfaces with types | Error |
| Usage examples | >= 3 working examples | Warning |
| Compatibility matrix | Supported environments listed | Warning |
| Dependency policy | Transitive deps strategy defined | Info |
@@ -0,0 +1,28 @@
# Service Spec Profile

Defines additional required sections for service-type specifications.

## Required Sections (in addition to base template)

### In Architecture Document
- **Concepts & Terminology**: MUST define all domain terms with consistent aliases
- **State Machine**: MUST include ASCII state diagram for each entity with a lifecycle
- **Configuration Model**: MUST define all configurable fields with types, defaults, constraints
- **Error Handling**: MUST define per-component error classification and recovery strategies
- **Observability**: MUST define >= 3 metrics, structured log format, health check endpoints
- **Trust & Safety**: SHOULD define trust levels and approval matrix
- **Graceful Shutdown**: MUST describe shutdown sequence and cleanup procedures
- **Implementation Guidance**: SHOULD provide implementation order and key decisions

### In Requirements Document
- **Behavioral Constraints**: MUST use RFC 2119 keywords (MUST/SHOULD/MAY) for all requirements
- **Data Model**: MUST define core entities with field-level detail (type, constraint, relation)

### Quality Gate Additions

| Check | Criteria | Severity |
|-------|----------|----------|
| State machine present | >= 1 lifecycle state diagram | Error |
| Configuration model | All config fields documented | Warning |
| Observability metrics | >= 3 metrics defined | Warning |
| Error handling defined | Per-component strategy | Warning |
| RFC keywords used | Behavioral requirements use MUST/SHOULD/MAY | Warning |
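The gate tables in all three profiles share the same shape (check, criteria, severity), so a checker can treat them uniformly. A sketch under that assumption; the `runGates` helper and `gates` data are illustrative, not an existing tool:

```javascript
// Evaluate profile quality gates against boolean check results.
// Error-severity failures block the spec; Warning/Info are advisory.
function runGates(gates, checkResults) {
  const failures = gates.filter(g => !checkResults[g.check])
  return {
    blocking: failures.filter(f => f.severity === 'Error'),
    advisory: failures.filter(f => f.severity !== 'Error'),
  }
}

const serviceGates = [
  { check: 'State machine present', severity: 'Error' },
  { check: 'Configuration model', severity: 'Warning' },
  { check: 'Observability metrics', severity: 'Warning' },
]

const result = runGates(serviceGates, {
  'State machine present': true,
  'Configuration model': false,
  'Observability metrics': true,
})
// result.blocking is empty; result.advisory lists only "Configuration model"
```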
224
.codex/skills/spec-generator/templates/requirements-prd.md
Normal file
@@ -0,0 +1,224 @@
# Requirements PRD Template (Directory Structure)

Template for generating the Product Requirements Document as a directory of individual requirement files in Phase 3.

## Usage Context

| Phase | Usage |
|-------|-------|
| Phase 3 (Requirements) | Generate `requirements/` directory from product brief expansion |
| Output Location | `{workDir}/requirements/` |

## Output Structure

```
{workDir}/requirements/
├── _index.md            # Summary + MoSCoW table + traceability matrix + links
├── REQ-001-{slug}.md    # Individual functional requirement
├── REQ-002-{slug}.md
├── NFR-P-001-{slug}.md  # Non-functional: Performance
├── NFR-S-001-{slug}.md  # Non-functional: Security
├── NFR-SC-001-{slug}.md # Non-functional: Scalability
├── NFR-U-001-{slug}.md  # Non-functional: Usability
└── ...
```

---

## Template: _index.md

```markdown
---
session_id: {session_id}
phase: 3
document_type: requirements-index
status: draft
generated_at: {timestamp}
version: 1
dependencies:
  - ../spec-config.json
  - ../product-brief.md
---

# Requirements: {product_name}

{executive_summary - brief overview of what this PRD covers and key decisions}

## Requirement Summary

| Priority | Count | Coverage |
|----------|-------|----------|
| Must Have | {n} | {description of must-have scope} |
| Should Have | {n} | {description of should-have scope} |
| Could Have | {n} | {description of could-have scope} |
| Won't Have | {n} | {description of explicitly excluded} |

## Functional Requirements

| ID | Title | Priority | Traces To |
|----|-------|----------|-----------|
| [REQ-001](REQ-001-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-002](REQ-002-{slug}.md) | {title} | Must | [G-001](../product-brief.md#goals--success-metrics) |
| [REQ-003](REQ-003-{slug}.md) | {title} | Should | [G-002](../product-brief.md#goals--success-metrics) |

## Non-Functional Requirements

### Performance

| ID | Title | Target |
|----|-------|--------|
| [NFR-P-001](NFR-P-001-{slug}.md) | {title} | {target value} |

### Security

| ID | Title | Standard |
|----|-------|----------|
| [NFR-S-001](NFR-S-001-{slug}.md) | {title} | {standard/framework} |

### Scalability

| ID | Title | Target |
|----|-------|--------|
| [NFR-SC-001](NFR-SC-001-{slug}.md) | {title} | {target value} |

### Usability

| ID | Title | Target |
|----|-------|--------|
| [NFR-U-001](NFR-U-001-{slug}.md) | {title} | {target value} |

## Data Requirements

### Data Entities

| Entity | Description | Key Attributes |
|--------|-------------|----------------|
| {entity_name} | {description} | {attr1, attr2, attr3} |

### Data Flows

{description of key data flows, optionally with Mermaid diagram}

## Integration Requirements

| System | Direction | Protocol | Data Format | Notes |
|--------|-----------|----------|-------------|-------|
| {system_name} | Inbound/Outbound/Both | {REST/gRPC/etc} | {JSON/XML/etc} | {notes} |

## Constraints & Assumptions

### Constraints
- {technical or business constraint 1}
- {technical or business constraint 2}

### Assumptions
- {assumption 1 - must be validated}
- {assumption 2 - must be validated}

## Priority Rationale

{explanation of MoSCoW prioritization decisions, especially for Should/Could boundaries}

## Traceability Matrix

| Goal | Requirements |
|------|-------------|
| G-001 | [REQ-001](REQ-001-{slug}.md), [REQ-002](REQ-002-{slug}.md), [NFR-P-001](NFR-P-001-{slug}.md) |
| G-002 | [REQ-003](REQ-003-{slug}.md), [NFR-S-001](NFR-S-001-{slug}.md) |

## Open Questions

- [ ] {unresolved question 1}
- [ ] {unresolved question 2}

## References

- Derived from: [Product Brief](../product-brief.md)
- Next: [Architecture](../architecture/_index.md)
```

---

## Template: REQ-NNN-{slug}.md (Individual Functional Requirement)

```markdown
---
id: REQ-{NNN}
type: functional
priority: {Must|Should|Could|Won't}
traces_to: [G-{NNN}]
status: draft
---

# REQ-{NNN}: {requirement_title}

**Priority**: {Must|Should|Could|Won't}

## Description

{detailed requirement description}

## User Story

As a {persona}, I want to {action} so that {benefit}.

## Acceptance Criteria

- [ ] {specific, testable criterion 1}
- [ ] {specific, testable criterion 2}
- [ ] {specific, testable criterion 3}

## Traces

- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
- **Implemented by**: [EPIC-{NNN}](../epics/EPIC-{NNN}-{slug}.md) (added in Phase 5)
```

---

## Template: NFR-{type}-NNN-{slug}.md (Individual Non-Functional Requirement)

```markdown
---
id: NFR-{type}-{NNN}
type: non-functional
category: {Performance|Security|Scalability|Usability}
priority: {Must|Should|Could}
status: draft
---

# NFR-{type}-{NNN}: {requirement_title}

**Category**: {Performance|Security|Scalability|Usability}
**Priority**: {Must|Should|Could}

## Requirement

{detailed requirement description}

## Metric & Target

| Metric | Target | Measurement Method |
|--------|--------|--------------------|
| {metric} | {target value} | {how measured} |

## Traces

- **Goal**: [G-{NNN}](../product-brief.md#goals--success-metrics)
- **Architecture**: [ADR-{NNN}](../architecture/ADR-{NNN}-{slug}.md) (if applicable)
```

---

## Variable Descriptions

| Variable | Source | Description |
|----------|--------|-------------|
| `{session_id}` | spec-config.json | Session identifier |
| `{timestamp}` | Runtime | ISO 8601 generation timestamp |
| `{product_name}` | product-brief.md | Product/feature name |
| `{NNN}` | Auto-increment | Requirement number (zero-padded 3 digits) |
| `{slug}` | Auto-generated | Kebab-case from requirement title |
| `{type}` | Category | P (Performance), S (Security), SC (Scalability), U (Usability) |
| `{Must\|Should\|Could\|Won't}` | User input / auto | MoSCoW priority tag |
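The `{NNN}`, `{slug}`, and `{type}` conventions in the table above can be sketched as follows. These helpers are hypothetical, shown only to pin down the naming rules; the skill does not expose them:

```javascript
// Kebab-case slug from a requirement title
const toSlug = title =>
  title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '')

// Zero-padded requirement number, e.g. 7 -> "007"
const pad3 = n => String(n).padStart(3, '0')

// File names for functional and non-functional requirements
const reqFile = (n, title) => `REQ-${pad3(n)}-${toSlug(title)}.md`
const nfrFile = (type, n, title) => `NFR-${type}-${pad3(n)}-${toSlug(title)}.md`

reqFile(1, 'User Login Flow')           // "REQ-001-user-login-flow.md"
nfrFile('SC', 12, 'Horizontal Scaling') // "NFR-SC-012-horizontal-scaling.md"
```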
698
.codex/skills/team-arch-opt/SKILL.md
Normal file
@@ -0,0 +1,698 @@
---
name: team-arch-opt
description: Architecture optimization team skill. Analyzes codebase architecture, designs refactoring plans, implements changes, validates improvements, and reviews code quality via CSV wave pipeline with interactive review-fix cycles.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"architecture optimization task description\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.

# Team Architecture Optimization

## Usage

```bash
$team-arch-opt "Refactor the auth module to reduce coupling and eliminate circular dependencies"
$team-arch-opt -c 4 "Analyze and fix God Classes across the service layer"
$team-arch-opt -y "Remove dead code and clean up barrel exports in src/utils"
$team-arch-opt --continue "tao-refactor-auth-20260308"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume an existing session

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---
## Overview

Orchestrate multi-agent architecture optimization: analyze codebase structure, design refactoring plan, implement changes, validate improvements, review code quality. The pipeline has five domain roles (analyzer, designer, refactorer, validator, reviewer) mapped to CSV wave stages with an interactive review-fix cycle.

**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)

```
+-------------------------------------------------------------------+
|              TEAM ARCHITECTURE OPTIMIZATION WORKFLOW              |
+-------------------------------------------------------------------+
|                                                                   |
| Phase 0: Pre-Wave Interactive (Requirement Clarification)         |
|   +- Parse user task description                                  |
|   +- Detect scope: targeted module vs full architecture           |
|   +- Clarify ambiguous requirements (AskUserQuestion)             |
|   +- Output: refined requirements for decomposition               |
|                                                                   |
| Phase 1: Requirement -> CSV + Classification                      |
|   +- Identify architecture issues to target                       |
|   +- Build 5-stage pipeline (analyze->design->refactor->validate  |
|   |    +review)                                                   |
|   +- Classify tasks: csv-wave | interactive (exec_mode)           |
|   +- Compute dependency waves (topological sort)                  |
|   +- Generate tasks.csv with wave + exec_mode columns             |
|   +- User validates task breakdown (skip if -y)                   |
|                                                                   |
| Phase 2: Wave Execution Engine (Extended)                         |
|   +- For each wave (1..N):                                        |
|   |    +- Execute pre-wave interactive tasks (if any)             |
|   |    +- Build wave CSV (filter csv-wave tasks for this wave)    |
|   |    +- Inject previous findings into prev_context column       |
|   |    +- spawn_agents_on_csv(wave CSV)                           |
|   |    +- Execute post-wave interactive tasks (if any)            |
|   |    +- Merge all results into master tasks.csv                 |
|   |    +- Check: any failed? -> skip dependents                   |
|   +- discoveries.ndjson shared across all modes (append-only)     |
|   +- Review-fix cycle: max 3 iterations per branch                |
|                                                                   |
| Phase 3: Post-Wave Interactive (Completion Action)                |
|   +- Pipeline completion report with improvement metrics          |
|   +- Interactive completion choice (Archive/Keep/Export)          |
|   +- Final aggregation / report                                   |
|                                                                   |
| Phase 4: Results Aggregation                                      |
|   +- Export final results.csv                                     |
|   +- Generate context.md with all findings                        |
|   +- Display summary: completed/failed/skipped per wave           |
|   +- Offer: view results | retry failed | done                    |
|                                                                   |
+-------------------------------------------------------------------+
```

---
## Pipeline Definition

```
Stage 1          Stage 2          Stage 3            Stage 4
ANALYZE-001 --> DESIGN-001 --> REFACTOR-001 --> VALIDATE-001
[analyzer]      [designer]     [refactorer]     [validator]
                                    ^
                                    |               REVIEW-001
                                    +-- FIX-001 <-- [reviewer]
                                     (max 3 iterations)
```

---
## Task Classification Rules

Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, revision cycles, user checkpoints |

**Classification Decision**:

| Task Property | Classification |
|---------------|---------------|
| Architecture analysis (single-pass scan) | `csv-wave` |
| Refactoring plan design (single-pass) | `csv-wave` |
| Code refactoring implementation | `csv-wave` |
| Validation (build, test, metrics) | `csv-wave` |
| Code review (single-pass) | `csv-wave` |
| Review-fix cycle (iterative revision) | `interactive` |
| User checkpoint (plan approval) | `interactive` |
| Discussion round (DISCUSS-REFACTOR, DISCUSS-REVIEW) | `interactive` |
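In this pipeline the decision table reduces to a prefix check, since only FIX, DISCUSS, and checkpoint tasks are multi-round. A sketch; the `classifyTask` helper is illustrative, not part of the tool surface:

```javascript
// exec_mode from task id prefix: initial-pass pipeline stages run as
// csv-wave; everything else needs multi-round interaction.
const CSV_WAVE_PREFIXES = ['ANALYZE', 'DESIGN', 'REFACTOR', 'VALIDATE', 'REVIEW']

function classifyTask(taskId) {
  const prefix = taskId.split('-')[0]
  return CSV_WAVE_PREFIXES.includes(prefix) ? 'csv-wave' : 'interactive'
}

classifyTask('ANALYZE-001')     // "csv-wave"
classifyTask('FIX-001')         // "interactive"
classifyTask('DISCUSS-REVIEW')  // "interactive"
```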
---

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,role,issue_type,priority,target_files,deps,context_from,exec_mode,wave,status,findings,verdict,artifacts_produced,error
"ANALYZE-001","Analyze architecture","Analyze codebase architecture to identify structural issues: cycles, coupling, cohesion, God Classes, dead code, API bloat. Produce baseline metrics and ranked report.","analyzer","","","","","","csv-wave","1","pending","","","",""
"DESIGN-001","Design refactoring plan","Analyze architecture report to design prioritized refactoring plan with strategies, expected improvements, and risk assessments.","designer","","","","ANALYZE-001","ANALYZE-001","csv-wave","2","pending","","","",""
"REFACTOR-001","Implement refactorings","Implement architecture refactoring changes following design plan in priority order (P0 first).","refactorer","","","","DESIGN-001","DESIGN-001","csv-wave","3","pending","","","",""
"VALIDATE-001","Validate changes","Validate refactoring: build checks, test suite, dependency metrics, API compatibility.","validator","","","","REFACTOR-001","REFACTOR-001","csv-wave","4","pending","","","",""
"REVIEW-001","Review refactoring code","Review refactoring changes for correctness, patterns, completeness, migration safety, best practices.","reviewer","","","","REFACTOR-001","REFACTOR-001","csv-wave","4","pending","","","",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (PREFIX-NNN format) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description (self-contained) |
| `role` | Input | Worker role: analyzer, designer, refactorer, validator, reviewer |
| `issue_type` | Input | Architecture issue category: CYCLE, COUPLING, COHESION, GOD_CLASS, DUPLICATION, LAYER_VIOLATION, DEAD_CODE, API_BLOAT |
| `priority` | Input | P0 (Critical), P1 (High), P2 (Medium), P3 (Low) |
| `target_files` | Input | Semicolon-separated file paths to focus on |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `verdict` | Output | Validation/review verdict: PASS, WARN, FAIL, APPROVE, REVISE, REJECT |
| `artifacts_produced` | Output | Semicolon-separated paths of produced artifacts |
| `error` | Output | Error message if failed (empty if success) |

Note: `verdict` is an output column, so sample rows start with it empty; the validator writes PASS/WARN/FAIL and the reviewer writes APPROVE/REVISE/REJECT.

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).
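Because `description` and `findings` routinely contain commas and embedded quotes, rows should be written with standard CSV quoting. A sketch of the serialization convention; `toCsvRow` is illustrative, not an API provided by the tools above:

```javascript
// RFC 4180-style quoting: wrap every field in quotes, double embedded quotes.
const quote = field => `"${String(field).replace(/"/g, '""')}"`

function toCsvRow(task, columns) {
  return columns.map(col => quote(task[col] ?? '')).join(',')
}

const row = toCsvRow(
  { id: 'ANALYZE-001', title: 'Analyze "auth" module', status: 'pending' },
  ['id', 'title', 'status']
)
// row -> "ANALYZE-001","Analyze ""auth"" module","pending"
```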
---

## Agent Registry (Interactive Agents)

| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| Plan Reviewer | agents/plan-reviewer.md | 2.3 (send_input cycle) | Review architecture report or refactoring plan at user checkpoint | pre-wave |
| Fix Cycle Handler | agents/fix-cycle-handler.md | 2.3 (send_input cycle) | Manage review-fix iteration cycle (max 3 rounds) | post-wave |
| Completion Handler | agents/completion-handler.md | 2.3 (send_input cycle) | Handle pipeline completion action (Archive/Keep/Export) | standalone |

> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload it.
---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `task-analysis.json` | Phase 1 output: scope, issues, pipeline config | Created in Phase 1 |
| `artifacts/architecture-baseline.json` | Analyzer: pre-refactoring metrics | Created by analyzer |
| `artifacts/architecture-report.md` | Analyzer: ranked structural issue findings | Created by analyzer |
| `artifacts/refactoring-plan.md` | Designer: prioritized refactoring plan | Created by designer |
| `artifacts/validation-results.json` | Validator: post-refactoring validation | Created by validator |
| `artifacts/review-report.md` | Reviewer: code review findings | Created by reviewer |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |
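`discoveries.ndjson` holds one JSON object per line, so concurrent agents can append without rewriting each other's entries. A sketch of a record; the field names here are illustrative assumptions, not a schema the skill defines:

```javascript
// Append-only discovery record: one JSON object per line (NDJSON).
const discovery = {
  ts: '2026-03-08T12:00:00+08:00',              // when it was found
  task_id: 'ANALYZE-001',                       // originating task
  kind: 'CYCLE',                                // issue category
  note: 'auth -> session -> auth import cycle', // short human-readable finding
}
// Serialize and append this line to {session-folder}/discoveries.ndjson
const line = JSON.stringify(discovery) + '\n'
```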
---

## Session Structure

```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv               # Master state (all tasks, both modes)
+-- results.csv             # Final results export
+-- discoveries.ndjson      # Shared discovery board (all agents)
+-- context.md              # Human-readable report
+-- task-analysis.json      # Phase 1 analysis output
+-- wave-{N}.csv            # Temporary per-wave input (csv-wave only)
+-- artifacts/
|   +-- architecture-baseline.json  # Analyzer output
|   +-- architecture-report.md      # Analyzer output
|   +-- refactoring-plan.md         # Designer output
|   +-- validation-results.json     # Validator output
|   +-- review-report.md            # Reviewer output
+-- interactive/            # Interactive task artifacts
|   +-- {id}-result.json
+-- wisdom/
    +-- patterns.md         # Discovered patterns and conventions
```
---

## Implementation

### Session Initialization

```javascript
// UTC+8 wall-clock time rendered as an ISO string (note: the trailing "Z" is
// misleading since the value is shifted +8h; used here only for session IDs)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Flag parsing
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1], 10) : 3

// Requirement text = arguments with all flags stripped
const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

// Session ID: kebab-case slug (keeps ASCII alphanumerics and CJK) + UTC+8 date
const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `tao-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`

Bash(`mkdir -p ${sessionFolder}/artifacts ${sessionFolder}/interactive ${sessionFolder}/wisdom`)

// Initialize discoveries.ndjson (empty, append-only)
Write(`${sessionFolder}/discoveries.ndjson`, '')

// Initialize wisdom
Write(`${sessionFolder}/wisdom/patterns.md`, '# Patterns & Conventions\n')
```
---

### Phase 0: Pre-Wave Interactive (Requirement Clarification)

**Objective**: Parse the user task, detect architecture scope, clarify ambiguities, prepare for decomposition.

**Workflow**:

1. **Parse user task description** from $ARGUMENTS

2. **Check for existing sessions** (continue mode):
   - Scan `.workflow/.csv-wave/tao-*/tasks.csv` for sessions with pending tasks
   - If `--continue`: resume the specified or most recent session, skip to Phase 2
   - If an active session is found: ask the user whether to resume or start new

3. **Identify architecture optimization target**:

   | Signal | Target |
   |--------|--------|
   | Specific file/module mentioned | Scoped refactoring |
   | "coupling", "dependency", "structure", generic | Full architecture analysis |
   | Specific issue (cycles, God Class, duplication) | Targeted issue resolution |

4. **Clarify if ambiguous** (skip if AUTO_YES):
   ```javascript
   AskUserQuestion({
     questions: [{
       question: "Please confirm the architecture optimization scope:",
       header: "Architecture Scope",
       multiSelect: false,
       options: [
         { label: "Proceed as described", description: "Scope is clear" },
         { label: "Narrow scope", description: "Specify modules/files to focus on" },
         { label: "Add constraints", description: "Exclude areas, set priorities" }
       ]
     }]
   })
   ```

5. **Output**: Refined requirement string for Phase 1

**Success Criteria**:
- Refined requirements available for Phase 1 decomposition
- Existing session detected and handled if applicable
---

### Phase 1: Requirement -> CSV + Classification

**Objective**: Decompose the architecture optimization task into the 5-stage pipeline tasks, assign waves, generate tasks.csv.

**Decomposition Rules**:

1. **Stage mapping** -- architecture optimization always follows this pipeline:

   | Stage | Role | Task Prefix | Wave | Description |
   |-------|------|-------------|------|-------------|
   | 1 | analyzer | ANALYZE | 1 | Scan codebase, identify structural issues, produce baseline metrics |
   | 2 | designer | DESIGN | 2 | Design refactoring plan from architecture report |
   | 3 | refactorer | REFACTOR | 3 | Implement refactorings per plan priority |
   | 4a | validator | VALIDATE | 4 | Validate build, tests, metrics, API compatibility |
   | 4b | reviewer | REVIEW | 4 | Review refactoring code for correctness and patterns |

2. **Single-pipeline decomposition**: Generate one task per stage with sequential dependencies:
   - ANALYZE-001 (wave 1, no deps)
   - DESIGN-001 (wave 2, deps: ANALYZE-001)
   - REFACTOR-001 (wave 3, deps: DESIGN-001)
   - VALIDATE-001 (wave 4, deps: REFACTOR-001)
   - REVIEW-001 (wave 4, deps: REFACTOR-001)

3. **Description enrichment**: Each task description must be self-contained with:
   - Clear goal statement
   - Input artifacts to read
   - Output artifacts to produce
   - Success criteria
   - Session folder path

**Classification Rules**:

| Task Property | exec_mode |
|---------------|-----------|
| ANALYZE, DESIGN, REFACTOR, VALIDATE, REVIEW (initial pass) | `csv-wave` |
| FIX tasks (review-fix cycle) | `interactive` (handled by fix-cycle-handler agent) |

**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
|
||||
**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).
|
||||
|
||||
**Success Criteria**:
|
||||
- tasks.csv created with valid schema, wave, and exec_mode assignments
|
||||
- task-analysis.json written with scope and pipeline config
|
||||
- No circular dependencies
|
||||
- User approved (or AUTO_YES)
|
||||
|
||||
---
|
||||
|
||||
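The wave computation named in Phase 1 above (Kahn's BFS with depth tracking) can be sketched as a small standalone function. This is a minimal illustration, not the skill's actual implementation; it assumes each task exposes `id` and a semicolon-separated `deps` string as in the tasks.csv schema, and that every dep id refers to a task in the list:

```javascript
// Assign wave = BFS depth via Kahn's topological sort.
// Tasks with no deps land in wave 1; a task lands one wave after its deepest dep.
function computeWaves(tasks) {
  const depsOf = new Map(tasks.map(t => [t.id, (t.deps || '').split(';').filter(Boolean)]))
  const indegree = new Map(tasks.map(t => [t.id, depsOf.get(t.id).length]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  for (const [id, deps] of depsOf) for (const d of deps) dependents.get(d).push(id)

  const wave = new Map()
  let frontier = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id)
  let depth = 1
  let visited = 0
  while (frontier.length > 0) {
    const next = []
    for (const id of frontier) {
      wave.set(id, depth)
      visited++
      for (const dep of dependents.get(id)) {
        indegree.set(dep, indegree.get(dep) - 1)
        if (indegree.get(dep) === 0) next.push(dep)
      }
    }
    frontier = next
    depth++
  }
  // Unvisited tasks mean a cycle -- matches the "abort with error" rule below.
  if (visited !== tasks.length) throw new Error('Circular dependency in tasks')
  return wave
}
```

Running this on the single-pipeline decomposition yields ANALYZE-001 in wave 1, DESIGN-001 in wave 2, REFACTOR-001 in wave 3, and VALIDATE-001/REVIEW-001 together in wave 4.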
### Phase 2: Wave Execution Engine (Extended)

**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.

```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
let maxWave = Math.max(...tasks.map(t => t.wave))

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\nWave ${wave}/${maxWave}`)

  // 1. Separate tasks by exec_mode
  const waveTasks = tasks.filter(t => t.wave === wave && t.status === 'pending')
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // 2. Check dependencies -- skip tasks whose deps failed
  for (const task of waveTasks) {
    const depIds = (task.deps || '').split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${depIds.filter((id, i) =>
        ['failed','skipped'].includes(depStatuses[i])).join(', ')}`
    }
  }

  // 3. Execute pre-wave interactive tasks (if any)
  for (const task of interactiveTasks.filter(t => t.status === 'pending')) {
    // Determine agent file based on task type
    const agentFile = task.id.startsWith('FIX') ? 'agents/fix-cycle-handler.md' : 'agents/plan-reviewer.md'
    Read(agentFile)

    const agent = spawn_agent({
      message: `## TASK ASSIGNMENT\n\n### MANDATORY FIRST STEPS\n1. Read: ${agentFile}\n2. Read: ${sessionFolder}/discoveries.ndjson\n3. Read: .workflow/project-tech.json (if exists)\n\n---\n\nGoal: ${task.description}\nScope: ${task.title}\nSession: ${sessionFolder}\n\n### Previous Context\n${buildPrevContext(task, tasks)}`
    })
    const result = wait({ ids: [agent], timeout_ms: 600000 })
    if (result.timed_out) {
      send_input({ id: agent, message: "Please finalize and output current findings." })
      wait({ ids: [agent], timeout_ms: 120000 })
    }
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed", findings: parseFindings(result),
      timestamp: getUtc8ISOString()
    }))
    close_agent({ id: agent })
    task.status = 'completed'
    task.findings = parseFindings(result)
  }

  // 4. Build prev_context for csv-wave tasks
  const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
  for (const task of pendingCsvTasks) {
    task.prev_context = buildPrevContext(task, tasks)
  }

  if (pendingCsvTasks.length > 0) {
    // 5. Write wave CSV
    Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))

    // 6. Determine instruction -- read from instructions/agent-instruction.md
    Read('instructions/agent-instruction.md')

    // 7. Execute wave via spawn_agents_on_csv
    spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: archOptInstruction, // from instructions/agent-instruction.md
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 900,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          verdict: { type: "string" },
          artifacts_produced: { type: "string" },
          error: { type: "string" }
        }
      }
    })

    // 8. Merge results into master CSV
    const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const r of results) {
      const t = tasks.find(t => t.id === r.id)
      if (t) Object.assign(t, r)
    }
  }

  // 9. Update master CSV
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))

  // 10. Cleanup temp files
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)

  // 11. Post-wave: check for review-fix cycle
  const validateTask = tasks.find(t => t.id.startsWith('VALIDATE') && t.wave === wave)
  const reviewTask = tasks.find(t => t.id.startsWith('REVIEW') && t.wave === wave)

  if (validateTask?.verdict === 'FAIL' || reviewTask?.verdict === 'REVISE' || reviewTask?.verdict === 'REJECT') {
    const fixCycleCount = tasks.filter(t => t.id.startsWith('FIX')).length
    if (fixCycleCount < 3) {
      // Create FIX task, add to tasks, re-run refactor -> validate+review cycle
      const fixId = `FIX-${String(fixCycleCount + 1).padStart(3, '0')}`
      const feedback = [validateTask?.error, reviewTask?.findings].filter(Boolean).join('\n')
      tasks.push({
        id: fixId, title: `Fix issues from review/validation cycle ${fixCycleCount + 1}`,
        description: `Fix issues found:\n${feedback}`,
        role: 'refactorer', issue_type: '', priority: 'P0', target_files: '',
        deps: '', context_from: '', exec_mode: 'interactive',
        wave: wave + 1, status: 'pending', findings: '', verdict: '',
        artifacts_produced: '', error: ''
      })
      maxWave = Math.max(maxWave, wave + 1) // extend the loop so the FIX wave actually executes
    }
  }

  // 12. Display wave summary
  const completed = waveTasks.filter(t => t.status === 'completed').length
  const failed = waveTasks.filter(t => t.status === 'failed').length
  const skipped = waveTasks.filter(t => t.status === 'skipped').length
  console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed, ${skipped} skipped`)
}
```

**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- Review-fix cycle handled with max 3 iterations
- discoveries.ndjson accumulated across all waves and mechanisms

---

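`buildPrevContext(task, tasks)` is called in the Phase 2 engine above but never defined in this skill. One plausible minimal sketch, assuming (this is an assumption, not the canonical helper) that it concatenates the findings of completed tasks listed in the `deps` and `context_from` columns:

```javascript
// Hypothetical sketch of buildPrevContext: collect completed upstream findings
// for a task from its deps and context_from columns (semicolon-separated ids).
function buildPrevContext(task, tasks) {
  const wanted = new Set([
    ...(task.deps || '').split(';').filter(Boolean),
    ...(task.context_from || '').split(';').filter(Boolean)
  ])
  return tasks
    .filter(t => wanted.has(t.id) && t.status === 'completed' && t.findings)
    .map(t => `[${t.id}] ${t.findings}`)
    .join('\n') || '(no prior context)'
}
```

The key property the engine relies on is rule 5 of the Core Rules: the context is rebuilt from the master CSV state each wave, never carried in coordinator memory.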
### Phase 3: Post-Wave Interactive (Completion Action)

**Objective**: Pipeline completion report with architecture improvement metrics and interactive completion choice.

```javascript
// 1. Generate pipeline summary
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')

// 2. Load improvement metrics from validation results
let improvements = ''
try {
  const validation = JSON.parse(Read(`${sessionFolder}/artifacts/validation-results.json`))
  improvements = `Architecture Improvements:\n${validation.dimensions.map(d =>
    `  ${d.name}: ${d.baseline} -> ${d.current} (${d.improvement})`).join('\n')}`
} catch {}

console.log(`
============================================
ARCHITECTURE OPTIMIZATION COMPLETE

Deliverables:
- Architecture Baseline: artifacts/architecture-baseline.json
- Architecture Report: artifacts/architecture-report.md
- Refactoring Plan: artifacts/refactoring-plan.md
- Validation Results: artifacts/validation-results.json
- Review Report: artifacts/review-report.md

${improvements}

Pipeline: ${completed.length}/${tasks.length} tasks
Session: ${sessionFolder}
============================================
`)

// 3. Completion action
if (!AUTO_YES) {
  AskUserQuestion({
    questions: [{
      question: "Architecture optimization complete. What would you like to do?",
      header: "Completion",
      multiSelect: false,
      options: [
        { label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
        { label: "Keep Active", description: "Keep session for follow-up work" },
        { label: "Retry Failed", description: "Re-run failed tasks" }
      ]
    }]
  })
}
```

**Success Criteria**:
- Post-wave interactive processing complete
- User informed of results and improvement metrics

---

### Phase 4: Results Aggregation

**Objective**: Generate final results and human-readable report.

```javascript
// 1. Export results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)

// 2. Generate context.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
let contextMd = `# Architecture Optimization Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`

contextMd += `## Summary\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${tasks.filter(t => t.status === 'completed').length} |\n`
contextMd += `| Failed | ${tasks.filter(t => t.status === 'failed').length} |\n`
contextMd += `| Skipped | ${tasks.filter(t => t.status === 'skipped').length} |\n\n`

contextMd += `## Deliverables\n\n`
contextMd += `| Artifact | Path |\n|----------|------|\n`
contextMd += `| Architecture Baseline | artifacts/architecture-baseline.json |\n`
contextMd += `| Architecture Report | artifacts/architecture-report.md |\n`
contextMd += `| Refactoring Plan | artifacts/refactoring-plan.md |\n`
contextMd += `| Validation Results | artifacts/validation-results.json |\n`
contextMd += `| Review Report | artifacts/review-report.md |\n\n`

const maxWave = Math.max(...tasks.map(t => t.wave))
contextMd += `## Wave Execution\n\n`
for (let w = 1; w <= maxWave; w++) {
  const waveTasks = tasks.filter(t => t.wave === w)
  contextMd += `### Wave ${w}\n\n`
  for (const t of waveTasks) {
    const icon = t.status === 'completed' ? '[DONE]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
    contextMd += `${icon} **${t.title}** [${t.role}] ${t.verdict ? `(${t.verdict})` : ''} ${t.findings || ''}\n\n`
  }
}

Write(`${sessionFolder}/context.md`, contextMd)

console.log(`Results exported to: ${sessionFolder}/results.csv`)
console.log(`Report generated at: ${sessionFolder}/context.md`)
```

**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated with deliverables list
- Summary displayed to user

---

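The `parseCsv` / `toCsv` helpers used throughout the phases are assumed rather than defined in this skill. A naive sketch consistent with the tasks.csv schema (an illustration only: no quoting of embedded commas or newlines, and all fields read back as strings, so numeric columns like `wave` would need `Number()` coercion):

```javascript
// Naive CSV helpers -- real task descriptions contain commas and newlines,
// so a production version would need proper CSV quoting/escaping.
function parseCsv(text) {
  const [header, ...rows] = text.trim().split('\n')
  const cols = header.split(',')
  return rows.map(row => {
    const cells = row.split(',')
    return Object.fromEntries(cols.map((c, i) => [c, cells[i] ?? '']))
  })
}

function toCsv(records) {
  const cols = Object.keys(records[0] || {})
  return [cols.join(','), ...records.map(r => cols.map(c => r[c] ?? '').join(','))].join('\n')
}
```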
## Shared Discovery Board Protocol

All agents (csv-wave and interactive) share a single `discoveries.ndjson` file for cross-task knowledge exchange.

**Format**: One JSON object per line (NDJSON):

```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"ANALYZE-001","type":"cycle_found","data":{"modules":["auth","user"],"depth":2,"description":"Circular dependency between auth and user modules"}}
{"ts":"2026-03-08T10:05:00Z","worker":"REFACTOR-001","type":"file_modified","data":{"file":"src/auth/index.ts","change":"Extracted interface to break cycle","lines_added":15}}
```

**Discovery Types**:

| Type | Data Schema | Description |
|------|-------------|-------------|
| `cycle_found` | `{modules, depth, description}` | Circular dependency detected |
| `god_class_found` | `{file, loc, methods, description}` | God Class/Module identified |
| `coupling_issue` | `{module, fan_in, fan_out, description}` | High coupling detected |
| `dead_code_found` | `{file, type, description}` | Dead code or dead export found |
| `file_modified` | `{file, change, lines_added}` | File change recorded |
| `pattern_found` | `{pattern_name, location, description}` | Code pattern identified |
| `metric_measured` | `{metric, value, unit, module}` | Architecture metric measured |
| `artifact_produced` | `{name, path, producer, type}` | Deliverable created |

**Protocol**:
1. Agents MUST read discoveries.ndjson at start of execution
2. Agents MUST append relevant discoveries during execution
3. Agents MUST NOT modify or delete existing entries
4. Deduplication by `{type, data.file}` or `{type, data.modules}` key

---

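The read side of the protocol above can be sketched as a small tolerant loader. This is a minimal illustration under the stated rules: malformed lines are skipped (matching the "discoveries.ndjson corrupt" error-handling entry) and entries deduplicate by `{type, data.file}` or `{type, data.modules}`:

```javascript
// Parse discoveries.ndjson tolerantly, keeping the first entry per dedup key.
function loadDiscoveries(ndjsonText) {
  const seen = new Set()
  const out = []
  for (const line of ndjsonText.split('\n')) {
    if (!line.trim()) continue
    let entry
    try { entry = JSON.parse(line) } catch { continue } // corrupt line: ignore
    const key = `${entry.type}:${entry.data?.file ?? JSON.stringify(entry.data?.modules ?? '')}`
    if (seen.has(key)) continue
    seen.add(key)
    out.push(entry)
  }
  return out
}
```

Writers, by contrast, only ever append whole lines; the append-only rule means no agent needs to coordinate with another to publish a discovery.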
## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency in tasks | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Review-fix cycle exceeds 3 iterations | Escalate to user with summary of remaining issues |
| Validation fails on build | Create FIX task with compilation error details |
| Architecture baseline unavailable | Fall back to static analysis estimates |
| Continue mode: no session found | List available sessions, prompt user to select |

---

## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when the interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson -- both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Max 3 Fix Cycles**: Review-fix cycle capped at 3 iterations; escalate to user after
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped

---

## Coordinator Role Constraints (Main Agent)

**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.

15. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
    - Spawns agents with task assignments
    - Waits for agent callbacks
    - Merges results and coordinates workflow
    - Manages workflow transitions between phases

16. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
    - Wait patiently for `wait()` calls to complete
    - NOT skip workflow steps due to perceived delays
    - NOT assume agents have failed just because they're taking time
    - Trust the timeout mechanisms defined in the skill

17. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
    - Use `send_input()` to ask questions or provide clarification
    - NOT skip the agent or move to next phase prematurely
    - Give agents opportunity to respond before escalating
    - Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`

18. **No Workflow Shortcuts**: The coordinator MUST NOT:
    - Skip phases or stages defined in the workflow
    - Bypass required approval or review steps
    - Execute dependent tasks before prerequisites complete
    - Assume task completion without explicit agent callback
    - Make up or fabricate agent results

19. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
    - Total execution time may range from 30-90 minutes or longer
    - Each phase may take 10-30 minutes depending on complexity
    - The coordinator must remain active and attentive throughout the entire process
    - Do not terminate or skip steps due to time concerns

---

`.codex/skills/team-arch-opt/agents/completion-handler.md` (new file, 138 lines)

# Completion Handler Agent

Handle pipeline completion action for architecture optimization: present results summary, offer Archive/Keep/Export options, execute chosen action.

## Identity

- **Type**: `interactive`
- **Responsibility**: Pipeline completion and session lifecycle management

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Present complete pipeline summary with improvement metrics
- Offer completion action choices
- Execute chosen action (archive, keep, export)
- Produce structured output

### MUST NOT

- Skip presenting results summary
- Execute destructive actions without confirmation
- Modify source code

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load result artifacts |
| `Write` | builtin | Write export files |
| `Bash` | builtin | Archive/cleanup operations |
| `AskUserQuestion` | builtin | Present completion choices |

---

## Execution

### Phase 1: Results Collection

**Objective**: Gather all pipeline results for summary.

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| tasks.csv | Yes | Master task state |
| Architecture baseline | Yes | Pre-refactoring metrics |
| Validation results | Yes | Post-refactoring metrics |
| Review report | Yes | Code review findings |

**Steps**:

1. Read tasks.csv -- count completed/failed/skipped
2. Read architecture-baseline.json -- extract before metrics
3. Read validation-results.json -- extract after metrics, compute improvements
4. Read review-report.md -- extract final verdict

**Output**: Compiled results summary

---

### Phase 2: Present and Choose

**Objective**: Display results and get user's completion choice.

**Steps**:

1. Display pipeline summary with improvement metrics
2. Present completion action:

```javascript
AskUserQuestion({
  questions: [{
    question: "Architecture optimization complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
      { label: "Keep Active", description: "Keep session for follow-up work or inspection" },
      { label: "Export Results", description: "Export deliverables to a specified location" }
    ]
  }]
})
```

**Output**: User's choice

---

### Phase 3: Execute Action

**Objective**: Execute the chosen completion action.

| Choice | Action |
|--------|--------|
| Archive & Clean | Copy results.csv and context.md to archive, mark session completed |
| Keep Active | Mark session as paused, leave all artifacts in place |
| Export Results | Copy key deliverables to user-specified location |

---

## Structured Output Template

```
## Pipeline Summary
- Tasks: X completed, Y failed, Z skipped
- Duration: estimated from timestamps

## Architecture Improvements
- Metric 1: before -> after (improvement %)
- Metric 2: before -> after (improvement %)

## Deliverables
- Architecture Report: path
- Refactoring Plan: path
- Validation Results: path
- Review Report: path

## Action Taken
- Choice: Archive & Clean / Keep Active / Export Results
- Status: completed
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Result artifacts missing | Report partial summary with available data |
| Archive operation fails | Default to Keep Active |
| Export path invalid | Ask user for valid path |
| Timeout approaching | Default to Keep Active |

---

`.codex/skills/team-arch-opt/agents/fix-cycle-handler.md` (new file, 146 lines)

# Fix Cycle Handler Agent

Manage the review-fix iteration cycle for architecture refactoring. Reads validation/review feedback, applies targeted fixes, re-validates, up to 3 iterations.

## Identity

- **Type**: `interactive`
- **Responsibility**: Iterative fix-verify cycle for refactoring issues

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read validation results and review report to understand failures
- Apply targeted fixes addressing specific feedback items
- Re-validate after each fix attempt
- Track iteration count (max 3)
- Produce structured output with fix summary

### MUST NOT

- Skip reading feedback before attempting fixes
- Apply broad changes unrelated to feedback
- Exceed 3 fix iterations
- Modify code outside the scope of reported issues

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load feedback artifacts and source files |
| `Edit` | builtin | Apply targeted code fixes |
| `Write` | builtin | Write updated artifacts |
| `Bash` | builtin | Run build/test validation |
| `Grep` | builtin | Search for patterns |
| `Glob` | builtin | Find files |

---

## Execution

### Phase 1: Feedback Loading

**Objective**: Load and parse validation/review feedback.

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Validation results | Yes (if validation failed) | From artifacts/validation-results.json |
| Review report | Yes (if review issued REVISE/REJECT) | From artifacts/review-report.md |
| Refactoring plan | Yes | Original plan for reference |
| Discoveries | No | Shared findings |

**Steps**:

1. Read validation-results.json -- identify failed dimensions (build, test, metrics, API)
2. Read review-report.md -- identify Critical/High findings with file:line references
3. Categorize issues by type and priority

**Output**: Prioritized list of issues to fix

---

### Phase 2: Fix Implementation (Iterative)

**Objective**: Apply fixes and re-validate, up to 3 rounds.

**Steps**:

For each iteration (1..3):

1. **Apply fixes**:
   - Address highest-severity issues first
   - Make minimal, targeted changes at reported file:line locations
   - Update imports if structural changes are needed
   - Preserve existing behavior

2. **Self-validate**:
   - Run build check (no new compilation errors)
   - Run test suite (no new test failures)
   - Verify fix addresses the specific concern raised

3. **Check convergence**:

   | Validation Result | Action |
   |-------------------|--------|
   | All checks pass | Exit loop, report success |
   | Some checks still fail, iteration < 3 | Continue to next iteration |
   | Still failing at iteration 3 | Report remaining issues for escalation |

**Output**: Fix results per iteration

---

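The convergence table above reduces to a small decision function; a sketch of that mapping (an illustration of the table, not part of the agent spec itself):

```javascript
// Map (validation outcome, iteration number) to the next action
// per the Phase 2 convergence table. Iterations are 1..3.
function nextAction(allChecksPass, iteration) {
  if (allChecksPass) return 'report_success'
  if (iteration < 3) return 'continue'
  return 'escalate'
}
```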
### Phase 3: Result Reporting

**Objective**: Produce final fix cycle summary.

**Steps**:

1. Update validation-results.json with post-fix metrics
2. Append fix discoveries to discoveries.ndjson
3. Report final status

---

## Structured Output Template

```
## Summary
- Fix cycle completed: N iterations, M issues resolved, K remaining

## Iterations
### Iteration 1
- Fixed: [list of fixes applied with file:line]
- Validation: [pass/fail per dimension]

### Iteration 2 (if needed)
- Fixed: [list of fixes]
- Validation: [pass/fail]

## Final Status
- verdict: PASS | PARTIAL | ESCALATE
- Remaining issues (if any): [list]

## Artifacts Updated
- artifacts/validation-results.json (updated metrics)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Fix introduces new errors | Revert fix, try alternative approach |
| Cannot reproduce reported issue | Log as resolved-by-environment, continue |
| Fix scope exceeds current files | Report scope expansion needed, escalate |
| Timeout approaching | Output partial results with iteration count |
| 3 iterations exhausted | Report remaining issues for user escalation |

---

`.codex/skills/team-arch-opt/agents/plan-reviewer.md` (new file, 150 lines)

# Plan Reviewer Agent

Review architecture report or refactoring plan at user checkpoints, providing interactive approval or revision requests.

## Identity

- **Type**: `interactive`
- **Responsibility**: Review and approve/revise plans before execution proceeds

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read the architecture report or refactoring plan being reviewed
- Produce structured output with clear APPROVE/REVISE verdict
- Include specific file:line references in findings

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Modify source code directly
- Produce unstructured output
- Approve without actually reading the plan

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load plan artifacts and project files |
| `Grep` | builtin | Search for patterns in codebase |
| `Glob` | builtin | Find files by pattern |
| `Bash` | builtin | Run build/test commands |

### Tool Usage Patterns

**Read Pattern**: Load context files before review

```
Read("{session_folder}/artifacts/architecture-report.md")
Read("{session_folder}/artifacts/refactoring-plan.md")
Read("{session_folder}/discoveries.ndjson")
```

---

## Execution

### Phase 1: Context Loading

**Objective**: Load the plan or report to review.

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Architecture report | Yes (if reviewing analysis) | Ranked issue list from analyzer |
| Refactoring plan | Yes (if reviewing design) | Prioritized plan from designer |
| Discoveries | No | Shared findings from prior stages |

**Steps**:

1. Read the artifact being reviewed from session artifacts folder
2. Read discoveries.ndjson for additional context
3. Identify which checkpoint this review corresponds to (CP-1 for analysis, CP-2 for design)

**Output**: Loaded plan context for review

---

### Phase 2: Plan Review

**Objective**: Evaluate plan quality, completeness, and feasibility.

**Steps**:

1. **For architecture report review (CP-1)**:
   - Verify all issue categories are covered (cycles, coupling, cohesion, God Classes, dead code, API bloat)
   - Check that severity rankings are justified with evidence
   - Validate baseline metrics are quantified and reproducible
   - Check scope coverage matches original requirement

2. **For refactoring plan review (CP-2)**:
   - Verify each refactoring has unique REFACTOR-ID and self-contained detail
   - Check priority assignments follow impact/effort matrix
   - Validate target files are non-overlapping between refactorings
   - Verify success criteria are measurable
   - Check that implementation guidance is actionable
   - Assess risk levels and mitigation strategies

3. **Issue classification**:

   | Finding Severity | Condition | Impact |
   |------------------|-----------|--------|
   | Critical | Missing key analysis area or infeasible plan | REVISE required |
   | High | Unclear criteria or overlapping targets | REVISE recommended |
   | Medium | Minor gaps in coverage or detail | Note for improvement |
   | Low | Style or formatting issues | Informational |

**Output**: Review findings with severity classifications

---

### Phase 3: Verdict

**Objective**: Issue APPROVE or REVISE verdict.

| Verdict | Condition | Action |
|---------|-----------|--------|
| APPROVE | No Critical or High findings | Plan is ready for next stage |
| REVISE | Has Critical or High findings | Return specific feedback for revision |

**Output**: Verdict with detailed feedback

---

## Structured Output Template

```
## Summary
- One-sentence verdict: APPROVE or REVISE with rationale

## Findings
- Finding 1: [severity] description with artifact reference
- Finding 2: [severity] description with specific section reference

## Verdict
- APPROVE: Plan is ready for execution
OR
- REVISE: Specific items requiring revision
  1. Issue description + suggested fix
  2. Issue description + suggested fix

## Recommendations
- Optional improvement suggestions (non-blocking)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Artifact file not found | Report in findings, request re-generation |
| Plan structure invalid | Report as Critical finding, REVISE verdict |
| Scope mismatch | Report in findings, note for coordinator |
|
||||
| Timeout approaching | Output current findings with "PARTIAL" status |
|
||||
`.codex/skills/team-arch-opt/instructions/agent-instruction.md` -- Normal file, 114 lines
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS
1. Read shared discoveries: {session_folder}/discoveries.ndjson (if it exists; skip if not)
2. Read project context: .workflow/project-tech.json (if it exists)
3. Read task schema: ~ or <project>/.codex/skills/team-arch-opt/schemas/tasks-schema.md

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Description**: {description}
**Role**: {role}
**Issue Type**: {issue_type}
**Priority**: {priority}
**Target Files**: {target_files}

### Previous Tasks' Findings (Context)
{prev_context}

---

## Execution Protocol

1. **Read discoveries**: Load {session_folder}/discoveries.ndjson for shared exploration findings
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute by role**:

   **If role = analyzer**:
   - Scan the codebase for architecture issues within the target scope
   - Build the import/require graph, detect circular dependencies
   - Identify God Classes (>500 LOC, >10 public methods)
   - Calculate coupling (fan-in/fan-out) and cohesion metrics
   - Detect dead code, dead exports, and layering violations
   - Collect quantified baseline metrics
   - Rank the top 3-7 issues by severity (Critical/High/Medium)
   - Write `{session_folder}/artifacts/architecture-baseline.json` (metrics)
   - Write `{session_folder}/artifacts/architecture-report.md` (ranked issues)

   **If role = designer**:
   - Read the architecture report and baseline from {session_folder}/artifacts/
   - For each issue, select a refactoring strategy by type:
     - CYCLE: interface extraction, dependency inversion, mediator
     - GOD_CLASS: SRP decomposition, extract class/module
     - COUPLING: introduce interface/abstraction, DI, events
     - DUPLICATION: extract shared utility/base class
     - LAYER_VIOLATION: move to correct layer, add facade
     - DEAD_CODE: safe removal with reference verification
     - API_BLOAT: privatize internals, barrel file cleanup
   - Prioritize by impact/effort: P0 (high impact + low effort) down to P3 (low impact or high effort)
   - Assign unique REFACTOR-IDs (REFACTOR-001, 002, ...) with non-overlapping file targets
   - Write `{session_folder}/artifacts/refactoring-plan.md`

   **If role = refactorer**:
   - Read the refactoring plan from {session_folder}/artifacts/refactoring-plan.md
   - Apply refactorings in priority order (P0 first)
   - Preserve existing behavior -- refactoring must not change functionality
   - Update ALL import references when moving/renaming modules
   - Update ALL test files referencing moved/renamed symbols
   - Verify there are no dangling imports after module moves

   **If role = validator**:
   - Read the baseline from {session_folder}/artifacts/architecture-baseline.json
   - Read the plan from {session_folder}/artifacts/refactoring-plan.md
   - Build validation: compile/type-check, zero new errors
   - Test validation: run the test suite, no new failures
   - Metric validation: coupling improved or neutral, no new cycles
   - API validation: public signatures preserved, no dangling references
   - Write `{session_folder}/artifacts/validation-results.json`
   - Set verdict: PASS / WARN / FAIL

   **If role = reviewer**:
   - Read the plan from {session_folder}/artifacts/refactoring-plan.md
   - Review changed files across 5 dimensions:
     - Correctness: no behavior changes, all references updated
     - Pattern consistency: follows existing conventions
     - Completeness: imports, tests, configs all updated
     - Migration safety: no dangling refs, backward compatible
     - Best practices: SOLID, appropriate abstraction
   - Write `{session_folder}/artifacts/review-report.md`
   - Set verdict: APPROVE / REVISE / REJECT

4. **Share discoveries**: Append exploration findings to the shared board:
   ```bash
   echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> {session_folder}/discoveries.ndjson
   ```
5. **Report result**: Return JSON via report_agent_job_result

### Discovery Types to Share
- `cycle_found`: `{modules, depth, description}` -- Circular dependency detected
- `god_class_found`: `{file, loc, methods, description}` -- God Class identified
- `coupling_issue`: `{module, fan_in, fan_out, description}` -- High coupling
- `dead_code_found`: `{file, type, description}` -- Dead code found
- `layer_violation`: `{from, to, description}` -- Layering violation
- `file_modified`: `{file, change, lines_added}` -- File change recorded
- `pattern_found`: `{pattern_name, location, description}` -- Pattern identified
- `metric_measured`: `{metric, value, unit, module}` -- Metric measured
- `artifact_produced`: `{name, path, producer, type}` -- Deliverable created

---

## Output (report_agent_job_result)

Return JSON:

{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "verdict": "PASS|WARN|FAIL|APPROVE|REVISE|REJECT or empty",
  "artifacts_produced": "semicolon-separated artifact paths",
  "error": ""
}
`.codex/skills/team-arch-opt/schemas/tasks-schema.md` -- Normal file, 174 lines
# Team Architecture Optimization -- CSV Schema

## Master CSV: tasks.csv

### Column Definitions

#### Input Columns (Set by Decomposer)

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier (PREFIX-NNN) | `"ANALYZE-001"` |
| `title` | string | Yes | Short task title | `"Analyze architecture"` |
| `description` | string | Yes | Detailed, self-contained task description with goal, inputs, outputs, and success criteria | `"Analyze codebase architecture..."` |
| `role` | enum | Yes | Worker role: `analyzer`, `designer`, `refactorer`, `validator`, `reviewer` | `"analyzer"` |
| `issue_type` | string | No | Architecture issue category: CYCLE, COUPLING, COHESION, GOD_CLASS, DUPLICATION, LAYER_VIOLATION, DEAD_CODE, API_BLOAT | `"CYCLE"` |
| `priority` | enum | No | P0 (Critical), P1 (High), P2 (Medium), P3 (Low) | `"P0"` |
| `target_files` | string | No | Semicolon-separated file paths to focus on | `"src/auth/index.ts;src/user/index.ts"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"ANALYZE-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"ANALYZE-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |

#### Computed Columns (Set by Wave Engine)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[ANALYZE-001] Found 5 architecture issues..."` |

#### Output Columns (Set by Agent)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Found 3 circular deps, 2 God Classes..."` |
| `verdict` | string | Validation/review verdict: PASS, WARN, FAIL, APPROVE, REVISE, REJECT | `"PASS"` |
| `artifacts_produced` | string | Semicolon-separated paths of produced artifacts | `"artifacts/architecture-report.md"` |
| `error` | string | Error message if failed | `""` |

---

### exec_mode Values

| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within a wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |

Interactive tasks appear in the master CSV for dependency tracking but are NOT included in wave-{N}.csv files.
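As a sketch of the note above -- selecting which tasks enter a given wave's CSV -- assuming task rows are already parsed into objects with the schema's field names (`tasksForWave` is an illustrative helper, not part of the skill):

```javascript
// Select the csv-wave tasks for a given wave; interactive tasks stay in the
// master list for dependency tracking but are never emitted into wave-{N}.csv.
function tasksForWave(tasks, wave) {
  return tasks.filter(t =>
    t.exec_mode === 'csv-wave' &&
    t.wave === wave &&
    t.status === 'pending')
}
```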
---

### Role Prefix Mapping

| Role | Prefix | Stage | Responsibility |
|------|--------|-------|----------------|
| analyzer | ANALYZE | 1 | Architecture analysis, baseline metrics, issue identification |
| designer | DESIGN | 2 | Refactoring plan design, strategy selection, prioritization |
| refactorer | REFACTOR / FIX | 3 | Code implementation, refactoring application, targeted fixes |
| validator | VALIDATE | 4 | Build checks, test suite, metric validation, API compatibility |
| reviewer | REVIEW | 4 | Code review for correctness, patterns, completeness, safety |

---

### Example Data

```csv
id,title,description,role,issue_type,priority,target_files,deps,context_from,exec_mode,wave,status,findings,verdict,artifacts_produced,error
"ANALYZE-001","Analyze architecture","PURPOSE: Analyze codebase architecture to identify structural issues\nTASK:\n- Build import graph, detect circular deps\n- Identify God Classes (>500 LOC, >10 methods)\n- Calculate coupling/cohesion metrics\n- Detect dead code and dead exports\nINPUT: Codebase under target scope\nOUTPUT: artifacts/architecture-baseline.json + artifacts/architecture-report.md\nSUCCESS: Ranked issue list with severity, baseline metrics collected\nSESSION: .workflow/.csv-wave/tao-example-20260308","analyzer","","","","","","csv-wave","1","pending","","","",""
"DESIGN-001","Design refactoring plan","PURPOSE: Design prioritized refactoring plan from architecture report\nTASK:\n- For each issue, select refactoring strategy\n- Prioritize by impact/effort ratio (P0-P3)\n- Define measurable success criteria per refactoring\n- Assign unique REFACTOR-IDs with non-overlapping file targets\nINPUT: artifacts/architecture-report.md + artifacts/architecture-baseline.json\nOUTPUT: artifacts/refactoring-plan.md\nSUCCESS: Prioritized plan with self-contained REFACTOR blocks\nSESSION: .workflow/.csv-wave/tao-example-20260308","designer","","","","ANALYZE-001","ANALYZE-001","csv-wave","2","pending","","","",""
"REFACTOR-001","Implement refactorings","PURPOSE: Implement architecture refactoring changes per plan\nTASK:\n- Apply refactorings in priority order (P0 first)\n- Update all import references when moving/renaming\n- Update all test files referencing moved symbols\n- Preserve existing behavior\nINPUT: artifacts/refactoring-plan.md\nOUTPUT: Modified source files\nSUCCESS: All planned structural changes applied, no dangling imports\nSESSION: .workflow/.csv-wave/tao-example-20260308","refactorer","","","","DESIGN-001","DESIGN-001","csv-wave","3","pending","","","",""
"VALIDATE-001","Validate refactoring","PURPOSE: Validate refactoring improves architecture without breaking functionality\nTASK:\n- Build check: zero new compilation errors\n- Test suite: all previously passing tests still pass\n- Metrics: coupling improved or neutral, no new cycles\n- API: public signatures preserved\nINPUT: artifacts/architecture-baseline.json + artifacts/refactoring-plan.md\nOUTPUT: artifacts/validation-results.json\nSUCCESS: All dimensions PASS\nSESSION: .workflow/.csv-wave/tao-example-20260308","validator","","","","REFACTOR-001","REFACTOR-001","csv-wave","4","pending","","","",""
"REVIEW-001","Review refactoring code","PURPOSE: Review refactoring changes for correctness and quality\nTASK:\n- Correctness: no behavior changes, all references updated\n- Pattern consistency: follows existing conventions\n- Completeness: imports, tests, configs all updated\n- Migration safety: no dangling refs, backward compatible\n- Best practices: SOLID principles, appropriate abstraction\nINPUT: artifacts/refactoring-plan.md + changed files\nOUTPUT: artifacts/review-report.md\nSUCCESS: APPROVE verdict (no Critical/High findings)\nSESSION: .workflow/.csv-wave/tao-example-20260308","reviewer","","","","REFACTOR-001","REFACTOR-001","csv-wave","4","pending","","","",""
```

---

### Column Lifecycle

```
Decomposer (Phase 1)     Wave Engine (Phase 2)     Agent (Execution)
--------------------     ---------------------     -----------------
id           ----------> id            ----------> id
title        ----------> title         ----------> (reads)
description  ----------> description   ----------> (reads)
role         ----------> role          ----------> (reads)
issue_type   ----------> issue_type    ----------> (reads)
priority     ----------> priority      ----------> (reads)
target_files ----------> target_files  ----------> (reads)
deps         ----------> deps          ----------> (reads)
context_from ----------> context_from  ----------> (reads)
exec_mode    ----------> exec_mode     ----------> (reads)
                         wave          ----------> (reads)
                         prev_context  ----------> (reads)
                                                   status
                                                   findings
                                                   verdict
                                                   artifacts_produced
                                                   error
```

---

## Output Schema (JSON)

Agent output via `report_agent_job_result` (csv-wave tasks):

```json
{
  "id": "ANALYZE-001",
  "status": "completed",
  "findings": "Found 5 architecture issues: 2 circular deps (auth<->user, service<->repo), 1 God Class (UserManager 850 LOC), 1 dead code cluster (src/legacy/), 1 API bloat (utils/ exports 45 symbols, 12 unused).",
  "verdict": "",
  "artifacts_produced": "artifacts/architecture-baseline.json;artifacts/architecture-report.md",
  "error": ""
}
```

Interactive tasks write their output as structured text or JSON to `interactive/{id}-result.json`.

---

## Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `cycle_found` | `data.modules` (sorted) | `{modules, depth, description}` | Circular dependency detected |
| `god_class_found` | `data.file` | `{file, loc, methods, description}` | God Class/Module identified |
| `coupling_issue` | `data.module` | `{module, fan_in, fan_out, description}` | High coupling detected |
| `dead_code_found` | `data.file+data.type` | `{file, type, description}` | Dead code or dead export |
| `layer_violation` | `data.from+data.to` | `{from, to, description}` | Layering violation detected |
| `file_modified` | `data.file` | `{file, change, lines_added}` | File change recorded |
| `pattern_found` | `data.pattern_name+data.location` | `{pattern_name, location, description}` | Code pattern identified |
| `metric_measured` | `data.metric+data.module` | `{metric, value, unit, module}` | Architecture metric measured |
| `artifact_produced` | `data.path` | `{name, path, producer, type}` | Deliverable created |

### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"ANALYZE-001","type":"cycle_found","data":{"modules":["auth","user"],"depth":2,"description":"Circular dependency: auth imports user, user imports auth"}}
{"ts":"2026-03-08T10:01:00Z","worker":"ANALYZE-001","type":"god_class_found","data":{"file":"src/services/UserManager.ts","loc":850,"methods":15,"description":"UserManager handles auth, profile, permissions, notifications"}}
{"ts":"2026-03-08T10:05:00Z","worker":"ANALYZE-001","type":"metric_measured","data":{"metric":"coupling_score","value":0.72,"unit":"normalized","module":"src/auth/"}}
{"ts":"2026-03-08T10:20:00Z","worker":"REFACTOR-001","type":"file_modified","data":{"file":"src/auth/index.ts","change":"Extracted IAuthService interface to break cycle","lines_added":25}}
{"ts":"2026-03-08T10:25:00Z","worker":"REFACTOR-001","type":"artifact_produced","data":{"name":"refactoring-summary","path":"artifacts/refactoring-plan.md","producer":"designer","type":"markdown"}}
```

> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
---

## Cross-Mechanism Context Flow

| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |

---

## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and are in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has a description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Role valid | role in {analyzer, designer, refactorer, validator, reviewer} | "Invalid role: {role}" |
| Verdict enum | verdict in {PASS, WARN, FAIL, APPROVE, REVISE, REJECT, ""} | "Invalid verdict: {verdict}" |
| Cross-mechanism deps | Interactive-to-CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
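The `wave` column and the dependency rules above can be sketched together. Assuming tasks parsed into objects with the schema's field names, `computeWaves` below is an illustrative helper implementing "wave = 1 + max(dep waves)" with cycle detection; it is not part of the skill:

```javascript
// Compute 1-based wave numbers: a task's wave is 1 + max(wave of its deps).
// Throws on unknown or circular dependencies, matching the validation rules.
function computeWaves(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]))
  const waves = new Map()
  const visiting = new Set()  // recursion stack for cycle detection

  function waveOf(id) {
    if (waves.has(id)) return waves.get(id)
    if (!byId.has(id)) throw new Error(`Unknown dependency: ${id}`)
    if (visiting.has(id)) throw new Error(`Circular dependency detected involving: ${id}`)
    visiting.add(id)
    const deps = (byId.get(id).deps || '').split(';').filter(Boolean)
    const w = deps.length ? 1 + Math.max(...deps.map(waveOf)) : 1
    visiting.delete(id)
    waves.set(id, w)
    return w
  }

  tasks.forEach(t => waveOf(t.id))
  return waves
}
```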
`.codex/skills/team-brainstorm/SKILL.md` -- Normal file, 725 lines
---
name: team-brainstorm
description: Multi-agent brainstorming pipeline with a Generator-Critic loop. Generates ideas, challenges assumptions, synthesizes themes, and evaluates proposals. Supports Quick, Deep, and Full pipeline modes.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"topic description\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---

## Auto Mode

When `--yes` or `-y`: auto-confirm task decomposition, skip interactive validation, and use defaults.

# Team Brainstorm

## Usage

```bash
$team-brainstorm "How should we approach microservices migration?"
$team-brainstorm -c 4 "Innovation strategies for AI-powered developer tools"
$team-brainstorm -y "Quick brainstorm on naming conventions"
$team-brainstorm --continue "brs-microservices-20260308"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume an existing session

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---

## Overview

Multi-agent brainstorming with a Generator-Critic loop: generate ideas across multiple angles, challenge assumptions, synthesize themes, and evaluate proposals. Supports three pipeline modes (Quick/Deep/Full) with configurable depth and parallel ideation.

**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary, for Generator-Critic control)

```
TEAM BRAINSTORM WORKFLOW

Phase 0: Pre-Wave Interactive
├─ Topic clarification + complexity scoring
├─ Pipeline mode selection (quick/deep/full)
└─ Output: refined requirements for decomposition

Phase 1: Requirement → CSV + Classification
├─ Parse topic into brainstorm tasks per selected pipeline
├─ Assign roles: ideator, challenger, synthesizer, evaluator
├─ Classify tasks: csv-wave | interactive (exec_mode)
├─ Compute dependency waves (topological sort → depth grouping)
├─ Generate tasks.csv with wave + exec_mode columns
└─ User validates task breakdown (skip if -y)

Phase 2: Wave Execution Engine (Extended)
├─ For each wave (1..N):
│   ├─ Execute pre-wave interactive tasks (if any)
│   ├─ Build wave CSV (filter csv-wave tasks for this wave)
│   ├─ Inject previous findings into prev_context column
│   ├─ spawn_agents_on_csv(wave CSV)
│   ├─ Execute post-wave interactive tasks (if any)
│   ├─ Merge all results into master tasks.csv
│   └─ Check: any failed? → skip dependents
└─ discoveries.ndjson shared across all modes (append-only)

Phase 3: Post-Wave Interactive
├─ Generator-Critic (GC) loop control
├─ If critique severity >= HIGH: trigger revision wave
└─ Max 2 GC rounds, then force convergence

Phase 4: Results Aggregation
├─ Export final results.csv
├─ Generate context.md with all findings
├─ Display summary: completed/failed/skipped per wave
└─ Offer: view results | retry failed | done
```

---

## Task Classification Rules

Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, clarification, inline utility |

**Classification Decision**:

| Task Property | Classification |
|---------------|---------------|
| Idea generation (single angle) | `csv-wave` |
| Parallel ideation (Full pipeline, multiple angles) | `csv-wave` (parallel in same wave) |
| Idea revision (GC loop) | `csv-wave` |
| Critique / challenge | `csv-wave` |
| Synthesis (theme extraction) | `csv-wave` |
| Evaluation (scoring / ranking) | `csv-wave` |
| GC loop control (severity check → decide revision or convergence) | `interactive` |
| Topic clarification (Phase 0) | `interactive` |

---

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,role,angle,gc_round,deps,context_from,exec_mode,wave,status,findings,gc_signal,severity_summary,error
"IDEA-001","Multi-angle idea generation","Generate 3+ ideas per angle with title, description, assumption, impact","ideator","Technical;Product;Innovation","0","","","csv-wave","1","pending","","","",""
"CHALLENGE-001","Critique generated ideas","Challenge each idea across assumption, feasibility, risk, competition dimensions","challenger","","0","IDEA-001","IDEA-001","csv-wave","2","pending","","","",""
"GC-CHECK-001","GC loop decision","Evaluate critique severity and decide: revision or convergence","gc-controller","","1","CHALLENGE-001","CHALLENGE-001","interactive","3","pending","","","",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description |
| `role` | Input | Worker role: ideator, challenger, synthesizer, evaluator |
| `angle` | Input | Brainstorming angle(s) for ideator tasks (semicolon-separated) |
| `gc_round` | Input | Generator-Critic round number (0 = initial, 1+ = revision) |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` → `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `gc_signal` | Output | Generator-Critic signal: `REVISION_NEEDED` or `CONVERGED` (challenger only) |
| `severity_summary` | Output | Severity counts, e.g. "CRITICAL:1 HIGH:2 MEDIUM:3 LOW:1" |
| `error` | Output | Error message if failed (empty if success) |
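A sketch of how a gc-controller might turn `severity_summary` into `gc_signal`, assuming the "critique severity >= HIGH triggers revision" rule from the workflow above; the helper name and parsing details are illustrative, not part of the skill:

```javascript
// Parse "CRITICAL:1 HIGH:2 MEDIUM:3 LOW:1" into counts, then decide the GC signal.
// Any CRITICAL or HIGH finding triggers a revision wave; otherwise converge.
function decideGcSignal(severitySummary) {
  const counts = {}
  for (const pair of (severitySummary || '').trim().split(/\s+/).filter(Boolean)) {
    const [level, n] = pair.split(':')
    counts[level] = parseInt(n, 10) || 0
  }
  const needsRevision = (counts.CRITICAL || 0) + (counts.HIGH || 0) > 0
  return needsRevision ? 'REVISION_NEEDED' : 'CONVERGED'
}
```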
### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).

---

## Agent Registry (Interactive Agents)

| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| gc-controller | agents/gc-controller.md | 2.3 (wait-respond) | Evaluate critique severity, decide revision vs convergence | post-wave (after challenger wave) |
| topic-clarifier | agents/topic-clarifier.md | 2.3 (wait-respond) | Clarify topic, assess complexity, select pipeline mode | standalone (Phase 0) |

> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.

---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |

---

## Session Structure

```
.workflow/.csv-wave/{session-id}/
├── tasks.csv             # Master state (all tasks, both modes)
├── results.csv           # Final results export
├── discoveries.ndjson    # Shared discovery board (all agents)
├── context.md            # Human-readable report
├── wave-{N}.csv          # Temporary per-wave input (csv-wave only)
└── interactive/          # Interactive task artifacts
    └── {id}-result.json  # Per-task results
```

---

## Implementation

### Session Initialization

```javascript
// UTC+8 wall-clock timestamp (note: the ISO "Z" suffix is kept for lexical sorting)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3

// Clean requirement text (remove flags)
const topic = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

const slug = topic.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
let sessionId = `brs-${slug}-${dateStr}`
let sessionFolder = `.workflow/.csv-wave/${sessionId}`

// Continue mode: find the most recent existing session
if (continueMode) {
  // -d lists the matching directories themselves rather than their contents
  const existing = Bash(`ls -dt .workflow/.csv-wave/brs-* 2>/dev/null | head -1`).trim()
  if (existing) {
    sessionId = existing.split('/').pop()
    sessionFolder = existing
    // Read existing tasks.csv, find incomplete waves, resume from Phase 2
  }
}

Bash(`mkdir -p ${sessionFolder}/interactive`)
```
---

### Phase 0: Pre-Wave Interactive

**Objective**: Clarify the topic, assess complexity, and select the pipeline mode.

**Execution**:

```javascript
const clarifier = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-brainstorm/agents/topic-clarifier.md (MUST read first)
2. Read: .workflow/project-tech.json (if exists)

---

Goal: Clarify brainstorming topic and select pipeline mode
Topic: ${topic}

### Task
1. Assess topic complexity using signal detection:
   - Strategic/systemic keywords (+3): strategy, architecture, system, framework, paradigm
   - Multi-dimensional keywords (+2): multiple, compare, tradeoff, versus, alternative
   - Innovation-focused keywords (+2): innovative, creative, novel, breakthrough
   - Simple/basic keywords (-2): simple, quick, straightforward, basic
2. Score >= 4 → full, 2-3 → deep, <= 1 → quick
3. Suggest divergence angles (e.g., Technical, Product, Innovation, Risk)
4. Return structured result
`
})

const clarifierResult = wait({ ids: [clarifier], timeout_ms: 120000 })

if (clarifierResult.timed_out) {
  send_input({ id: clarifier, message: "Please finalize and output current findings." })
  const retry = wait({ ids: [clarifier], timeout_ms: 60000 })
}

// Parse result for pipeline_mode, angles
close_agent({ id: clarifier })

// Store result
Write(`${sessionFolder}/interactive/topic-clarifier-result.json`, JSON.stringify({
  task_id: "topic-clarification",
  status: "completed",
  pipeline_mode: parsedMode,     // "quick" | "deep" | "full"
  angles: parsedAngles,          // ["Technical", "Product", "Innovation", "Risk"]
  complexity_score: parsedScore,
  timestamp: getUtc8ISOString()
}))
```
|
||||
|
||||
If not AUTO_YES, present user with pipeline mode selection for confirmation:
|
||||
|
||||
```javascript
|
||||
if (!AUTO_YES) {
|
||||
const answer = AskUserQuestion({
|
||||
questions: [{
|
||||
question: `Topic: "${topic}"\nRecommended pipeline: ${pipeline_mode} (complexity: ${complexity_score})\nAngles: ${angles.join(', ')}\n\nApprove?`,
|
||||
header: "Pipeline Selection",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Approve", description: `Use ${pipeline_mode} pipeline` },
|
||||
{ label: "Quick", description: "3 tasks: generate → challenge → synthesize" },
|
||||
{ label: "Deep", description: "6 tasks: generate → challenge → revise → re-challenge → synthesize → evaluate" },
|
||||
{ label: "Full", description: "7 tasks: 3x parallel generation → challenge → revise → synthesize → evaluate" }
|
||||
]
|
||||
}]
|
||||
})
|
||||
// Update pipeline_mode based on user choice
|
||||
}
|
||||
```
|
||||
|
||||
**Success Criteria**:
|
||||
- Refined requirements available for Phase 1 decomposition
|
||||
- Interactive agents closed, results stored
|
||||
|
||||
---
|
||||
|
||||
### Phase 1: Requirement → CSV + Classification
|
||||
|
||||
**Objective**: Build tasks.csv from selected pipeline mode with proper wave assignments.
|
||||
|
||||
**Decomposition Rules**:
|
||||
|
||||
| Pipeline | Tasks | Wave Structure |
|
||||
|----------|-------|---------------|
|
||||
| quick | IDEA-001 → CHALLENGE-001 → SYNTH-001 | 3 waves, serial |
|
||||
| deep | IDEA-001 → CHALLENGE-001 → IDEA-002 → CHALLENGE-002 → SYNTH-001 → EVAL-001 | 6 waves, serial with GC loop |
|
||||
| full | IDEA-001,002,003 (parallel) → CHALLENGE-001 → IDEA-004 → SYNTH-001 → EVAL-001 | 5 waves, fan-out + GC |
|
||||
|
||||
**Classification Rules**:
|
||||
|
||||
All brainstorm work tasks (ideation, challenging, synthesis, evaluation) are `csv-wave`. The GC loop controller between challenger and next ideation revision is `interactive` (post-wave, spawned by orchestrator to decide the GC outcome).
|
||||
|
||||
**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
|
||||
|
||||
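The wave computation above can be sketched as level-by-level BFS layering: a task's wave is 1 plus the deepest wave among its dependencies, and a pass with no ready tasks signals a cycle. A minimal standalone sketch (function name `computeWaves` is illustrative):

```javascript
// Sketch of the wave computation: BFS topological layering, where a task's
// wave is 1 + the deepest wave among its dependencies.
// Throws on circular dependencies, matching the error-handling table.
function computeWaves(tasks) {
  const wave = {}
  const remaining = new Map(tasks.map(t => [t.id, t.deps.filter(Boolean)]))
  while (remaining.size > 0) {
    const ready = [...remaining].filter(([, deps]) => deps.every(d => d in wave))
    if (ready.length === 0) throw new Error('Circular dependency detected')
    for (const [id, deps] of ready) {
      wave[id] = deps.length === 0 ? 1 : 1 + Math.max(...deps.map(d => wave[d]))
      remaining.delete(id)
    }
  }
  return wave
}

const waves = computeWaves([
  { id: 'IDEA-001', deps: [] },
  { id: 'IDEA-002', deps: [] },
  { id: 'CHALLENGE-001', deps: ['IDEA-001', 'IDEA-002'] },
  { id: 'SYNTH-001', deps: ['CHALLENGE-001'] },
])
// { 'IDEA-001': 1, 'IDEA-002': 1, 'CHALLENGE-001': 2, 'SYNTH-001': 3 }
```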
**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).

**Pipeline Task Definitions**:

#### Quick Pipeline (3 csv-wave tasks)

| Task ID | Role | Wave | Deps | Description |
|---------|------|------|------|-------------|
| IDEA-001 | ideator | 1 | (none) | Generate multi-angle ideas: 3+ ideas per angle with title, description, assumption, impact |
| CHALLENGE-001 | challenger | 2 | IDEA-001 | Challenge each idea across 4 dimensions (assumption, feasibility, risk, competition). Assign severity per idea. Output GC signal |
| SYNTH-001 | synthesizer | 3 | CHALLENGE-001 | Synthesize ideas and critiques into 1-3 integrated proposals with feasibility and innovation scores |
#### Deep Pipeline (6 csv-wave tasks + 1 interactive GC check)

Reuses Quick's IDEA-001 and CHALLENGE-001, defers SYNTH-001 until after the GC loop, and adds:

| Task ID | Role | Wave | Deps | Description |
|---------|------|------|------|-------------|
| IDEA-002 | ideator | 3 | CHALLENGE-001 | Revise ideas based on critique feedback (GC Round 1). Address HIGH/CRITICAL challenges |
| CHALLENGE-002 | challenger | 4 | IDEA-002 | Validate revised ideas (GC Round 2). Re-evaluate previously challenged ideas |
| SYNTH-001 | synthesizer | 5 | CHALLENGE-002 | Synthesize all ideas and critiques |
| EVAL-001 | evaluator | 6 | SYNTH-001 | Score and rank proposals: Feasibility 30%, Innovation 25%, Impact 25%, Cost 20% |

GC-CHECK-001 (interactive) runs post-wave after CHALLENGE-001 to decide whether to proceed with revision or skip to synthesis.
#### Full Pipeline (7 csv-wave tasks + GC control)

| Task ID | Role | Wave | Deps | Description |
|---------|------|------|------|-------------|
| IDEA-001 | ideator | 1 | (none) | Generate ideas from angle 1 |
| IDEA-002 | ideator | 1 | (none) | Generate ideas from angle 2 |
| IDEA-003 | ideator | 1 | (none) | Generate ideas from angle 3 |
| CHALLENGE-001 | challenger | 2 | IDEA-001;IDEA-002;IDEA-003 | Critique all generated ideas |
| IDEA-004 | ideator | 3 | CHALLENGE-001 | Revise ideas based on critique |
| SYNTH-001 | synthesizer | 4 | IDEA-004 | Synthesize all ideas and critiques |
| EVAL-001 | evaluator | 5 | SYNTH-001 | Score and rank proposals |

**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)

---

### Phase 2: Wave Execution Engine (Extended)

**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.

```javascript
const failedIds = new Set()
const skippedIds = new Set()
const MAX_GC_ROUNDS = 2
let gcRound = 0

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\n## Wave ${wave}/${maxWave}\n`)

  // 1. Read current master CSV
  const masterCsv = parseCsv(Read(`${sessionFolder}/tasks.csv`))

  // 2. Separate csv-wave and interactive tasks for this wave
  const waveTasks = masterCsv.filter(row => parseInt(row.wave) === wave)
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // 3. Skip tasks whose deps failed
  const executableCsvTasks = []
  for (const task of csvTasks) {
    const deps = task.deps.split(';').filter(Boolean)
    if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
      skippedIds.add(task.id)
      updateMasterCsvRow(sessionFolder, task.id, {
        status: 'skipped',
        error: 'Dependency failed or skipped'
      })
      continue
    }
    executableCsvTasks.push(task)
  }

  // 4. Build prev_context for each csv-wave task
  for (const task of executableCsvTasks) {
    const contextIds = task.context_from.split(';').filter(Boolean)
    const prevFindings = contextIds
      .map(id => {
        const prevRow = masterCsv.find(r => r.id === id)
        if (prevRow && prevRow.status === 'completed' && prevRow.findings) {
          return `[Task ${id}: ${prevRow.title}] ${prevRow.findings}`
        }
        return null
      })
      .filter(Boolean)
      .join('\n')
    task.prev_context = prevFindings || 'No previous context available'
  }

  // 5. Write wave CSV and execute csv-wave tasks
  if (executableCsvTasks.length > 0) {
    const waveHeader = 'id,title,description,role,angle,gc_round,deps,context_from,exec_mode,wave,prev_context'
    const waveRows = executableCsvTasks.map(t =>
      [t.id, t.title, t.description, t.role, t.angle, t.gc_round, t.deps, t.context_from, t.exec_mode, t.wave, t.prev_context]
        .map(cell => `"${String(cell).replace(/"/g, '""')}"`)
        .join(',')
    )
    Write(`${sessionFolder}/wave-${wave}.csv`, [waveHeader, ...waveRows].join('\n'))

    const waveResult = spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: buildBrainstormInstruction(sessionFolder, wave),
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 600,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          gc_signal: { type: "string" },
          severity_summary: { type: "string" },
          error: { type: "string" }
        },
        required: ["id", "status", "findings"]
      }
    })
    // Blocks until wave completes

    // Merge results into master CSV
    const waveResults = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const result of waveResults) {
      updateMasterCsvRow(sessionFolder, result.id, {
        status: result.status,
        findings: result.findings || '',
        gc_signal: result.gc_signal || '',
        severity_summary: result.severity_summary || '',
        error: result.error || ''
      })
      if (result.status === 'failed') failedIds.add(result.id)
    }

    Bash(`rm -f "${sessionFolder}/wave-${wave}.csv"`)
  }

  // 6. Execute post-wave interactive tasks (GC controller)
  for (const task of interactiveTasks) {
    if (task.status !== 'pending') continue
    const deps = task.deps.split(';').filter(Boolean)
    if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
      skippedIds.add(task.id)
      continue
    }

    // Spawn GC controller agent
    const gcAgent = spawn_agent({
      message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-brainstorm/agents/gc-controller.md (MUST read first)
2. Read: ${sessionFolder}/discoveries.ndjson (shared discoveries)

---

Goal: Evaluate critique severity and decide revision vs convergence
Session: ${sessionFolder}
GC Round: ${gcRound}
Max GC Rounds: ${MAX_GC_ROUNDS}

### Context
Read the latest critique file and determine the GC signal.
If REVISION_NEEDED and gcRound < maxRounds: output "REVISION"
If CONVERGED or gcRound >= maxRounds: output "CONVERGE"
`
    })

    const gcResult = wait({ ids: [gcAgent], timeout_ms: 120000 })
    if (gcResult.timed_out) {
      send_input({ id: gcAgent, message: "Please finalize your decision now." })
      wait({ ids: [gcAgent], timeout_ms: 60000 })
    }
    close_agent({ id: gcAgent })

    // Parse GC decision and potentially create/skip revision tasks
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed",
      gc_decision: gcDecision, gc_round: gcRound,
      timestamp: getUtc8ISOString()
    }))

    if (gcDecision === "CONVERGE") {
      // Skip remaining GC tasks, mark revision tasks as skipped
      // Unblock SYNTH directly
    } else {
      gcRound++
      // Let the revision wave proceed naturally
    }

    updateMasterCsvRow(sessionFolder, task.id, { status: 'completed', findings: `GC decision: ${gcDecision}` })
  }
}
```
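The engine above assumes helpers like `parseCsv` and `updateMasterCsvRow`. A minimal pure-function sketch of the row-update logic (hypothetical name `updateCsvRow`; the real helper would read and rewrite `tasks.csv` on disk, and full CSV quoting is simplified here):

```javascript
// Hypothetical sketch of the row-update helper assumed by the wave engine:
// patch one row of a CSV string by id, leaving other rows untouched.
// Quoted/escaped cells are not handled; this only illustrates the merge step.
function updateCsvRow(csvText, id, patch) {
  const [header, ...rows] = csvText.trim().split('\n')
  const cols = header.split(',')
  const updated = rows.map(line => {
    const cells = line.split(',')
    if (cells[cols.indexOf('id')] !== id) return line
    for (const [key, value] of Object.entries(patch)) {
      const i = cols.indexOf(key)
      if (i >= 0) cells[i] = String(value)   // unknown columns are ignored
    }
    return cells.join(',')
  })
  return [header, ...updated].join('\n')
}

const csv = 'id,title,status\nIDEA-001,Generate ideas,pending\nSYNTH-001,Synthesize,pending'
const out = updateCsvRow(csv, 'IDEA-001', { status: 'completed' })
// only IDEA-001's status column changes
```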
**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- GC loop controlled with max 2 rounds

---

### Phase 3: Post-Wave Interactive

**Objective**: Handle any final GC loop convergence and prepare for synthesis.

If the pipeline used GC loops and the final GC decision was CONVERGE or max rounds reached, ensure SYNTH-001 is unblocked and all remaining GC-related tasks are properly marked.

**Success Criteria**:
- Post-wave interactive processing complete
- Interactive agents closed, results stored

---

### Phase 4: Results Aggregation

**Objective**: Generate final results and human-readable report.

```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
Write(`${sessionFolder}/results.csv`, masterCsv)

const tasks = parseCsv(masterCsv)
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')
const skipped = tasks.filter(t => t.status === 'skipped')

const contextContent = `# Team Brainstorm Report

**Session**: ${sessionId}
**Topic**: ${topic}
**Pipeline**: ${pipeline_mode}
**Completed**: ${getUtc8ISOString()}

---

## Summary

| Metric | Count |
|--------|-------|
| Total Tasks | ${tasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| GC Rounds | ${gcRound} |

---

## Wave Execution

${waveDetails}

---

## Task Details

${taskDetails}

---

## Brainstorm Artifacts

- Ideas: discoveries with type "idea" in discoveries.ndjson
- Critiques: discoveries with type "critique" in discoveries.ndjson
- Synthesis: discoveries with type "synthesis" in discoveries.ndjson
- Evaluation: discoveries with type "evaluation" in discoveries.ndjson
`

Write(`${sessionFolder}/context.md`, contextContent)
```

If not AUTO_YES and there are failed tasks, offer retry or view report.

**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed
- Summary displayed to user

---

## Shared Discovery Board Protocol

All agents across all waves share `discoveries.ndjson`. This enables cross-role knowledge sharing.

**Discovery Types**:

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `idea` | `data.title` | `{title, angle, description, assumption, impact}` | Generated idea |
| `critique` | `data.idea_title` | `{idea_title, dimension, severity, challenge, rationale}` | Critique of an idea |
| `theme` | `data.name` | `{name, strength, supporting_ideas[]}` | Extracted theme from synthesis |
| `proposal` | `data.title` | `{title, source_ideas[], feasibility, innovation, description}` | Integrated proposal |
| `evaluation` | `data.proposal_title` | `{proposal_title, weighted_score, rank, recommendation}` | Proposal evaluation |
| `gc_decision` | `data.round` | `{round, signal, severity_counts}` | GC loop decision |

**Format**: NDJSON, each line is self-contained JSON:

```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"IDEA-001","type":"idea","data":{"title":"API Gateway Pattern","angle":"Technical","description":"Centralized API gateway for microservice routing","assumption":"Services need unified entry point","impact":"Simplifies client integration"}}
{"ts":"2026-03-08T10:05:00+08:00","worker":"CHALLENGE-001","type":"critique","data":{"idea_title":"API Gateway Pattern","dimension":"feasibility","severity":"MEDIUM","challenge":"Single point of failure","rationale":"Requires high availability design"}}
```

**Protocol Rules**:
1. Read board before own work → leverage existing context
2. Write discoveries immediately via `echo >>` → don't batch
3. Deduplicate: check existing entries by type + dedup key
4. Append-only: never modify or delete existing lines
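Protocol rules 3-4 can be sketched as a small pure function over the NDJSON text. The function name `appendDiscovery` and the in-memory dedup-key map are illustrative assumptions; the real agents append with `echo >>` against the session file:

```javascript
// Illustrative sketch of protocol rules 3-4: append a discovery only if no
// existing entry shares the same type + dedup key; never rewrite old lines.
// Dedup keys follow the Discovery Types table (subset shown).
const dedupKeys = { idea: 'title', critique: 'idea_title', proposal: 'title' }

function appendDiscovery(ndjsonText, entry) {
  const key = dedupKeys[entry.type]
  const duplicate = ndjsonText.split('\n').filter(Boolean).some(line => {
    try {
      const existing = JSON.parse(line)   // corrupt lines are ignored, per error handling
      return existing.type === entry.type && existing.data[key] === entry.data[key]
    } catch { return false }
  })
  return duplicate ? ndjsonText : ndjsonText + JSON.stringify(entry) + '\n'
}

let board = '{"ts":"t0","worker":"IDEA-001","type":"idea","data":{"title":"API Gateway Pattern"}}\n'
board = appendDiscovery(board, { ts: 't1', worker: 'IDEA-002', type: 'idea', data: { title: 'API Gateway Pattern' } })
// duplicate by (type, title), so the board is unchanged
```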
---

## Consensus Severity Routing

When the challenger returns critique results with severity-graded verdicts:

| Severity | Action |
|----------|--------|
| HIGH | Trigger revision round (GC loop), max 2 rounds total |
| MEDIUM | Log warning, continue pipeline |
| LOW | Treat as consensus reached |

**Constraints**: Max 2 GC rounds (revision cycles). If still HIGH after 2 rounds, force convergence to synthesizer.

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| GC loop exceeds 2 rounds | Force convergence to synthesizer |
| No ideas generated | Report failure, suggest refining topic |
| Continue mode: no session found | List available sessions, prompt user to select |

---

## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when the interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson; both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped

---

## Coordinator Role Constraints (Main Agent)

**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.

11. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
    - Spawns agents with task assignments
    - Waits for agent callbacks
    - Merges results and coordinates workflow
    - Manages workflow transitions between phases

12. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
    - Wait patiently for `wait()` calls to complete
    - NOT skip workflow steps due to perceived delays
    - NOT assume agents have failed just because they're taking time
    - Trust the timeout mechanisms defined in the skill

13. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
    - Use `send_input()` to ask questions or provide clarification
    - NOT skip the agent or move to the next phase prematurely
    - Give agents an opportunity to respond before escalating
    - Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`

14. **No Workflow Shortcuts**: The coordinator MUST NOT:
    - Skip phases or stages defined in the workflow
    - Bypass required approval or review steps
    - Execute dependent tasks before prerequisites complete
    - Assume task completion without explicit agent callback
    - Make up or fabricate agent results

15. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
    - Total execution time may range from 30-90 minutes or longer
    - Each phase may take 10-30 minutes depending on complexity
    - The coordinator must remain active and attentive throughout the entire process
    - Do not terminate or skip steps due to time concerns

---

**File**: `.codex/skills/team-brainstorm/agents/gc-controller.md` (new file, 122 lines)

# GC Controller Agent

Evaluate Generator-Critic loop severity and decide whether to trigger revision or converge to synthesis.

## Identity

- **Type**: `interactive`
- **Responsibility**: GC loop decision making

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read the latest critique file to assess severity
- Make a binary decision: REVISION or CONVERGE
- Respect max GC round limits
- Produce structured output following template

### MUST NOT

- Generate ideas or perform critique (delegate to csv-wave agents)
- Emit more than one decision per invocation
- Ignore the max round constraint

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load critique artifacts and session state |
| `Glob` | builtin | Find critique files in session directory |

---

## Execution

### Phase 1: Context Loading

**Objective**: Load critique results and GC round state

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Session folder | Yes | Path to session directory |
| GC Round | Yes | Current GC round number |
| Max GC Rounds | Yes | Maximum allowed rounds (default: 2) |

**Steps**:

1. Read the session's discoveries.ndjson for critique entries
2. Parse prev_context for the challenger's findings
3. Extract severity counts from the challenger's severity_summary
4. Load current gc_round from the spawn message

**Output**: Severity counts and round state loaded

---

### Phase 2: Decision Making

**Objective**: Determine whether to trigger revision or converge

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Severity counts | Yes | CRITICAL, HIGH, MEDIUM, LOW counts |
| GC round | Yes | Current round number |
| Max rounds | Yes | Maximum allowed rounds |

**Steps**:

1. Check severity threshold:

   | Condition | Decision |
   |-----------|----------|
   | gc_round >= max_rounds | CONVERGE (force, regardless of severity) |
   | CRITICAL count > 0 | REVISION (if rounds remain) |
   | HIGH count > 0 | REVISION (if rounds remain) |
   | All MEDIUM or lower | CONVERGE |

2. Log the decision rationale

**Output**: Decision string "REVISION" or "CONVERGE"
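The threshold table above reads directly as a small decision function; a minimal sketch (function name and parameter shape are illustrative):

```javascript
// Sketch of the Phase 2 decision table: force convergence once the round
// budget is spent; otherwise revise only on CRITICAL/HIGH findings.
function gcDecision({ critical = 0, high = 0 }, gcRound, maxRounds = 2) {
  if (gcRound >= maxRounds) return 'CONVERGE'       // force, regardless of severity
  if (critical > 0 || high > 0) return 'REVISION'   // rounds remain
  return 'CONVERGE'                                 // all MEDIUM or lower
}

gcDecision({ critical: 0, high: 2 }, 0)  // → 'REVISION'
gcDecision({ critical: 1, high: 0 }, 2)  // → 'CONVERGE' (max rounds reached)
gcDecision({ critical: 0, high: 0 }, 1)  // → 'CONVERGE'
```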
---

## Structured Output Template

```
## Summary
- GC Round: <current>/<max>
- Decision: REVISION | CONVERGE

## Severity Assessment
- CRITICAL: <count>
- HIGH: <count>
- MEDIUM: <count>
- LOW: <count>

## Rationale
- <1-2 sentence explanation of decision>

## Next Action
- REVISION: Ideator should address HIGH/CRITICAL challenges in next round
- CONVERGE: Proceed to synthesis phase, skip remaining revision tasks
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No critique data found | Default to CONVERGE (no evidence for revision) |
| Severity parsing fails | Default to CONVERGE with warning |
| Timeout approaching | Output current decision immediately |

---

**File**: `.codex/skills/team-brainstorm/agents/topic-clarifier.md` (new file, 126 lines)

# Topic Clarifier Agent

Assess brainstorming topic complexity, recommend pipeline mode, and suggest divergence angles.

## Identity

- **Type**: `interactive`
- **Responsibility**: Topic analysis and pipeline selection

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Perform text-level analysis only (no source code reading)
- Produce structured output with pipeline recommendation
- Suggest meaningful divergence angles for ideation

### MUST NOT

- Read source code or explore codebase
- Generate ideas (that is the ideator's job)
- Make final pipeline decisions (orchestrator confirms with user)

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load project context if available |

---

## Execution

### Phase 1: Signal Detection

**Objective**: Analyze topic keywords for complexity signals

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Topic text | Yes | The brainstorming topic from user |

**Steps**:

1. Scan topic for complexity signals:

   | Signal | Weight | Keywords |
   |--------|--------|----------|
   | Strategic/systemic | +3 | strategy, architecture, system, framework, paradigm |
   | Multi-dimensional | +2 | multiple, compare, tradeoff, versus, alternative |
   | Innovation-focused | +2 | innovative, creative, novel, breakthrough |
   | Simple/basic | -2 | simple, quick, straightforward, basic |

2. Calculate complexity score

**Output**: Complexity score and matched signals
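The signal scoring and the score-to-pipeline mapping can be sketched as one function. The keyword lists come from the table above; the function name is illustrative, and scoring each signal category at most once is an assumption (the spec does not say whether repeated keywords stack):

```javascript
// Sketch of signal detection (table above) plus the score → pipeline mapping.
// Each signal category contributes its weight at most once.
const signals = [
  { weight: 3, words: ['strategy', 'architecture', 'system', 'framework', 'paradigm'] },
  { weight: 2, words: ['multiple', 'compare', 'tradeoff', 'versus', 'alternative'] },
  { weight: 2, words: ['innovative', 'creative', 'novel', 'breakthrough'] },
  { weight: -2, words: ['simple', 'quick', 'straightforward', 'basic'] },
]

function recommendPipeline(topic) {
  const text = topic.toLowerCase()
  const score = signals.reduce(
    (sum, s) => sum + (s.words.some(w => text.includes(w)) ? s.weight : 0), 0)
  const pipeline = score >= 4 ? 'full' : score >= 2 ? 'deep' : 'quick'
  return { score, pipeline }
}

recommendPipeline('innovative microservice architecture strategy')
// strategic (+3) + innovation (+2) → { score: 5, pipeline: 'full' }
```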
---
|
||||
|
||||
### Phase 2: Pipeline Recommendation
|
||||
|
||||
**Objective**: Map complexity to pipeline mode and suggest angles
|
||||
|
||||
**Steps**:
|
||||
|
||||
1. Map score to pipeline:
|
||||
|
||||
| Score | Complexity | Pipeline |
|
||||
|-------|------------|----------|
|
||||
| >= 4 | High | full (3x parallel ideation + GC + evaluation) |
|
||||
| 2-3 | Medium | deep (serial with GC loop + evaluation) |
|
||||
| 0-1 | Low | quick (generate → challenge → synthesize) |
|
||||
|
||||
2. Identify divergence angles from topic context:
|
||||
- **Technical**: Implementation approaches, architecture patterns
|
||||
- **Product**: User experience, market fit, value proposition
|
||||
- **Innovation**: Novel approaches, emerging tech, disruption potential
|
||||
- **Risk**: Failure modes, mitigation strategies, worst cases
|
||||
- **Business**: Cost, ROI, competitive advantage
|
||||
- **Organizational**: Team structure, process, culture
|
||||
|
||||
3. Select 3-4 most relevant angles based on topic keywords
|
||||
|
||||
**Output**: Pipeline mode, angles, complexity rationale
|
||||
|
||||
---
|
||||
|
||||
## Structured Output Template
|
||||
|
||||
```
|
||||
## Summary
|
||||
- Topic: <topic>
|
||||
- Complexity Score: <score> (<level>)
|
||||
- Recommended Pipeline: <quick|deep|full>
|
||||
|
||||
## Signal Detection
|
||||
- Matched signals: <list of matched signals with weights>
|
||||
|
||||
## Suggested Angles
|
||||
1. <Angle 1>: <why relevant>
|
||||
2. <Angle 2>: <why relevant>
|
||||
3. <Angle 3>: <why relevant>
|
||||
|
||||
## Pipeline Details
|
||||
- <pipeline>: <brief description of what this pipeline does>
|
||||
- Expected tasks: <count>
|
||||
- Parallel ideation: <yes/no>
|
||||
- GC rounds: <0/1/2>
|
||||
- Evaluation: <yes/no>
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Scenario | Resolution |
|
||||
|----------|------------|
|
||||
| Topic too vague | Suggest clarifying questions in output |
|
||||
| No signal matches | Default to "deep" pipeline with general angles |
|
||||
| Timeout approaching | Output current analysis with "PARTIAL" status |
|
||||
105
.codex/skills/team-brainstorm/instructions/agent-instruction.md
Normal file
105
.codex/skills/team-brainstorm/instructions/agent-instruction.md
Normal file
@@ -0,0 +1,105 @@
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS
|
||||
1. Read shared discoveries: .workflow/.csv-wave/{session-id}/discoveries.ndjson (if exists, skip if not)
|
||||
2. Read project context: .workflow/project-tech.json (if exists)
|
||||
|
||||
---
|
||||
|
||||
## Your Task
|
||||
|
||||
**Task ID**: {id}
|
||||
**Title**: {title}
|
||||
**Role**: {role}
|
||||
**Description**: {description}
|
||||
**Angle(s)**: {angle}
|
||||
**GC Round**: {gc_round}
|
||||
|
||||
### Previous Tasks' Findings (Context)
|
||||
{prev_context}
|
||||
|
||||
---
|
||||
|
||||
## Execution Protocol
|
||||
|
||||
1. **Read discoveries**: Load shared discoveries from the session's discoveries.ndjson for cross-task context
|
||||
2. **Use context**: Apply previous tasks' findings from prev_context above
|
||||
3. **Execute by role**:
|
||||
|
||||
### Role: ideator (IDEA-* tasks)
|
||||
- **Initial Generation** (gc_round = 0):
|
||||
- For each angle listed in the Angle(s) field, generate 3+ ideas
|
||||
- Each idea must include: title, description (2-3 sentences), key assumption, potential impact, implementation hint
|
||||
- Self-review: ensure >= 6 ideas total, no duplicates, all angles covered
|
||||
- **GC Revision** (gc_round > 0):
|
||||
- Read critique findings from prev_context
|
||||
- Focus on HIGH/CRITICAL severity challenges
|
||||
- Retain unchallenged ideas intact
|
||||
- Revise challenged ideas with revision rationale
|
||||
- Replace unsalvageable ideas with new alternatives
|
||||
|
||||
### Role: challenger (CHALLENGE-* tasks)

- Read all idea findings from prev_context
- Challenge each idea across 4 dimensions:
  - **Assumption Validity**: Does the core assumption hold? Are there counter-examples?
  - **Feasibility**: Is it feasible given technical, resource, and time constraints?
  - **Risk Assessment**: What is the worst-case scenario? Are there hidden risks?
  - **Competitive Analysis**: Do better alternatives already exist?
- Assign severity per idea: CRITICAL / HIGH / MEDIUM / LOW
- Determine the GC signal:
  - Any CRITICAL or HIGH severity → `REVISION_NEEDED`
  - All MEDIUM or lower → `CONVERGED`
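The severity-to-signal mapping above can be sketched as a small helper. The function names here are illustrative, not part of the skill's API; the actual values are emitted in the worker's JSON report.

```javascript
// Derive the GC signal from per-idea severities:
// any CRITICAL or HIGH challenge forces a revision round; otherwise converged.
function gcSignal(severities) {
  return severities.some(s => s === 'CRITICAL' || s === 'HIGH')
    ? 'REVISION_NEEDED'
    : 'CONVERGED'
}

// Build the severity_summary string in the report's format.
function severitySummary(severities) {
  const counts = { CRITICAL: 0, HIGH: 0, MEDIUM: 0, LOW: 0 }
  for (const s of severities) counts[s] = (counts[s] || 0) + 1
  return `CRITICAL:${counts.CRITICAL} HIGH:${counts.HIGH} MEDIUM:${counts.MEDIUM} LOW:${counts.LOW}`
}
```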
### Role: synthesizer (SYNTH-* tasks)

- Read all idea and critique findings from prev_context
- Execute synthesis steps:
  1. **Theme Extraction**: Identify common themes, rate strength (1-10), list supporting ideas
  2. **Conflict Resolution**: Identify contradictions, determine a resolution approach
  3. **Complementary Grouping**: Group complementary ideas together
  4. **Gap Identification**: Discover uncovered perspectives
  5. **Integrated Proposals**: Generate 1-3 consolidated proposals, each with a feasibility score (1-10) and an innovation score (1-10)
### Role: evaluator (EVAL-* tasks)

- Read synthesis findings from prev_context
- Score each proposal across 4 weighted dimensions:
  - Feasibility (30%): Technical feasibility, resource needs, timeline
  - Innovation (25%): Novelty, differentiation, breakthrough potential
  - Impact (25%): Scope of impact, value creation, problem resolution
  - Cost Efficiency (20%): Implementation cost, risk cost, opportunity cost
- Weighted score = (Feasibility * 0.30) + (Innovation * 0.25) + (Impact * 0.25) + (Cost * 0.20)
- Provide a recommendation per proposal: Strong Recommend / Recommend / Consider / Pass
- Generate the final ranking
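The weighted-score formula above can be sketched as a one-line helper (the function name is illustrative); rounding to 2 decimals matches the score format used in the examples, e.g. 7.85.

```javascript
// Weighted proposal score: each dimension is scored 1-10,
// weights are 30/25/25/20 per the evaluator rubric above.
function weightedScore({ feasibility, innovation, impact, cost }) {
  const raw = feasibility * 0.30 + innovation * 0.25 + impact * 0.25 + cost * 0.20
  return Math.round(raw * 100) / 100   // round to 2 decimals
}
```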
4. **Share discoveries**: Append exploration findings to the shared board:

```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> .workflow/.csv-wave/{session-id}/discoveries.ndjson
```

Discovery types to share:

- `idea`: {title, angle, description, assumption, impact} — generated idea
- `critique`: {idea_title, dimension, severity, challenge, rationale} — critique finding
- `theme`: {name, strength, supporting_ideas[]} — extracted theme
- `proposal`: {title, source_ideas[], feasibility, innovation, description} — integrated proposal
- `evaluation`: {proposal_title, weighted_score, rank, recommendation} — scored proposal

5. **Report result**: Return JSON via `report_agent_job_result`

---

## Output (report_agent_job_result)

Return JSON:

{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "gc_signal": "REVISION_NEEDED | CONVERGED | (empty for non-challenger roles)",
  "severity_summary": "CRITICAL:N HIGH:N MEDIUM:N LOW:N (challenger only, empty for others)",
  "error": ""
}

**Role-specific findings guidance**:

- **ideator**: List idea count, angles covered, key themes. Example: "Generated 8 ideas across Technical, Product, Innovation. Top ideas: API Gateway, Event Sourcing, DevEx Platform."
- **challenger**: Summarize severity counts and the GC signal. Example: "Challenged 8 ideas. 2 HIGH (require revision), 3 MEDIUM, 3 LOW. GC signal: REVISION_NEEDED."
- **synthesizer**: List proposal count and key themes. Example: "Synthesized 3 proposals from 5 themes. Top: Infrastructure Modernization (feasibility:8, innovation:7)."
- **evaluator**: List the ranking and top recommendation. Example: "Ranked 3 proposals. #1: Infrastructure Modernization (7.85) - Strong Recommend."
---

**New file**: `.codex/skills/team-brainstorm/schemas/tasks-schema.md` (171 lines)
# Team Brainstorm — CSV Schema

## Master CSV: tasks.csv

### Column Definitions

#### Input Columns (Set by Decomposer)

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"IDEA-001"` |
| `title` | string | Yes | Short task title | `"Multi-angle idea generation"` |
| `description` | string | Yes | Detailed task description (self-contained) | `"Generate 3+ ideas per angle..."` |
| `role` | string | Yes | Worker role: ideator, challenger, synthesizer, evaluator | `"ideator"` |
| `angle` | string | No | Brainstorming angle(s) for ideator tasks (semicolon-separated) | `"Technical;Product;Innovation"` |
| `gc_round` | integer | Yes | Generator-Critic round number (0 = initial, 1+ = revision) | `"0"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"IDEA-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"IDEA-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |

#### Computed Columns (Set by Wave Engine)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[Task IDEA-001] Generated 8 ideas..."` |

#### Output Columns (Set by Agent)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` → `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Generated 8 ideas across 3 angles..."` |
| `gc_signal` | string | Generator-Critic signal (challenger only): `REVISION_NEEDED` or `CONVERGED` | `"REVISION_NEEDED"` |
| `severity_summary` | string | Severity count summary (challenger only) | `"CRITICAL:0 HIGH:2 MEDIUM:3 LOW:1"` |
| `error` | string | Error message if failed | `""` |
---

### exec_mode Values

| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within a wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |

Interactive tasks appear in the master CSV for dependency tracking but are NOT included in wave-{N}.csv files.

---

### Example Data

```csv
id,title,description,role,angle,gc_round,deps,context_from,exec_mode,wave,status,findings,gc_signal,severity_summary,error
"IDEA-001","Multi-angle idea generation","Generate 3+ ideas per angle with title, description, assumption, and potential impact. Cover all assigned angles comprehensively.","ideator","Technical;Product;Innovation","0","","","csv-wave","1","pending","","","",""
"IDEA-002","Parallel angle generation (Risk)","Generate 3+ ideas focused on Risk angle with title, description, assumption, and potential impact.","ideator","Risk","0","","","csv-wave","1","pending","","","",""
"CHALLENGE-001","Critique generated ideas","Read all idea artifacts. Challenge each idea across assumption validity, feasibility, risk, and competition dimensions. Assign severity (CRITICAL/HIGH/MEDIUM/LOW) per idea. Output GC signal.","challenger","","0","IDEA-001;IDEA-002","IDEA-001;IDEA-002","csv-wave","2","pending","","","",""
"GC-CHECK-001","GC loop decision","Evaluate critique severity counts. If any HIGH/CRITICAL: REVISION_NEEDED. Else: CONVERGED.","gc-controller","","1","CHALLENGE-001","CHALLENGE-001","interactive","3","pending","","","",""
"IDEA-003","Revise ideas (GC Round 1)","Address HIGH/CRITICAL challenges from critique. Retain unchallenged ideas intact. Replace unsalvageable ideas.","ideator","","1","GC-CHECK-001","CHALLENGE-001","csv-wave","4","pending","","","",""
"SYNTH-001","Synthesize proposals","Extract themes from ideas and critiques. Resolve conflicts. Generate 1-3 integrated proposals with feasibility and innovation scores.","synthesizer","","0","IDEA-003","IDEA-001;IDEA-002;IDEA-003;CHALLENGE-001","csv-wave","5","pending","","","",""
"EVAL-001","Score and rank proposals","Score each proposal: Feasibility 30%, Innovation 25%, Impact 25%, Cost 20%. Generate final ranking and recommendation.","evaluator","","0","SYNTH-001","SYNTH-001","csv-wave","6","pending","","","",""
```

---

### Column Lifecycle

```
Decomposer (Phase 1)      Wave Engine (Phase 2)     Agent (Execution)
────────────────────      ─────────────────────     ─────────────────
id           ───────────► id           ───────────► id
title        ───────────► title        ───────────► (reads)
description  ───────────► description  ───────────► (reads)
role         ───────────► role         ───────────► (reads)
angle        ───────────► angle        ───────────► (reads)
gc_round     ───────────► gc_round     ───────────► (reads)
deps         ───────────► deps         ───────────► (reads)
context_from ───────────► context_from ───────────► (reads)
exec_mode    ───────────► exec_mode    ───────────► (reads)
                          wave         ───────────► (reads)
                          prev_context ───────────► (reads)
                                                    status
                                                    findings
                                                    gc_signal
                                                    severity_summary
                                                    error
```

---

## Output Schema (JSON)

Agent output via `report_agent_job_result` (csv-wave tasks):

```json
{
  "id": "IDEA-001",
  "status": "completed",
  "findings": "Generated 8 ideas across Technical, Product, Innovation angles. Key themes: API gateway pattern, event-driven architecture, developer experience tools.",
  "gc_signal": "",
  "severity_summary": "",
  "error": ""
}
```

Challenger-specific output:

```json
{
  "id": "CHALLENGE-001",
  "status": "completed",
  "findings": "Challenged 8 ideas. 2 HIGH severity (require revision), 3 MEDIUM, 3 LOW.",
  "gc_signal": "REVISION_NEEDED",
  "severity_summary": "CRITICAL:0 HIGH:2 MEDIUM:3 LOW:3",
  "error": ""
}
```

Interactive tasks output structured text or JSON written to `interactive/{id}-result.json`.

---
## Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `idea` | `data.title` | `{title, angle, description, assumption, impact}` | Generated brainstorm idea |
| `critique` | `data.idea_title` | `{idea_title, dimension, severity, challenge, rationale}` | Critique of an idea |
| `theme` | `data.name` | `{name, strength, supporting_ideas[]}` | Extracted theme from synthesis |
| `proposal` | `data.title` | `{title, source_ideas[], feasibility, innovation, description}` | Integrated proposal |
| `evaluation` | `data.proposal_title` | `{proposal_title, weighted_score, rank, recommendation}` | Scored proposal |
| `gc_decision` | `data.round` | `{round, signal, severity_counts}` | GC loop decision record |

### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-08T10:00:00+08:00","worker":"IDEA-001","type":"idea","data":{"title":"API Gateway Pattern","angle":"Technical","description":"Centralized API gateway for microservice routing","assumption":"Services need unified entry point","impact":"Simplifies client integration"}}
{"ts":"2026-03-08T10:01:00+08:00","worker":"IDEA-001","type":"idea","data":{"title":"Event Sourcing Migration","angle":"Technical","description":"Adopt event sourcing for service state management","assumption":"Current state is hard to trace across services","impact":"Full audit trail and temporal queries"}}
{"ts":"2026-03-08T10:05:00+08:00","worker":"CHALLENGE-001","type":"critique","data":{"idea_title":"API Gateway Pattern","dimension":"feasibility","severity":"MEDIUM","challenge":"Single point of failure risk","rationale":"Requires HA design with circuit breakers"}}
{"ts":"2026-03-08T10:10:00+08:00","worker":"SYNTH-001","type":"theme","data":{"name":"Infrastructure Modernization","strength":8,"supporting_ideas":["API Gateway Pattern","Event Sourcing Migration"]}}
```

> Both csv-wave and interactive agents read and write the same discoveries.ndjson file.
---

## Cross-Mechanism Context Flow

| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |

---

## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and are in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has a description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Valid role | role in {ideator, challenger, synthesizer, evaluator, gc-controller} | "Invalid role: {role}" |
| GC round non-negative | gc_round >= 0 | "Invalid gc_round: {value}" |
| Cross-mechanism deps | Interactive→CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
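A minimal sketch of a few of these checks (duplicate IDs, unknown/self deps, and cycle detection via Kahn's algorithm). The function shape is illustrative; error strings follow the table above.

```javascript
// Validate a task list: returns an array of error strings (empty = valid).
function validateTasks(tasks) {
  const errors = []
  const ids = new Set()
  for (const t of tasks) {
    if (ids.has(t.id)) errors.push(`Duplicate task ID: ${t.id}`)
    ids.add(t.id)
  }
  const depsOf = t => (t.deps || '').split(';').filter(Boolean)
  for (const t of tasks) {
    for (const dep of depsOf(t)) {
      if (dep === t.id) errors.push(`Self-dependency: ${t.id}`)
      else if (!ids.has(dep)) errors.push(`Unknown dependency: ${dep}`)
    }
  }
  // Kahn's algorithm: if the sort cannot consume every task, a cycle exists.
  const indeg = new Map(tasks.map(t => [t.id, 0]))
  for (const t of tasks)
    for (const dep of depsOf(t))
      if (indeg.has(dep)) indeg.set(t.id, indeg.get(t.id) + 1)
  const queue = tasks.filter(t => indeg.get(t.id) === 0).map(t => t.id)
  let visited = 0
  while (queue.length) {
    const id = queue.shift()
    visited++
    for (const t of tasks)
      if (depsOf(t).includes(id)) {
        indeg.set(t.id, indeg.get(t.id) - 1)
        if (indeg.get(t.id) === 0) queue.push(t.id)
      }
  }
  if (visited < tasks.length) {
    const cyclic = tasks.filter(t => indeg.get(t.id) > 0).map(t => t.id)
    errors.push(`Circular dependency detected involving: ${cyclic.join(', ')}`)
  }
  return errors
}
```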
---

**New file**: `.codex/skills/team-coordinate/SKILL.md` (667 lines)
---
name: team-coordinate
description: Universal team coordination skill with dynamic role generation. Analyzes the task, generates worker roles at runtime, decomposes it into CSV tasks with dependency waves, and dispatches parallel CSV agents per wave. The coordinator is the orchestrator; all workers are CSV or interactive agents with dynamically generated instructions.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"task description\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, and use defaults.
# Team Coordinate

## Usage

```bash
$team-coordinate "Implement user authentication with JWT tokens"
$team-coordinate -c 4 "Refactor payment module and write API documentation"
$team-coordinate -y "Analyze codebase security and fix vulnerabilities"
$team-coordinate --continue "tc-auth-jwt-20260308"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume an existing session

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---

## Overview

Universal team coordination: analyze task -> detect capabilities -> generate dynamic role instructions -> decompose into dependency-ordered CSV tasks -> execute wave-by-wave -> deliver results. Only the **coordinator** (this orchestrator) is built-in. All worker roles are **dynamically generated** as CSV agent instructions at runtime.

**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)

```
+-------------------------------------------------------------------+
|                     TEAM COORDINATE WORKFLOW                      |
+-------------------------------------------------------------------+
|
|  Phase 0: Pre-Wave Interactive (Requirement Clarification)
|  +- Parse user task description
|  +- Clarify ambiguous requirements (AskUserQuestion)
|  +- Output: refined requirements for decomposition
|
|  Phase 1: Requirement -> CSV + Classification
|  +- Signal detection: keyword scan -> capability inference
|  +- Dependency graph construction (DAG)
|  +- Role minimization (cap at 5 roles)
|  +- Classify tasks: csv-wave | interactive (exec_mode)
|  +- Compute dependency waves (topological sort)
|  +- Generate tasks.csv with wave + exec_mode columns
|  +- Generate per-role agent instructions dynamically
|  +- User validates task breakdown (skip if -y)
|
|  Phase 2: Wave Execution Engine (Extended)
|  +- For each wave (1..N):
|  |  +- Execute pre-wave interactive tasks (if any)
|  |  +- Build wave CSV (filter csv-wave tasks for this wave)
|  |  +- Inject previous findings into prev_context column
|  |  +- spawn_agents_on_csv(wave CSV)
|  |  +- Execute post-wave interactive tasks (if any)
|  |  +- Merge all results into master tasks.csv
|  |  +- Check: any failed? -> skip dependents
|  +- discoveries.ndjson shared across all modes (append-only)
|
|  Phase 3: Post-Wave Interactive (Completion Action)
|  +- Pipeline completion report
|  +- Interactive completion choice (Archive/Keep/Export)
|  +- Final aggregation / report
|
|  Phase 4: Results Aggregation
|  +- Export final results.csv
|  +- Generate context.md with all findings
|  +- Display summary: completed/failed/skipped per wave
|  +- Offer: view results | retry failed | done
|
+-------------------------------------------------------------------+
```

---

## Task Classification Rules

Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, needs clarification, revision cycles |

**Classification Decision**:

| Task Property | Classification |
|---------------|---------------|
| Single-pass code implementation | `csv-wave` |
| Single-pass analysis or documentation | `csv-wave` |
| Research with defined scope | `csv-wave` |
| Testing with known targets | `csv-wave` |
| Design requiring iterative refinement | `interactive` |
| Plan requiring user approval checkpoint | `interactive` |
| Revision cycle (fix-verify loop) | `interactive` |
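The decision table above could be approximated with a keyword heuristic like the following sketch. The function name and keyword list are assumptions for illustration, not a fixed rule set in this skill.

```javascript
// Heuristic exec_mode classifier: descriptions hinting at approval,
// iteration, or revision cycles go interactive; everything else is csv-wave.
const INTERACTIVE_HINTS = ['approval', 'approve', 'iterative', 'refine', 'revision', 'fix-verify', 'clarif']

function classifyExecMode(description) {
  const text = description.toLowerCase()
  return INTERACTIVE_HINTS.some(k => text.includes(k)) ? 'interactive' : 'csv-wave'
}
```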
---

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,role,responsibility_type,output_type,deps,context_from,exec_mode,wave,status,findings,artifacts_produced,error
"RESEARCH-001","Investigate auth patterns","Research JWT authentication patterns and best practices","researcher","orchestration","artifact","","","csv-wave","1","pending","","",""
"IMPL-001","Implement auth module","Build JWT authentication middleware","developer","code-gen","codebase","RESEARCH-001","RESEARCH-001","csv-wave","2","pending","","",""
"TEST-001","Validate auth implementation","Write and run tests for auth module","tester","validation","artifact","IMPL-001","IMPL-001","csv-wave","3","pending","","",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (PREFIX-NNN format) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description with goal, steps, success criteria |
| `role` | Input | Dynamic role name (researcher, developer, analyst, etc.) |
| `responsibility_type` | Input | `orchestration`, `read-only`, `code-gen`, `code-gen-docs`, `validation` |
| `output_type` | Input | `artifact` (session files), `codebase` (project files), `mixed` |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `artifacts_produced` | Output | Semicolon-separated paths of produced artifacts |
| `error` | Output | Error message if failed (empty on success) |

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).

---

## Agent Registry (Interactive Agents)

| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| Plan Reviewer | agents/plan-reviewer.md | 2.3 (send_input cycle) | Review and approve plans before execution waves | pre-wave |
| Completion Handler | agents/completion-handler.md | 2.3 (send_input cycle) | Handle pipeline completion action (Archive/Keep/Export) | standalone |

> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload it.

---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `task-analysis.json` | Phase 0/1 output: capabilities, dependency graph, roles | Created in Phase 1 |
| `role-instructions/` | Dynamically generated per-role instruction templates | Created in Phase 1 |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |

---

## Session Structure

```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv              # Master state (all tasks, both modes)
+-- results.csv            # Final results export
+-- discoveries.ndjson     # Shared discovery board (all agents)
+-- context.md             # Human-readable report
+-- task-analysis.json     # Phase 1 analysis output
+-- wave-{N}.csv           # Temporary per-wave input (csv-wave only)
+-- role-instructions/     # Dynamically generated instruction templates
|   +-- researcher.md
|   +-- developer.md
|   +-- ...
+-- artifacts/             # All deliverables from workers
|   +-- research-findings.md
|   +-- implementation-summary.md
|   +-- ...
+-- interactive/           # Interactive task artifacts
|   +-- {id}-result.json
+-- wisdom/                # Cross-task knowledge
    +-- learnings.md
    +-- decisions.md
```

---

## Implementation

### Session Initialization

```javascript
// UTC+8 timestamp helper used for session IDs and records
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3

// Strip flags from the arguments to recover the raw requirement text
const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

// Slug: lowercase, keep alphanumerics and CJK, max 40 chars
const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `tc-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`

Bash(`mkdir -p ${sessionFolder}/artifacts ${sessionFolder}/role-instructions ${sessionFolder}/interactive ${sessionFolder}/wisdom`)

// Initialize discoveries.ndjson
Write(`${sessionFolder}/discoveries.ndjson`, '')

// Initialize wisdom files
Write(`${sessionFolder}/wisdom/learnings.md`, '# Learnings\n')
Write(`${sessionFolder}/wisdom/decisions.md`, '# Decisions\n')
```

---

### Phase 0: Pre-Wave Interactive (Requirement Clarification)

**Objective**: Parse the user task, clarify ambiguities, and prepare for decomposition.

**Workflow**:

1. **Parse user task description** from $ARGUMENTS

2. **Check for existing sessions** (continue mode):
   - Scan `.workflow/.csv-wave/tc-*/tasks.csv` for sessions with pending tasks
   - If `--continue`: resume the specified or most recent session, skip to Phase 2
   - If an active session is found: ask the user whether to resume or start new

3. **Clarify if ambiguous** (skip if AUTO_YES):

```javascript
AskUserQuestion({
  questions: [{
    question: "Please confirm the task scope and deliverables:",
    header: "Task Clarification",
    multiSelect: false,
    options: [
      { label: "Proceed as described", description: "Task is clear enough" },
      { label: "Narrow scope", description: "Specify files/modules/areas" },
      { label: "Add constraints", description: "Timeline, tech stack, style" }
    ]
  }]
})
```

4. **Output**: Refined requirement string for Phase 1

**Success Criteria**:

- Refined requirements available for Phase 1 decomposition
- Existing session detected and handled if applicable
---

### Phase 1: Requirement -> CSV + Classification

**Objective**: Analyze the task, detect capabilities, build the dependency graph, and generate tasks.csv plus role instructions.

**Decomposition Rules**:

1. **Signal Detection** -- scan the task description for capability keywords:

| Signal | Keywords | Capability | Prefix | Responsibility Type |
|--------|----------|------------|--------|---------------------|
| Research | investigate, explore, compare, survey, find, research, discover | researcher | RESEARCH | orchestration |
| Writing | write, draft, document, article, report, summarize | writer | DRAFT | code-gen-docs |
| Coding | implement, build, code, fix, refactor, develop, create, migrate | developer | IMPL | code-gen |
| Design | design, architect, plan, structure, blueprint, schema | designer | DESIGN | orchestration |
| Analysis | analyze, review, audit, assess, evaluate, inspect, diagnose | analyst | ANALYSIS | read-only |
| Testing | test, verify, validate, QA, quality, check, coverage | tester | TEST | validation |
| Planning | plan, breakdown, organize, schedule, decompose, roadmap | planner | PLAN | orchestration |
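The keyword scan above can be sketched as a simple substring match. The keyword lists here are abbreviated for illustration; the full sets are in the table.

```javascript
// Abbreviated capability keyword sets (see the Signal Detection table)
const SIGNALS = {
  researcher: ['investigate', 'explore', 'compare', 'survey', 'research'],
  writer:     ['write', 'draft', 'document', 'report', 'summarize'],
  developer:  ['implement', 'build', 'code', 'fix', 'refactor', 'migrate'],
  designer:   ['design', 'architect', 'blueprint', 'schema'],
  analyst:    ['analyze', 'review', 'audit', 'assess', 'evaluate'],
  tester:     ['test', 'verify', 'validate', 'coverage'],
  planner:    ['plan', 'breakdown', 'organize', 'roadmap']
}

// Return every capability whose keywords appear in the requirement text.
function detectCapabilities(requirement) {
  const text = requirement.toLowerCase()
  return Object.keys(SIGNALS).filter(cap => SIGNALS[cap].some(k => text.includes(k)))
}
```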
2. **Dependency Graph** -- build a DAG using natural ordering tiers:

| Tier | Capabilities | Description |
|------|-------------|-------------|
| 0 | researcher, planner | Knowledge gathering / planning |
| 1 | designer | Design (requires tier 0 if present) |
| 2 | writer, developer | Creation (requires design/plan if present) |
| 3 | analyst, tester | Validation (requires artifacts to validate) |

3. **Role Minimization** -- merge overlapping capabilities, cap at 5 roles

4. **Key File Inference** -- extract nouns from the task description, map them to likely file paths

5. **output_type derivation**:

| Task Signal | output_type |
|-------------|-------------|
| "write report", "analyze", "research" | `artifact` |
| "update code", "modify", "fix bug" | `codebase` |
| "implement feature + write summary" | `mixed` |

**Classification Rules**:

| Task Property | exec_mode |
|---------------|-----------|
| Single-pass implementation/analysis/documentation | `csv-wave` |
| Needs iterative user approval | `interactive` |
| Fix-verify revision cycle | `interactive` |
| Standard research, coding, testing | `csv-wave` |
**Wave Computation**: Kahn's BFS topological sort with depth tracking.
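The wave computation can be sketched as a level-by-level BFS: tasks with no unresolved dependencies form wave 1, tasks whose dependencies are all assigned form the next wave, and so on. The function name is illustrative and assumes the tasks.csv dependency format (semicolon-separated `deps`).

```javascript
// Assign each task a 1-based wave number via breadth-first levels
// over the dependency DAG (Kahn-style: only tasks whose deps are
// all resolved become eligible for the current depth).
function computeWaves(tasks) {
  const deps = id => (tasks.find(t => t.id === id).deps || '').split(';').filter(Boolean)
  const wave = new Map()
  let frontier = tasks.filter(t => deps(t.id).length === 0).map(t => t.id)
  let depth = 1
  while (frontier.length) {
    for (const id of frontier) wave.set(id, depth)
    // Next frontier: unassigned tasks whose deps all have a wave
    frontier = tasks
      .filter(t => !wave.has(t.id) && deps(t.id).every(d => wave.has(d)))
      .map(t => t.id)
    depth++
  }
  return wave   // tasks left unassigned would indicate a cycle
}
```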
```javascript
// After task analysis, generate dynamic role instruction templates
for (const role of analysisResult.roles) {
  const instruction = generateRoleInstruction(role, sessionFolder)
  Write(`${sessionFolder}/role-instructions/${role.name}.md`, instruction)
}

// Generate tasks.csv from the dependency graph
const tasks = buildTasksCsv(analysisResult)
Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))
Write(`${sessionFolder}/task-analysis.json`, JSON.stringify(analysisResult, null, 2))
```
**User Validation**: Display the task breakdown with wave + exec_mode assignments (skip if AUTO_YES).

**Success Criteria**:

- tasks.csv created with valid schema, wave, and exec_mode assignments
- Role instruction templates generated in role-instructions/
- task-analysis.json written
- No circular dependencies
- User approved (or AUTO_YES)

---
|
||||
### Phase 2: Wave Execution Engine (Extended)

**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.

```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
const maxWave = Math.max(...tasks.map(t => t.wave))

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\nWave ${wave}/${maxWave}`)

  // 1. Separate tasks by exec_mode
  const waveTasks = tasks.filter(t => t.wave === wave && t.status === 'pending')
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // 2. Check dependencies -- skip tasks whose deps failed
  for (const task of waveTasks) {
    const depIds = (task.deps || '').split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${depIds.filter((id, i) =>
        ['failed', 'skipped'].includes(depStatuses[i])).join(', ')}`
    }
  }

  // 3. Execute pre-wave interactive tasks (e.g., plan approval)
  const preWaveInteractive = interactiveTasks.filter(t => t.status === 'pending')
  for (const task of preWaveInteractive) {
    // Read agent definition
    Read(`agents/plan-reviewer.md`)

    const agent = spawn_agent({
      message: `## TASK ASSIGNMENT\n\n### MANDATORY FIRST STEPS\n1. Read: ${sessionFolder}/discoveries.ndjson\n\nGoal: ${task.description}\nScope: ${task.title}\nSession: ${sessionFolder}\n\n### Previous Context\n${buildPrevContext(task, tasks)}`
    })
    const result = wait({ ids: [agent], timeout_ms: 600000 })
    if (result.timed_out) {
      send_input({ id: agent, message: "Please finalize and output current findings." })
      wait({ ids: [agent], timeout_ms: 120000 })
    }
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed", findings: parseFindings(result),
      timestamp: getUtc8ISOString()
    }))
    close_agent({ id: agent })
    task.status = 'completed'
    task.findings = parseFindings(result)
  }

  // 4. Build prev_context for csv-wave tasks
  const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
  for (const task of pendingCsvTasks) {
    task.prev_context = buildPrevContext(task, tasks)
  }

  if (pendingCsvTasks.length > 0) {
    // 5. Write wave CSV
    Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))

    // 6. Determine instruction for this wave (use role-specific instruction)
    // Group tasks by role, build combined instruction
    const waveInstruction = buildWaveInstruction(pendingCsvTasks, sessionFolder, wave)

    // 7. Execute wave via spawn_agents_on_csv
    spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: waveInstruction,
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 900,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          artifacts_produced: { type: "string" },
          error: { type: "string" }
        }
      }
    })

    // 8. Merge results into master CSV
    const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const r of results) {
      const t = tasks.find(t => t.id === r.id)
      if (t) Object.assign(t, r)
    }
  }

  // 9. Update master CSV
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))

  // 10. Cleanup temp files
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)

  // 11. Display wave summary
  const completed = waveTasks.filter(t => t.status === 'completed').length
  const failed = waveTasks.filter(t => t.status === 'failed').length
  const skipped = waveTasks.filter(t => t.status === 'skipped').length
  console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed, ${skipped} skipped`)
}
```

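The engine above calls `buildPrevContext(task, tasks)` without defining it. A plausible sketch, assuming it aggregates the `findings` of the completed tasks listed in `context_from`, tagged with the source task id (the exact format is an assumption):

```javascript
// Aggregate upstream findings for a task. context_from is a
// semicolon-separated list of task ids; only completed tasks with
// non-empty findings contribute.
function buildPrevContext(task, tasks) {
  const ids = (task.context_from || '').split(';').filter(Boolean)
  const lines = []
  for (const id of ids) {
    const src = tasks.find(t => t.id === id)
    if (src && src.status === 'completed' && src.findings) {
      lines.push(`[${id}] ${src.findings}`)
    }
  }
  return lines.length ? lines.join('\n') : '(no upstream context)'
}
```

Reading from the task array parsed out of the master CSV (rather than from coordinator memory) is what Core Rule 5 requires.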
**Success Criteria**:

- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when a predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms

---

### Phase 3: Post-Wave Interactive (Completion Action)

**Objective**: Pipeline completion report and interactive completion choice.

```javascript
// 1. Generate pipeline summary
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
const completed = tasks.filter(t => t.status === 'completed')
const failed = tasks.filter(t => t.status === 'failed')

console.log(`
============================================
TASK COMPLETE

Deliverables:
${completed.map(t => `  - ${t.id}: ${t.title} (${t.role})`).join('\n')}

Pipeline: ${completed.length}/${tasks.length} tasks
Duration: <elapsed>
Session: ${sessionFolder}
============================================
`)

// 2. Completion action
if (!AUTO_YES) {
  const choice = AskUserQuestion({
    questions: [{
      question: "Team pipeline complete. What would you like to do?",
      header: "Completion",
      multiSelect: false,
      options: [
        { label: "Archive & Clean (Recommended)", description: "Archive session, output final summary" },
        { label: "Keep Active", description: "Keep session for follow-up work" },
        { label: "Retry Failed", description: "Re-run failed tasks" }
      ]
    }]
  })
  // Handle choice accordingly
}
```

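The `// Handle choice accordingly` step can be sketched as a small dispatcher mapping the selected label to a completion action. The labels mirror the `AskUserQuestion` options above; the action names are assumptions, not part of the skill:

```javascript
// Map the user's chosen option label to a completion action.
// Unknown labels fall back to keeping the session, the least
// destructive option.
function completionAction(label) {
  switch (label) {
    case 'Archive & Clean (Recommended)': return 'archive'
    case 'Keep Active': return 'keep'
    case 'Retry Failed': return 'retry'
    default: return 'keep' // conservative fallback
  }
}
```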
**Success Criteria**:

- Post-wave interactive processing complete
- User informed of results

---

### Phase 4: Results Aggregation

**Objective**: Generate final results and human-readable report.

```javascript
// 1. Export results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)

// 2. Generate context.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
let contextMd = `# Team Coordinate Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Date**: ${getUtc8ISOString().substring(0, 10)}\n\n`

contextMd += `## Summary\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${tasks.filter(t => t.status === 'completed').length} |\n`
contextMd += `| Failed | ${tasks.filter(t => t.status === 'failed').length} |\n`
contextMd += `| Skipped | ${tasks.filter(t => t.status === 'skipped').length} |\n\n`

const maxWave = Math.max(...tasks.map(t => t.wave))
contextMd += `## Wave Execution\n\n`
for (let w = 1; w <= maxWave; w++) {
  const waveTasks = tasks.filter(t => t.wave === w)
  contextMd += `### Wave ${w}\n\n`
  for (const t of waveTasks) {
    const icon = t.status === 'completed' ? '[DONE]' : t.status === 'failed' ? '[FAIL]' : '[SKIP]'
    contextMd += `${icon} **${t.title}** [${t.role}] ${t.findings || ''}\n\n`
  }
}

Write(`${sessionFolder}/context.md`, contextMd)

// 3. Display final summary
console.log(`Results exported to: ${sessionFolder}/results.csv`)
console.log(`Report generated at: ${sessionFolder}/context.md`)
```

**Success Criteria**:

- results.csv exported (all tasks, both modes)
- context.md generated
- Summary displayed to user

---

## Shared Discovery Board Protocol

All agents (csv-wave and interactive) share a single `discoveries.ndjson` file for cross-task knowledge exchange.

**Format**: One JSON object per line (NDJSON):

```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"RESEARCH-001","type":"pattern_found","data":{"pattern_name":"Repository Pattern","location":"src/repos/","description":"Data access layer uses repository pattern"}}
{"ts":"2026-03-08T10:05:00Z","worker":"IMPL-001","type":"file_modified","data":{"file":"src/auth/jwt.ts","change":"Added JWT middleware","lines_added":45}}
```

**Discovery Types**:

| Type | Data Schema | Description |
|------|-------------|-------------|
| `pattern_found` | `{pattern_name, location, description}` | Design pattern identified |
| `file_modified` | `{file, change, lines_added}` | File change recorded |
| `dependency_found` | `{from, to, type}` | Dependency relationship discovered |
| `issue_found` | `{file, line, severity, description}` | Issue or bug discovered |
| `decision_made` | `{decision, rationale, impact}` | Design decision recorded |
| `artifact_produced` | `{name, path, producer, type}` | Deliverable created |

**Protocol**:

1. Agents MUST read discoveries.ndjson at start of execution
2. Agents MUST append relevant discoveries during execution
3. Agents MUST NOT modify or delete existing entries
4. Deduplication by `{type, data.file, data.pattern_name}` key

---

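The read side of this protocol can be sketched as follows, tolerating malformed lines (matching the error-handling rule for a corrupt board) and applying the deduplication key. The function name `readDiscoveries` is hypothetical:

```javascript
// Parse a discoveries.ndjson payload: skip blank and malformed lines,
// deduplicate by the {type, data.file, data.pattern_name} key, and
// return entries in first-seen order.
function readDiscoveries(ndjsonText) {
  const seen = new Set()
  const out = []
  for (const line of ndjsonText.split('\n')) {
    if (!line.trim()) continue
    let entry
    try { entry = JSON.parse(line) } catch { continue } // ignore corrupt lines
    const key = [entry.type, entry.data?.file ?? '', entry.data?.pattern_name ?? ''].join('|')
    if (seen.has(key)) continue
    seen.add(key)
    out.push(entry)
  }
  return out
}
```

Because the board is append-only and shared by concurrent agents, tolerating a torn or duplicate line on read is safer than failing the whole wave.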
## Dynamic Role Instruction Generation

The coordinator generates role-specific instruction templates during Phase 1. Each template is written to `role-instructions/{role-name}.md` and used as the `instruction` parameter for `spawn_agents_on_csv`.

**Generation Rules**:

1. Each instruction must be self-contained (agent has no access to master CSV)
2. Use `{column_name}` placeholders for CSV column substitution
3. Include session folder path as literal (not placeholder)
4. Include mandatory discovery board read/write steps
5. Include role-specific execution guidance based on responsibility_type
6. Include output schema matching tasks.csv output columns

See `instructions/agent-instruction.md` for the base instruction template that is customized per role.

---

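Rule 2's placeholder substitution can be sketched as a single regex pass (the helper name `renderInstruction` is assumed); unknown placeholders are left intact rather than silently blanked, which makes missing CSV columns easy to spot:

```javascript
// Substitute {column_name} placeholders in an instruction template
// with values from a CSV row object. Placeholders with no matching
// column are left as-is.
function renderInstruction(template, row) {
  return template.replace(/\{(\w+)\}/g, (match, col) =>
    Object.prototype.hasOwnProperty.call(row, col) ? String(row[col]) : match)
}
```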
## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| No capabilities detected | Default to single `general` role with TASK prefix |
| All capabilities merge to one | Valid: single-role execution, reduced overhead |
| Task description too vague | AskUserQuestion for clarification in Phase 0 |
| Continue mode: no session found | List available sessions, prompt user to select |
| Role instruction generation fails | Fall back to generic instruction template |

---

## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson
7. **Skip on Failure**: If a dependency failed, skip the dependent task
8. **Dynamic Roles**: All worker roles are generated at runtime from task analysis -- no static role registry
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped

---

## Coordinator Role Constraints (Main Agent)

**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.

11. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
    - Spawns agents with task assignments
    - Waits for agent callbacks
    - Merges results and coordinates workflow
    - Manages workflow transitions between phases

12. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
    - Wait patiently for `wait()` calls to complete
    - NOT skip workflow steps due to perceived delays
    - NOT assume agents have failed just because they're taking time
    - Trust the timeout mechanisms defined in the skill

13. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
    - Use `send_input()` to ask questions or provide clarification
    - NOT skip the agent or move to next phase prematurely
    - Give agents opportunity to respond before escalating
    - Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`

14. **No Workflow Shortcuts**: The coordinator MUST NOT:
    - Skip phases or stages defined in the workflow
    - Bypass required approval or review steps
    - Execute dependent tasks before prerequisites complete
    - Assume task completion without explicit agent callback
    - Make up or fabricate agent results

15. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
    - Total execution time may range from 30-90 minutes or longer
    - Each phase may take 10-30 minutes depending on complexity
    - The coordinator must remain active and attentive throughout the entire process
    - Do not terminate or skip steps due to time concerns
127
.codex/skills/team-coordinate/agents/completion-handler.md
Normal file
@@ -0,0 +1,127 @@
# Completion Handler Agent

Interactive agent for handling pipeline completion actions. Presents results summary and manages Archive/Keep/Export choices.

## Identity

- **Type**: `interactive`
- **Role File**: `agents/completion-handler.md`
- **Responsibility**: Pipeline completion reporting and cleanup action

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read final tasks.csv to compile completion summary
- Present deliverables list with paths
- Execute chosen completion action
- Produce structured output following template

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Delete session data without user confirmation
- Produce unstructured output
- Modify task artifacts

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load tasks.csv, artifacts |
| `AskUserQuestion` | built-in | Get completion choice |
| `Write` | built-in | Store completion result |
| `Bash` | built-in | Archive or export operations |

---

## Execution

### Phase 1: Summary Generation

**Objective**: Compile pipeline completion summary

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| tasks.csv | Yes | Master state with all results |
| artifacts/ | No | Deliverable files |
| discoveries.ndjson | No | Shared discoveries |

**Steps**:

1. Read tasks.csv, count completed/failed/skipped
2. List all produced artifacts with paths
3. Summarize discoveries
4. Calculate pipeline duration if timestamps available

**Output**: Completion summary

---

### Phase 2: Completion Choice

**Objective**: Execute user's chosen completion action

**Steps**:

1. Present completion choice:

```javascript
AskUserQuestion({
  questions: [{
    question: "Team pipeline complete. What would you like to do?",
    header: "Completion",
    multiSelect: false,
    options: [
      { label: "Archive & Clean (Recommended)", description: "Mark session complete, output final summary" },
      { label: "Keep Active", description: "Keep session for follow-up work" },
      { label: "Export Results", description: "Export deliverables to target directory" }
    ]
  }]
})
```

2. Handle choice:

| Choice | Steps |
|--------|-------|
| Archive & Clean | Write completion status, output artifact paths |
| Keep Active | Keep session files, output resume instructions |
| Export Results | Ask target path, copy artifacts, then archive |

**Output**: Completion action result

---

## Structured Output Template

```
## Summary
- Pipeline status: completed
- Tasks: <completed>/<total>

## Deliverables
- <artifact-path-1> (produced by <role>)
- <artifact-path-2> (produced by <role>)

## Action Taken
- Choice: <archive|keep|export>
- Details: <action-specific details>
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| tasks.csv not found | Report error, suggest manual review |
| Export target path invalid | Ask user for valid path |
| Processing failure | Default to Keep Active, log warning |
145
.codex/skills/team-coordinate/agents/plan-reviewer.md
Normal file
@@ -0,0 +1,145 @@
# Plan Reviewer Agent

Interactive agent for reviewing and approving plans before execution waves. Used when a task requires a user confirmation checkpoint before proceeding.

## Identity

- **Type**: `interactive`
- **Role File**: `agents/plan-reviewer.md`
- **Responsibility**: Review generated plans, seek user approval, handle revision requests

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read the plan artifact being reviewed
- Present a clear summary to the user
- Wait for user approval before reporting complete
- Produce structured output following template
- Include file:line references in findings

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Approve plans without user confirmation
- Modify the plan artifact directly
- Produce unstructured output
- Exceed defined scope boundaries

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load plan artifacts and context |
| `AskUserQuestion` | built-in | Get user approval or revision feedback |
| `Write` | built-in | Store review result |

### Tool Usage Patterns

**Read Pattern**: Load context files before review
```
Read("<session>/artifacts/<plan>.md")
Read("<session>/discoveries.ndjson")
```

**Write Pattern**: Store review result
```
Write("<session>/interactive/<task-id>-result.json", <result>)
```

---

## Execution

### Phase 1: Context Loading

**Objective**: Load the plan artifact and supporting context

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Plan artifact | Yes | The plan document to review |
| discoveries.ndjson | No | Shared discoveries for context |
| Previous task findings | No | Upstream task results |

**Steps**:

1. Extract session path from task assignment
2. Read the plan artifact referenced in the task description
3. Read discoveries.ndjson for additional context
4. Summarize key aspects of the plan

**Output**: Plan summary ready for user review

---

### Phase 2: User Review

**Objective**: Present plan to user and get approval

**Steps**:

1. Display plan summary with key decisions and trade-offs
2. Present approval choice:

```javascript
AskUserQuestion({
  questions: [{
    question: "Review the plan and decide:",
    header: "Plan Review",
    multiSelect: false,
    options: [
      { label: "Approve", description: "Proceed with execution" },
      { label: "Revise", description: "Request changes to the plan" },
      { label: "Abort", description: "Cancel the pipeline" }
    ]
  }]
})
```

3. Handle response:

| Response | Action |
|----------|--------|
| Approve | Report approved status |
| Revise | Collect revision feedback, report revision needed |
| Abort | Report abort status |

**Output**: Review decision with details

---

## Structured Output Template

```
## Summary
- Plan reviewed: <plan-name>
- Decision: <approved|revision-needed|aborted>

## Findings
- Key strength 1: description
- Key concern 1: description

## Decision Details
- User choice: <choice>
- Feedback: <user feedback if revision>

## Open Questions
1. Any unresolved items from review
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Plan artifact not found | Report in Open Questions, ask user for path |
| User does not respond | Timeout, report partial with "awaiting-review" status |
| Processing failure | Output partial results with clear status indicator |
184
.codex/skills/team-coordinate/instructions/agent-instruction.md
Normal file
@@ -0,0 +1,184 @@
# Agent Instruction Template -- Team Coordinate

Base instruction template for CSV wave agents. The orchestrator dynamically customizes this per role during Phase 1, writing role-specific versions to `role-instructions/{role-name}.md`.

## Purpose

| Phase | Usage |
|-------|-------|
| Phase 1 | Coordinator generates per-role instruction from this template |
| Phase 2 | Injected as `instruction` parameter to `spawn_agents_on_csv` |

---

## Base Instruction Template

````markdown
## TASK ASSIGNMENT -- Team Coordinate

### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if it exists; skip if not)
2. Read project context: .workflow/project-tech.json (if it exists)

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Role**: {role}
**Responsibility**: {responsibility_type}
**Output Type**: {output_type}

### Task Description
{description}

### Previous Tasks' Findings (Context)
{prev_context}

---

## Execution Protocol

1. **Read discoveries**: Load <session-folder>/discoveries.ndjson for shared exploration findings
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Execute task**:
   - Read target files referenced in description
   - Follow the execution steps outlined in the TASK section of description
   - Produce deliverables matching the EXPECTED section of description
   - Verify output matches success criteria
4. **Share discoveries**: Append exploration findings to shared board:
   ```bash
   echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> <session-folder>/discoveries.ndjson
   ```
5. **Report result**: Return JSON via report_agent_job_result

### Discovery Types to Share
- `pattern_found`: {pattern_name, location, description} -- Design pattern identified in codebase
- `file_modified`: {file, change, lines_added} -- File change performed by this agent
- `dependency_found`: {from, to, type} -- Dependency relationship between components
- `issue_found`: {file, line, severity, description} -- Issue or bug discovered
- `decision_made`: {decision, rationale, impact} -- Design decision made during execution
- `artifact_produced`: {name, path, producer, type} -- Deliverable file created

---

## Output (report_agent_job_result)

Return JSON:
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "artifacts_produced": "semicolon-separated paths of produced files",
  "error": ""
}
````

---

## Role-Specific Customization

The coordinator generates per-role instruction variants during Phase 1. Each variant adds role-specific execution guidance to Step 3.

### For Research / Exploration Roles

Add to execution protocol step 3:
```
3. **Execute**:
   - Define exploration scope from description
   - Use code search tools to find relevant patterns and implementations
   - Survey approaches, compare alternatives
   - Document findings with file:line references
   - Write research artifact to <session-folder>/artifacts/
```

### For Code Implementation Roles

Add to execution protocol step 3:
```
3. **Execute**:
   - Read upstream design/spec artifacts referenced in description
   - Read target files listed in description
   - Apply code changes following project conventions
   - Validate changes compile/lint correctly
   - Run relevant tests if available
   - Write implementation summary to <session-folder>/artifacts/
```

### For Analysis / Audit Roles

Add to execution protocol step 3:
```
3. **Execute**:
   - Read target files/modules for analysis
   - Apply analysis criteria systematically
   - Classify findings by severity (critical, high, medium, low)
   - Include file:line references in findings
   - Write analysis report to <session-folder>/artifacts/
```

### For Test / Validation Roles

Add to execution protocol step 3:
```
3. **Execute**:
   - Read source files to understand implementation
   - Identify test cases from description
   - Generate test files following project test conventions
   - Run tests and capture results
   - Write test report to <session-folder>/artifacts/
```

### For Documentation / Writing Roles

Add to execution protocol step 3:
```
3. **Execute**:
   - Read source code and existing documentation
   - Generate documentation following template in description
   - Ensure accuracy against current implementation
   - Include code examples where appropriate
   - Write document to <session-folder>/artifacts/
```

### For Design / Architecture Roles

Add to execution protocol step 3:
```
3. **Execute**:
   - Read upstream research findings
   - Analyze existing codebase structure
   - Design component interactions and data flow
   - Document architecture decisions with rationale
   - Write design document to <session-folder>/artifacts/
```

---

## Quality Requirements

All agents must verify before reporting complete:

| Requirement | Criteria |
|-------------|----------|
| Files produced | Verify all claimed artifacts exist via Read |
| Files modified | Verify content actually changed |
| Findings accuracy | Findings reflect actual work done |
| Discovery sharing | At least 1 discovery shared to board |
| Error reporting | Non-empty error field if status is failed |

---

## Placeholder Reference

| Placeholder | Resolved By | When |
|-------------|------------|------|
| `<session-folder>` | Coordinator (Phase 1) | Literal path baked into instruction |
| `{id}` | spawn_agents_on_csv | Runtime from CSV row |
| `{title}` | spawn_agents_on_csv | Runtime from CSV row |
| `{description}` | spawn_agents_on_csv | Runtime from CSV row |
| `{role}` | spawn_agents_on_csv | Runtime from CSV row |
| `{responsibility_type}` | spawn_agents_on_csv | Runtime from CSV row |
| `{output_type}` | spawn_agents_on_csv | Runtime from CSV row |
| `{prev_context}` | spawn_agents_on_csv | Runtime from CSV row |
165
.codex/skills/team-coordinate/schemas/tasks-schema.md
Normal file
@@ -0,0 +1,165 @@
# Team Coordinate -- CSV Schema

## Master CSV: tasks.csv

### Column Definitions

#### Input Columns (Set by Decomposer)

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier (PREFIX-NNN) | `"RESEARCH-001"` |
| `title` | string | Yes | Short task title | `"Investigate auth patterns"` |
| `description` | string | Yes | Detailed task description (self-contained) with goal, steps, success criteria, key files | `"PURPOSE: Research JWT auth patterns..."` |
| `role` | string | Yes | Dynamic role name | `"researcher"` |
| `responsibility_type` | enum | Yes | `orchestration`, `read-only`, `code-gen`, `code-gen-docs`, `validation` | `"orchestration"` |
| `output_type` | enum | Yes | `artifact` (session files), `codebase` (project files), `mixed` | `"artifact"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"RESEARCH-001;DESIGN-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"RESEARCH-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |

#### Computed Columns (Set by Wave Engine)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[RESEARCH-001] Found 3 auth patterns..."` |

#### Output Columns (Set by Agent)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Implemented JWT middleware with refresh token support..."` |
| `artifacts_produced` | string | Semicolon-separated paths of produced artifacts | `"artifacts/research-findings.md;src/auth/jwt.ts"` |
| `error` | string | Error message if failed | `""` |

---

### exec_mode Values

| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |

Interactive tasks appear in master CSV for dependency tracking but are NOT included in wave-{N}.csv files.

---

### Dynamic Role Prefixes

| Capability | Prefix | Responsibility Type |
|------------|--------|---------------------|
| researcher | RESEARCH | orchestration |
| writer | DRAFT | code-gen-docs |
| developer | IMPL | code-gen |
| designer | DESIGN | orchestration |
| analyst | ANALYSIS | read-only |
| tester | TEST | validation |
| planner | PLAN | orchestration |
| (default) | TASK | orchestration |

---

### Example Data

```csv
id,title,description,role,responsibility_type,output_type,deps,context_from,exec_mode,wave,status,findings,artifacts_produced,error
"RESEARCH-001","Research auth patterns","PURPOSE: Investigate JWT authentication patterns and industry best practices | Success: Comprehensive findings document with pattern comparison\nTASK:\n- Survey JWT vs session-based auth\n- Compare token refresh strategies\n- Document security considerations\nCONTEXT:\n- Key files: src/auth/*, src/middleware/*\nEXPECTED: artifacts/research-findings.md","researcher","orchestration","artifact","","","csv-wave","1","pending","","",""
||||
"DESIGN-001","Design auth architecture","PURPOSE: Design authentication module architecture based on research | Success: Architecture document with component diagram\nTASK:\n- Define auth module structure\n- Design token lifecycle\n- Plan middleware integration\nCONTEXT:\n- Upstream: RESEARCH-001 findings\nEXPECTED: artifacts/auth-design.md","designer","orchestration","artifact","RESEARCH-001","RESEARCH-001","csv-wave","2","pending","","",""
|
||||
"IMPL-001","Implement auth module","PURPOSE: Build JWT authentication middleware | Success: Working auth module with tests passing\nTASK:\n- Create JWT utility functions\n- Implement auth middleware\n- Add route guards\nCONTEXT:\n- Upstream: DESIGN-001 architecture\n- Key files: src/auth/*, src/middleware/*\nEXPECTED: Source files + artifacts/implementation-summary.md","developer","code-gen","mixed","DESIGN-001","DESIGN-001","csv-wave","3","pending","","",""
|
||||
"TEST-001","Test auth implementation","PURPOSE: Validate auth module correctness | Success: All tests pass, coverage >= 80%\nTASK:\n- Write unit tests for JWT utilities\n- Write integration tests for middleware\n- Run test suite\nCONTEXT:\n- Upstream: IMPL-001 implementation\nEXPECTED: artifacts/test-report.md","tester","validation","artifact","IMPL-001","IMPL-001","csv-wave","4","pending","","",""
|
||||
```
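
The schema relies on quoted fields that embed commas and `\n` sequences, so a naive `split(',')` will corrupt descriptions. A minimal RFC-4180-style parser (a sketch, not the engine's actual `parseCsv` implementation) could look like:

```javascript
// Minimal CSV parser for the tasks.csv schema above. Handles quoted
// fields containing commas, escaped quotes (""), and embedded newlines.
// Returns an array of row objects keyed by the header row.
function parseCsv(text) {
  const rows = [];
  let row = [], field = '', inQuotes = false;
  for (let i = 0; i < text.length; i++) {
    const c = text[i];
    if (inQuotes) {
      if (c === '"' && text[i + 1] === '"') { field += '"'; i++; } // escaped quote
      else if (c === '"') inQuotes = false;                        // closing quote
      else field += c;                                             // literal char
    } else if (c === '"') inQuotes = true;
    else if (c === ',') { row.push(field); field = ''; }
    else if (c === '\n') { row.push(field); rows.push(row); row = []; field = ''; }
    else if (c !== '\r') field += c;
  }
  if (field !== '' || row.length) { row.push(field); rows.push(row); }
  const header = rows.shift();
  return rows.map(r => Object.fromEntries(header.map((h, i) => [h, r[i] ?? ''])));
}
```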

---

### Column Lifecycle

```
Decomposer (Phase 1)        Wave Engine (Phase 2)       Agent (Execution)
---------------------       ---------------------       -----------------
id                  ------> id                  ------> id
title               ------> title               ------> (reads)
description         ------> description         ------> (reads)
role                ------> role                ------> (reads)
responsibility_type ------> responsibility_type ------> (reads)
output_type         ------> output_type         ------> (reads)
deps                ------> deps                ------> (reads)
context_from        ------> context_from        ------> (reads)
exec_mode           ------> exec_mode           ------> (reads)
                            wave                ------> (reads)
                            prev_context        ------> (reads)
                                                        status
                                                        findings
                                                        artifacts_produced
                                                        error
```

---

## Output Schema (JSON)

Agent output via `report_agent_job_result` (csv-wave tasks):

```json
{
  "id": "IMPL-001",
  "status": "completed",
  "findings": "Implemented JWT auth middleware with access/refresh token support. Created 3 files: jwt.ts, auth-middleware.ts, route-guard.ts. All syntax checks pass.",
  "artifacts_produced": "artifacts/implementation-summary.md;src/auth/jwt.ts;src/auth/auth-middleware.ts",
  "error": ""
}
```

Interactive tasks output via structured text or JSON written to `interactive/{id}-result.json`.

---

## Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `pattern_found` | `data.pattern_name+data.location` | `{pattern_name, location, description}` | Design pattern identified |
| `file_modified` | `data.file` | `{file, change, lines_added}` | File change recorded |
| `dependency_found` | `data.from+data.to` | `{from, to, type}` | Dependency relationship |
| `issue_found` | `data.file+data.line` | `{file, line, severity, description}` | Issue discovered |
| `decision_made` | `data.decision` | `{decision, rationale, impact}` | Design decision |
| `artifact_produced` | `data.path` | `{name, path, producer, type}` | Deliverable created |

### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"RESEARCH-001","type":"pattern_found","data":{"pattern_name":"Repository Pattern","location":"src/repos/","description":"Data access layer uses repository pattern"}}
{"ts":"2026-03-08T10:05:00Z","worker":"IMPL-001","type":"file_modified","data":{"file":"src/auth/jwt.ts","change":"Added JWT middleware","lines_added":45}}
{"ts":"2026-03-08T10:10:00Z","worker":"IMPL-001","type":"artifact_produced","data":{"name":"implementation-summary","path":"artifacts/implementation-summary.md","producer":"developer","type":"markdown"}}
```

> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.

---

## Cross-Mechanism Context Flow

| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |

---

## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and are in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has a description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| Role valid | role matches a generated role-instruction | "No instruction for role: {role}" |
| Cross-mechanism deps | Interactive to CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
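
The structural rules above (unique IDs, known deps, no self-deps, cycle detection) could be checked with a small validator. This is a sketch assuming the parsed task objects carry `id`, `deps`, and `description` fields as defined in the schema; it detects cycles by iteratively resolving tasks whose dependencies are all resolved:

```javascript
// Returns a list of error strings matching the Validation Rules table;
// an empty list means the task set passes the structural checks shown here.
function validateTasks(tasks) {
  const errors = [];
  const ids = new Set();
  for (const t of tasks) {
    if (ids.has(t.id)) errors.push(`Duplicate task ID: ${t.id}`);
    ids.add(t.id);
    if (!t.description) errors.push(`Empty description for task: ${t.id}`);
    for (const dep of (t.deps || '').split(';').filter(Boolean)) {
      if (dep === t.id) errors.push(`Self-dependency: ${t.id}`);
      else if (!tasks.some(o => o.id === dep)) errors.push(`Unknown dependency: ${dep}`);
    }
  }
  // Cycle check: repeatedly resolve tasks whose deps are all resolved.
  const resolved = new Set();
  let progress = true;
  while (progress) {
    progress = false;
    for (const t of tasks) {
      if (resolved.has(t.id)) continue;
      const deps = (t.deps || '').split(';').filter(Boolean);
      if (deps.every(d => resolved.has(d) || !ids.has(d))) {
        resolved.add(t.id);
        progress = true;
      }
    }
  }
  const stuck = tasks.filter(t => !resolved.has(t.id)).map(t => t.id);
  if (stuck.length) errors.push(`Circular dependency detected involving: ${stuck.join(', ')}`);
  return errors;
}
```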

.codex/skills/team-designer/SKILL.md
Normal file
@@ -0,0 +1,691 @@
---
name: team-designer
description: Meta-skill for generating team skills. Analyzes requirements, scaffolds directory structure, generates role definitions and specs, validates completeness. Produces complete Codex team skill packages with SKILL.md orchestrator, CSV schemas, agent instructions, and interactive agents.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"skill description with roles and domain\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.

# Team Skill Designer

## Usage

```bash
$team-designer "Design a code review team with analyst, reviewer, security-expert roles"
$team-designer -c 4 "Create a documentation team with researcher, writer, editor"
$team-designer -y "Generate a test automation team with planner, executor, tester"
$team-designer --continue "td-code-review-20260308"
```

**Flags**:

- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 3)
- `--continue`: Resume existing session

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---

## Overview

Meta-skill for generating complete team skill packages. Takes a skill description with roles and domain, then: analyzes requirements -> scaffolds directory structure -> generates all role files, specs, templates -> validates the package. The generated skill follows the Codex hybrid team architecture (CSV wave primary + interactive secondary).

**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)

```
+--------------------------------------------------------------+
|                 TEAM SKILL DESIGNER WORKFLOW                 |
+--------------------------------------------------------------+
|                                                              |
| Phase 0: Pre-Wave Interactive (Requirement Clarification)    |
| +- Parse user skill description                              |
| +- Detect input source (reference, structured, natural)      |
| +- Gather core identity (skill name, prefix, domain)         |
| +- Output: refined requirements for decomposition            |
|                                                              |
| Phase 1: Requirement -> CSV + Classification                 |
| +- Discover roles from domain keywords                       |
| +- Define pipelines from role combinations                   |
| +- Determine commands distribution (inline vs commands/)     |
| +- Build teamConfig data structure                           |
| +- Classify tasks: csv-wave | interactive (exec_mode)        |
| +- Compute dependency waves (topological sort)               |
| +- Generate tasks.csv with wave + exec_mode columns          |
| +- User validates task breakdown (skip if -y)                |
|                                                              |
| Phase 2: Wave Execution Engine (Extended)                    |
| +- For each wave (1..N):                                     |
| |  +- Execute pre-wave interactive tasks (if any)            |
| |  +- Build wave CSV (filter csv-wave tasks for this wave)   |
| |  +- Inject previous findings into prev_context column      |
| |  +- spawn_agents_on_csv(wave CSV)                          |
| |  +- Execute post-wave interactive tasks (if any)           |
| |  +- Merge all results into master tasks.csv                |
| |  +- Check: any failed? -> skip dependents                  |
| +- discoveries.ndjson shared across all modes (append-only)  |
|                                                              |
| Phase 3: Post-Wave Interactive (Validation)                  |
| +- Structural validation (files exist, sections present)     |
| +- Reference integrity (role registry matches files)         |
| +- Pipeline consistency (no circular deps, roles exist)      |
| +- Final aggregation / report                                |
|                                                              |
| Phase 4: Results Aggregation                                 |
| +- Export final results.csv                                  |
| +- Generate context.md with all findings                     |
| +- Display summary: completed/failed/skipped per wave        |
| +- Offer: view results | retry failed | done                 |
|                                                              |
+--------------------------------------------------------------+
```

---

## Task Classification Rules

Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, needs clarification, revision cycles |

**Classification Decision**:

| Task Property | Classification |
|---------------|----------------|
| Single-pass file generation (role.md, spec.md) | `csv-wave` |
| Directory scaffold creation | `csv-wave` |
| SKILL.md generation (complex, multi-section) | `csv-wave` |
| Coordinator role generation (multi-file) | `csv-wave` |
| Worker role generation (single file) | `csv-wave` |
| Pipeline spec generation | `csv-wave` |
| Template generation | `csv-wave` |
| User requirement clarification | `interactive` |
| Validation requiring user approval | `interactive` |
| Error recovery (auto-fix vs regenerate choice) | `interactive` |

---

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,role,file_target,gen_type,deps,context_from,exec_mode,wave,status,findings,files_produced,error
"SCAFFOLD-001","Create directory structure","Create the complete directory structure for the team skill including roles/, specs/, templates/ subdirectories","scaffolder","skill-dir","directory","","","csv-wave","1","pending","","",""
"SPEC-001","Generate pipelines spec","Generate specs/pipelines.md with pipeline definitions, task registry, conditional routing","spec-writer","specs/pipelines.md","spec","SCAFFOLD-001","","csv-wave","2","pending","","",""
"ROLE-001","Generate coordinator role","Generate roles/coordinator/role.md with entry router, command execution protocol, phase logic","role-writer","roles/coordinator/","role-bundle","SCAFFOLD-001;SPEC-001","SPEC-001","csv-wave","2","pending","","",""
"ROLE-002","Generate analyst worker role","Generate roles/analyst/role.md with domain-specific Phase 2-4 logic","role-writer","roles/analyst/role.md","role-inline","SCAFFOLD-001;SPEC-001","SPEC-001","csv-wave","2","pending","","",""
```
**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (string) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description with generation instructions |
| `role` | Input | Generator role: `scaffolder`, `spec-writer`, `role-writer`, `router-writer`, `validator` |
| `file_target` | Input | Target file or directory path relative to skill root |
| `gen_type` | Input | Generation type: `directory`, `router`, `role-bundle`, `role-inline`, `spec`, `template` |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `files_produced` | Output | Semicolon-separated paths of produced files |
| `error` | Output | Error message if failed (empty if success) |

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column (csv-wave tasks only).

---

## Agent Registry (Interactive Agents)

| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| Requirement Clarifier | agents/requirement-clarifier.md | 2.3 (send_input cycle) | Gather and refine skill requirements interactively | standalone (Phase 0) |
| Validation Reporter | agents/validation-reporter.md | 2.3 (send_input cycle) | Validate generated skill package and report results | standalone (Phase 3) |

> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.

---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `teamConfig.json` | Phase 0/1 output: skill config, roles, pipelines | Created in Phase 1 |
| `interactive/{id}-result.json` | Results from interactive tasks | Created per interactive task |

---

## Session Structure

```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv            # Master state (all tasks, both modes)
+-- results.csv          # Final results export
+-- discoveries.ndjson   # Shared discovery board (all agents)
+-- context.md           # Human-readable report
+-- teamConfig.json      # Skill configuration from Phase 1
+-- wave-{N}.csv         # Temporary per-wave input (csv-wave only)
+-- artifacts/           # Generated skill files (intermediate)
+-- interactive/         # Interactive task artifacts
|   +-- {id}-result.json
+-- validation/          # Validation reports
    +-- structural.json
    +-- references.json
```

---

## Implementation

### Session Initialization

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const continueMode = $ARGUMENTS.includes('--continue')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 3

const requirement = $ARGUMENTS
  .replace(/--yes|-y|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `td-${slug}-${dateStr}`
const sessionFolder = `.workflow/.csv-wave/${sessionId}`

Bash(`mkdir -p ${sessionFolder}/artifacts ${sessionFolder}/interactive ${sessionFolder}/validation`)

// Initialize discoveries.ndjson
Write(`${sessionFolder}/discoveries.ndjson`, '')
```

---

### Phase 0: Pre-Wave Interactive (Requirement Clarification)

**Objective**: Parse user skill description, clarify ambiguities, build teamConfig.

**Workflow**:

1. **Parse user skill description** from $ARGUMENTS

2. **Detect input source**:

| Source Type | Detection | Action |
|-------------|-----------|--------|
| Reference | Contains "based on", "like", or existing skill path | Read referenced skill, extract structure |
| Structured | Contains ROLES:, PIPELINES:, or DOMAIN: | Parse structured input directly |
| Natural language | Default | Analyze keywords, discover roles |

3. **Check for existing sessions** (continue mode):
   - Scan `.workflow/.csv-wave/td-*/tasks.csv` for sessions with pending tasks
   - If `--continue`: resume the specified or most recent session, skip to Phase 2

4. **Gather core identity** (skip if AUTO_YES or already clear):

Read `agents/requirement-clarifier.md`, then:

```javascript
const clarifier = spawn_agent({
  message: `## TASK ASSIGNMENT

### MANDATORY FIRST STEPS
1. Read: agents/requirement-clarifier.md
2. Read: ${sessionFolder}/discoveries.ndjson (if exists)

---

Goal: Gather team skill requirements from the user
Input: "${requirement}"
Session: ${sessionFolder}

Determine: skill name (kebab-case), session prefix (3-4 chars), domain description, roles, pipelines, commands distribution.`
})
const clarifyResult = wait({ ids: [clarifier], timeout_ms: 600000 })
if (clarifyResult.timed_out) {
  send_input({ id: clarifier, message: "Please finalize requirements with current information." })
  wait({ ids: [clarifier], timeout_ms: 120000 })
}
Write(`${sessionFolder}/interactive/clarify-result.json`, JSON.stringify({
  task_id: "CLARIFY-001", status: "completed", findings: parseFindings(clarifyResult),
  timestamp: getUtc8ISOString()
}))
close_agent({ id: clarifier })
```

5. **Build teamConfig** from gathered requirements:

```javascript
const teamConfig = {
  skillName: "<kebab-case-name>",
  sessionPrefix: "<3-4 char prefix>",
  domain: "<domain description>",
  title: "<Human Readable Title>",
  roles: [
    { name: "coordinator", prefix: "—", inner_loop: false, hasCommands: true, commands: ["analyze", "dispatch", "monitor"], path: "roles/coordinator/role.md" },
    // ... discovered worker roles
  ],
  pipelines: [{ name: "<pipeline-name>", tasks: [/* task definitions */] }],
  specs: ["pipelines"],
  templates: [],
  conditionalRouting: false,
  targetDir: `.codex/skills/<skill-name>`
}

Write(`${sessionFolder}/teamConfig.json`, JSON.stringify(teamConfig, null, 2))
```

6. **Decompose into tasks** -- generate tasks.csv from teamConfig:

| Task Pattern | gen_type | Wave | Description |
|--------------|----------|------|-------------|
| Directory scaffold | `directory` | 1 | Create skill directory structure |
| SKILL.md router | `router` | 2 | Generate main SKILL.md orchestrator |
| Pipeline spec | `spec` | 2 | Generate specs/pipelines.md |
| Domain specs | `spec` | 2 | Generate additional specs files |
| Coordinator role | `role-bundle` | 3 | Generate coordinator role.md + commands/ |
| Worker roles (each) | `role-inline` or `role-bundle` | 3 | Generate each worker role.md |
| Templates (each) | `template` | 3 | Generate template files |
| Validation | `validation` | 4 | Validate the complete package |

**Success Criteria**:

- teamConfig.json written with complete configuration
- Refined requirements available for Phase 1 decomposition
- Interactive agents closed, results stored

---

### Phase 1: Requirement -> CSV + Classification

**Objective**: Generate tasks.csv from teamConfig with dependency-ordered waves.

**Decomposition Rules**:

1. **Role Discovery** -- scan domain description for keywords:

| Signal | Keywords | Role Name | Prefix |
|--------|----------|-----------|--------|
| Analysis | analyze, research, investigate, explore | analyst | RESEARCH |
| Planning | plan, design, architect, decompose | planner | PLAN |
| Writing | write, document, draft, spec, report | writer | DRAFT |
| Implementation | implement, build, code, develop | executor | IMPL |
| Testing | test, verify, validate, qa | tester | TEST |
| Review | review, audit, check, inspect | reviewer | REVIEW |
| Security | security, vulnerability, penetration | security-expert | SECURITY |

2. **Commands Distribution** -- determine inline vs commands/:

| Condition | Commands Structure |
|-----------|--------------------|
| 1 distinct action for role | Inline in role.md |
| 2+ distinct actions | commands/ folder |
| Coordinator (always) | commands/: analyze, dispatch, monitor |

3. **Pipeline Construction** -- build from role ordering:

| Role Combination | Pipeline Type |
|------------------|---------------|
| analyst + writer + executor | full-lifecycle |
| analyst + writer (no executor) | spec-only |
| planner + executor (no analyst) | impl-only |
| Other | custom |

**Classification Rules**:

| Task Property | exec_mode |
|---------------|-----------|
| Directory creation | `csv-wave` |
| Single file generation (role.md, spec.md) | `csv-wave` |
| Multi-file bundle generation (coordinator) | `csv-wave` |
| SKILL.md router generation | `csv-wave` |
| User requirement clarification | `interactive` |
| Validation with error recovery | `interactive` |

**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
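
The wave computation described above can be sketched as a level-by-level Kahn's BFS: tasks with no unresolved deps get wave 1, and each task lands one level after the deepest of its dependencies. This is a minimal illustration, not the engine's actual implementation:

```javascript
// Assign 1-based wave numbers by BFS topological sort with depth tracking.
// Tasks left without a wave after the loop are part of a dependency cycle.
function computeWaves(tasks) {
  const byId = new Map(tasks.map(t => [t.id, t]));
  const depsOf = t => (t.deps || '').split(';').filter(id => byId.has(id));
  const indegree = new Map(tasks.map(t => [t.id, depsOf(t).length]));
  const wave = new Map();
  let frontier = tasks.filter(t => indegree.get(t.id) === 0).map(t => t.id);
  let depth = 1;
  while (frontier.length) {
    const next = [];
    for (const id of frontier) {
      wave.set(id, depth);
      // Release dependents whose remaining indegree drops to zero.
      for (const t of tasks) {
        if (depsOf(t).includes(id) && !wave.has(t.id)) {
          indegree.set(t.id, indegree.get(t.id) - 1);
          if (indegree.get(t.id) === 0) next.push(t.id);
        }
      }
    }
    frontier = next;
    depth++;
  }
  for (const t of tasks) t.wave = wave.get(t.id); // undefined => cycle
  return tasks;
}
```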

**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).

**Success Criteria**:

- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)

---

### Phase 2: Wave Execution Engine (Extended)

**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.

```javascript
const masterCsv = Read(`${sessionFolder}/tasks.csv`)
let tasks = parseCsv(masterCsv)
// wave values come from CSV as strings, so coerce before comparing
const maxWave = Math.max(...tasks.map(t => Number(t.wave)))

for (let wave = 1; wave <= maxWave; wave++) {
  console.log(`\nWave ${wave}/${maxWave}`)

  // 1. Separate tasks by exec_mode
  const waveTasks = tasks.filter(t => Number(t.wave) === wave && t.status === 'pending')
  const csvTasks = waveTasks.filter(t => t.exec_mode === 'csv-wave')
  const interactiveTasks = waveTasks.filter(t => t.exec_mode === 'interactive')

  // 2. Check dependencies -- skip tasks whose deps failed
  for (const task of waveTasks) {
    const depIds = (task.deps || '').split(';').filter(Boolean)
    const depStatuses = depIds.map(id => tasks.find(t => t.id === id)?.status)
    if (depStatuses.some(s => s === 'failed' || s === 'skipped')) {
      task.status = 'skipped'
      task.error = `Dependency failed: ${depIds.filter((id, i) =>
        ['failed','skipped'].includes(depStatuses[i])).join(', ')}`
    }
  }

  // 3. Execute pre-wave interactive tasks
  const preWaveInteractive = interactiveTasks.filter(t => t.status === 'pending')
  for (const task of preWaveInteractive) {
    // Use the appropriate interactive agent
    const agentFile = task.gen_type === 'validation'
      ? 'agents/validation-reporter.md'
      : 'agents/requirement-clarifier.md'
    Read(agentFile)

    const agent = spawn_agent({
      message: `## TASK ASSIGNMENT\n\n### MANDATORY FIRST STEPS\n1. Read: ${agentFile}\n2. Read: ${sessionFolder}/discoveries.ndjson\n\nGoal: ${task.description}\nScope: ${task.title}\nSession: ${sessionFolder}\nteamConfig: ${sessionFolder}/teamConfig.json\n\n### Previous Context\n${buildPrevContext(task, tasks)}`
    })
    const result = wait({ ids: [agent], timeout_ms: 600000 })
    if (result.timed_out) {
      send_input({ id: agent, message: "Please finalize and output current findings." })
      wait({ ids: [agent], timeout_ms: 120000 })
    }
    Write(`${sessionFolder}/interactive/${task.id}-result.json`, JSON.stringify({
      task_id: task.id, status: "completed", findings: parseFindings(result),
      timestamp: getUtc8ISOString()
    }))
    close_agent({ id: agent })
    task.status = 'completed'
    task.findings = parseFindings(result)
  }

  // 4. Build prev_context for csv-wave tasks
  const pendingCsvTasks = csvTasks.filter(t => t.status === 'pending')
  for (const task of pendingCsvTasks) {
    task.prev_context = buildPrevContext(task, tasks)
  }

  if (pendingCsvTasks.length > 0) {
    // 5. Write wave CSV
    Write(`${sessionFolder}/wave-${wave}.csv`, toCsv(pendingCsvTasks))

    // 6. Execute wave via spawn_agents_on_csv
    spawn_agents_on_csv({
      csv_path: `${sessionFolder}/wave-${wave}.csv`,
      id_column: "id",
      instruction: Read(`instructions/agent-instruction.md`)
        .replace(/<session-folder>/g, sessionFolder),
      max_concurrency: maxConcurrency,
      max_runtime_seconds: 900,
      output_csv_path: `${sessionFolder}/wave-${wave}-results.csv`,
      output_schema: {
        type: "object",
        properties: {
          id: { type: "string" },
          status: { type: "string", enum: ["completed", "failed"] },
          findings: { type: "string" },
          files_produced: { type: "string" },
          error: { type: "string" }
        }
      }
    })

    // 7. Merge results into master CSV
    const results = parseCsv(Read(`${sessionFolder}/wave-${wave}-results.csv`))
    for (const r of results) {
      const t = tasks.find(t => t.id === r.id)
      if (t) Object.assign(t, r)
    }
  }

  // 8. Update master CSV
  Write(`${sessionFolder}/tasks.csv`, toCsv(tasks))

  // 9. Cleanup temp files
  Bash(`rm -f ${sessionFolder}/wave-${wave}.csv ${sessionFolder}/wave-${wave}-results.csv`)

  // 10. Display wave summary
  const completed = waveTasks.filter(t => t.status === 'completed').length
  const failed = waveTasks.filter(t => t.status === 'failed').length
  const skipped = waveTasks.filter(t => t.status === 'skipped').length
  console.log(`Wave ${wave} Complete: ${completed} completed, ${failed} failed, ${skipped} skipped`)
}
```
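
The engine above calls a `buildPrevContext` helper that is not defined in this file. A plausible shape, assuming the per-task `context_from` semantics and the `"[ID] findings"` format shown in the CSV schema (and that interactive results have already been merged into the in-memory task list), would be:

```javascript
// Hypothetical buildPrevContext: aggregate findings from each task listed
// in context_from, formatted as "[ID] findings", one per line. Tasks that
// are not completed or have no findings contribute nothing.
function buildPrevContext(task, tasks) {
  const ids = (task.context_from || '').split(';').filter(Boolean)
  return ids
    .map(id => tasks.find(t => t.id === id))
    .filter(t => t && t.status === 'completed' && t.findings)
    .map(t => `[${t.id}] ${t.findings}`)
    .join('\n')
}
```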
|
||||
|
||||
**Success Criteria**:
|
||||
- All waves executed in order
|
||||
- Both csv-wave and interactive tasks handled per wave
|
||||
- Each wave's results merged into master CSV before next wave starts
|
||||
- Dependent tasks skipped when predecessor failed
|
||||
- discoveries.ndjson accumulated across all waves and mechanisms
|
||||
|
||||
---
|
||||
|
||||
### Phase 3: Post-Wave Interactive (Validation)

**Objective**: Validate the generated team skill package and present results.

Read `agents/validation-reporter.md`, then:

```javascript
const validator = spawn_agent({
  message: `## TASK ASSIGNMENT

### MANDATORY FIRST STEPS
1. Read: agents/validation-reporter.md
2. Read: ${sessionFolder}/discoveries.ndjson
3. Read: ${sessionFolder}/teamConfig.json

---

Goal: Validate the generated team skill package at ${teamConfig.targetDir}
Session: ${sessionFolder}

### Validation Checks
1. Structural: All files exist per teamConfig
2. SKILL.md: Required sections present, role registry correct
3. Role frontmatter: YAML frontmatter valid for each worker role
4. Pipeline consistency: No circular deps, roles referenced exist
5. Commands distribution: commands/ matches hasCommands flag

### Previous Context
${buildCompletePrevContext(tasks)}`
})
const validResult = wait({ ids: [validator], timeout_ms: 600000 })
if (validResult.timed_out) {
  send_input({ id: validator, message: "Please finalize validation with current findings." })
  wait({ ids: [validator], timeout_ms: 120000 })
}
Write(`${sessionFolder}/interactive/validation-result.json`, JSON.stringify({
  task_id: "VALIDATE-001", status: "completed", findings: parseFindings(validResult),
  timestamp: getUtc8ISOString()
}))
close_agent({ id: validator })
```

**Success Criteria**:
- Post-wave interactive processing complete
- Validation report generated
- Interactive agents closed, results stored

---
### Phase 4: Results Aggregation

**Objective**: Generate final results and a human-readable report.

```javascript
// 1. Export results.csv
Bash(`cp ${sessionFolder}/tasks.csv ${sessionFolder}/results.csv`)

// 2. Generate context.md
const tasks = parseCsv(Read(`${sessionFolder}/tasks.csv`))
let contextMd = `# Team Skill Designer Report\n\n`
contextMd += `**Session**: ${sessionId}\n`
contextMd += `**Skill**: ${teamConfig.skillName}\n`
contextMd += `**Target**: ${teamConfig.targetDir}\n\n`

contextMd += `## Summary\n`
contextMd += `| Status | Count |\n|--------|-------|\n`
contextMd += `| Completed | ${tasks.filter(t => t.status === 'completed').length} |\n`
contextMd += `| Failed | ${tasks.filter(t => t.status === 'failed').length} |\n`
contextMd += `| Skipped | ${tasks.filter(t => t.status === 'skipped').length} |\n\n`

contextMd += `## Generated Skill Structure\n\n`
contextMd += `\`\`\`\n${teamConfig.targetDir}/\n`
contextMd += `+-- SKILL.md\n+-- schemas/\n| +-- tasks-schema.md\n+-- instructions/\n| +-- agent-instruction.md\n`
// ... roles, specs, templates
contextMd += `\`\`\`\n\n`

contextMd += `## Validation\n`
// ... validation results

Write(`${sessionFolder}/context.md`, contextMd)

// 3. Display final summary
console.log(`\nTeam Skill Designer Complete`)
console.log(`Generated skill: ${teamConfig.targetDir}`)
console.log(`Results: ${sessionFolder}/results.csv`)
console.log(`Report: ${sessionFolder}/context.md`)
console.log(`\nUsage: $${teamConfig.skillName} "task description"`)
```

**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed
- Summary displayed to user

---
## Shared Discovery Board Protocol

All agents (csv-wave and interactive) share a single `discoveries.ndjson` file for cross-task knowledge exchange.

**Format**: One JSON object per line (NDJSON):

```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"SCAFFOLD-001","type":"dir_created","data":{"path":"~ or <project>/.codex/skills/team-code-review/","description":"Created skill directory structure"}}
{"ts":"2026-03-08T10:05:00Z","worker":"ROLE-001","type":"file_generated","data":{"file":"roles/coordinator/role.md","gen_type":"role-bundle","sections":["entry-router","commands"]}}
{"ts":"2026-03-08T10:10:00Z","worker":"SPEC-001","type":"pattern_found","data":{"pattern_name":"full-lifecycle","description":"Pipeline with analyst -> writer -> executor -> tester"}}
```

**Discovery Types**:

| Type | Data Schema | Description |
|------|-------------|-------------|
| `dir_created` | `{path, description}` | Directory structure created |
| `file_generated` | `{file, gen_type, sections}` | File generated with specific sections |
| `pattern_found` | `{pattern_name, description}` | Design pattern identified in golden sample |
| `config_decision` | `{decision, rationale, impact}` | Configuration decision made |
| `validation_result` | `{check, passed, message}` | Validation check result |
| `reference_found` | `{source, target, type}` | Cross-reference between generated files |

**Protocol**:
1. Agents MUST read discoveries.ndjson at the start of execution
2. Agents MUST append relevant discoveries during execution
3. Agents MUST NOT modify or delete existing entries
4. Deduplication by `{type, data.file, data.path}` key

---
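A minimal sketch of a tolerant board reader under the rules above, assuming the `{type, data.file, data.path}` dedup key; malformed lines are skipped, matching the error-handling rule for a corrupt discoveries.ndjson:

```javascript
// Parse NDJSON text, skipping malformed lines, and dedupe by {type, data.file, data.path}.
function readDiscoveries(ndjsonText) {
  const seen = new Set()
  const out = []
  for (const line of ndjsonText.split("\n")) {
    if (!line.trim()) continue
    let d
    try { d = JSON.parse(line) } catch { continue } // ignore corrupt lines
    const key = [d.type, d.data && d.data.file, d.data && d.data.path].join("|")
    if (seen.has(key)) continue
    seen.add(key)
    out.push(d)
  }
  return out
}

const text = [
  '{"ts":"t1","worker":"A","type":"dir_created","data":{"path":"roles/"}}',
  'not json',
  '{"ts":"t2","worker":"B","type":"dir_created","data":{"path":"roles/"}}'
].join("\n")
const board = readDiscoveries(text)
console.log(board.length) // 1 -- the duplicate and the corrupt line are dropped
```

Because the board is append-only, keeping the first occurrence of each key preserves the earliest report of a discovery.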
## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Invalid role name | Must be lowercase alphanumeric with hyphens, max 20 chars |
| Directory conflict | Warn if skill directory already exists, ask user to confirm overwrite |
| Golden sample not found | Fall back to embedded templates in instructions |
| Validation FAIL | Offer auto-fix, regenerate, or accept as-is |
| Continue mode: no session found | List available sessions, prompt user to select |

---
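The "skip dependent tasks" rows above can be sketched as a status pass over the master list (semicolon-separated `deps`, as in the CSV schema; `propagateSkips` is a hypothetical helper, not part of the skill's API):

```javascript
// Mark pending tasks as skipped when any dependency failed or was skipped.
// Repeats until stable so skips propagate through dependency chains.
function propagateSkips(tasks) {
  const byId = Object.fromEntries(tasks.map(t => [t.id, t]))
  let changed = true
  while (changed) {
    changed = false
    for (const t of tasks) {
      if (t.status !== "pending" || !t.deps) continue
      const deps = t.deps.split(";").filter(Boolean)
      if (deps.some(d => byId[d] && ["failed", "skipped"].includes(byId[d].status))) {
        t.status = "skipped"
        changed = true
      }
    }
  }
  return tasks
}

const tasks = [
  { id: "SCAFFOLD-001", deps: "", status: "failed" },
  { id: "ROUTER-001", deps: "SCAFFOLD-001", status: "pending" },
  { id: "ROLE-001", deps: "ROUTER-001", status: "pending" }
]
propagateSkips(tasks)
console.log(tasks.map(t => t.status).join(",")) // failed,skipped,skipped
```

The fixed-point loop matters: a task two hops away from the failure is only skippable after its direct predecessor has been marked.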
## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 0/1
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when the interaction pattern requires it
5. **Context Propagation**: prev_context built from master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson
7. **Skip on Failure**: If a dependency failed, skip the dependent task
8. **Golden Sample Fidelity**: Generated files must match existing team skill patterns
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped

---
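Rules 2 and 7 rest on the wave computation; a minimal Kahn-style sketch (a hypothetical helper, not the skill's actual engine) that also surfaces the circular-dependency error from the error-handling table:

```javascript
// Group tasks into dependency waves: wave N contains tasks whose deps all
// completed in earlier waves. Throws on circular dependencies.
function computeWaves(tasks) {
  const remaining = new Map(tasks.map(t => [t.id, (t.deps || "").split(";").filter(Boolean)]))
  const done = new Set()
  const waves = []
  while (remaining.size > 0) {
    const ready = [...remaining.keys()].filter(id => remaining.get(id).every(d => done.has(d)))
    if (ready.length === 0) throw new Error("Circular dependency detected")
    for (const id of ready) { remaining.delete(id); done.add(id) }
    waves.push(ready)
  }
  return waves
}

const waves = computeWaves([
  { id: "SCAFFOLD-001", deps: "" },
  { id: "SPEC-001", deps: "SCAFFOLD-001" },
  { id: "ROLE-001", deps: "SCAFFOLD-001;SPEC-001" }
])
console.log(JSON.stringify(waves)) // [["SCAFFOLD-001"],["SPEC-001"],["ROLE-001"]]
```

Each returned array is one wave's worth of parallelizable tasks, which is exactly the granularity `wave-{N}.csv` is cut at.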
## Coordinator Role Constraints (Main Agent)

**CRITICAL**: The coordinator (main agent executing this skill) is responsible for **orchestration only**, NOT implementation.

15. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned team agents. The coordinator only:
    - Spawns agents with task assignments
    - Waits for agent callbacks
    - Merges results and coordinates workflow
    - Manages workflow transitions between phases

16. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
    - Wait patiently for `wait()` calls to complete
    - NOT skip workflow steps due to perceived delays
    - NOT assume agents have failed just because they're taking time
    - Trust the timeout mechanisms defined in the skill

17. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
    - Use `send_input()` to ask questions or provide clarification
    - NOT skip the agent or move to the next phase prematurely
    - Give agents the opportunity to respond before escalating
    - Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`

18. **No Workflow Shortcuts**: The coordinator MUST NOT:
    - Skip phases or stages defined in the workflow
    - Bypass required approval or review steps
    - Execute dependent tasks before prerequisites complete
    - Assume task completion without explicit agent callback
    - Make up or fabricate agent results

19. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
    - Total execution time may range from 30-90 minutes or longer
    - Each phase may take 10-30 minutes depending on complexity
    - The coordinator must remain active and attentive throughout the entire process
    - Do not terminate or skip steps due to time concerns
248  .codex/skills/team-designer/agents/requirement-clarifier.md  Normal file
@@ -0,0 +1,248 @@
# Requirement Clarifier Agent

Interactive agent for gathering and refining team skill requirements from user input. Used in Phase 0 when the skill description needs clarification or is missing details.

## Identity

- **Type**: `interactive`
- **Role File**: `agents/requirement-clarifier.md`
- **Responsibility**: Gather skill name, roles, pipelines, and specs; build teamConfig

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Parse user input to detect input source (reference, structured, natural)
- Gather all required teamConfig fields
- Confirm configuration with user before reporting complete
- Produce structured output following template
- Write teamConfig.json to session folder

### MUST NOT

- Skip the MANDATORY FIRST STEPS role loading
- Generate skill files (that is Phase 2 work)
- Approve incomplete configurations
- Produce unstructured output
- Exceed defined scope boundaries

---
## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | built-in | Load reference skills, existing patterns |
| `AskUserQuestion` | built-in | Gather missing details from user |
| `Write` | built-in | Store teamConfig.json |
| `Glob` | built-in | Find reference skill files |

### Tool Usage Patterns

**Read Pattern**: Load reference skill for pattern extraction
```
Read(".codex/skills/<reference-skill>/SKILL.md")
Read(".codex/skills/<reference-skill>/schemas/tasks-schema.md")
```

**Write Pattern**: Store configuration
```
Write("<session>/teamConfig.json", <config>)
```

---
## Execution

### Phase 1: Input Detection

**Objective**: Detect the input source type and extract initial information

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| User requirement | Yes | Skill description from $ARGUMENTS |
| Reference skill | No | Existing skill if "based on" detected |

**Steps**:

1. Parse user input to detect source type:

   | Source Type | Detection | Action |
   |-------------|-----------|--------|
   | Reference | Contains "based on", "like", skill path | Read referenced skill, extract roles/pipelines |
   | Structured | Contains ROLES:, PIPELINES:, DOMAIN: | Parse structured fields directly |
   | Natural language | Default | Analyze keywords for role discovery |

2. Extract initial information from the detected source
3. Identify missing required fields

**Output**: Initial teamConfig draft with gaps identified

---
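The detection table above can be sketched as a single classifier; the keywords are the ones listed in the table, anything else falls through to natural language (`detectSource` is a hypothetical helper name):

```javascript
// Classify the raw requirement text into one of the three input sources.
function detectSource(input) {
  if (/based on|like |\.codex\/skills\//.test(input)) return "reference"
  if (/ROLES:|PIPELINES:|DOMAIN:/.test(input)) return "structured"
  return "natural"
}

console.log(detectSource("ROLES: analyst, reviewer"))           // structured
console.log(detectSource("based on team-lifecycle"))            // reference
console.log(detectSource("a skill that reviews pull requests")) // natural
```

Checking the reference patterns first mirrors the table's row order, so an input that names both a reference skill and structured fields is treated as a reference.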
### Phase 2: Requirement Gathering

**Objective**: Fill in all required teamConfig fields

**Steps**:

1. **Core Identity** -- gather if not clear from input:

   ```javascript
   AskUserQuestion({
     questions: [
       {
         question: "Team skill name? (kebab-case, e.g., team-code-review)",
         header: "Skill Name",
         multiSelect: false,
         options: [
           { label: "<auto-suggested-name>", description: "Auto-suggested from description" },
           { label: "Custom", description: "Enter custom name" }
         ]
       },
       {
         question: "Session prefix? (3-4 chars for task IDs, e.g., TCR)",
         header: "Prefix",
         multiSelect: false,
         options: [
           { label: "<auto-suggested-prefix>", description: "Auto-suggested" },
           { label: "Custom", description: "Enter custom prefix" }
         ]
       }
     ]
   })
   ```

2. **Role Discovery** -- identify roles from domain keywords:

   | Signal | Keywords | Default Role |
   |--------|----------|--------------|
   | Analysis | analyze, research, investigate | analyst |
   | Planning | plan, design, architect | planner |
   | Writing | write, document, draft | writer |
   | Implementation | implement, build, code | executor |
   | Testing | test, verify, validate | tester |
   | Review | review, audit, check | reviewer |

3. **Commands Distribution** -- determine per role:

   | Rule | Condition | Result |
   |------|-----------|--------|
   | Coordinator | Always | commands/: analyze, dispatch, monitor |
   | Multi-action role | 2+ distinct actions | commands/ folder |
   | Single-action role | 1 action | Inline in role.md |

4. **Pipeline Construction** -- determine from role combination:

   | Roles Present | Pipeline Type |
   |---------------|---------------|
   | analyst + writer + executor | full-lifecycle |
   | analyst + writer (no executor) | spec-only |
   | planner + executor (no analyst) | impl-only |
   | Other combinations | custom |

5. **Specs and Templates** -- determine required specs:
   - Always: pipelines.md
   - If quality gates needed: quality-gates.md
   - If writer role: domain-appropriate templates

**Output**: Complete teamConfig ready for confirmation

---
### Phase 3: Confirmation

**Objective**: Present configuration summary and get user approval

**Steps**:

1. Display configuration summary:

   ```
   Team Skill Configuration Summary

   Skill Name: <skillName>
   Session Prefix: <sessionPrefix>
   Domain: <domain>
   Target: .codex/skills/<skillName>/

   Roles:
   +- coordinator (commands: analyze, dispatch, monitor)
   +- <role-a> [PREFIX-*] (inline)
   +- <role-b> [PREFIX-*] (commands: cmd1, cmd2)

   Pipelines:
   +- <pipeline-name>: TASK-001 -> TASK-002 -> TASK-003

   Specs: pipelines, <additional>
   Templates: <list or none>
   ```

2. Present confirmation:

   ```javascript
   AskUserQuestion({
     questions: [{
       question: "Confirm this team skill configuration?",
       header: "Configuration Review",
       multiSelect: false,
       options: [
         { label: "Confirm", description: "Proceed with generation" },
         { label: "Modify Roles", description: "Add, remove, or change roles" },
         { label: "Modify Pipelines", description: "Change pipeline structure" },
         { label: "Cancel", description: "Abort skill generation" }
       ]
     }]
   })
   ```

3. Handle response:

   | Response | Action |
   |----------|--------|
   | Confirm | Write teamConfig.json, report complete |
   | Modify Roles | Loop back to role gathering |
   | Modify Pipelines | Loop back to pipeline construction |
   | Cancel | Report cancelled status |

**Output**: Confirmed teamConfig.json written to session folder

---
## Structured Output Template

```
## Summary
- Configuration: <confirmed|modified|cancelled>
- Skill: <skill-name>

## Configuration
- Roles: <count> roles defined
- Pipelines: <count> pipelines
- Target: <target-dir>

## Details
- Role list with prefix and commands structure
- Pipeline definitions with task flow
- Specs and templates list

## Open Questions
1. Any unresolved items from clarification
```

---
## Error Handling

| Scenario | Resolution |
|----------|------------|
| Reference skill not found | Report error, ask for correct path |
| Invalid role name | Suggest valid kebab-case alternative |
| Conflicting pipeline structure | Ask user to resolve |
| User does not respond | Timeout, report partial with current config |
| Processing failure | Output partial results with clear status indicator |

163  .codex/skills/team-designer/instructions/agent-instruction.md  Normal file
@@ -0,0 +1,163 @@
# Agent Instruction Template -- Team Skill Designer

Base instruction template for CSV wave agents. Each agent receives this template with its row's column values substituted at runtime via `spawn_agents_on_csv`.

## Purpose

| Phase | Usage |
|-------|-------|
| Phase 1 | Baked into instruction parameter with session folder path |
| Phase 2 | Injected as `instruction` parameter to `spawn_agents_on_csv` |

---

## Base Instruction Template

```markdown
## TASK ASSIGNMENT -- Team Skill Designer

### MANDATORY FIRST STEPS
1. Read shared discoveries: <session-folder>/discoveries.ndjson (if exists, skip if not)
2. Read project context: .workflow/project-tech.json (if exists)
3. Read teamConfig: <session-folder>/teamConfig.json (REQUIRED -- contains complete skill configuration)

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Role**: {role}
**File Target**: {file_target}
**Generation Type**: {gen_type}

### Task Description
{description}

### Previous Tasks' Findings (Context)
{prev_context}

---

## Execution Protocol

1. **Read discoveries**: Load <session-folder>/discoveries.ndjson for shared exploration findings
2. **Read teamConfig**: Load <session-folder>/teamConfig.json for complete skill configuration (roles, pipelines, specs, templates)
3. **Use context**: Apply previous tasks' findings from prev_context above
4. **Execute by gen_type**:

### For gen_type = directory
- Parse teamConfig to determine required directories
- Create directory structure at teamConfig.targetDir
- Create subdirectories: roles/, specs/, templates/ (if needed)
- Create per-role subdirectories: roles/<role-name>/ (+ commands/ if hasCommands)
- Verify all directories exist
### For gen_type = router
- Read existing Codex team skill SKILL.md as reference pattern
- Generate SKILL.md with these sections in order:
  1. YAML frontmatter (name, description, argument-hint, allowed-tools)
  2. Auto Mode section
  3. Title + Usage examples
  4. Overview with workflow diagram
  5. Task Classification Rules
  6. CSV Schema (header + column definitions)
  7. Agent Registry (if interactive agents exist)
  8. Output Artifacts table
  9. Session Structure diagram
  10. Implementation (session init, phases 0-4)
  11. Discovery Board Protocol
  12. Error Handling table
  13. Core Rules list
- Use teamConfig.roles for the role registry
- Use teamConfig.pipelines for the pipeline definitions

### For gen_type = role-bundle
- Generate role.md with:
  1. YAML frontmatter (role, prefix, inner_loop, message_types)
  2. Identity section
  3. Boundaries (MUST/MUST NOT)
  4. Entry Router (for coordinator)
  5. Phase references (Phase 0-5 for coordinator)
- Generate commands/*.md for each command in teamConfig.roles[].commands
- Each command file: Purpose, Constants, Phase 2-4 execution logic
- Coordinator always gets: analyze.md, dispatch.md, monitor.md

### For gen_type = role-inline
- Generate a single role.md with:
  1. YAML frontmatter (role, prefix, inner_loop, message_types)
  2. Identity section
  3. Boundaries (MUST/MUST NOT)
  4. Phase 2: Context Loading
  5. Phase 3: Domain Execution (role-specific logic)
  6. Phase 4: Output & Report

### For gen_type = spec
- For pipelines.md: Generate from teamConfig.pipelines
  - Pipeline name, task table (ID, Role, Name, Depends On, Checkpoint)
  - Task metadata registry
  - Conditional routing rules
  - Dynamic specialist injection
- For other specs: Generate domain-appropriate content

### For gen_type = template
- Check for reference templates in existing skills
- Generate a domain-appropriate template structure
- Include placeholder sections and formatting guidelines
5. **Share discoveries**: Append exploration findings to shared board:
```bash
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> <session-folder>/discoveries.ndjson
```
6. **Report result**: Return JSON via report_agent_job_result

### Discovery Types to Share
- `dir_created`: {path, description} -- Directory structure created
- `file_generated`: {file, gen_type, sections} -- File generated with specific sections
- `pattern_found`: {pattern_name, description} -- Design pattern identified
- `config_decision`: {decision, rationale, impact} -- Configuration decision made
- `reference_found`: {source, target, type} -- Cross-reference between generated files

---

## Output (report_agent_job_result)

Return JSON:
{
  "id": "{id}",
  "status": "completed" | "failed",
  "findings": "Key discoveries and generation notes (max 500 chars)",
  "files_produced": "semicolon-separated paths of produced files relative to skill root",
  "error": ""
}
```

---

## Quality Requirements

All agents must verify before reporting complete:

| Requirement | Criteria |
|-------------|----------|
| Files produced | Verify all claimed files exist via Read |
| teamConfig adherence | Generated content matches teamConfig specifications |
| Pattern fidelity | Generated files follow existing Codex skill patterns |
| Discovery sharing | At least 1 discovery shared to board |
| Error reporting | Non-empty error field if status is failed |
| YAML frontmatter | Role files must have valid frontmatter for agent parsing |

---

## Placeholder Reference

| Placeholder | Resolved By | When |
|-------------|-------------|------|
| `<session-folder>` | Skill designer (Phase 1) | Literal path baked into instruction |
| `{id}` | spawn_agents_on_csv | Runtime from CSV row |
| `{title}` | spawn_agents_on_csv | Runtime from CSV row |
| `{description}` | spawn_agents_on_csv | Runtime from CSV row |
| `{role}` | spawn_agents_on_csv | Runtime from CSV row |
| `{file_target}` | spawn_agents_on_csv | Runtime from CSV row |
| `{gen_type}` | spawn_agents_on_csv | Runtime from CSV row |
| `{prev_context}` | spawn_agents_on_csv | Runtime from CSV row |
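The runtime substitution in the table above can be sketched as a simple `{column}` replacement over a row object; this is a stand-in for what `spawn_agents_on_csv` does, not its actual implementation (`renderInstruction` is a hypothetical name):

```javascript
// Replace {column} placeholders in the instruction template with CSV row values.
// Unknown placeholders are left intact so missing columns stay visible.
function renderInstruction(template, row) {
  return template.replace(/\{(\w+)\}/g, (m, col) => (col in row ? row[col] : m))
}

const rendered = renderInstruction(
  "**Task ID**: {id}\n**Title**: {title}\n**Missing**: {nope}",
  { id: "ROLE-001", title: "Generate coordinator" }
)
console.log(rendered)
// **Task ID**: ROLE-001
// **Title**: Generate coordinator
// **Missing**: {nope}
```

Note that `<session-folder>` uses angle brackets rather than braces precisely so this row-level pass cannot touch it: it is baked in earlier, in Phase 1.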
180  .codex/skills/team-designer/schemas/tasks-schema.md  Normal file
@@ -0,0 +1,180 @@
# Team Skill Designer -- CSV Schema

## Master CSV: tasks.csv

### Column Definitions

#### Input Columns (Set by Decomposer)

| Column | Type | Required | Description | Example |
|--------|------|----------|-------------|---------|
| `id` | string | Yes | Unique task identifier | `"SCAFFOLD-001"` |
| `title` | string | Yes | Short task title | `"Create directory structure"` |
| `description` | string | Yes | Detailed generation instructions (self-contained) | `"Create roles/, specs/, templates/ directories..."` |
| `role` | string | Yes | Generator role name | `"scaffolder"` |
| `file_target` | string | Yes | Target file/directory path relative to skill root | `"roles/coordinator/role.md"` |
| `gen_type` | enum | Yes | `directory`, `router`, `role-bundle`, `role-inline`, `spec`, `template`, `validation` | `"role-inline"` |
| `deps` | string | No | Semicolon-separated dependency task IDs | `"SCAFFOLD-001;SPEC-001"` |
| `context_from` | string | No | Semicolon-separated task IDs for context | `"SPEC-001"` |
| `exec_mode` | enum | Yes | Execution mechanism: `csv-wave` or `interactive` | `"csv-wave"` |

#### Computed Columns (Set by Wave Engine)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `wave` | integer | Wave number (1-based, from topological sort) | `2` |
| `prev_context` | string | Aggregated findings from context_from tasks (per-wave CSV only) | `"[SCAFFOLD-001] Created directory structure at .codex/skills/..."` |

#### Output Columns (Set by Agent)

| Column | Type | Description | Example |
|--------|------|-------------|---------|
| `status` | enum | `pending` -> `completed` / `failed` / `skipped` | `"completed"` |
| `findings` | string | Key discoveries (max 500 chars) | `"Generated coordinator with 3 commands: analyze, dispatch, monitor"` |
| `files_produced` | string | Semicolon-separated paths of produced files | `"roles/coordinator/role.md;roles/coordinator/commands/analyze.md"` |
| `error` | string | Error message if failed | `""` |

---

### exec_mode Values

| Value | Mechanism | Description |
|-------|-----------|-------------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot batch execution within a wave |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round individual execution |

Interactive tasks appear in the master CSV for dependency tracking but are NOT included in wave-{N}.csv files.

---
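The computed `prev_context` column can be sketched as an aggregation over the `context_from` and `findings` columns, using the `[ID] findings` format shown in the example above (`buildPrevContext` is a hypothetical helper):

```javascript
// Build the prev_context string for one task from its context_from references.
function buildPrevContext(task, tasks) {
  const byId = Object.fromEntries(tasks.map(t => [t.id, t]))
  return (task.context_from || "").split(";").filter(Boolean)
    .map(id => (byId[id] ? `[${id}] ${byId[id].findings}` : ""))
    .filter(Boolean)
    .join("\n")
}

const all = [
  { id: "SCAFFOLD-001", findings: "Created directory structure" },
  { id: "ROUTER-001", context_from: "SCAFFOLD-001", findings: "" }
]
console.log(buildPrevContext(all[1], all)) // [SCAFFOLD-001] Created directory structure
```

Because waves run in dependency order, every task named in `context_from` has already merged its `findings` into the master CSV by the time this string is computed.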
### Generator Roles

| Role | gen_type Values | Description |
|------|-----------------|-------------|
| `scaffolder` | `directory` | Creates directory structures |
| `router-writer` | `router` | Generates SKILL.md orchestrator files |
| `role-writer` | `role-bundle`, `role-inline` | Generates role.md files (+ optional commands/) |
| `spec-writer` | `spec` | Generates specs/*.md files |
| `template-writer` | `template` | Generates templates/*.md files |
| `validator` | `validation` | Validates generated skill package |

---

### gen_type Values

| gen_type | Target | Description |
|----------|--------|-------------|
| `directory` | Directory path | Create directory structure with subdirectories |
| `router` | SKILL.md | Generate main orchestrator SKILL.md with frontmatter, role registry, router |
| `role-bundle` | Directory path | Generate role.md + commands/ folder with multiple command files |
| `role-inline` | Single .md file | Generate single role.md with inline Phase 2-4 logic |
| `spec` | Single .md file | Generate spec file (pipelines, quality-gates, etc.) |
| `template` | Single .md file | Generate document template file |
| `validation` | Report | Validate complete skill package structure and references |

---
### Example Data
|
||||
|
||||
```csv
|
||||
id,title,description,role,file_target,gen_type,deps,context_from,exec_mode,wave,status,findings,files_produced,error
|
||||
"SCAFFOLD-001","Create directory structure","Create complete directory structure for team-code-review skill:\n- ~ or <project>/.codex/skills/team-code-review/\n- roles/coordinator/ + commands/\n- roles/analyst/\n- roles/reviewer/\n- specs/\n- templates/","scaffolder","skill-dir","directory","","","csv-wave","1","pending","","",""
|
||||
"ROUTER-001","Generate SKILL.md","Generate ~ or <project>/.codex/skills/team-code-review/SKILL.md with:\n- Frontmatter (name, description, allowed-tools)\n- Architecture diagram\n- Role registry table\n- CSV schema reference\n- Session structure\n- Wave execution engine\nUse teamConfig.json for role list and pipeline definitions","router-writer","SKILL.md","router","SCAFFOLD-001","SCAFFOLD-001","csv-wave","2","pending","","",""
|
||||
"SPEC-001","Generate pipelines spec","Generate specs/pipelines.md with:\n- Pipeline definitions from teamConfig\n- Task registry with PREFIX-NNN format\n- Conditional routing rules\n- Dynamic specialist injection\nRoles: analyst(ANALYSIS-*), reviewer(REVIEW-*)","spec-writer","specs/pipelines.md","spec","SCAFFOLD-001","SCAFFOLD-001","csv-wave","2","pending","","",""
|
||||
"ROLE-001","Generate coordinator","Generate roles/coordinator/role.md with entry router and commands/analyze.md, commands/dispatch.md, commands/monitor.md. Coordinator orchestrates the analysis pipeline","role-writer","roles/coordinator/","role-bundle","SCAFFOLD-001;SPEC-001","SPEC-001","csv-wave","3","pending","","",""
|
||||
"ROLE-002","Generate analyst role","Generate roles/analyst/role.md with Phase 2 (context loading), Phase 3 (analysis execution), Phase 4 (output). Prefix: ANALYSIS, inner_loop: false","role-writer","roles/analyst/role.md","role-inline","SCAFFOLD-001;SPEC-001","SPEC-001","csv-wave","3","pending","","",""
|
||||
"ROLE-003","Generate reviewer role","Generate roles/reviewer/role.md with Phase 2 (load artifacts), Phase 3 (review execution), Phase 4 (report). Prefix: REVIEW, inner_loop: false","role-writer","roles/reviewer/role.md","role-inline","SCAFFOLD-001;SPEC-001","SPEC-001","csv-wave","3","pending","","",""
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Column Lifecycle
|
||||
|
||||
```
Decomposer (Phase 1)      Wave Engine (Phase 2)     Agent (Execution)
---------------------     --------------------      -----------------
id           ---------->  id           ---------->  id
title        ---------->  title        ---------->  (reads)
description  ---------->  description  ---------->  (reads)
role         ---------->  role         ---------->  (reads)
file_target  ---------->  file_target  ---------->  (reads)
gen_type     ---------->  gen_type     ---------->  (reads)
deps         ---------->  deps         ---------->  (reads)
context_from ---------->  context_from ---------->  (reads)
exec_mode    ---------->  exec_mode    ---------->  (reads)
                          wave         ---------->  (reads)
                          prev_context ---------->  (reads)
                                                    status
                                                    findings
                                                    files_produced
                                                    error
```

---

## Output Schema (JSON)

Agent output via `report_agent_job_result` (csv-wave tasks):

```json
{
  "id": "ROLE-001",
  "status": "completed",
  "findings": "Generated coordinator role with entry router, 3 commands (analyze, dispatch, monitor), beat model in monitor.md only",
  "files_produced": "roles/coordinator/role.md;roles/coordinator/commands/analyze.md;roles/coordinator/commands/dispatch.md;roles/coordinator/commands/monitor.md",
  "error": ""
}
```

Interactive tasks write their output as structured text or JSON to `interactive/{id}-result.json`.

---

## Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `dir_created` | `data.path` | `{path, description}` | Directory structure created |
| `file_generated` | `data.file` | `{file, gen_type, sections}` | File generated with sections |
| `pattern_found` | `data.pattern_name` | `{pattern_name, description}` | Design pattern from golden sample |
| `config_decision` | `data.decision` | `{decision, rationale, impact}` | Config decision made |
| `validation_result` | `data.check` | `{check, passed, message}` | Validation check result |
| `reference_found` | `data.source+data.target` | `{source, target, type}` | Cross-reference between files |

### Discovery NDJSON Format

```jsonl
{"ts":"2026-03-08T10:00:00Z","worker":"SCAFFOLD-001","type":"dir_created","data":{"path":"~ or <project>/.codex/skills/team-code-review/roles/","description":"Created roles directory with coordinator, analyst, reviewer subdirs"}}
{"ts":"2026-03-08T10:05:00Z","worker":"ROLE-001","type":"file_generated","data":{"file":"roles/coordinator/role.md","gen_type":"role-bundle","sections":["entry-router","phase-0","phase-1","phase-2","phase-3"]}}
{"ts":"2026-03-08T10:10:00Z","worker":"SPEC-001","type":"config_decision","data":{"decision":"full-lifecycle pipeline","rationale":"Both analyst and reviewer roles present","impact":"4-tier dependency graph"}}
```
> Both csv-wave and interactive agents read/write the same discoveries.ndjson file.
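
The dedup-on-read convention implied by the Dedup Key column above can be sketched as follows. This is an illustrative helper, not part of the skill contract; the per-type key functions mirror the table, and the input is the raw NDJSON text rather than a file handle:

```javascript
// Sketch: deduplicate discoveries.ndjson entries on read.
// The dedupKeyOf mapping follows the Dedup Key column above
// (e.g. dir_created -> data.path); it is illustrative only.
const dedupKeyOf = {
  dir_created: (d) => d.path,
  file_generated: (d) => d.file,
  pattern_found: (d) => d.pattern_name,
  config_decision: (d) => d.decision,
  validation_result: (d) => d.check,
  reference_found: (d) => `${d.source}+${d.target}`,
};

function readDiscoveries(ndjsonText) {
  const seen = new Set();
  const out = [];
  for (const line of ndjsonText.split("\n")) {
    if (!line.trim()) continue; // skip blank lines
    const entry = JSON.parse(line);
    const keyFn = dedupKeyOf[entry.type];
    const key = `${entry.type}:${keyFn ? keyFn(entry.data) : JSON.stringify(entry.data)}`;
    if (seen.has(key)) continue; // first occurrence wins
    seen.add(key);
    out.push(entry);
  }
  return out;
}
```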

---

## Cross-Mechanism Context Flow

| Source | Target | Mechanism |
|--------|--------|-----------|
| CSV task findings | Interactive task | Injected via spawn message or send_input |
| Interactive task result | CSV task prev_context | Read from interactive/{id}-result.json |
| Any agent discovery | Any agent | Shared via discoveries.ndjson |

---

## Validation Rules

| Rule | Check | Error |
|------|-------|-------|
| Unique IDs | No duplicate `id` values | "Duplicate task ID: {id}" |
| Valid deps | All dep IDs exist in tasks | "Unknown dependency: {dep_id}" |
| No self-deps | Task cannot depend on itself | "Self-dependency: {id}" |
| No circular deps | Topological sort completes | "Circular dependency detected involving: {ids}" |
| context_from valid | All context IDs exist and in earlier waves | "Invalid context_from: {id}" |
| exec_mode valid | Value is `csv-wave` or `interactive` | "Invalid exec_mode: {value}" |
| Description non-empty | Every task has description | "Empty description for task: {id}" |
| Status enum | status in {pending, completed, failed, skipped} | "Invalid status: {status}" |
| gen_type valid | Value in {directory, router, role-bundle, role-inline, spec, template, validation} | "Invalid gen_type: {value}" |
| file_target valid | Path is relative and uses forward slashes | "Invalid file_target: {path}" |
| Cross-mechanism deps | Interactive to CSV deps resolve correctly | "Cross-mechanism dependency unresolvable: {id}" |
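
A minimal validator covering a few of these rules (unique IDs, dependency existence, self-deps, and cycle detection) could look like the sketch below. It assumes the task shape `{id, deps}` with the CSV's semicolon-separated `deps` already split into an array; the exact error strings follow the table above:

```javascript
// Sketch: validate a task list against a subset of the rules above.
// Cycle detection uses Kahn's algorithm: nodes left with nonzero
// in-degree after the BFS lie on a cycle.
function validateTasks(tasks) {
  const errors = [];
  const ids = new Set();
  for (const t of tasks) {
    if (ids.has(t.id)) errors.push(`Duplicate task ID: ${t.id}`);
    ids.add(t.id);
  }
  for (const t of tasks) {
    for (const dep of t.deps) {
      if (!ids.has(dep)) errors.push(`Unknown dependency: ${dep}`);
      if (dep === t.id) errors.push(`Self-dependency: ${t.id}`);
    }
  }
  // in-degree = number of deps that resolve to known tasks
  const indeg = new Map(tasks.map((t) => [t.id, 0]));
  for (const t of tasks) {
    for (const dep of t.deps) {
      if (indeg.has(dep)) indeg.set(t.id, indeg.get(t.id) + 1);
    }
  }
  const queue = tasks.filter((t) => indeg.get(t.id) === 0).map((t) => t.id);
  let visited = 0;
  while (queue.length) {
    const id = queue.shift();
    visited++;
    for (const t of tasks) {
      if (!t.deps.includes(id)) continue;
      indeg.set(t.id, indeg.get(t.id) - 1);
      if (indeg.get(t.id) === 0) queue.push(t.id);
    }
  }
  if (visited < tasks.length) {
    const cyclic = tasks.filter((t) => indeg.get(t.id) > 0).map((t) => t.id);
    errors.push(`Circular dependency detected involving: ${cyclic.join(";")}`);
  }
  return errors;
}
```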

781
.codex/skills/team-edict/SKILL.md
Normal file
@@ -0,0 +1,781 @@

---
name: team-edict
description: |
  Three Departments, Six Ministries (三省六部) multi-agent collaboration framework. Imperial edict workflow:
  Crown Prince receives edict -> Zhongshu (Planning) -> Menxia (Multi-dimensional Review) ->
  Shangshu (Dispatch) -> Six Ministries parallel execution.
  Mandatory kanban state reporting, Blocked as first-class state, full observability.
argument-hint: "[-y|--yes] [-c|--concurrency N] [--continue] \"task description / edict\""
allowed-tools: spawn_agents_on_csv, spawn_agent, wait, send_input, close_agent, Read, Write, Edit, Bash, Glob, Grep, AskUserQuestion
---

## Auto Mode

When `--yes` or `-y`: Auto-confirm task decomposition, skip interactive validation, use defaults.

# Team Edict -- Three Departments Six Ministries

## Usage

```bash
$team-edict "Implement user authentication module with JWT tokens"
$team-edict -c 4 "Refactor the data pipeline for better performance"
$team-edict -y "Add comprehensive test coverage for auth module"
$team-edict --continue "EDT-20260308-143022"
```

**Flags**:
- `-y, --yes`: Skip all confirmations (auto mode)
- `-c, --concurrency N`: Max concurrent agents within each wave (default: 4)
- `--continue`: Resume existing session

**Output Directory**: `.workflow/.csv-wave/{session-id}/`
**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)

---

## Overview

Imperial edict-inspired multi-agent collaboration framework with **strict cascading approval pipeline** and **parallel ministry execution**. The Three Departments (zhongshu/menxia/shangshu) perform serial planning, review, and dispatch. The Six Ministries (gongbu/bingbu/hubu/libu/libu-hr/xingbu) execute tasks in dependency-ordered waves.

**Execution Model**: Hybrid -- CSV wave pipeline (primary) + individual agent spawn (secondary)

```
+-------------------------------------------------------------------------+
|                           TEAM EDICT WORKFLOW                           |
+-------------------------------------------------------------------------+
|
|  Phase 0: Pre-Wave Interactive (Three Departments Serial Pipeline)
|  +-- Stage 1: Zhongshu (Planning) -- drafts execution plan
|  +-- Stage 2: Menxia (Review) -- multi-dimensional review
|  |   +-- Reject -> loop back to Zhongshu (max 3 rounds)
|  +-- Stage 3: Shangshu (Dispatch) -- routes to Six Ministries
|  +-- Output: tasks.csv with ministry assignments + dependency waves
|
|  Phase 1: Requirement -> CSV + Classification
|  +-- Parse Shangshu dispatch plan into tasks.csv
|  +-- Classify tasks: csv-wave (ministry work) | interactive (QA loop)
|  +-- Compute dependency waves (topological sort)
|  +-- Generate tasks.csv with wave + exec_mode columns
|  +-- User validates task breakdown (skip if -y)
|
|  Phase 2: Wave Execution Engine (Extended)
|  +-- For each wave (1..N):
|  |   +-- Build wave CSV (filter csv-wave tasks for this wave)
|  |   +-- Inject previous findings into prev_context column
|  |   +-- spawn_agents_on_csv(wave CSV)
|  |   +-- Execute post-wave interactive tasks (if any)
|  |   +-- Merge all results into master tasks.csv
|  |   +-- Check: any failed? -> skip dependents
|  +-- discoveries.ndjson shared across all modes (append-only)
|
|  Phase 3: Post-Wave Interactive (Quality Aggregation)
|  +-- Aggregation Agent: collects all ministry outputs
|  +-- Generates final edict completion report
|  +-- Quality gate validation against specs/quality-gates.md
|
|  Phase 4: Results Aggregation
|  +-- Export final results.csv
|  +-- Generate context.md with all findings
|  +-- Display summary: completed/failed/skipped per wave
|  +-- Offer: view results | retry failed | done
|
+-------------------------------------------------------------------------+
```

---

## Task Classification Rules

Each task is classified by `exec_mode`:

| exec_mode | Mechanism | Criteria |
|-----------|-----------|----------|
| `csv-wave` | `spawn_agents_on_csv` | One-shot, structured I/O, no multi-round interaction |
| `interactive` | `spawn_agent`/`wait`/`send_input`/`close_agent` | Multi-round, clarification, inline utility |

**Classification Decision**:

| Task Property | Classification |
|---------------|---------------|
| Ministry implementation (IMPL/OPS/DATA/DOC/HR) | `csv-wave` |
| Quality assurance with test-fix loop (QA) | `interactive` |
| Single-department self-contained work | `csv-wave` |
| Cross-department coordination needed | `interactive` |
| Requires iterative feedback (test -> fix -> retest) | `interactive` |
| Standalone analysis or generation | `csv-wave` |

---

## CSV Schema

### tasks.csv (Master State)

```csv
id,title,description,deps,context_from,exec_mode,department,task_prefix,priority,dispatch_batch,acceptance_criteria,wave,status,findings,artifact_path,error
IMPL-001,"Implement JWT auth","Create JWT authentication middleware with token validation","","","csv-wave","gongbu","IMPL","P0","1","All auth endpoints return valid JWT tokens","1","pending","","",""
DOC-001,"Write API docs","Generate OpenAPI documentation for auth endpoints","IMPL-001","IMPL-001","csv-wave","libu","DOC","P1","2","API docs cover all auth endpoints","2","pending","","",""
QA-001,"Test auth module","Execute test suite and validate coverage >= 95%","IMPL-001","IMPL-001","interactive","xingbu","QA","P1","2","Test pass rate >= 95%, no Critical bugs","2","pending","","",""
```

**Columns**:

| Column | Phase | Description |
|--------|-------|-------------|
| `id` | Input | Unique task identifier (DEPT-NNN format) |
| `title` | Input | Short task title |
| `description` | Input | Detailed task description (self-contained for agent execution) |
| `deps` | Input | Semicolon-separated dependency task IDs |
| `context_from` | Input | Semicolon-separated task IDs whose findings this task needs |
| `exec_mode` | Input | `csv-wave` or `interactive` |
| `department` | Input | Target ministry: gongbu/bingbu/hubu/libu/libu-hr/xingbu |
| `task_prefix` | Input | Task type prefix: IMPL/OPS/DATA/DOC/HR/QA |
| `priority` | Input | Priority level: P0 (highest) to P3 (lowest) |
| `dispatch_batch` | Input | Batch number from Shangshu dispatch plan (1-based) |
| `acceptance_criteria` | Input | Specific, measurable acceptance criteria from dispatch plan |
| `wave` | Computed | Wave number (computed by topological sort, 1-based) |
| `status` | Output | `pending` -> `completed` / `failed` / `skipped` |
| `findings` | Output | Key discoveries or implementation notes (max 500 chars) |
| `artifact_path` | Output | Path to output artifact file relative to session dir |
| `error` | Output | Error message if failed (empty if success) |

### Per-Wave CSV (Temporary)

Each wave generates a temporary `wave-{N}.csv` with extra `prev_context` column (csv-wave tasks only).

---

## Agent Registry (Interactive Agents)

| Agent | Role File | Pattern | Responsibility | Position |
|-------|-----------|---------|----------------|----------|
| zhongshu-planner | agents/zhongshu-planner.md | 2.3 (sequential pipeline) | Draft structured execution plan from edict requirements | standalone (Phase 0, Stage 1) |
| menxia-reviewer | agents/menxia-reviewer.md | 2.4 (multi-perspective analysis) | Multi-dimensional review with 4 CLI analyses | standalone (Phase 0, Stage 2) |
| shangshu-dispatcher | agents/shangshu-dispatcher.md | 2.3 (sequential pipeline) | Parse approved plan and generate ministry task assignments | standalone (Phase 0, Stage 3) |
| qa-verifier | agents/qa-verifier.md | 2.5 (iterative refinement) | Quality assurance with test-fix loop (max 3 rounds) | post-wave |
| aggregator | agents/aggregator.md | 2.3 (sequential pipeline) | Collect all ministry outputs and generate final report | standalone (Phase 3) |

> **COMPACT PROTECTION**: Agent files are execution documents. When context compression occurs, **you MUST immediately `Read` the corresponding agent.md** to reload.

---

## Output Artifacts

| File | Purpose | Lifecycle |
|------|---------|-----------|
| `tasks.csv` | Master state -- all tasks with status/findings | Updated after each wave |
| `wave-{N}.csv` | Per-wave input (temporary, csv-wave tasks only) | Created before wave, deleted after |
| `results.csv` | Final export of all task results | Created in Phase 4 |
| `discoveries.ndjson` | Shared exploration board (all agents, both modes) | Append-only, carries across waves |
| `context.md` | Human-readable execution report | Created in Phase 4 |
| `plan/zhongshu-plan.md` | Zhongshu execution plan | Created in Phase 0 Stage 1 |
| `review/menxia-review.md` | Menxia review report with 4-dimensional analysis | Created in Phase 0 Stage 2 |
| `plan/dispatch-plan.md` | Shangshu dispatch plan with ministry assignments | Created in Phase 0 Stage 3 |
| `artifacts/{dept}-output.md` | Per-ministry output artifact | Created during wave execution |
| `interactive/{id}-result.json` | Results from interactive tasks (QA loops) | Created per interactive task |
| `agents/registry.json` | Active interactive agent tracking | Updated on spawn/close |

---

## Session Structure

```
.workflow/.csv-wave/{session-id}/
+-- tasks.csv              # Master state (all tasks, both modes)
+-- results.csv            # Final results export
+-- discoveries.ndjson     # Shared discovery board (all agents)
+-- context.md             # Human-readable report
+-- wave-{N}.csv           # Temporary per-wave input (csv-wave only)
+-- plan/
|   +-- zhongshu-plan.md   # Zhongshu execution plan
|   +-- dispatch-plan.md   # Shangshu dispatch plan
+-- review/
|   +-- menxia-review.md   # Menxia review report
+-- artifacts/
|   +-- gongbu-output.md   # Ministry outputs
|   +-- bingbu-output.md
|   +-- hubu-output.md
|   +-- libu-output.md
|   +-- libu-hr-output.md
|   +-- xingbu-report.md
+-- interactive/           # Interactive task artifacts
|   +-- {id}-result.json   # Per-task results
+-- agents/
    +-- registry.json      # Active interactive agent tracking
```

---

## Implementation

### Session Initialization

```
1. Parse $ARGUMENTS for task description (the "edict")
2. Generate session ID: EDT-{slug}-{YYYYMMDD-HHmmss}
3. Create session directory: .workflow/.csv-wave/{session-id}/
4. Create subdirectories: plan/, review/, artifacts/, interactive/, agents/
5. Initialize registry.json: { "active": [], "closed": [] }
6. Initialize discoveries.ndjson (empty file)
7. Read specs: ~ or <project>/.codex/skills/team-edict/specs/team-config.json
8. Read quality gates: ~ or <project>/.codex/skills/team-edict/specs/quality-gates.md
9. Log session start to context.md
```
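
Step 2's session ID format can be sketched as below. The slugging rules (lowercase, hyphenation, truncation length) are assumptions for illustration; the skill only fixes the `EDT-{slug}-{YYYYMMDD-HHmmss}` shape:

```javascript
// Sketch: build EDT-{slug}-{YYYYMMDD-HHmmss} from the edict text.
// The slugify details below are illustrative, not part of the spec.
function slugify(text, maxLen = 24) {
  return text
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // non-alphanumeric runs -> hyphen
    .replace(/^-+|-+$/g, "")     // trim leading/trailing hyphens
    .slice(0, maxLen);
}

function sessionId(edictText, now = new Date()) {
  const p = (n) => String(n).padStart(2, "0");
  const stamp =
    `${now.getFullYear()}${p(now.getMonth() + 1)}${p(now.getDate())}` +
    `-${p(now.getHours())}${p(now.getMinutes())}${p(now.getSeconds())}`;
  return `EDT-${slugify(edictText)}-${stamp}`;
}
```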

---

### Phase 0: Pre-Wave Interactive (Three Departments Serial Pipeline)

**Objective**: Execute the serial approval pipeline (zhongshu -> menxia -> shangshu) to produce a validated, reviewed dispatch plan that decomposes the edict into ministry-level tasks.

#### Stage 1: Zhongshu Planning

```javascript
const zhongshu = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-edict/agents/zhongshu-planner.md (MUST read first)
2. Read: ${sessionDir}/discoveries.ndjson (shared discoveries, skip if not exists)
3. Read: ~ or <project>/.codex/skills/team-edict/specs/team-config.json (routing rules)

---

Goal: Draft a structured execution plan for the following edict
Scope: Analyze codebase, decompose into ministry-level subtasks, define acceptance criteria
Deliverables: ${sessionDir}/plan/zhongshu-plan.md

### Edict (Original Requirement)
${edictText}
`
})

const zhongshuResult = wait({ ids: [zhongshu], timeout_ms: 600000 })

if (zhongshuResult.timed_out) {
  send_input({ id: zhongshu, message: "Please finalize your execution plan immediately and output current findings." })
  const retry = wait({ ids: [zhongshu], timeout_ms: 120000 })
}

// Store result
Write(`${sessionDir}/interactive/zhongshu-result.json`, JSON.stringify({
  task_id: "PLAN-001",
  status: "completed",
  findings: parseFindings(zhongshuResult),
  timestamp: new Date().toISOString()
}))

close_agent({ id: zhongshu })
```

#### Stage 2: Menxia Multi-Dimensional Review

**Rejection Loop**: If menxia rejects (approved=false), respawn zhongshu with feedback. Max 3 rounds.

```javascript
let reviewRound = 0
let approved = false

while (!approved && reviewRound < 3) {
  reviewRound++

  const menxia = spawn_agent({
    message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-edict/agents/menxia-reviewer.md (MUST read first)
2. Read: ${sessionDir}/plan/zhongshu-plan.md (plan to review)
3. Read: ${sessionDir}/discoveries.ndjson (shared discoveries)

---

Goal: Multi-dimensional review of Zhongshu plan (Round ${reviewRound}/3)
Scope: Feasibility, completeness, risk, resource allocation
Deliverables: ${sessionDir}/review/menxia-review.md

### Original Edict
${edictText}

### Previous Review (if rejection round > 1)
${reviewRound > 1 ? readPreviousReview() : "First review round"}
`
  })

  const menxiaResult = wait({ ids: [menxia], timeout_ms: 600000 })

  if (menxiaResult.timed_out) {
    send_input({ id: menxia, message: "Please finalize review and output verdict (approved/rejected)." })
    const retry = wait({ ids: [menxia], timeout_ms: 120000 })
  }

  close_agent({ id: menxia })
  // Parse verdict from the review report; match an explicit verdict marker
  // so text like "approved: false" or "not approved" does not count as approval
  const reviewReport = Read(`${sessionDir}/review/menxia-review.md`)
  approved = /approved:\s*true/i.test(reviewReport) || /verdict:\s*approved\b/i.test(reviewReport)

  if (!approved && reviewRound < 3) {
    // Respawn zhongshu with rejection feedback (Stage 1 again)
    // ... spawn zhongshu with rejection_feedback = reviewReport ...
  }
}

if (!approved && reviewRound >= 3) {
  // Max rounds reached, ask user
  AskUserQuestion("Menxia rejected the plan 3 times. Please review and decide: approve, reject, or provide guidance.")
}
```

#### Stage 3: Shangshu Dispatch

```javascript
const shangshu = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-edict/agents/shangshu-dispatcher.md (MUST read first)
2. Read: ${sessionDir}/plan/zhongshu-plan.md (approved plan)
3. Read: ${sessionDir}/review/menxia-review.md (review conditions)
4. Read: ~ or <project>/.codex/skills/team-edict/specs/team-config.json (routing rules)

---

Goal: Parse approved plan and generate Six Ministries dispatch plan
Scope: Route subtasks to departments, define execution batches, set dependencies
Deliverables: ${sessionDir}/plan/dispatch-plan.md
`
})

const shangshuResult = wait({ ids: [shangshu], timeout_ms: 300000 })
close_agent({ id: shangshu })

// Parse dispatch-plan.md to generate tasks.csv (Phase 1 input)
```

**Success Criteria**:
- zhongshu-plan.md written with structured subtask list
- menxia-review.md written with 4-dimensional analysis verdict
- dispatch-plan.md written with ministry assignments and batch ordering
- Interactive agents closed, results stored

---

### Phase 1: Requirement -> CSV + Classification

**Objective**: Parse the Shangshu dispatch plan into a tasks.csv with proper wave computation and exec_mode classification.

**Decomposition Rules**:

1. Read `${sessionDir}/plan/dispatch-plan.md`
2. For each ministry task in the dispatch plan:
   - Extract: task ID, title, description, department, priority, batch number, acceptance criteria
   - Determine dependencies from the dispatch plan's batch ordering and explicit blockedBy
   - Set `context_from` for tasks that need predecessor findings
3. Apply classification rules (see Task Classification Rules above)
4. Compute waves via topological sort (Kahn's BFS with depth tracking)
5. Generate `tasks.csv` with all columns

**Classification Rules**:

| Department | Default exec_mode | Override Condition |
|------------|-------------------|-------------------|
| gongbu (IMPL) | csv-wave | Interactive if requires iterative codebase exploration |
| bingbu (OPS) | csv-wave | - |
| hubu (DATA) | csv-wave | - |
| libu (DOC) | csv-wave | - |
| libu-hr (HR) | csv-wave | - |
| xingbu (QA) | interactive | Always interactive (test-fix loop) |
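
The classification table reduces to a small lookup plus the gongbu override. In the sketch below, the `needsExploration` flag is an assumed input (how it gets set is up to the decomposer, not fixed by this skill):

```javascript
// Sketch: department default exec_mode plus the gongbu override
// from the table above. needsExploration is a hypothetical flag.
const DEFAULT_MODE = {
  gongbu: "csv-wave",
  bingbu: "csv-wave",
  hubu: "csv-wave",
  libu: "csv-wave",
  "libu-hr": "csv-wave",
  xingbu: "interactive", // always interactive (test-fix loop)
};

function classify(task) {
  if (task.department === "gongbu" && task.needsExploration) {
    return "interactive"; // override: iterative codebase exploration
  }
  return DEFAULT_MODE[task.department] ?? "csv-wave";
}
```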
**Wave Computation**: Kahn's BFS topological sort with depth tracking (csv-wave tasks only).
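
Kahn's BFS with depth tracking can be sketched as follows: tasks with no unresolved deps land in wave 1, and each BFS layer becomes the next wave, so a task's wave is 1 + the max wave of its deps. The input shape `{id, deps}` follows the CSV schema with `deps` already split; the task IDs in the test are illustrative:

```javascript
// Sketch: compute wave numbers via Kahn's BFS with depth tracking.
// Throws if a cycle prevents the sort from completing.
function computeWaves(tasks) {
  const byId = new Map(tasks.map((t) => [t.id, t]));
  const wave = new Map();
  const indeg = new Map(
    tasks.map((t) => [t.id, t.deps.filter((d) => byId.has(d)).length])
  );
  let queue = tasks.filter((t) => indeg.get(t.id) === 0).map((t) => t.id);
  let depth = 1;
  while (queue.length) {
    const next = [];
    for (const id of queue) {
      wave.set(id, depth);
      for (const t of tasks) {
        if (!t.deps.includes(id)) continue;
        indeg.set(t.id, indeg.get(t.id) - 1);
        if (indeg.get(t.id) === 0) next.push(t.id);
      }
    }
    queue = next;
    depth++;
  }
  if (wave.size < tasks.length) throw new Error("Circular dependency detected");
  return wave;
}
```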

**User Validation**: Display task breakdown with wave + exec_mode assignment (skip if AUTO_YES).

**Success Criteria**:
- tasks.csv created with valid schema, wave, and exec_mode assignments
- No circular dependencies
- User approved (or AUTO_YES)

---

### Phase 2: Wave Execution Engine (Extended)

**Objective**: Execute tasks wave-by-wave with hybrid mechanism support and cross-wave context propagation.

```
For each wave W in 1..max_wave:

1. FILTER csv-wave tasks where wave == W and status == "pending"
2. CHECK dependencies: if any dep has status == "failed", mark task as "skipped"
3. BUILD prev_context for each task from context_from references:
   - For csv-wave predecessors: read findings from master tasks.csv
   - For interactive predecessors: read from interactive/{id}-result.json
4. GENERATE wave-{W}.csv with prev_context column added
5. EXECUTE csv-wave tasks:
   spawn_agents_on_csv({
     task_csv_path: "${sessionDir}/wave-{W}.csv",
     instruction_path: "~ or <project>/.codex/skills/team-edict/instructions/agent-instruction.md",
     schema_path: "~ or <project>/.codex/skills/team-edict/schemas/tasks-schema.md",
     additional_instructions: "Session directory: ${sessionDir}. Department: {department}. Priority: {priority}.",
     concurrency: CONCURRENCY
   })
6. MERGE results back into master tasks.csv (update status, findings, artifact_path, error)
7. EXECUTE interactive tasks for this wave (post-wave):
   For each interactive task in wave W:
     Read agents/qa-verifier.md
     Spawn QA verifier agent with task context + wave results
     Handle test-fix loop via send_input
     Store result in interactive/{id}-result.json
     Close agent, update registry.json
8. CLEANUP: delete wave-{W}.csv
9. LOG wave completion to context.md and discoveries.ndjson

Wave completion check:
- All tasks completed or skipped -> proceed to next wave
- Any failed non-skippable task -> log error, continue (dependents will be skipped)
```
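
Step 3's prev_context assembly mixes the two mechanisms and can be sketched as follows. `readInteractiveResult` is a stand-in for reading `interactive/{id}-result.json`; the bracketed-prefix format for joined findings is an assumption:

```javascript
// Sketch: build one task's prev_context from its context_from list.
// csv-wave predecessors contribute findings from the master rows;
// interactive predecessors come from their per-task result JSON.
function buildPrevContext(task, masterRows, readInteractiveResult) {
  const parts = [];
  for (const srcId of task.context_from) {
    const row = masterRows.find((r) => r.id === srcId);
    if (row && row.exec_mode === "csv-wave") {
      parts.push(`[${srcId}] ${row.findings}`);
    } else {
      const result = readInteractiveResult(srcId); // interactive/{id}-result.json
      parts.push(`[${srcId}] ${result.findings}`);
    }
  }
  return parts.join("\n");
}
```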

**Success Criteria**:
- All waves executed in order
- Both csv-wave and interactive tasks handled per wave
- Each wave's results merged into master CSV before next wave starts
- Dependent tasks skipped when predecessor failed
- discoveries.ndjson accumulated across all waves and mechanisms
- Interactive agent lifecycle tracked in registry.json

---

### Phase 3: Post-Wave Interactive (Quality Aggregation)

**Objective**: Collect all ministry outputs, validate against quality gates, and generate the final edict completion report.

```javascript
const aggregator = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-edict/agents/aggregator.md (MUST read first)
2. Read: ${sessionDir}/tasks.csv (master state)
3. Read: ${sessionDir}/discoveries.ndjson (all discoveries)
4. Read: ~ or <project>/.codex/skills/team-edict/specs/quality-gates.md (quality standards)

---

Goal: Aggregate all ministry outputs into final edict completion report
Scope: All artifacts in ${sessionDir}/artifacts/, all interactive results
Deliverables: ${sessionDir}/context.md (final report)

### Ministry Artifacts to Collect
${listAllArtifacts()}

### Quality Gate Standards
Read from: ~ or <project>/.codex/skills/team-edict/specs/quality-gates.md
`
})

const aggResult = wait({ ids: [aggregator], timeout_ms: 300000 })
close_agent({ id: aggregator })
```

**Success Criteria**:
- Post-wave interactive processing complete
- Interactive agents closed, results stored

---

### Phase 4: Results Aggregation

**Objective**: Generate final results and human-readable report.

```
1. READ master tasks.csv
2. EXPORT results.csv with final status for all tasks
3. GENERATE context.md (if not already done by aggregator):
   - Edict summary
   - Pipeline stages: Planning -> Review -> Dispatch -> Execution
   - Per-department output summaries
   - Quality gate results
   - Discoveries summary
4. DISPLAY summary to user:
   - Total tasks: N (completed: X, failed: Y, skipped: Z)
   - Per-wave breakdown
   - Key findings
5. CLEANUP:
   - Close any remaining interactive agents (registry.json)
   - Remove temporary wave CSV files
6. OFFER: view full report | retry failed tasks | done
```

**Success Criteria**:
- results.csv exported (all tasks, both modes)
- context.md generated
- All interactive agents closed (registry.json cleanup)
- Summary displayed to user

---

## Shared Discovery Board Protocol

All agents (both csv-wave and interactive) share a single `discoveries.ndjson` file for cross-agent knowledge propagation.

### Discovery Types

| Type | Dedup Key | Data Schema | Description |
|------|-----------|-------------|-------------|
| `codebase_pattern` | `pattern_name` | `{pattern_name, files, description}` | Identified codebase patterns and conventions |
| `dependency_found` | `dep_name` | `{dep_name, version, used_by}` | External dependency discoveries |
| `risk_identified` | `risk_id` | `{risk_id, severity, description, mitigation}` | Risk findings from any agent |
| `implementation_note` | `file_path` | `{file_path, note, line_range}` | Implementation decisions and notes |
| `test_result` | `test_suite` | `{test_suite, pass_rate, failures}` | Test execution results |
| `quality_issue` | `issue_id` | `{issue_id, severity, file, description}` | Quality issues found during review |
| `routing_note` | `task_id` | `{task_id, department, reason}` | Dispatch routing decisions |

### Protocol

```bash
# Append discovery (any agent, any mode)
echo '{"ts":"<ISO8601>","worker":"{id}","type":"<type>","data":{...}}' >> ${sessionDir}/discoveries.ndjson

# Read discoveries (any agent, any mode)
# Read ${sessionDir}/discoveries.ndjson, parse each line as JSON
# Deduplicate by type + dedup_key
```

### Rules
- **Append-only**: Never modify or delete existing entries
- **Deduplicate on read**: When reading, use type + dedup_key to skip duplicates
- **Both mechanisms share**: csv-wave agents and interactive agents use the same file
- **Carry across waves**: Discoveries persist across all waves

---

## Six Ministries Routing Rules

Shangshu dispatcher uses these rules to assign tasks to ministries:

| Keyword Signals | Target Ministry | Role ID | Task Prefix |
|----------------|-----------------|---------|-------------|
| Feature dev, architecture, code, refactor, implement, API | Engineering | gongbu | IMPL |
| Deploy, CI/CD, infrastructure, container, monitoring, security ops | Operations | bingbu | OPS |
| Data analysis, statistics, cost, reports, resource mgmt | Data & Resources | hubu | DATA |
| Documentation, README, UI copy, specs, API docs, comms | Documentation | libu | DOC |
| Testing, QA, bug, code review, compliance audit | Quality Assurance | xingbu | QA |
| Agent management, training, skill optimization, evaluation | Personnel | libu-hr | HR |

---

## Kanban State Protocol

All agents must report state transitions. In Codex context, agents write state to discoveries.ndjson:

### State Machine

```
Pending -> Doing -> Done
              |
           Blocked (can enter at any time, must report reason)
```

### State Reporting via Discoveries

```bash
# Task start
echo '{"ts":"<ISO8601>","worker":"{id}","type":"state_update","data":{"state":"Doing","task_id":"{id}","department":"{department}","step":"Starting execution"}}' >> ${sessionDir}/discoveries.ndjson

# Progress update
echo '{"ts":"<ISO8601>","worker":"{id}","type":"progress","data":{"task_id":"{id}","current":"Step 2: Implementing API","plan":"Step1 done|Step2 in progress|Step3 pending"}}' >> ${sessionDir}/discoveries.ndjson

# Completion
echo '{"ts":"<ISO8601>","worker":"{id}","type":"state_update","data":{"state":"Done","task_id":"{id}","remark":"Completed: implementation summary"}}' >> ${sessionDir}/discoveries.ndjson

# Blocked
echo '{"ts":"<ISO8601>","worker":"{id}","type":"state_update","data":{"state":"Blocked","task_id":"{id}","reason":"Cannot proceed: missing dependency"}}' >> ${sessionDir}/discoveries.ndjson
```
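Because the log is append-only, the current board state can be reconstructed by replaying `state_update` entries; a minimal reader sketch where later entries win:

```javascript
// Replay state_update entries: the last reported state per task is authoritative.
function currentBoard(ndjsonLines) {
  const board = {};
  for (const line of ndjsonLines) {
    let e;
    try { e = JSON.parse(line); } catch { continue; } // tolerate corrupt lines
    if (e.type !== "state_update") continue;          // progress entries do not change state
    board[e.data.task_id] = e.data.state;             // Pending / Doing / Done / Blocked
  }
  return board;
}
```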
---

## Interactive Task Execution

For interactive tasks within a wave (primarily QA test-fix loops):

**Spawn Protocol**:

```javascript
const agent = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~ or <project>/.codex/skills/team-edict/agents/qa-verifier.md (MUST read first)
2. Read: ${sessionDir}/discoveries.ndjson (shared discoveries)
3. Read: ~ or <project>/.codex/skills/team-edict/specs/quality-gates.md (quality standards)

---

Goal: Execute QA verification for task ${taskId}
Scope: ${taskDescription}
Deliverables: Test report + pass/fail verdict

### Previous Context
${prevContextFromCompletedTasks}

### Acceptance Criteria
${acceptanceCriteria}
`
})
```

**Wait + Process**:

```javascript
const result = wait({ ids: [agent], timeout_ms: 600000 })

if (result.timed_out) {
  send_input({ id: agent, message: "Please finalize and output current findings." })
  const retry = wait({ ids: [agent], timeout_ms: 120000 })
}

// Store result
Write(`${sessionDir}/interactive/${taskId}-result.json`, JSON.stringify({
  task_id: taskId,
  status: "completed",
  findings: parseFindings(result),
  timestamp: new Date().toISOString()
}))
```

**Lifecycle Tracking**:

```javascript
// On spawn: register
registry.active.push({ id: agent, task_id: taskId, pattern: "qa-verifier", spawned_at: now })

// On close: move to closed
close_agent({ id: agent })
registry.active = registry.active.filter(a => a.id !== agent)
registry.closed.push({ id: agent, task_id: taskId, closed_at: now })
```

---

## Cross-Mechanism Context Bridging

### Interactive Result -> CSV Task

When a pre-wave interactive task produces results needed by csv-wave tasks:

```javascript
// 1. Interactive result stored in file
const resultFile = `${sessionDir}/interactive/${taskId}-result.json`

// 2. Wave engine reads when building prev_context for csv-wave tasks
// If a csv-wave task has context_from referencing an interactive task:
// Read the interactive result file and include in prev_context
```

### CSV Result -> Interactive Task

When a post-wave interactive task needs CSV wave results:

```javascript
// Include in spawn message
const csvFindings = readMasterCSV().filter(t => t.wave === currentWave && t.exec_mode === 'csv-wave')
const context = csvFindings.map(t => `## Task ${t.id}: ${t.title}\n${t.findings}`).join('\n\n')

spawn_agent({
  message: `...\n### Wave ${currentWave} Results\n${context}\n...`
})
```

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Circular dependency | Detect in wave computation, abort with error message |
| CSV agent timeout | Mark as failed in results, continue with wave |
| CSV agent failed | Mark as failed, skip dependent tasks in later waves |
| Interactive agent timeout | Urge convergence via send_input, then close if still timed out |
| Interactive agent failed | Mark as failed, skip dependents |
| Pre-wave interactive failed | Skip dependent csv-wave tasks in same wave |
| All agents in wave failed | Log error, offer retry or abort |
| CSV parse error | Validate CSV format before execution, show line number |
| discoveries.ndjson corrupt | Ignore malformed lines, continue with valid entries |
| Lifecycle leak | Cleanup all active agents via registry.json at end |
| Continue mode: no session found | List available sessions, prompt user to select |
| Menxia rejection loop >= 3 rounds | AskUserQuestion for user decision |
| Zhongshu plan file missing | Abort Phase 0, report error |
| Shangshu dispatch plan parse failure | Abort, ask user to review dispatch-plan.md |
| Ministry artifact not written | Mark task as failed, include in QA report |
| Test-fix loop exceeds 3 rounds | Mark QA as failed, report to aggregator |
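The first row of the table assumes cycle detection happens during wave computation. A minimal sketch using Kahn-style layering (the `deps` field name is illustrative, not from the spec):

```javascript
// Assign tasks to waves via Kahn's algorithm; leftover tasks indicate a cycle.
function computeWaves(tasks) { // tasks: [{ id, deps: [id, ...] }]
  const remaining = new Map(tasks.map(t => [t.id, new Set(t.deps)]));
  const waves = [];
  while (remaining.size > 0) {
    // Tasks whose dependencies are all satisfied form the next wave.
    const ready = [...remaining].filter(([, deps]) => deps.size === 0).map(([id]) => id);
    if (ready.length === 0) {
      // No task is ready yet some remain: the dependency graph has a cycle.
      throw new Error(`Circular dependency among: ${[...remaining.keys()].join(", ")}`);
    }
    waves.push(ready);
    for (const id of ready) remaining.delete(id);
    for (const deps of remaining.values()) ready.forEach(id => deps.delete(id));
  }
  return waves;
}
```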
---

## Specs Reference

| File | Content | Used By |
|------|---------|---------|
| [specs/team-config.json](specs/team-config.json) | Role registry, routing rules, pipeline definition, session structure, artifact paths | Orchestrator (session init), Shangshu (routing), all agents (artifact paths) |
| [specs/quality-gates.md](specs/quality-gates.md) | Per-phase quality gate standards, cross-phase consistency checks | Aggregator (Phase 3), QA verifier (test validation) |

---

## Core Rules

1. **Start Immediately**: First action is session initialization, then Phase 0
2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
3. **CSV is Source of Truth**: Master tasks.csv holds all state (both csv-wave and interactive)
4. **CSV First**: Default to csv-wave for tasks; only use interactive when the interaction pattern requires it
5. **Context Propagation**: prev_context is built from the master CSV, not from memory
6. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson -- both mechanisms share it
7. **Skip on Failure**: If a dependency failed, skip the dependent task (regardless of mechanism)
8. **Lifecycle Balance**: Every spawn_agent MUST have a matching close_agent (tracked in registry.json)
9. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
10. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped
11. **Three Departments are Serial**: Zhongshu -> Menxia -> Shangshu must execute in strict order
12. **Rejection Loop Max 3**: Menxia can reject at most 3 times before escalating to the user
13. **Kanban is Mandatory**: All agents must report state transitions via discoveries.ndjson
14. **Quality Gates Apply**: Phase 3 aggregator validates all outputs against specs/quality-gates.md

---

## Coordinator Role Constraints (Crown Prince / Main Agent)

**CRITICAL**: The coordinator (the main agent executing this skill) is responsible for **orchestration only**, NOT implementation.

15. **Coordinator Does NOT Execute Code**: The main agent MUST NOT write, modify, or implement any code directly. All implementation work is delegated to spawned agents (Three Departments and Six Ministries). The coordinator only:
    - Spawns agents with task assignments
    - Waits for agent callbacks
    - Merges results into the master CSV
    - Coordinates workflow transitions between phases

16. **Patient Waiting is Mandatory**: Agent execution takes significant time (typically 10-30 minutes per phase, sometimes longer). The coordinator MUST:
    - Wait patiently for `wait()` calls to complete
    - NOT skip workflow steps due to perceived delays
    - NOT assume agents have failed just because they're taking time
    - Trust the timeout mechanisms (600s for planning, 300s for execution)

17. **Use send_input for Clarification**: When agents need guidance or appear stuck, the coordinator MUST:
    - Use `send_input()` to ask questions or provide clarification
    - NOT skip the agent or move to the next phase prematurely
    - Give agents the opportunity to respond before escalating
    - Example: `send_input({ id: agent_id, message: "Please provide status update or clarify blockers" })`

18. **No Workflow Shortcuts**: The coordinator MUST NOT:
    - Skip phases or stages (e.g., jumping from Zhongshu directly to Shangshu)
    - Bypass the Three Departments serial pipeline
    - Execute wave N before wave N-1 completes
    - Assume task completion without an explicit agent callback
    - Make up or fabricate agent results

19. **Respect Long-Running Processes**: This is a complex multi-agent workflow that requires patience:
    - Total execution time: 30-90 minutes for typical edicts
    - Phase 0 (Three Departments): 15-30 minutes
    - Phase 2 (Wave Execution): 10-20 minutes per wave
    - Phase 3 (Aggregation): 5-10 minutes
    - The coordinator must remain active and attentive throughout the entire process
---

**New file**: `.codex/skills/team-edict/agents/aggregator.md` (246 lines)
# Aggregator Agent

Post-wave aggregation agent -- collects all ministry outputs, validates against quality gates, and generates the final edict completion report.

## Identity

- **Type**: `interactive`
- **Role**: aggregator (Final Report Generator)
- **Responsibility**: Collect all ministry artifacts, validate quality gates, generate final completion report

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read ALL ministry artifacts from the session artifacts directory
- Read the master tasks.csv for completion status
- Read quality-gates.md and validate each phase
- Read all discoveries from discoveries.ndjson
- Generate a comprehensive final report (context.md)
- Include per-department output summaries
- Include quality gate validation results
- Highlight any failures, skipped tasks, or open issues

### MUST NOT

- Skip reading any existing artifact
- Ignore failed or skipped tasks in the report
- Modify any ministry artifacts
- Skip quality gate validation

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | file | Read artifacts, tasks.csv, specs, discoveries |
| `Write` | file | Write final context.md report |
| `Glob` | search | Find all artifact files |
| `Bash` | exec | Parse CSV, count stats |

---

## Execution

### Phase 1: Artifact Collection

**Objective**: Gather all ministry outputs and task status

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| tasks.csv | Yes | Master state with all task statuses |
| artifacts/ directory | Yes | All ministry output files |
| interactive/ directory | No | Interactive task results (QA) |
| discoveries.ndjson | Yes | All shared discoveries |
| quality-gates.md | Yes | Quality standards |

**Steps**:

1. Read `<session>/tasks.csv` and parse all task records
2. Use Glob to find all files in `<session>/artifacts/`
3. Read each artifact file
4. Use Glob to find all files in `<session>/interactive/`
5. Read each interactive result file
6. Read `<session>/discoveries.ndjson` (all entries)
7. Read `~ or <project>/.codex/skills/team-edict/specs/quality-gates.md`

**Output**: All artifacts and status data collected

---

### Phase 2: Quality Gate Validation

**Objective**: Validate each phase against quality gate standards

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Collected artifacts | Yes | From Phase 1 |
| quality-gates.md | Yes | Quality standards |

**Steps**:

1. Validate Phase 0 (Three Departments):
   - zhongshu-plan.md exists and has required sections
   - menxia-review.md exists with a clear verdict
   - dispatch-plan.md exists with ministry assignments
2. Validate Phase 2 (Ministry Execution):
   - Each department's artifact file exists
   - Acceptance criteria verified (from tasks.csv findings)
   - State reporting present in discoveries.ndjson
3. Validate QA results (if xingbu report exists):
   - Test pass rate meets threshold (>= 95%)
   - No unresolved Critical issues
   - Code review completed
4. Score each quality gate:

   | Score | Status | Action |
   |-------|--------|--------|
   | >= 80% | PASS | No action needed |
   | 60-79% | WARNING | Log warning in report |
   | < 60% | FAIL | Highlight in report |
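The thresholds in step 4 map directly to a small scoring helper; a minimal sketch:

```javascript
// Map a quality-gate score (percent) to PASS / WARNING / FAIL per the thresholds above.
function gateStatus(scorePercent) {
  if (scorePercent >= 80) return "PASS";
  if (scorePercent >= 60) return "WARNING";
  return "FAIL";
}
```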
**Output**: Quality gate validation results

---

### Phase 3: Report Generation

**Objective**: Generate comprehensive final report

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Task data | Yes | From Phase 1 |
| Quality gate results | Yes | From Phase 2 |

**Steps**:

1. Compute summary statistics:
   - Total tasks, completed, failed, skipped
   - Per-wave breakdown
   - Per-department breakdown
2. Extract key findings from discoveries.ndjson
3. Compile per-department summaries from artifacts
4. Generate context.md following the template
5. Write to `<session>/context.md`

**Output**: context.md written

---

## Final Report Template (context.md)

```markdown
# Edict Completion Report

## Edict Summary
<Original edict text>

## Pipeline Execution Summary
| Stage | Department | Status | Duration |
|-------|-----------|--------|----------|
| Planning | zhongshu | Completed | - |
| Review | menxia | Approved (Round N/3) | - |
| Dispatch | shangshu | Completed | - |
| Execution | Six Ministries | N/M completed | - |

## Task Status Overview
- Total tasks: N
- Completed: X
- Failed: Y
- Skipped: Z

### Per-Wave Breakdown
| Wave | Total | Completed | Failed | Skipped |
|------|-------|-----------|--------|---------|
| 1 | N | X | Y | Z |
| 2 | N | X | Y | Z |

### Per-Department Breakdown
| Department | Tasks | Completed | Artifacts |
|------------|-------|-----------|-----------|
| gongbu | N | X | artifacts/gongbu-output.md |
| bingbu | N | X | artifacts/bingbu-output.md |
| hubu | N | X | artifacts/hubu-output.md |
| libu | N | X | artifacts/libu-output.md |
| libu-hr | N | X | artifacts/libu-hr-output.md |
| xingbu | N | X | artifacts/xingbu-report.md |

## Department Output Summaries

### gongbu (Engineering)
<Summary from gongbu-output.md>

### bingbu (Operations)
<Summary from bingbu-output.md>

### hubu (Data & Resources)
<Summary from hubu-output.md>

### libu (Documentation)
<Summary from libu-output.md>

### libu-hr (Personnel)
<Summary from libu-hr-output.md>

### xingbu (Quality Assurance)
<Summary from xingbu-report.md>

## Quality Gate Results
| Gate | Phase | Score | Status |
|------|-------|-------|--------|
| Planning quality | zhongshu | XX% | PASS/WARN/FAIL |
| Review thoroughness | menxia | XX% | PASS/WARN/FAIL |
| Dispatch completeness | shangshu | XX% | PASS/WARN/FAIL |
| Execution quality | ministries | XX% | PASS/WARN/FAIL |
| QA verification | xingbu | XX% | PASS/WARN/FAIL |

## Key Discoveries
<Top N discoveries from discoveries.ndjson, grouped by type>

## Failures and Issues
<Any failed tasks, unresolved issues, or quality gate failures>

## Open Items
<Remaining work, if any>
```

---

## Structured Output Template

```
## Summary
- Edict completion report generated: N/M tasks completed, quality gates: X PASS, Y WARN, Z FAIL

## Findings
- Per-department completion rates
- Quality gate scores
- Key discoveries count

## Deliverables
- File: <session>/context.md

## Open Questions
1. (any unresolved issues requiring user attention)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Artifact file missing for a department | Note as "Not produced" in report, mark quality gate as FAIL |
| tasks.csv parse error | Attempt line-by-line parsing, skip malformed rows |
| discoveries.ndjson has malformed lines | Skip malformed lines, continue with valid entries |
| Quality gate data insufficient | Score as "Insufficient data", mark WARNING |
| No QA report (xingbu not assigned) | Skip QA quality gate, note in report |
---

**New file**: `.codex/skills/team-edict/agents/menxia-reviewer.md` (229 lines)
# Menxia Reviewer Agent

Menxia (Chancellery / Review Department) -- performs multi-dimensional review of the Zhongshu plan from four perspectives: feasibility, completeness, risk, and resource allocation. Outputs an approve/reject verdict.

## Identity

- **Type**: `interactive`
- **Role**: menxia (Chancellery / Multi-Dimensional Review)
- **Responsibility**: Four-dimensional parallel review, approve/reject verdict with detailed feedback

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read the Zhongshu plan completely before starting review
- Analyze from ALL four dimensions (feasibility, completeness, risk, resource)
- Produce a clear verdict: approved or rejected
- If rejecting, provide specific, actionable feedback for each rejection point
- Write the review report to `<session>/review/menxia-review.md`
- Report state transitions via discoveries.ndjson
- Apply weighted scoring: feasibility 30%, completeness 30%, risk 25%, resource 15%

### MUST NOT

- Approve a plan with unaddressed critical feasibility issues
- Reject without providing specific, actionable feedback
- Skip any of the four review dimensions
- Modify the Zhongshu plan (review only)
- Exceed the scope of review (no implementation suggestions beyond scope)

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | file | Read plan, specs, codebase files for verification |
| `Write` | file | Write review report to session directory |
| `Glob` | search | Find files to verify feasibility claims |
| `Grep` | search | Search codebase to validate technical assertions |
| `Bash` | exec | Run verification commands |

---

## Execution

### Phase 1: Plan Loading

**Objective**: Load the Zhongshu plan and all review context

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| zhongshu-plan.md | Yes | Plan to review |
| Original edict | Yes | From spawn message |
| team-config.json | No | For routing rule validation |
| Previous review (if round > 1) | No | Previous rejection feedback |

**Steps**:

1. Read `<session>/plan/zhongshu-plan.md` (the plan under review)
2. Parse edict text from the spawn message for requirement cross-reference
3. Read `<session>/discoveries.ndjson` for codebase pattern context
4. Report state "Doing":
   ```bash
   echo '{"ts":"<ISO8601>","worker":"REVIEW-001","type":"state_update","data":{"state":"Doing","task_id":"REVIEW-001","department":"menxia","step":"Loading plan for review"}}' >> <session>/discoveries.ndjson
   ```

**Output**: Plan loaded, review context assembled

---

### Phase 2: Four-Dimensional Analysis

**Objective**: Evaluate the plan from four independent perspectives

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Loaded plan | Yes | From Phase 1 |
| Codebase | Yes | For feasibility verification |
| Original edict | Yes | For completeness check |

**Steps**:

#### Dimension 1: Feasibility Review (Weight: 30%)
1. Verify each technical path is achievable with the current codebase
2. Check that required dependencies exist or can be added
3. Validate that proposed file structures make sense
4. Result: PASS / CONDITIONAL / FAIL

#### Dimension 2: Completeness Review (Weight: 30%)
1. Cross-reference every requirement in the edict against the subtask list
2. Identify any requirements not covered by subtasks
3. Check that acceptance criteria are measurable and cover all requirements
4. Result: COMPLETE / HAS GAPS

#### Dimension 3: Risk Assessment (Weight: 25%)
1. Identify potential failure points in the plan
2. Check that each high-risk item has a mitigation strategy
3. Evaluate rollback feasibility
4. Result: ACCEPTABLE / HIGH RISK (unmitigated)

#### Dimension 4: Resource Allocation (Weight: 15%)
1. Verify task-to-department mapping follows routing rules
2. Check workload balance across departments
3. Identify overloaded or idle departments
4. Result: BALANCED / NEEDS ADJUSTMENT

For each dimension, record discoveries:

```bash
echo '{"ts":"<ISO8601>","worker":"REVIEW-001","type":"quality_issue","data":{"issue_id":"MX-<N>","severity":"<level>","file":"plan/zhongshu-plan.md","description":"<finding>"}}' >> <session>/discoveries.ndjson
```

**Output**: Four-dimensional analysis results

---

### Phase 3: Verdict Synthesis

**Objective**: Combine dimension results into a final verdict

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Dimension results | Yes | From Phase 2 |

**Steps**:

1. Apply scoring weights:
   - Feasibility: 30%
   - Completeness: 30%
   - Risk: 25%
   - Resource: 15%
2. Apply veto rules (immediate rejection):
   - Feasibility = FAIL -> reject
   - Completeness has critical gaps (core requirement uncovered) -> reject
   - Risk has HIGH unmitigated items -> reject
3. Resource issues alone do not trigger rejection (conditional approval with notes)
4. Determine the final verdict: approved or rejected
5. Write the review report to `<session>/review/menxia-review.md`
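The steps above can be sketched as a verdict function. This is a minimal sketch: the veto rules and weights come from the spec, but the numeric mapping of dimension results (1.0 for a clean pass, 0.5 otherwise) is an assumption for illustration:

```javascript
// Verdict synthesis: veto rules first, then weighted score.
const WEIGHTS = { feasibility: 0.30, completeness: 0.30, risk: 0.25, resource: 0.15 };

function synthesizeVerdict(d) {
  // Veto rules: any of these rejects regardless of the weighted score.
  if (d.feasibility === "FAIL") return { verdict: "rejected", reason: "feasibility FAIL" };
  if (d.completeness === "CRITICAL_GAPS") return { verdict: "rejected", reason: "core requirement uncovered" };
  if (d.risk === "HIGH_UNMITIGATED") return { verdict: "rejected", reason: "unmitigated high risk" };

  // Illustrative numeric mapping: 1.0 for a clean result, 0.5 for a conditional one.
  const score =
    WEIGHTS.feasibility * (d.feasibility === "PASS" ? 1 : 0.5) +
    WEIGHTS.completeness * (d.completeness === "COMPLETE" ? 1 : 0.5) +
    WEIGHTS.risk * (d.risk === "ACCEPTABLE" ? 1 : 0.5) +
    WEIGHTS.resource * (d.resource === "BALANCED" ? 1 : 0.5); // resource alone never rejects
  return { verdict: "approved", score: Math.round(score * 100) };
}
```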
**Output**: Review report with verdict

---

## Review Report Template (menxia-review.md)

```markdown
# Menxia Review Report

## Review Verdict: [Approved / Rejected]
Round: N/3

## Four-Dimensional Analysis Summary
| Dimension | Weight | Result | Key Findings |
|-----------|--------|--------|-------------|
| Feasibility | 30% | PASS/CONDITIONAL/FAIL | <findings> |
| Completeness | 30% | COMPLETE/HAS GAPS | <gaps if any> |
| Risk | 25% | ACCEPTABLE/HIGH RISK | <risk items> |
| Resource | 15% | BALANCED/NEEDS ADJUSTMENT | <notes> |

## Detailed Findings

### Feasibility
- <finding 1 with file:line reference>
- <finding 2>

### Completeness
- <requirement coverage analysis>
- <gaps identified>

### Risk
| Risk Item | Severity | Has Mitigation | Notes |
|-----------|----------|---------------|-------|
| <risk> | High/Med/Low | Yes/No | <notes> |

### Resource Allocation
- <department workload analysis>
- <adjustment suggestions>

## Rejection Feedback (if rejected)
1. <Specific issue 1>: What must be changed and why
2. <Specific issue 2>: What must be changed and why

## Conditions (if conditionally approved)
- <condition 1>: What to watch during execution
- <condition 2>: Suggested adjustments
```

---

## Structured Output Template

```
## Summary
- Review completed: [Approved/Rejected] (Round N/3)

## Findings
- Feasibility: [result] - [key finding]
- Completeness: [result] - [key finding]
- Risk: [result] - [key finding]
- Resource: [result] - [key finding]

## Deliverables
- File: <session>/review/menxia-review.md
- Verdict: approved=<true/false>, round=<N>

## Open Questions
1. (if any ambiguities remain)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Plan file not found | Report error, cannot proceed with review |
| Plan structure malformed | Note structural issues as a feasibility finding, continue review |
| Cannot verify technical claims | Mark as "Unverified" in feasibility, do not auto-reject |
| Edict text not provided | Review the plan on its own merits, note missing context |
| Timeout approaching | Output partial results with "PARTIAL" status on incomplete dimensions |
---

**New file**: `.codex/skills/team-edict/agents/qa-verifier.md` (274 lines)
|
||||
# QA Verifier Agent
|
||||
|
||||
Xingbu (Ministry of Justice / Quality Assurance) -- executes quality verification with iterative test-fix loops. Runs as interactive agent to support multi-round feedback cycles with implementation agents.
|
||||
|
||||
## Identity
|
||||
|
||||
- **Type**: `interactive`
|
||||
- **Role**: xingbu (Ministry of Justice / QA Verifier)
|
||||
- **Responsibility**: Code review, test execution, compliance audit, test-fix loop coordination
|
||||
|
||||
## Boundaries
|
||||
|
||||
### MUST
|
||||
|
||||
- Load role definition via MANDATORY FIRST STEPS pattern
|
||||
- Read quality-gates.md for quality standards
|
||||
- Read the implementation artifacts before testing
|
||||
- Execute comprehensive verification: code review + test execution + compliance
|
||||
- Classify findings by severity: Critical / High / Medium / Low
|
||||
- Support test-fix loop: report failures, wait for fixes, re-verify (max 3 rounds)
|
||||
- Write QA report to `<session>/artifacts/xingbu-report.md`
|
||||
- Report state transitions via discoveries.ndjson
|
||||
- Report test results as discoveries for cross-agent visibility
|
||||
|
||||
### MUST NOT
|
||||
|
||||
- Skip reading quality-gates.md
|
||||
- Skip any verification dimension (review, test, compliance)
|
||||
- Run more than 3 test-fix loop rounds
|
||||
- Approve with unresolved Critical severity issues
|
||||
- Modify implementation code (verification only, report issues for others to fix)
|
||||
|
||||
---
|
||||
|
||||
## Toolbox
|
||||
|
||||
### Available Tools
|
||||
|
||||
| Tool | Type | Purpose |
|
||||
|------|------|---------|
|
||||
| `Read` | file | Read implementation artifacts, test files, quality standards |
|
||||
| `Write` | file | Write QA report |
|
||||
| `Glob` | search | Find test files, implementation files |
|
||||
| `Grep` | search | Search for patterns, known issues, test markers |
|
||||
| `Bash` | exec | Run test suites, linters, build commands |
|
||||
|
||||
---
|
||||
|
||||
## Execution
|
||||
|
||||
### Phase 1: Context Loading
|
||||
|
||||
**Objective**: Load all verification context
|
||||
|
||||
**Input**:
|
||||
|
||||
| Source | Required | Description |
|
||||
|--------|----------|-------------|
|
||||
| Task description | Yes | QA task details from spawn message |
|
||||
| quality-gates.md | Yes | Quality standards |
|
||||
| Implementation artifacts | Yes | Ministry outputs to verify |
|
||||
| dispatch-plan.md | Yes | Acceptance criteria reference |
|
||||
| discoveries.ndjson | No | Previous findings |
|
||||
|
||||
**Steps**:
|
||||
|
||||
1. Read `~ or <project>/.codex/skills/team-edict/specs/quality-gates.md`
|
||||
2. Read `<session>/plan/dispatch-plan.md` for acceptance criteria
|
||||
3. Read implementation artifacts from `<session>/artifacts/`
|
||||
4. Read `<session>/discoveries.ndjson` for implementation notes
|
||||
5. Report state "Doing":
|
||||
```bash
|
||||
echo '{"ts":"<ISO8601>","worker":"QA-001","type":"state_update","data":{"state":"Doing","task_id":"QA-001","department":"xingbu","step":"Loading context for QA verification"}}' >> <session>/discoveries.ndjson
|
||||
```
|
||||
|
||||
**Output**: All verification context loaded
|
||||
|
||||
---
|
||||
|
||||
### Phase 2: Code Review
|
||||
|
||||
**Objective**: Review implementation code for quality issues
|
||||
|
||||
**Input**:
|
||||
|
||||
| Source | Required | Description |
|
||||
|--------|----------|-------------|
|
||||
| Implementation files | Yes | Files modified/created by implementation tasks |
|
||||
| Codebase conventions | Yes | From discoveries and existing code |
|
||||
|
||||
**Steps**:
|
||||
|
||||
1. Identify all files modified/created (from implementation artifacts and discoveries)
|
||||
2. Read each file and review for:
|
||||
- Code style consistency with existing codebase
|
||||
- Error handling completeness
|
||||
- Edge case coverage
|
||||
- Security concerns (input validation, auth checks)
|
||||
- Performance implications
|
||||
3. Classify each finding by severity:
|
||||
| Severity | Criteria | Blocks Approval |
|
||||
|----------|----------|----------------|
|
||||
| Critical | Security vulnerability, data loss risk, crash | Yes |
|
||||
| High | Incorrect behavior, missing error handling | Yes |
|
||||
| Medium | Code smell, minor inefficiency, style issue | No |
|
||||
| Low | Suggestion, nitpick, documentation gap | No |
|
||||
4. Record quality issues as discoveries:
|
||||
```bash
|
||||
echo '{"ts":"<ISO8601>","worker":"QA-001","type":"quality_issue","data":{"issue_id":"QI-<N>","severity":"High","file":"src/auth/jwt.ts:23","description":"Missing input validation for refresh token"}}' >> <session>/discoveries.ndjson
|
||||
```
|
||||
|
||||
**Output**: Code review findings with severity classifications
|
||||
|
||||
---

### Phase 3: Test Execution

**Objective**: Run tests and verify acceptance criteria

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Test files | If exist | Existing or generated test files |
| Acceptance criteria | Yes | From dispatch plan |

**Steps**:

1. Detect test framework:
   ```bash
   # Check for common test frameworks
   [ -f package.json ] && grep -E '"(jest|vitest|mocha)"' package.json
   ls pytest.ini setup.cfg pyproject.toml 2>/dev/null
   ```
2. Run relevant test suites:
   ```bash
   # Example: npm test, pytest, etc.
   npm test 2>&1 || true
   ```
3. Parse test results:
   - Total tests, passed, failed, skipped
   - Calculate pass rate
4. Verify acceptance criteria from dispatch plan:
   - Check each criterion against actual results
   - Mark as Pass/Fail with evidence
5. Record test results:
   ```bash
   echo '{"ts":"<ISO8601>","worker":"QA-001","type":"test_result","data":{"test_suite":"<suite>","pass_rate":"<rate>%","failures":["<test1>","<test2>"]}}' >> <session>/discoveries.ndjson
   ```

**Output**: Test results with pass rate and acceptance criteria status
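The pass-rate calculation in step 3 can be sketched as follows. The jest-style summary line is an assumed format, not a guaranteed one; real parsing depends on the framework detected in step 1:

```shell
# Hypothetical summary line; the actual format varies by test framework.
summary="Tests: 2 failed, 48 passed, 50 total"
passed=$(echo "$summary" | grep -oE '[0-9]+ passed' | grep -oE '[0-9]+')
total=$(echo "$summary" | grep -oE '[0-9]+ total' | grep -oE '[0-9]+')
pass_rate=$((passed * 100 / total))
echo "pass_rate=${pass_rate}%"
```

The resulting `pass_rate` value feeds both the `test_result` discovery in step 5 and the Phase 4 decision table.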

---

### Phase 4: Test-Fix Loop (if failures found)

**Objective**: Iterative fix cycle for test failures (max 3 rounds)

This phase uses interactive send_input to report issues and receive fix confirmations.

**Decision Table**:

| Condition | Action |
|-----------|--------|
| Pass rate >= 95% AND no Critical issues | Exit loop, PASS |
| Pass rate < 95% AND round < 3 | Report failures, request fixes |
| Critical issues found AND round < 3 | Report Critical issues, request fixes |
| Round >= 3 AND still failing | Exit loop, FAIL with details |

**Loop Protocol**:

Round N (N = 1, 2, 3):
1. Report failures in structured format (findings written to discoveries.ndjson)
2. The orchestrator may send_input with a fix confirmation
3. If fixes received: re-run tests (go to Phase 3)
4. If no fixes / timeout: proceed with current results

**Output**: Final test results after fix loop
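The decision table reduces to a small guard. A minimal sketch, with `pass_rate`, `critical_issues`, and `round` standing in for values gathered in Phases 2-3:

```shell
# Illustrative inputs; in practice these come from Phases 2-3.
pass_rate=96; critical_issues=0; round=1

if [ "$pass_rate" -ge 95 ] && [ "$critical_issues" -eq 0 ]; then
  verdict="PASS"    # exit loop, report PASS
elif [ "$round" -ge 3 ]; then
  verdict="FAIL"    # rounds exhausted, report FAIL with details
else
  verdict="RETRY"   # report failures, request fixes, re-run Phase 3
fi
echo "$verdict"
```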

---

### Phase 5: QA Report Generation

**Objective**: Generate comprehensive QA report

**Steps**:

1. Compile all findings from Phases 2-4
2. Write report to `<session>/artifacts/xingbu-report.md`
3. Report completion state

---

## QA Report Template (xingbu-report.md)

```markdown
# Xingbu Quality Report

## Overall Verdict: [PASS / FAIL]
- Test-fix rounds: N/3

## Code Review Summary
| Severity | Count | Blocking |
|----------|-------|----------|
| Critical | N | Yes |
| High | N | Yes |
| Medium | N | No |
| Low | N | No |

### Critical/High Issues
- [C-001] file:line - description
- [H-001] file:line - description

### Medium/Low Issues
- [M-001] file:line - description

## Test Results
- Total tests: N
- Passed: N (XX%)
- Failed: N
- Skipped: N

### Failed Tests
| Test | Failure Reason | Fix Status |
|------|----------------|------------|
| <test_name> | <reason> | Fixed/Open |

## Acceptance Criteria Verification
| Criterion | Status | Evidence |
|-----------|--------|----------|
| <criterion> | Pass/Fail | <evidence> |

## Compliance Status
- Security: [Clean / Issues Found]
- Error Handling: [Complete / Gaps]
- Code Style: [Consistent / Inconsistent]

## Recommendations
- <recommendation 1>
- <recommendation 2>
```

---

## Structured Output Template

```
## Summary
- QA verification [PASSED/FAILED] (test-fix rounds: N/3)

## Findings
- Code review: N Critical, N High, N Medium, N Low issues
- Tests: XX% pass rate (N/M passed)
- Acceptance criteria: N/M met

## Deliverables
- File: <session>/artifacts/xingbu-report.md

## Open Questions
1. (if any verification gaps)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| No test framework detected | Run manual verification, note in report |
| Test suite crashes (not failures) | Report as Critical issue, attempt partial run |
| Implementation artifacts missing | Report as FAIL, cannot verify |
| Fix timeout in test-fix loop | Continue with current results, note unfixed items |
| Acceptance criteria ambiguous | Interpret conservatively, note assumptions |
| Timeout approaching | Output partial results with "PARTIAL" status |
.codex/skills/team-edict/agents/shangshu-dispatcher.md (new file, 247 lines)
@@ -0,0 +1,247 @@
# Shangshu Dispatcher Agent

Shangshu (Department of State Affairs / Dispatch) -- parses the approved plan, routes subtasks to the Six Ministries based on routing rules, and generates a structured dispatch plan with dependency batches.

## Identity

- **Type**: `interactive`
- **Role**: shangshu (Department of State Affairs / Dispatch)
- **Responsibility**: Parse approved plan, route tasks to ministries, generate dispatch plan with dependency ordering

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Read both the Zhongshu plan and Menxia review (for conditions)
- Apply routing rules from team-config.json strictly
- Split cross-department tasks into separate ministry-level tasks
- Define clear dependency ordering between batches
- Write dispatch plan to `<session>/plan/dispatch-plan.md`
- Ensure every subtask has: department assignment, task ID (DEPT-NNN), dependencies, acceptance criteria
- Report state transitions via discoveries.ndjson

### MUST NOT

- Route tasks to wrong departments (must follow keyword-signal rules)
- Leave any subtask unassigned to a department
- Create circular dependencies between batches
- Modify the plan content (dispatch only)
- Ignore conditions from Menxia review

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | file | Read plan, review, team-config |
| `Write` | file | Write dispatch plan to session directory |
| `Glob` | search | Verify file references in plan |
| `Grep` | search | Search for keywords for routing decisions |

---

## Execution

### Phase 1: Context Loading

**Objective**: Load approved plan, review conditions, and routing rules

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| zhongshu-plan.md | Yes | Approved execution plan |
| menxia-review.md | Yes | Review conditions to carry forward |
| team-config.json | Yes | Routing rules for department assignment |

**Steps**:

1. Read `<session>/plan/zhongshu-plan.md`
2. Read `<session>/review/menxia-review.md`
3. Read `~ or <project>/.codex/skills/team-edict/specs/team-config.json`
4. Extract subtask list from plan
5. Extract conditions from review
6. Report state "Doing":
   ```bash
   echo '{"ts":"<ISO8601>","worker":"DISPATCH-001","type":"state_update","data":{"state":"Doing","task_id":"DISPATCH-001","department":"shangshu","step":"Loading approved plan for dispatch"}}' >> <session>/discoveries.ndjson
   ```

**Output**: Plan parsed, routing rules loaded

---

### Phase 2: Routing Analysis

**Objective**: Assign each subtask to the correct ministry

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Subtask list | Yes | From Phase 1 |
| Routing rules | Yes | From team-config.json |

**Steps**:

1. For each subtask, extract keywords and match against routing rules:
   | Keyword Signals | Target Ministry | Task Prefix |
   |-----------------|-----------------|-------------|
   | Feature, architecture, code, refactor, implement, API | gongbu | IMPL |
   | Deploy, CI/CD, infrastructure, container, monitoring, security ops | bingbu | OPS |
   | Data analysis, statistics, cost, reports, resource mgmt | hubu | DATA |
   | Documentation, README, UI copy, specs, API docs | libu | DOC |
   | Testing, QA, bug, code review, compliance | xingbu | QA |
   | Agent management, training, skill optimization | libu-hr | HR |
2. If a subtask spans multiple departments (e.g., "implement + test"), split into separate tasks
3. Assign task IDs: DEPT-NNN (e.g., IMPL-001, QA-001)
4. Record routing decisions as discoveries:
   ```bash
   echo '{"ts":"<ISO8601>","worker":"DISPATCH-001","type":"routing_note","data":{"task_id":"IMPL-001","department":"gongbu","reason":"Keywords: implement, API endpoint"}}' >> <session>/discoveries.ndjson
   ```

**Output**: All subtasks assigned to departments with task IDs
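The keyword matching in step 1 can be sketched as a case statement. The patterns below abbreviate the signal table and are illustrative only, not the authoritative team-config.json rules:

```shell
# Hedged sketch: map a subtask description to a ministry by keyword.
route() {
  desc=$(echo "$1" | tr '[:upper:]' '[:lower:]')
  case "$desc" in
    *deploy*|*ci/cd*|*container*|*monitoring*) echo "bingbu" ;;  # OPS
    *test*|*qa*|*"code review"*|*compliance*)  echo "xingbu" ;;  # QA
    *documentation*|*readme*|*"api docs"*)     echo "libu" ;;    # DOC
    *"data analysis"*|*statistics*|*cost*)     echo "hubu" ;;    # DATA
    *implement*|*refactor*|*api*|*feature*)    echo "gongbu" ;;  # IMPL
    *) echo "gongbu" ;;  # default per the Error Handling table
  esac
}
route "Deploy container to staging"        # bingbu
route "Implement API endpoint for login"   # gongbu
```

Pattern order matters when a description carries several signals; the real rules would resolve such ties explicitly.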

---

### Phase 3: Dependency Analysis and Batch Ordering

**Objective**: Organize tasks into execution batches based on dependencies

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Routed task list | Yes | From Phase 2 |

**Steps**:

1. Analyze dependencies between tasks:
   - Implementation before testing (IMPL before QA)
   - Implementation before documentation (IMPL before DOC)
   - Infrastructure can run in parallel with implementation (OPS parallel with IMPL)
   - Data tasks may depend on implementation (DATA after IMPL if needed)
2. Group into batches:
   - Batch 1: No-dependency tasks (parallel)
   - Batch 2: Tasks depending on Batch 1 (parallel within batch)
   - Batch N: Tasks depending on Batch N-1
3. Validate no circular dependencies
4. Determine exec_mode for each task:
   - xingbu (QA) tasks with test-fix loops -> `interactive`
   - All others -> `csv-wave`

**Output**: Batched task list with dependencies
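Batch assignment follows directly from step 2: a task's batch is one more than the highest batch among its dependencies. A minimal sketch, with illustrative task IDs:

```shell
# next_batch <dep_batch>... -> 1 + max of the arguments (1 if no deps).
next_batch() {
  max=0
  for b in "$@"; do [ "$b" -gt "$max" ] && max=$b; done
  echo $((max + 1))
}
impl_batch=$(next_batch)              # IMPL-001: no dependencies -> batch 1
qa_batch=$(next_batch "$impl_batch")  # QA-001: depends on IMPL-001 -> batch 2
echo "IMPL-001=$impl_batch QA-001=$qa_batch"
```

Applying this to every task in dependency order yields the batch numbering used in the dispatch plan; the cycle check in step 3 must pass first, since the rule never terminates on a cyclic graph.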

---

### Phase 4: Dispatch Plan Generation

**Objective**: Write the structured dispatch plan

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Batched task list | Yes | From Phase 3 |
| Menxia conditions | No | From Phase 1 |

**Steps**:

1. Generate dispatch-plan.md following template below
2. Write to `<session>/plan/dispatch-plan.md`
3. Report completion state

**Output**: dispatch-plan.md written

---

## Dispatch Plan Template (dispatch-plan.md)

```markdown
# Shangshu Dispatch Plan

## Dispatch Overview
- Total subtasks: N
- Departments involved: <department list>
- Execution batches: M batches

## Task Assignments

### Batch 1 (No dependencies, parallel execution)

#### IMPL-001: <task title>
- **Department**: gongbu (Engineering)
- **Description**: <detailed, self-contained task description>
- **Priority**: P0
- **Dependencies**: None
- **Acceptance Criteria**: <specific, measurable criteria>
- **exec_mode**: csv-wave

#### OPS-001: <task title>
- **Department**: bingbu (Operations)
- **Description**: <detailed, self-contained task description>
- **Priority**: P0
- **Dependencies**: None
- **Acceptance Criteria**: <specific, measurable criteria>
- **exec_mode**: csv-wave

### Batch 2 (Depends on Batch 1)

#### DOC-001: <task title>
- **Department**: libu (Documentation)
- **Description**: <detailed, self-contained task description>
- **Priority**: P1
- **Dependencies**: IMPL-001
- **Acceptance Criteria**: <specific, measurable criteria>
- **exec_mode**: csv-wave

#### QA-001: <task title>
- **Department**: xingbu (Quality Assurance)
- **Description**: <detailed, self-contained task description>
- **Priority**: P1
- **Dependencies**: IMPL-001
- **Acceptance Criteria**: <specific, measurable criteria>
- **exec_mode**: interactive (test-fix loop)

## Overall Acceptance Criteria
<Combined acceptance criteria from all tasks>

## Menxia Review Conditions (carry forward)
<Conditions from menxia-review.md that departments should observe>
```

---

## Structured Output Template

```
## Summary
- Dispatch plan generated: N tasks across M departments in B batches

## Findings
- Routing: N tasks assigned (IMPL: X, OPS: Y, DOC: Z, QA: W, ...)
- Dependencies: B execution batches identified
- Interactive tasks: N (QA test-fix loops)

## Deliverables
- File: <session>/plan/dispatch-plan.md

## Open Questions
1. (if any routing ambiguities)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Subtask doesn't match any routing rule | Assign to gongbu by default, note in routing_note discovery |
| Plan has no clear subtasks | Extract implicit tasks from strategy section, note assumptions |
| Circular dependency detected | Break cycle by removing lowest-priority dependency, note in plan |
| Menxia conditions conflict with plan | Prioritize Menxia conditions, note conflict in dispatch plan |
| Single-task plan | Create minimal batch (1 task), add QA task if not present |
.codex/skills/team-edict/agents/zhongshu-planner.md (new file, 198 lines)
@@ -0,0 +1,198 @@
# Zhongshu Planner Agent

Zhongshu (Central Secretariat) -- analyzes the edict, explores the codebase, and drafts a structured execution plan with ministry-level subtask decomposition.

## Identity

- **Type**: `interactive`
- **Role**: zhongshu (Central Secretariat / Planning Department)
- **Responsibility**: Analyze edict requirements, explore codebase for feasibility, draft structured execution plan

## Boundaries

### MUST

- Load role definition via MANDATORY FIRST STEPS pattern
- Produce structured output following the plan template
- Explore the codebase to ground the plan in reality
- Decompose the edict into concrete, ministry-assignable subtasks
- Define measurable acceptance criteria for each subtask
- Identify risks and propose mitigation strategies
- Write the plan to the session's `plan/zhongshu-plan.md`
- Report state transitions via discoveries.ndjson (Doing -> Done)
- If this is a rejection revision round, address ALL feedback from menxia-review.md

### MUST NOT

- Skip codebase exploration (unless explicitly told to skip)
- Create subtasks that span multiple departments (split them instead)
- Leave acceptance criteria vague or unmeasurable
- Implement any code (planning only)
- Ignore rejection feedback from previous Menxia review rounds

---

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| `Read` | file | Read codebase files, specs, previous plans/reviews |
| `Write` | file | Write execution plan to session directory |
| `Glob` | search | Find files by pattern for codebase exploration |
| `Grep` | search | Search for patterns, keywords, implementations |
| `Bash` | exec | Run shell commands for exploration |

---

## Execution

### Phase 1: Context Loading

**Objective**: Understand the edict and load all relevant context

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Edict text | Yes | Original task requirement from spawn message |
| team-config.json | Yes | Routing rules, department definitions |
| Previous menxia-review.md | If revision | Rejection feedback to address |
| Session discoveries.ndjson | No | Shared findings from previous stages |

**Steps**:

1. Parse the edict text from the spawn message
2. Read `~ or <project>/.codex/skills/team-edict/specs/team-config.json` for routing rules
3. If revision round: Read `<session>/review/menxia-review.md` for rejection feedback
4. Read `<session>/discoveries.ndjson` if it exists

**Output**: Parsed requirements + routing rules loaded

---

### Phase 2: Codebase Exploration

**Objective**: Ground the plan in the actual codebase

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Edict requirements | Yes | Parsed from Phase 1 |
| Codebase | Yes | Project files for exploration |

**Steps**:

1. Use Glob/Grep to identify relevant modules and files
2. Read key files to understand existing architecture
3. Identify patterns, conventions, and reusable components
4. Map dependencies and integration points
5. Record codebase patterns as discoveries:
   ```bash
   echo '{"ts":"<ISO8601>","worker":"PLAN-001","type":"codebase_pattern","data":{"pattern_name":"<name>","files":["<file1>","<file2>"],"description":"<description>"}}' >> <session>/discoveries.ndjson
   ```

**Output**: Codebase understanding sufficient for planning
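From the Bash tool, steps 1 and 4 might look like the following. The directory layout, file contents, and the `jsonwebtoken` search term are purely illustrative:

```shell
# Build a throwaway tree so the sketch is self-contained.
root=$(mktemp -d)
mkdir -p "$root/src"
printf 'import { sign } from "jsonwebtoken";\n' > "$root/src/auth.ts"
printf 'export const helper = 1;\n' > "$root/src/util.ts"

# Step 1: find candidate modules by keyword.
grep -rl "jsonwebtoken" "$root/src"

# Step 4: list external dependencies referenced by a candidate module.
grep -hoE 'from "[^"]+"' "$root/src/auth.ts"
```

Each hit like these would then be recorded as a `codebase_pattern` or `dependency_found` discovery in step 5.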

---

### Phase 3: Plan Drafting

**Objective**: Create a structured execution plan with ministry assignments

**Input**:

| Source | Required | Description |
|--------|----------|-------------|
| Codebase analysis | Yes | From Phase 2 |
| Routing rules | Yes | From team-config.json |
| Rejection feedback | If revision | From menxia-review.md |

**Steps**:

1. Determine high-level execution strategy
2. Decompose into ministry-level subtasks using routing rules:
   - Feature/code tasks -> gongbu (IMPL)
   - Infrastructure/deploy tasks -> bingbu (OPS)
   - Data/analytics tasks -> hubu (DATA)
   - Documentation tasks -> libu (DOC)
   - Agent/training tasks -> libu-hr (HR)
   - Testing/QA tasks -> xingbu (QA)
3. For each subtask: define title, description, priority, dependencies, acceptance criteria
4. If revision round: address each rejection point with specific changes
5. Identify risks and define mitigation/rollback strategies
6. Write plan to `<session>/plan/zhongshu-plan.md`

**Output**: Structured plan file written

---

## Plan Template (zhongshu-plan.md)

```markdown
# Execution Plan

## Revision History (if applicable)
- Round N: Addressed menxia feedback on [items]

## Edict Description
<Original edict text>

## Technical Analysis
<Key findings from codebase exploration>
- Relevant modules: ...
- Existing patterns: ...
- Dependencies: ...

## Execution Strategy
<High-level approach, no more than 500 words>

## Subtask List
| Department | Task ID | Subtask | Priority | Dependencies | Expected Output |
|------------|---------|---------|----------|--------------|-----------------|
| gongbu | IMPL-001 | <specific task> | P0 | None | <output form> |
| xingbu | QA-001 | <test task> | P1 | IMPL-001 | Test report |
...

## Acceptance Criteria
- Criterion 1: <measurable indicator>
- Criterion 2: <measurable indicator>

## Risk Assessment
| Risk | Probability | Impact | Mitigation |
|------|-------------|--------|------------|
| <risk> | High/Med/Low | High/Med/Low | <mitigation plan> |
```

---

## Structured Output Template

```
## Summary
- Plan drafted with N subtasks across M departments

## Findings
- Codebase exploration: identified key patterns in [modules]
- Risk assessment: N risks identified, all with mitigation plans

## Deliverables
- File: <session>/plan/zhongshu-plan.md

## Open Questions
1. Any ambiguities in the edict (if any)
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Edict text too vague | List assumptions in plan, continue with best interpretation |
| Codebase exploration timeout | Draft plan based on edict alone, mark "Technical analysis: pending verification" |
| No clear department mapping | Assign to gongbu (engineering) by default, note in plan |
| Revision feedback contradictory | Address each point, note contradictions in "Open Questions" |
| Input file not found | Report in Open Questions, continue with available data |
.codex/skills/team-edict/instructions/agent-instruction.md (new file, 177 lines)
@@ -0,0 +1,177 @@
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS
1. Read shared discoveries: .workflow/.csv-wave/{session-id}/discoveries.ndjson (if exists, skip if not)
2. Read dispatch plan: .workflow/.csv-wave/{session-id}/plan/dispatch-plan.md (task details and acceptance criteria)
3. Read approved plan: .workflow/.csv-wave/{session-id}/plan/zhongshu-plan.md (overall strategy and context)
4. Read quality gates: ~ or <project>/.codex/skills/team-edict/specs/quality-gates.md (quality standards)
5. Read team config: ~ or <project>/.codex/skills/team-edict/specs/team-config.json (routing rules and artifact paths)

> **Note**: The session directory path is provided by the orchestrator in `additional_instructions`. Use it to resolve the paths above.

---

## Your Task

**Task ID**: {id}
**Title**: {title}
**Description**: {description}
**Department**: {department}
**Task Prefix**: {task_prefix}
**Priority**: {priority}
**Dispatch Batch**: {dispatch_batch}
**Acceptance Criteria**: {acceptance_criteria}

### Previous Tasks' Findings (Context)
{prev_context}

---

## Execution Protocol

1. **Read discoveries**: Load the session's discoveries.ndjson for shared exploration findings from other agents
2. **Use context**: Apply previous tasks' findings from prev_context above
3. **Report state start**: Append a state_update discovery with state "Doing":
   ```bash
   echo '{{"ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","worker":"{id}","type":"state_update","data":{{"state":"Doing","task_id":"{id}","department":"{department}","step":"Starting: {title}"}}}}' >> .workflow/.csv-wave/{session-id}/discoveries.ndjson
   ```
4. **Execute based on department**:

   **If department = gongbu (Engineering)**:
   - Read target files listed in description
   - Explore codebase to understand existing patterns and conventions
   - Implement changes following project coding style
   - Validate changes compile/lint correctly (use IDE diagnostics if available)
   - Write output artifact to session artifacts directory
   - Run relevant tests if available

   **If department = bingbu (Operations)**:
   - Analyze infrastructure requirements from description
   - Create/modify deployment scripts, CI/CD configs, or monitoring setup
   - Validate configuration syntax
   - Write output artifact to session artifacts directory

   **If department = hubu (Data & Resources)**:
   - Analyze data sources and requirements from description
   - Perform data analysis, generate reports or dashboards
   - Include key metrics and visualizations where applicable
   - Write output artifact to session artifacts directory

   **If department = libu (Documentation)**:
   - Read source code and existing documentation
   - Generate documentation following format specified in description
   - Ensure accuracy against current implementation
   - Include code examples where appropriate
   - Write output artifact to session artifacts directory

   **If department = libu-hr (Personnel)**:
   - Read agent/skill files as needed
   - Analyze patterns, generate training materials or evaluations
   - Write output artifact to session artifacts directory

   **If department = xingbu (Quality Assurance)**:
   - This department typically runs as interactive (test-fix loop)
   - If running as csv-wave: execute one-shot review/audit
   - Read code and test files, run analysis
   - Classify findings by severity (Critical/High/Medium/Low)
   - Write report artifact to session artifacts directory

5. **Write artifact**: Save your output to the appropriate artifact file:
   - gongbu -> `artifacts/gongbu-output.md`
   - bingbu -> `artifacts/bingbu-output.md`
   - hubu -> `artifacts/hubu-output.md`
   - libu -> `artifacts/libu-output.md`
   - libu-hr -> `artifacts/libu-hr-output.md`
   - xingbu -> `artifacts/xingbu-report.md`

   If multiple tasks exist for the same department, append the task ID: `artifacts/gongbu-output-{id}.md`

6. **Share discoveries**: Append exploration findings to shared board:
   ```bash
   echo '{{"ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","worker":"{id}","type":"<type>","data":{{...}}}}' >> .workflow/.csv-wave/{session-id}/discoveries.ndjson
   ```

7. **Report completion state**:
   ```bash
   echo '{{"ts":"'$(date -u +%Y-%m-%dT%H:%M:%SZ)'","worker":"{id}","type":"state_update","data":{{"state":"Done","task_id":"{id}","department":"{department}","remark":"Completed: <summary>"}}}}' >> .workflow/.csv-wave/{session-id}/discoveries.ndjson
   ```

8. **Report result**: Return JSON via report_agent_job_result

### Discovery Types to Share
- `codebase_pattern`: `{pattern_name, files, description}` -- Identified codebase patterns and conventions
- `dependency_found`: `{dep_name, version, used_by}` -- External dependency discoveries
- `risk_identified`: `{risk_id, severity, description, mitigation}` -- Risk findings
- `implementation_note`: `{file_path, note, line_range}` -- Implementation decisions
- `test_result`: `{test_suite, pass_rate, failures}` -- Test execution results
- `quality_issue`: `{issue_id, severity, file, description}` -- Quality issues found
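Because the board is append-only NDJSON, other agents can consume it by filtering on `type`. A minimal sketch using grep; the sample entries are fabricated for illustration:

```shell
# Stand-in for <session>/discoveries.ndjson with two fabricated entries.
board=$(mktemp)
cat > "$board" <<'EOF'
{"ts":"2025-01-01T00:00:00Z","worker":"IMPL-001","type":"implementation_note","data":{"file_path":"src/a.ts","note":"switched to async handler"}}
{"ts":"2025-01-01T00:01:00Z","worker":"QA-001","type":"quality_issue","data":{"issue_id":"QI-1","severity":"High","file":"src/a.ts:10","description":"missing null check"}}
EOF
# Count quality issues shared so far.
grep -c '"type":"quality_issue"' "$board"
```

A JSON-aware tool would be more robust than grep against key reordering, but the one-key-per-line convention above keeps the simple filter workable.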

---

## Artifact Output Format

Write your artifact file in this structure:

```markdown
# {department} Output Report -- {id}

## Task
{title}

## Implementation Summary
<What was done, key decisions made>

## Files Modified/Created
- `path/to/file1` -- description of change
- `path/to/file2` -- description of change

## Acceptance Criteria Verification
| Criterion | Status | Evidence |
|-----------|--------|----------|
| <from acceptance_criteria> | Pass/Fail | <specific evidence> |

## Key Findings
- Finding 1 with file:line reference
- Finding 2 with file:line reference

## Risks / Open Issues
- Any remaining risks or issues (if none, state "None identified")
```

---

## Output (report_agent_job_result)

Return JSON:
```json
{
  "id": "{id}",
  "status": "completed",
  "findings": "Key discoveries and implementation notes (max 500 chars)",
  "artifact_path": "artifacts/<department>-output.md",
  "error": ""
}
```

If the task fails:
```json
{
  "id": "{id}",
  "status": "failed",
  "findings": "Partial progress description",
  "artifact_path": "",
  "error": "Specific error description"
}
```

---

## Error Handling

| Scenario | Resolution |
|----------|------------|
| Target files not found | Report in findings, attempt with available context |
| Acceptance criteria ambiguous | Interpret conservatively, note assumption in findings |
| Blocked by missing dependency output | Report "Blocked" state in discoveries, set status to failed with reason |
| Compilation/lint errors in changes | Attempt to fix; if unfixable, report in findings with details |
| Test failures | Report in findings with specific failures, continue with remaining work |
Some files were not shown because too many files have changed in this diff.