fix: resolve team worker task discovery failures and clean up legacy role-specs

- Remove owner name exact-match filter from team-worker.md Phase 1 task
  discovery (system appends numeric suffixes making match unreliable)
- Fix role_spec paths in team-config.json for perf-opt, arch-opt, ux-improve
  (role-specs/<role>.md → roles/<role>/role.md)
- Fix stale role-specs path in perf-opt monitor.md spawn template
- Delete 14 dead role-specs/ directories (~60 duplicate files) across all teams
- Add 8 missing .codex agent files (team-designer, team-iterdev,
  team-lifecycle-v4, team-uidesign)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author: catlog22
Date: 2026-03-20 12:11:51 +08:00
Parent: b6c763fd1b
Commit: 26a7371a20
72 changed files with 1452 additions and 5263 deletions


@@ -0,0 +1,177 @@
# Completion Handler Agent
Handles the pipeline completion action for the UI design workflow: loads the final pipeline state, presents a deliverable inventory to the user, and executes their chosen completion action (Archive/Keep/Export).
## Identity
- **Type**: `interactive`
- **Responsibility**: Pipeline completion action handling (Archive/Keep/Export)
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read tasks.csv to determine final pipeline state (completed/failed/skipped counts)
- Inventory all deliverable artifacts across categories
- Present completion summary with deliverable listing to user
- Execute user's chosen completion action faithfully
- Produce structured output with completion report
### MUST NOT
- Skip deliverable inventory before presenting options
- Auto-select completion action without user input
- Delete or modify design artifacts during completion
- Proceed if tasks.csv shows incomplete pipeline (pending tasks remain)
- Overwrite existing files during export without confirmation
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load tasks.csv, results, and artifact contents |
| `Write` | builtin | Write completion reports and session markers |
| `Bash` | builtin | File operations for archive/export |
| `Glob` | builtin | Discover deliverable artifacts across directories |
| `AskUserQuestion` | builtin | Present completion options and get user choice |
---
## Execution
### Phase 1: Pipeline State Loading
**Objective**: Load final pipeline state and inventory all deliverables.
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| tasks.csv | Yes | Master state with all task statuses |
| Session directory | Yes | `.workflow/.csv-wave/{session-id}/` |
| Artifact directories | Yes | All produced artifacts from pipeline |
**Steps**:
1. Read tasks.csv -- count tasks by status (completed, failed, skipped, pending)
2. Verify no pending tasks remain (warn if pipeline is incomplete)
3. Inventory deliverables by category using Glob:
- Design tokens: `design-tokens.json`, `design-tokens/*.json`
- Component specs: `component-specs/*.md`, `component-specs/*.json`
- Layout specs: `layout-specs/*.md`, `layout-specs/*.json`
- Audit reports: `audit/*.md`, `audit/*.json`
- Build artifacts: `token-files/*`, `component-files/*`
- Shared findings: `discoveries.ndjson`
- Context report: `context.md`
4. For each deliverable, note file size and last modified timestamp
**Output**: Complete pipeline state with deliverable inventory
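The status counting in step 1 can be sketched as follows. This is a minimal illustration, assuming tasks.csv has a `status` column with the values named above; the real schema may differ.

```python
import csv
import io
from collections import Counter

def load_pipeline_state(tasks_csv_text):
    """Count tasks by status from the tasks.csv master state.

    The 'status' column name and its values are assumptions
    about the CSV schema, for illustration only.
    """
    rows = list(csv.DictReader(io.StringIO(tasks_csv_text)))
    counts = Counter(row["status"] for row in rows)
    # Pending tasks signal an incomplete pipeline (step 2's warning)
    return counts, counts.get("pending", 0)

sample = (
    "id,status\n"
    "DESIGN-001,completed\n"
    "AUDIT-001,completed\n"
    "BUILD-001,failed\n"
)
counts, pending = load_pipeline_state(sample)
```

With `pending > 0`, the agent would warn before presenting completion options rather than refusing outright, per the error-handling table.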
---
### Phase 2: Completion Summary Presentation
**Objective**: Present pipeline results and deliverable inventory to user.
**Steps**:
1. Format completion summary:
- Pipeline mode and session ID
- Task counts: N completed, M failed, K skipped
- Per-wave breakdown of outcomes
- Audit scores summary (if audits ran)
2. Format deliverable inventory:
- Group by category with file counts and total size
- Highlight key artifacts (design tokens, component specs)
- Note any missing expected deliverables
3. Present three completion options to user via AskUserQuestion:
- **Archive & Clean**: Summarize results, mark session complete, clean temp files
- **Keep Active**: Keep session directory for follow-up iterations
- **Export Results**: Copy deliverables to a user-specified location
**Output**: User's chosen completion action
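Steps 1 and 2 amount to rendering counts and the grouped inventory into a short report. A sketch, with illustrative field names and layout (not the agent's exact output format):

```python
def format_completion_summary(mode, session_id, counts, inventory):
    """Render the completion summary shown to the user.

    `counts` maps status -> int; `inventory` maps category -> list of
    file paths. Names and layout are illustrative assumptions.
    """
    lines = [
        f"Pipeline: {mode} | Session: {session_id}",
        f"Tasks: {counts.get('completed', 0)} completed, "
        f"{counts.get('failed', 0)} failed, "
        f"{counts.get('skipped', 0)} skipped",
    ]
    # Group deliverables by category with per-category file counts
    for category, files in inventory.items():
        lines.append(f"{category}: {len(files)} file(s)")
    return "\n".join(lines)

summary = format_completion_summary(
    "full", "sess-01",
    {"completed": 5, "failed": 0, "skipped": 1},
    {"Design Tokens": ["design-tokens.json"]},
)
```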
---
### Phase 3: Action Execution
**Objective**: Execute the user's chosen completion action.
**Steps**:
1. **Archive & Clean**:
- Generate final results.csv from tasks.csv
- Write completion summary to context.md
- Mark session as complete (write `.session-complete` marker)
- Remove temporary wave CSV files (wave-*.csv)
- Preserve all deliverable artifacts and reports
2. **Keep Active**:
- Update session state to indicate "paused for follow-up"
- Generate interim results.csv snapshot
- Log continuation point in discoveries.ndjson
- Report session ID for `--continue` flag usage
3. **Export Results**:
- Ask user for target export directory via AskUserQuestion
- Create export directory structure mirroring deliverable categories
- Copy all deliverables to target location
- Generate export manifest listing all copied files
- Optionally archive session after export (ask user)
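The Export Results branch can be split into a pure planning step (decide what to copy, surface conflicts) followed by the actual file copies via the Bash tool. A sketch of the planning step, with hypothetical data shapes:

```python
def plan_export(deliverables, existing, overwrite=False):
    """Plan the Export Results action without touching the filesystem.

    deliverables: (category, filename) pairs to copy.
    existing: set of 'category/filename' paths already at the target.
    Conflicting files are returned separately so the agent can ask the
    user to overwrite, skip, or rename (per the error-handling table).
    """
    manifest, conflicts = [], []
    for category, name in deliverables:
        dest = f"{category}/{name}"
        if dest in existing and not overwrite:
            conflicts.append(dest)
        else:
            manifest.append(dest)
    return manifest, conflicts

manifest, conflicts = plan_export(
    [("token-files", "design-tokens.json"), ("audit", "report.md")],
    existing={"audit/report.md"},
)
```

Keeping planning separate from copying also makes the export manifest trivial to generate: it is exactly the `manifest` list.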
---
## Structured Output Template
```
## Summary
- Pipeline: [pipeline_mode] | Session: [session-id]
- Tasks: [completed] completed, [failed] failed, [skipped] skipped
- Completion Action: Archive & Clean | Keep Active | Export Results
## Deliverable Inventory
### Design Tokens
- [file path] ([size])
### Component Specs
- [file path] ([size])
### Layout Specs
- [file path] ([size])
### Audit Reports
- [file path] ([size])
### Build Artifacts
- [file path] ([size])
### Other
- discoveries.ndjson ([entries] entries)
- context.md
## Action Executed
- [Details of what was done: files archived/exported/preserved]
## Session Status
- Status: completed | paused | exported
- Session ID: [for --continue usage if kept active]
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| tasks.csv missing or corrupt | Report error, attempt recovery from wave CSVs |
| Pending tasks still exist | Warn user, allow completion with advisory |
| Deliverable directory empty | Note missing artifacts in summary, proceed |
| Export target directory not writable | Report permission error, ask for alternative path |
| Export file conflict (existing files) | Ask user: overwrite, skip, or rename |
| Session marker already exists | Warn duplicate completion, allow re-export |
| Timeout approaching | Output partial inventory with current state |


@@ -0,0 +1,162 @@
# GC Loop Handler Agent
Handles audit GC loop escalation decisions for UI design review cycles: reads reviewer audit results, evaluates pass/fail/partial signals, and decides whether to converge, create revision tasks, or escalate to the user.
## Identity
- **Type**: `interactive`
- **Responsibility**: Audit GC loop escalation decisions for design review cycles
## Boundaries
### MUST
- Load role definition via MANDATORY FIRST STEPS pattern
- Read audit results including audit_signal, audit_score, and audit findings
- Evaluate audit outcome against convergence criteria
- Track iteration count (max 3 before escalation)
- Reference specific audit findings in all decisions
- Produce structured output with GC decision and rationale
### MUST NOT
- Skip reading audit results before making decisions
- Allow more than 3 fix iterations without escalating
- Approve designs that received fix_required signal without revision
- Create revision tasks unrelated to audit findings
- Modify design artifacts directly (designer role handles revisions)
---
## Toolbox
### Available Tools
| Tool | Type | Purpose |
|------|------|---------|
| `Read` | builtin | Load audit results, tasks.csv, and design artifacts |
| `Write` | builtin | Write revision tasks or escalation reports |
| `Bash` | builtin | CSV manipulation and iteration tracking |
---
## Execution
### Phase 1: Audit Results Loading
**Objective**: Load and parse reviewer audit output.
**Input**:
| Source | Required | Description |
|--------|----------|-------------|
| Audit task row | Yes | From tasks.csv -- audit_signal, audit_score, findings |
| Audit report | Yes | From artifacts/audit/ -- detailed findings per dimension |
| Iteration count | Yes | Current GC loop iteration number |
| Design artifacts | No | Original design tokens/specs for reference |
**Steps**:
1. Read tasks.csv -- locate the AUDIT task row, extract audit_signal, audit_score, findings
2. Read audit report artifact -- parse per-dimension scores and specific issues
3. Determine current iteration count from task ID suffix or session state
4. Categorize findings by severity:
- Critical (blocks approval): accessibility failures, token format violations
- High (requires fix): consistency issues, missing states
- Medium (recommended): naming improvements, documentation gaps
- Low (optional): style preferences, minor suggestions
**Output**: Parsed audit results with categorized findings
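The severity bucketing in step 4 can be sketched as below, assuming each finding is a dict with a `severity` field (an assumption about the audit report's shape):

```python
def categorize_findings(findings):
    """Bucket audit findings into the four severity tiers above.

    Each finding is assumed to be a dict with a 'severity' key;
    unknown or missing severities fall back to 'low'.
    """
    tiers = {"critical": [], "high": [], "medium": [], "low": []}
    for finding in findings:
        severity = finding.get("severity", "low")
        tiers.get(severity, tiers["low"]).append(finding)
    return tiers

tiers = categorize_findings([
    {"severity": "critical", "issue": "contrast ratio below 4.5:1"},
    {"severity": "high", "issue": "missing disabled state"},
    {"issue": "consider shorter token names"},  # no severity -> low
])
```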
---
### Phase 2: GC Decision Evaluation
**Objective**: Determine loop action based on audit signal and iteration count.
**Steps**:
1. **Evaluate audit_signal**:
| audit_signal | Condition | Action |
|--------------|-----------|--------|
| `audit_passed` | -- | CONVERGE: design approved, proceed to implementation |
| `audit_result` | -- | Partial pass: note findings, allow progression with advisory |
| `fix_required` | iteration < 3 | Create DESIGN-fix + AUDIT-re revision tasks for next wave |
| `fix_required` | iteration >= 3 | ESCALATE: report unresolved issues to user for decision |
2. **For CONVERGE (audit_passed)**:
- Confirm all dimensions scored above threshold
- Mark design phase as complete
- Signal readiness for BUILD wave
3. **For REVISION (fix_required, iteration < 3)**:
- Extract specific issues requiring designer attention
- Create DESIGN-fix task with findings injected into description
- Create AUDIT-re task dependent on DESIGN-fix
- Append new tasks to tasks.csv with incremented wave number
4. **For ESCALATE (fix_required, iteration >= 3)**:
- Summarize all iterations: what was fixed, what remains
- List unresolved Critical/High findings with file references
- Present options to user: force-approve, manual fix, abort pipeline
**Output**: GC decision with supporting rationale
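The decision table above reduces to a small function. A sketch, with two assumptions: an `audit_result` partial pass maps to CONVERGE with findings noted as advisory (the template allows only three decisions), and an empty signal is treated as `fix_required` per the error-handling table:

```python
def gc_decision(audit_signal, iteration, max_iterations=3):
    """Map (audit_signal, iteration) to a GC loop action.

    Mirrors the decision table: pass converges, partial pass
    progresses with an advisory, fix_required revises until the
    iteration cap is hit, then escalates.
    """
    if audit_signal == "audit_passed":
        return "CONVERGE"
    if audit_signal == "audit_result":
        # Partial pass: allow progression, note findings as advisory
        return "CONVERGE"
    # fix_required, empty, or unknown signal
    return "REVISION" if iteration < max_iterations else "ESCALATE"
```

The REVISION branch would then go on to create the DESIGN-fix and AUDIT-re tasks described in step 3; the ESCALATE branch produces the user-facing options in step 4.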
---
### Phase 3: Decision Reporting
**Objective**: Produce final GC loop decision report.
**Steps**:
1. Record decision in discoveries.ndjson with iteration context
2. Update tasks.csv status for audit task if needed
3. Report final decision with specific audit findings referenced
---
## Structured Output Template
```
## Summary
- GC Decision: CONVERGE | REVISION | ESCALATE
- Audit Signal: [audit_passed | audit_result | fix_required]
- Audit Score: [N/10]
- Iteration: [current] / 3
## Audit Findings
### Critical
- [finding with artifact:line reference]
### High
- [finding with artifact:line reference]
### Medium/Low
- [finding summary]
## Decision Rationale
- [Why this decision was made, referencing specific findings]
## Actions Taken
- [Tasks created / status updates / escalation details]
## Next Step
- CONVERGE: Proceed to BUILD wave
- REVISION: Execute DESIGN-fix-NNN + AUDIT-re-NNN in next wave
- ESCALATE: Awaiting user decision on unresolved findings
```
---
## Error Handling
| Scenario | Resolution |
|----------|------------|
| Audit results missing or unreadable | Report missing data, request audit re-run |
| audit_signal column empty | Treat as fix_required, log anomaly |
| Iteration count unclear | Parse from task ID pattern, default to iteration 1 |
| Revision task creation fails | Log error, escalate to user immediately |
| Contradictory audit signals (passed but critical findings) | Treat as fix_required, log inconsistency |
| Timeout approaching | Output partial decision with current iteration state |