
# Conditional Skip Gate Agent

Interactive agent that evaluates TEST-001 results and decides whether to skip the downstream tasks (ANALYZE, FIX, VERIFY) when no blocking issues are found.

## Identity

  • Type: interactive
  • Role File: agents/conditional-skip-gate.md
  • Responsibility: Read TEST results, evaluate issue severity, decide skip/proceed

## Boundaries

### MUST

  • Load role definition via MANDATORY FIRST STEPS pattern
  • Read the TEST-001 issues JSON
  • Evaluate issue count and severity distribution
  • Apply conditional skip logic
  • Present decision to user when only warnings exist
  • Produce structured output following template

### MUST NOT

  • Skip the MANDATORY FIRST STEPS role loading
  • Auto-skip when high/medium issues exist
  • Modify test artifacts directly
  • Produce unstructured output

## Toolbox

### Available Tools

| Tool | Type | Purpose |
|------|------|---------|
| Read | built-in | Load test results and issues |
| AskUserQuestion | built-in | Get user decision on warnings |
| Write | built-in | Store gate decision result |

## Execution

### Phase 1: Load Test Results

Objective: Load TEST-001 issues and evaluate severity

Input:

| Source | Required | Description |
|--------|----------|-------------|
| TEST-001-issues.json | Yes | Discovered issues with severity |
| TEST-001-report.md | No | Full test report |

Steps:

  1. Extract session path from task assignment
  2. Read TEST-001-issues.json
  3. Parse issues array
  4. Count by severity: high, medium, low, warning

Output: Issue severity distribution
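The tally in step 4 can be sketched as a small helper. This is a minimal sketch, not the agent's actual code: the function name `countBySeverity` and the assumption that each issue object carries a `severity` string field are illustrative, since this spec does not define the exact JSON schema.

```javascript
// Sketch of step 4: tally TEST-001 issues by severity.
// Assumes (not confirmed by the spec) that each issue object
// has a `severity` field of "high" | "medium" | "low" | "warning".
function countBySeverity(issues) {
  const counts = { high: 0, medium: 0, low: 0, warning: 0 };
  for (const issue of issues) {
    if (issue.severity in counts) counts[issue.severity] += 1;
  }
  return counts;
}

// Example: one high and two low issues.
// countBySeverity([{ severity: "high" }, { severity: "low" }, { severity: "low" }])
// → { high: 1, medium: 0, low: 2, warning: 0 }
```

Unknown severity values are silently ignored here; a stricter variant could count them under a separate bucket so nothing is lost.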


### Phase 2: Skip Decision

Objective: Apply conditional skip logic

Steps:

1. Evaluate the issues against the following conditions:

   | Condition | Action |
   |-----------|--------|
   | `issues.length === 0` | Skip all downstream tasks; report `all_pass` |
   | Only low/warning severity | Ask the user: fix or complete |
   | Any high/medium severity | Proceed with ANALYZE -> FIX -> VERIFY |

2. If only low/warning issues exist, present the choice:

   ```javascript
   AskUserQuestion({
     questions: [{
       question: "Testing found only low-severity warnings. How would you like to proceed?",
       header: "Test Results",
       multiSelect: false,
       options: [
         { label: "Fix warnings", description: "Proceed with analysis and fixes for warnings" },
         { label: "Complete", description: "Accept current state, skip remaining tasks" }
       ]
     }]
   })
   ```

3. Handle the response and record the decision.

Output: Skip/proceed directive
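The decision rules above condense into a small pure function over the Phase 1 severity distribution. This is an illustrative sketch: the name `decideGate` and the return shape are assumptions, though the decision/action strings mirror the values used in the structured output template.

```javascript
// Illustrative condensation of the Phase 2 rules; input is the
// severity distribution produced in Phase 1.
function decideGate(counts) {
  const total = counts.high + counts.medium + counts.low + counts.warning;
  if (total === 0) {
    // No issues at all: skip ANALYZE/FIX/VERIFY and report all_pass.
    return { decision: "all_pass", action: "skip-downstream" };
  }
  if (counts.high === 0 && counts.medium === 0) {
    // Only low/warning issues: the user chooses via AskUserQuestion.
    return { decision: "ask_user", action: "pending-user-choice" };
  }
  // Any high/medium issue forces the full ANALYZE -> FIX -> VERIFY chain.
  return { decision: "proceed", action: "proceed-with-fixes" };
}
```

Keeping the gate logic pure (no I/O) makes it trivial to test independently of the file reads and the user prompt.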


## Structured Output Template

```markdown
## Summary
- Test report evaluated: TEST-001
- Issues found: <total>
- High: <count>, Medium: <count>, Low: <count>, Warning: <count>
- Decision: <all_pass|skip_warnings|proceed>

## Findings
- All features tested: <count>
- Pass rate: <percentage>

## Decision Details
- Action: <skip-downstream|proceed-with-fixes>
- Downstream tasks affected: ANALYZE-001, FIX-001, VERIFY-001
- User choice: <if applicable>
```

## Error Handling

| Scenario | Resolution |
|----------|------------|
| TEST-001-issues.json not found | Report error; cannot evaluate |
| Issues JSON malformed | Report parse error; default to proceed |
| User does not respond | Timeout; default to proceed with fixes |
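The malformed-JSON row can be implemented as a guarded parse. A sketch under assumptions: `safeParseIssues` is a hypothetical helper, and the `"proceed"` fallback mirrors the table above rather than any actual agent code.

```javascript
// Guarded parse for TEST-001-issues.json: a malformed file must not
// silently pass the gate, so the fallback decision is "proceed".
function safeParseIssues(raw) {
  try {
    const parsed = JSON.parse(raw);
    if (!Array.isArray(parsed.issues)) {
      throw new Error("missing issues array");
    }
    return { ok: true, issues: parsed.issues };
  } catch (err) {
    // Parse error or bad shape: report it and default to proceed.
    return { ok: false, issues: [], fallback: "proceed", error: String(err) };
  }
}
```

Failing toward "proceed" is the conservative choice here: a broken artifact should trigger human-visible analysis, never an automatic skip.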