diff --git a/.codex/prompts/issue-discover-by-prompt.md b/.codex/prompts/issue-discover-by-prompt.md new file mode 100644 index 00000000..ba67bb56 --- /dev/null +++ b/.codex/prompts/issue-discover-by-prompt.md @@ -0,0 +1,364 @@ +--- +description: Discover issues from user prompt with iterative multi-agent exploration and cross-module comparison +argument-hint: " [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]" +--- + +# Issue Discovery by Prompt (Codex Version) + +## Goal + +Prompt-driven issue discovery with intelligent planning. Instead of fixed perspectives, this command: + +1. **Analyzes user intent** to understand what to find +2. **Plans exploration strategy** dynamically based on codebase structure +3. **Executes iterative exploration** with feedback loops +4. **Performs cross-module comparison** when detecting comparison intent + +**Core Difference from `issue-discover.md`**: +- `issue-discover`: Pre-defined perspectives (bug, security, etc.), parallel execution +- `issue-discover-by-prompt`: User-driven prompt, planned strategy, iterative exploration + +## Inputs + +- **Prompt**: Natural language description of what to find +- **Scope**: `--scope=src/**` - File pattern to explore (default: `**/*`) +- **Depth**: `--depth=standard|deep` - standard (3 iterations) or deep (5+ iterations) +- **Max Iterations**: `--max-iterations=N` (default: 5) + +## Output Requirements + +**Generate Files:** +1. `.workflow/issues/discoveries/{discovery-id}/discovery-state.json` - Session state with iteration tracking +2. `.workflow/issues/discoveries/{discovery-id}/iterations/{N}/{dimension}.json` - Per-iteration findings +3. `.workflow/issues/discoveries/{discovery-id}/comparison-analysis.json` - Cross-dimension comparison (if applicable) +4. `.workflow/issues/discoveries/{discovery-id}/discovery-issues.jsonl` - Generated issue candidates + +**Return Summary:** +```json +{ + "discovery_id": "DBP-YYYYMMDD-HHmmss", + "prompt": "Check if frontend API calls match backend implementations", + "intent_type": "comparison", + "dimensions": ["frontend-calls", "backend-handlers"], + "total_iterations": 3, + "total_findings": 24, + "issues_generated": 12, + "comparison_match_rate": 0.75 +} +``` + +## Workflow + +### Step 1: Initialize Discovery Session + +```bash +# Generate discovery ID +DISCOVERY_ID="DBP-$(date -u +%Y%m%d-%H%M%S)" +OUTPUT_DIR=".workflow/issues/discoveries/${DISCOVERY_ID}" + +# Create directory structure +mkdir -p "${OUTPUT_DIR}/iterations" +``` + +Detect intent type from prompt: +- `comparison`: Contains "match", "compare", "versus", "vs", "between" +- `search`: Contains "find", "locate", "where" +- `verification`: Contains "verify", "check", "ensure" +- `audit`: Contains "audit", "review", "analyze" + +### Step 2: Gather Context + +Use `rg` and file exploration to understand codebase structure: + +```bash +# Find relevant modules based on prompt keywords +rg -l "" --type ts | head -10 +rg -l "" --type ts | head -10 + +# Understand project structure +ls -la src/ +cat .workflow/project-tech.json 2>/dev/null || echo "No project-tech.json" +``` + +Build context package: +```json +{ + "prompt_keywords": ["frontend", "API", "backend"], + "codebase_structure": { "modules": [...], "patterns": [...] }, + "relevant_modules": ["src/api/", "src/services/"] +} +``` + +### Step 3: Plan Exploration Strategy + +Analyze the prompt and context to design exploration strategy. 
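+
+A minimal planning sketch (hypothetical helper; field names mirror the context package above and the plan shown below) might look like:
+
+```javascript
+// Sketch only: derive exploration dimensions from the detected intent and gathered context
+function planStrategy(intent, context) {
+  // One dimension per relevant module; focus areas start from the prompt keywords
+  const dimensions = context.relevant_modules.map((module) => ({
+    name: module.replace(/[^a-z0-9]+/gi, '-').replace(/^-|-$/g, ''),
+    search_targets: [`${module}**`],
+    focus_areas: context.prompt_keywords
+  }));
+
+  return {
+    intent_analysis: { type: intent },
+    dimensions,
+    // Comparison intents pair two dimensions and usually need an extra pass
+    estimated_iterations: intent === 'comparison' ? 3 : 2
+  };
+}
+```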
+ +**Output exploration plan:** +```json +{ + "intent_analysis": { + "type": "comparison", + "primary_question": "Do frontend API calls match backend implementations?", + "sub_questions": ["Are endpoints aligned?", "Are payloads compatible?"] + }, + "dimensions": [ + { + "name": "frontend-calls", + "description": "Client-side API calls and error handling", + "search_targets": ["src/api/**", "src/hooks/**"], + "focus_areas": ["fetch calls", "error boundaries", "response parsing"] + }, + { + "name": "backend-handlers", + "description": "Server-side API implementations", + "search_targets": ["src/server/**", "src/routes/**"], + "focus_areas": ["endpoint handlers", "response schemas", "error responses"] + } + ], + "comparison_matrix": { + "dimension_a": "frontend-calls", + "dimension_b": "backend-handlers", + "comparison_points": [ + {"aspect": "endpoints", "frontend_check": "fetch URLs", "backend_check": "route paths"}, + {"aspect": "methods", "frontend_check": "HTTP methods used", "backend_check": "methods accepted"}, + {"aspect": "payloads", "frontend_check": "request body structure", "backend_check": "expected schema"}, + {"aspect": "responses", "frontend_check": "response parsing", "backend_check": "response format"} + ] + }, + "estimated_iterations": 3, + "termination_conditions": ["All comparison points verified", "No new findings in last iteration"] +} +``` + +### Step 4: Iterative Exploration + +Execute iterations until termination conditions are met: + +``` +WHILE iteration < max_iterations AND shouldContinue: + 1. Plan iteration focus based on previous findings + 2. Explore each dimension + 3. Collect and analyze findings + 4. Cross-reference between dimensions + 5. Check convergence +``` + +**For each iteration:** + +1. **Search for relevant code** using `rg`: +```bash +# Based on dimension focus areas +rg "fetch\s*\(" --type ts -C 3 | head -50 +rg "app\.(get|post|put|delete)" --type ts -C 3 | head -50 +``` + +2. **Analyze and record findings**: +```json +{ + "dimension": "frontend-calls", + "iteration": 1, + "findings": [ + { + "id": "F-001", + "title": "Undefined endpoint in UserService", + "category": "endpoint-mismatch", + "file": "src/api/userService.ts", + "line": 42, + "snippet": "fetch('/api/users/profile')", + "related_dimension": "backend-handlers", + "confidence": 0.85 + } + ], + "coverage": { + "files_explored": 15, + "areas_covered": ["fetch calls", "axios instances"], + "areas_remaining": ["graphql queries"] + }, + "leads": [ + {"description": "Check GraphQL mutations", "suggested_search": "mutation.*User"} + ] +} +``` + +3. **Cross-reference findings** between dimensions: +```javascript +// For each finding in dimension A, look for related code in dimension B +if (finding.related_dimension) { + searchForRelatedCode(finding, otherDimension); +} +``` + +4. 
**Check convergence**: +```javascript +const convergence = { + newDiscoveries: newFindings.length, + confidence: calculateConfidence(cumulativeFindings), + converged: newFindings.length === 0 || confidence > 0.9 +}; +``` + +### Step 5: Cross-Analysis (for comparison intent) + +If intent is comparison, analyze findings across dimensions: + +```javascript +for (const point of comparisonMatrix.comparison_points) { + const aFindings = findings.filter(f => + f.related_dimension === dimension_a && f.category.includes(point.aspect) + ); + const bFindings = findings.filter(f => + f.related_dimension === dimension_b && f.category.includes(point.aspect) + ); + + // Find discrepancies + const discrepancies = compareFindings(aFindings, bFindings, point); + + // Calculate match rate + const matchRate = calculateMatchRate(aFindings, bFindings); +} +``` + +Write to `comparison-analysis.json`: +```json +{ + "matrix": { "dimension_a": "...", "dimension_b": "...", "comparison_points": [...] }, + "results": [ + { + "aspect": "endpoints", + "dimension_a_count": 15, + "dimension_b_count": 12, + "discrepancies": [ + {"frontend": "/api/users/profile", "backend": "NOT_FOUND", "type": "missing_endpoint"} + ], + "match_rate": 0.80 + } + ], + "summary": { + "total_discrepancies": 5, + "overall_match_rate": 0.75, + "critical_mismatches": ["endpoints", "payloads"] + } +} +``` + +### Step 6: Generate Issues + +Convert high-confidence findings to issues: + +```bash +# For each finding with confidence >= 0.7 or priority critical/high +echo '{"id":"ISS-DBP-001","title":"Missing backend endpoint for /api/users/profile",...}' >> ${OUTPUT_DIR}/discovery-issues.jsonl +``` + +### Step 7: Update Final State + +```json +{ + "discovery_id": "DBP-...", + "type": "prompt-driven", + "prompt": "...", + "intent_type": "comparison", + "phase": "complete", + "created_at": "...", + "updated_at": "...", + "iterations": [ + {"number": 1, "findings_count": 10, "new_discoveries": 10, "confidence": 0.6}, + {"number": 2, "findings_count": 18, "new_discoveries": 8, "confidence": 0.75}, + {"number": 3, "findings_count": 24, "new_discoveries": 6, "confidence": 0.85} + ], + "results": { + "total_iterations": 3, + "total_findings": 24, + "issues_generated": 12, + "comparison_match_rate": 0.75 + } +} +``` + +### Step 8: Output Summary + +```markdown +## Discovery Complete: DBP-... 
+ +**Prompt**: Check if frontend API calls match backend implementations +**Intent**: comparison +**Dimensions**: frontend-calls, backend-handlers + +### Iteration Summary +| # | Findings | New | Confidence | +|---|----------|-----|------------| +| 1 | 10 | 10 | 60% | +| 2 | 18 | 8 | 75% | +| 3 | 24 | 6 | 85% | + +### Comparison Results +- **Overall Match Rate**: 75% +- **Total Discrepancies**: 5 +- **Critical Mismatches**: endpoints, payloads + +### Issues Generated: 12 +- 2 Critical +- 4 High +- 6 Medium + +### Next Steps +- `/issue:plan DBP-001,DBP-002,...` to plan solutions +- `ccw view` to review findings in dashboard +``` + +## Quality Checklist + +Before completing, verify: + +- [ ] Intent type correctly detected from prompt +- [ ] Dimensions dynamically generated based on prompt +- [ ] Iterations executed until convergence or max limit +- [ ] Cross-reference analysis performed (for comparison intent) +- [ ] High-confidence findings converted to issues +- [ ] Discovery state shows `phase: complete` + +## Error Handling + +| Situation | Action | +|-----------|--------| +| No relevant code found | Report empty result, suggest broader scope | +| Max iterations without convergence | Complete with current findings, note in summary | +| Comparison dimension mismatch | Report which dimension has fewer findings | +| No comparison points matched | Report as "No direct matches found" | + +## Use Cases + +| Scenario | Example Prompt | +|----------|----------------| +| API Contract | "Check if frontend calls match backend endpoints" | +| Error Handling | "Find inconsistent error handling patterns" | +| Migration Gap | "Compare old auth with new auth implementation" | +| Feature Parity | "Verify mobile has all web features" | +| Schema Drift | "Check if TypeScript types match API responses" | +| Integration | "Find mismatches between service A and service B" | + +## Start Discovery + +Parse prompt and detect intent: + +```bash +PROMPT="${1}" +SCOPE="${2:-**/*}" +DEPTH="${3:-standard}" + +# Detect intent keywords +if echo "${PROMPT}" | grep -qiE '(match|compare|versus|vs|between)'; then + INTENT="comparison" +elif echo "${PROMPT}" | grep -qiE '(find|locate|where)'; then + INTENT="search" +elif echo "${PROMPT}" | grep -qiE '(verify|check|ensure)'; then + INTENT="verification" +else + INTENT="audit" +fi + +echo "Intent detected: ${INTENT}" +echo "Starting discovery with scope: ${SCOPE}" +``` + +Then follow the workflow to explore and discover issues. diff --git a/.codex/prompts/issue-discover.md b/.codex/prompts/issue-discover.md new file mode 100644 index 00000000..9dabffb0 --- /dev/null +++ b/.codex/prompts/issue-discover.md @@ -0,0 +1,261 @@ +--- +description: Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) +argument-hint: " [--perspectives=bug,ux,...] [--external]" +--- + +# Issue Discovery (Codex Version) + +## Goal + +Multi-perspective issue discovery that explores code from different angles to identify potential bugs, UX improvements, test gaps, and other actionable items. Unlike code review (which assesses existing code quality), discovery focuses on **finding opportunities for improvement and potential problems**. 
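+
+A typical invocation, following the example pattern used in the other prompt files (the target pattern and perspective list here are illustrative):
+
+```bash
+codex -p "@.codex/prompts/issue-discover.md src/auth/** --perspectives=bug,security,test"
+```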
+ +**Discovery Scope**: Specified modules/files only +**Output Directory**: `.workflow/issues/discoveries/{discovery-id}/` +**Available Perspectives**: bug, ux, test, quality, security, performance, maintainability, best-practices + +## Inputs + +- **Target Pattern**: File glob pattern (e.g., `src/auth/**`) +- **Perspectives**: Comma-separated list via `--perspectives` (or interactive selection) +- **External Research**: `--external` flag enables Exa research for security and best-practices + +## Output Requirements + +**Generate Files:** +1. `.workflow/issues/discoveries/{discovery-id}/discovery-state.json` - Session state +2. `.workflow/issues/discoveries/{discovery-id}/perspectives/{perspective}.json` - Per-perspective findings +3. `.workflow/issues/discoveries/{discovery-id}/discovery-issues.jsonl` - Generated issue candidates +4. `.workflow/issues/discoveries/{discovery-id}/summary.md` - Summary report + +**Return Summary:** +```json +{ + "discovery_id": "DSC-YYYYMMDD-HHmmss", + "target_pattern": "src/auth/**", + "perspectives_analyzed": ["bug", "security", "test"], + "total_findings": 15, + "issues_generated": 8, + "priority_distribution": { "critical": 1, "high": 3, "medium": 4 } +} +``` + +## Workflow + +### Step 1: Initialize Discovery Session + +```bash +# Generate discovery ID +DISCOVERY_ID="DSC-$(date -u +%Y%m%d-%H%M%S)" +OUTPUT_DIR=".workflow/issues/discoveries/${DISCOVERY_ID}" + +# Create directory structure +mkdir -p "${OUTPUT_DIR}/perspectives" +``` + +Resolve target files: +```bash +# List files matching pattern +find -type f -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" +``` + +If no files found, abort with error message. + +### Step 2: Select Perspectives + +**If `--perspectives` provided:** +- Parse comma-separated list +- Validate against available perspectives + +**If not provided (interactive):** +- Present perspective groups: + - Quick scan: bug, test, quality + - Security audit: security, bug, quality + - Full analysis: all perspectives +- Use first group as default or wait for user input + +### Step 3: Analyze Each Perspective + +For each selected perspective, explore target files and identify issues. + +**Perspective-Specific Focus:** + +| Perspective | Focus Areas | Priority Guide | +|-------------|-------------|----------------| +| **bug** | Null checks, edge cases, resource leaks, race conditions, boundary conditions, exception handling | Critical=data corruption/crash, High=malfunction, Medium=edge case | +| **ux** | Error messages, loading states, feedback, accessibility, interaction patterns | Critical=inaccessible, High=confusing, Medium=inconsistent | +| **test** | Missing unit tests, edge case coverage, integration gaps, assertion quality | Critical=no security tests, High=no core logic tests | +| **quality** | Complexity, duplication, naming, documentation, code smells | Critical=unmaintainable, High=significant issues | +| **security** | Input validation, auth/authz, injection, XSS/CSRF, data exposure | Critical=auth bypass/injection, High=missing authz | +| **performance** | N+1 queries, memory leaks, caching, algorithm efficiency | Critical=memory leaks, High=N+1 queries | +| **maintainability** | Coupling, interface design, tech debt, extensibility | Critical=forced changes, High=unclear boundaries | +| **best-practices** | Framework conventions, language patterns, anti-patterns | Critical=bug-causing anti-patterns, High=convention violations | + +**For each perspective:** + +1. 
Read target files and analyze for perspective-specific concerns +2. Use `rg` to search for patterns indicating issues +3. Record findings with: + - `id`: Finding ID (e.g., `F-001`) + - `title`: Brief description + - `priority`: critical/high/medium/low + - `category`: Specific category within perspective + - `description`: Detailed explanation + - `file`: File path + - `line`: Line number + - `snippet`: Code snippet + - `suggested_issue`: Proposed issue text + - `confidence`: 0.0-1.0 + +4. Write to `{OUTPUT_DIR}/perspectives/{perspective}.json`: +```json +{ + "perspective": "security", + "analyzed_at": "2025-01-22T...", + "files_analyzed": 15, + "findings": [ + { + "id": "F-001", + "title": "Missing input validation", + "priority": "high", + "category": "input-validation", + "description": "User input is passed directly to database query", + "file": "src/auth/login.ts", + "line": 42, + "snippet": "db.query(`SELECT * FROM users WHERE name = '${input}'`)", + "suggested_issue": "Add input sanitization to prevent SQL injection", + "confidence": 0.95 + } + ] +} +``` + +### Step 4: External Research (if --external) + +For security and best-practices perspectives, use Exa to search for: +- Industry best practices for the tech stack +- Known vulnerability patterns +- Framework-specific security guidelines + +Write results to `{OUTPUT_DIR}/external-research.json`. + +### Step 5: Aggregate and Prioritize + +1. Load all perspective JSON files +2. Deduplicate findings by file+line +3. Calculate priority scores: + - critical: 1.0 + - high: 0.8 + - medium: 0.5 + - low: 0.2 + - Adjust by confidence + +4. Sort by priority score descending + +### Step 6: Generate Issues + +Convert high-priority findings to issue format: + +```bash +# Append to discovery-issues.jsonl +echo '{"id":"ISS-DSC-001","title":"...","priority":"high",...}' >> ${OUTPUT_DIR}/discovery-issues.jsonl +``` + +Issue criteria: +- `priority` is critical or high +- OR `priority_score >= 0.7` +- OR `confidence >= 0.9` with medium priority + +### Step 7: Update Discovery State + +Write final state to `{OUTPUT_DIR}/discovery-state.json`: +```json +{ + "discovery_id": "DSC-...", + "target_pattern": "src/auth/**", + "phase": "complete", + "created_at": "...", + "updated_at": "...", + "perspectives": ["bug", "security", "test"], + "results": { + "total_findings": 15, + "issues_generated": 8, + "priority_distribution": { + "critical": 1, + "high": 3, + "medium": 4 + } + } +} +``` + +### Step 8: Generate Summary + +Write summary to `{OUTPUT_DIR}/summary.md`: +```markdown +# Discovery Summary: DSC-... + +**Target**: src/auth/** +**Perspectives**: bug, security, test +**Total Findings**: 15 +**Issues Generated**: 8 + +## Priority Breakdown +- Critical: 1 +- High: 3 +- Medium: 4 + +## Top Findings + +1. **[Critical] SQL Injection in login.ts:42** + Category: security/input-validation + ... + +2. **[High] Missing null check in auth.ts:128** + Category: bug/null-check + ... 
+ +## Next Steps +- Run `/issue:plan` to plan solutions for generated issues +- Use `ccw view` to review findings in dashboard +``` + +## Quality Checklist + +Before completing, verify: + +- [ ] All target files analyzed for selected perspectives +- [ ] Findings include file:line references +- [ ] Priority assigned to all findings +- [ ] Issues generated from high-priority findings +- [ ] Discovery state shows `phase: complete` +- [ ] Summary includes actionable next steps + +## Error Handling + +| Situation | Action | +|-----------|--------| +| No files match pattern | Abort with clear error message | +| Perspective analysis fails | Log error, continue with other perspectives | +| No findings | Report "No issues found" (not an error) | +| External research fails | Continue without external context | + +## Schema References + +| Schema | Path | Purpose | +|--------|------|---------| +| Discovery State | `~/.claude/workflows/cli-templates/schemas/discovery-state-schema.json` | Session state | +| Discovery Finding | `~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json` | Finding format | + +## Start Discovery + +Begin by resolving target files: + +```bash +# Parse target pattern from arguments +TARGET_PATTERN="${1:-src/**}" + +# Count matching files +find ${TARGET_PATTERN} -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" \) | wc -l +``` + +Then proceed with perspective selection and analysis. diff --git a/.codex/prompts/issue-execute.md b/.codex/prompts/issue-execute.md index 78adfc5d..04fe9877 100644 --- a/.codex/prompts/issue-execute.md +++ b/.codex/prompts/issue-execute.md @@ -9,6 +9,16 @@ argument-hint: "--queue [--worktree []]" **Serial Execution**: Execute solutions ONE BY ONE from the issue queue via `ccw issue next`. For each solution, complete all tasks sequentially (implement → test → verify), then commit once per solution with formatted summary. Continue autonomously until queue is empty. +## Project Context (MANDATORY FIRST STEPS) + +Before starting execution, load project context: + +1. **Read project tech stack**: `.workflow/project-tech.json` +2. **Read project guidelines**: `.workflow/project-guidelines.json` +3. **Read solution schema**: `~/.claude/workflows/cli-templates/schemas/solution-schema.json` + +This ensures execution follows project conventions and patterns. + ## Queue ID Requirement (MANDATORY) **`--queue ` parameter is REQUIRED** diff --git a/.codex/prompts/issue-new.md b/.codex/prompts/issue-new.md new file mode 100644 index 00000000..2a65618d --- /dev/null +++ b/.codex/prompts/issue-new.md @@ -0,0 +1,285 @@ +--- +description: Create structured issue from GitHub URL or text description +argument-hint: " [--priority 1-5]" +--- + +# Issue New (Codex Version) + +## Goal + +Create a new issue from a GitHub URL or text description. Detect input clarity and ask clarifying questions only when necessary. Register the issue for planning. 
+ +**Core Principle**: Requirement Clarity Detection → Ask only when needed + +``` +Clear Input (GitHub URL, structured text) → Direct creation +Unclear Input (vague description) → Minimal clarifying questions +``` + +## Issue Structure + +```typescript +interface Issue { + id: string; // GH-123 or ISS-YYYYMMDD-HHMMSS + title: string; + status: 'registered' | 'planned' | 'queued' | 'in_progress' | 'completed' | 'failed'; + priority: number; // 1 (critical) to 5 (low) + context: string; // Problem description + source: 'github' | 'text' | 'discovery'; + source_url?: string; + labels?: string[]; + + // GitHub binding (for non-GitHub sources that publish to GitHub) + github_url?: string; + github_number?: number; + + // Optional structured fields + expected_behavior?: string; + actual_behavior?: string; + affected_components?: string[]; + + // Solution binding + bound_solution_id: string | null; + + // Timestamps + created_at: string; + updated_at: string; +} +``` + +## Inputs + +- **GitHub URL**: `https://github.com/owner/repo/issues/123` or `#123` +- **Text description**: Natural language description +- **Priority flag**: `--priority 1-5` (optional, default: 3) + +## Output Requirements + +**Create Issue via CLI** (preferred method): +```bash +# Pipe input (recommended for complex JSON) +echo '{"title":"...", "context":"...", "priority":3}' | ccw issue create + +# Returns created issue JSON +{"id":"ISS-20251229-001","title":"...","status":"registered",...} +``` + +**Return Summary:** +```json +{ + "created": true, + "id": "ISS-20251229-001", + "title": "Login fails with special chars", + "source": "text", + "github_published": false, + "next_step": "/issue:plan ISS-20251229-001" +} +``` + +## Workflow + +### Step 1: Analyze Input Clarity + +Parse and detect input type: + +```javascript +// Detection patterns +const isGitHubUrl = input.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/); +const isGitHubShort = input.match(/^#(\d+)$/); +const hasStructure = input.match(/(expected|actual|affects|steps):/i); + +// Clarity score: 0-3 +let clarityScore = 0; +if (isGitHubUrl || isGitHubShort) clarityScore = 3; // GitHub = fully clear +else if (hasStructure) clarityScore = 2; // Structured text = clear +else if (input.length > 50) clarityScore = 1; // Long text = somewhat clear +else clarityScore = 0; // Vague +``` + +### Step 2: Extract Issue Data + +**For GitHub URL/Short:** + +```bash +# Fetch issue details via gh CLI +gh issue view --json number,title,body,labels,url + +# Parse response +{ + "id": "GH-123", + "title": "...", + "source": "github", + "source_url": "https://github.com/...", + "labels": ["bug", "priority:high"], + "context": "..." +} +``` + +**For Text Description:** + +```javascript +// Generate issue ID +const id = `ISS-${new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14)}`; + +// Parse structured fields if present +const expected = text.match(/expected:?\s*([^.]+)/i); +const actual = text.match(/actual:?\s*([^.]+)/i); +const affects = text.match(/affects?:?\s*([^.]+)/i); + +// Build issue data +{ + "id": id, + "title": text.split(/[.\n]/)[0].substring(0, 60), + "source": "text", + "context": text.substring(0, 500), + "expected_behavior": expected?.[1]?.trim(), + "actual_behavior": actual?.[1]?.trim() +} +``` + +### Step 3: Context Hint (Conditional) + +For medium clarity (score 1-2) without affected components: + +```bash +# Use rg to find potentially related files +rg -l "" --type ts | head -5 +``` + +Add discovered files to `affected_components` (max 3 files). 
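+
+For example, a minimal sketch (the keyword value and the `jq` dependency are assumptions):
+
+```bash
+# Hypothetical: collect up to 3 candidate files as a JSON array for affected_components
+KEYWORD="login"   # assumed to come from the issue title/context
+AFFECTED=$(rg -l "${KEYWORD}" --type ts | head -3 | jq -R . | jq -s -c .)
+echo "${AFFECTED}"   # e.g. ["src/auth/login.ts","src/utils/validation.ts"]
+```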
+ +**Note**: Skip this for GitHub issues (already have context) and vague inputs (needs clarification first). + +### Step 4: Clarification (Only if Unclear) + +**Only for clarity score < 2:** + +Present a prompt asking for more details: + +``` +Input unclear. Please describe: +- What is the issue about? +- Where does it occur? +- What is the expected behavior? +``` + +Wait for user response, then update issue data. + +### Step 5: GitHub Publishing Decision + +For non-GitHub sources, determine if user wants to publish to GitHub: + +``` +Would you like to publish this issue to GitHub? +1. Yes, publish to GitHub (create issue and link it) +2. No, keep local only (store without GitHub sync) +``` + +### Step 6: Create Issue + +**Create via CLI:** + +```bash +# Build issue JSON +ISSUE_JSON='{"title":"...","context":"...","priority":3,"source":"text"}' + +# Create issue (auto-generates ID) +echo "${ISSUE_JSON}" | ccw issue create +``` + +**If publishing to GitHub:** + +```bash +# Create on GitHub first +GH_URL=$(gh issue create --title "..." --body "..." | grep -oE 'https://github.com/[^ ]+') +GH_NUMBER=$(echo $GH_URL | grep -oE '/issues/([0-9]+)$' | grep -oE '[0-9]+') + +# Update local issue with binding +ccw issue update ${ISSUE_ID} --github-url "${GH_URL}" --github-number ${GH_NUMBER} +``` + +### Step 7: Output Result + +```markdown +## Issue Created + +**ID**: ISS-20251229-001 +**Title**: Login fails with special chars +**Source**: text +**Priority**: 2 (High) + +**Context**: +500 error when password contains quotes + +**Affected Components**: +- src/auth/login.ts +- src/utils/validation.ts + +**GitHub**: Not published (local only) + +**Next Step**: `/issue:plan ISS-20251229-001` +``` + +## Quality Checklist + +Before completing, verify: + +- [ ] Issue ID generated correctly (GH-xxx or ISS-YYYYMMDD-HHMMSS) +- [ ] Title extracted (max 60 chars) +- [ ] Context captured (problem description) +- [ ] Priority assigned (1-5) +- [ ] Status set to `registered` +- [ ] Created via `ccw issue create` CLI command + +## Error Handling + +| Situation | Action | +|-----------|--------| +| GitHub URL not accessible | Report error, suggest text input | +| gh CLI not available | Fall back to text-based creation | +| Empty input | Prompt for description | +| Very vague input | Ask clarifying questions | +| Issue already exists | Report duplicate, show existing | + +## Examples + +### Clear Input (No Questions) + +```bash +# GitHub URL +codex -p "@.codex/prompts/issue-new.md https://github.com/org/repo/issues/42" +# → Fetches, parses, creates immediately + +# Structured text +codex -p "@.codex/prompts/issue-new.md 'Login fails with special chars. Expected: success. Actual: 500'" +# → Parses structure, creates immediately +``` + +### Vague Input (Clarification) + +```bash +codex -p "@.codex/prompts/issue-new.md 'auth broken'" +# → Asks: "Please describe the issue in more detail" +# → User provides details +# → Creates issue +``` + +## Start Execution + +Parse input and detect clarity: + +```bash +# Get input from arguments +INPUT="${1}" + +# Detect if GitHub URL +if echo "${INPUT}" | grep -qE 'github\.com/.*/issues/[0-9]+'; then + echo "GitHub URL detected - fetching issue..." + gh issue view "${INPUT}" --json number,title,body,labels,url +else + echo "Text input detected - analyzing clarity..." + # Continue with text parsing +fi +``` + +Then follow the workflow based on detected input type. 
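+
+The short-form reference (`#123`) listed under Inputs can be handled with a similar branch, for example:
+
+```bash
+# Hypothetical: handle short-form references like "#123"
+if echo "${INPUT}" | grep -qE '^#[0-9]+$'; then
+  ISSUE_NUMBER="${INPUT#\#}"
+  echo "GitHub short reference detected - fetching issue #${ISSUE_NUMBER}..."
+  gh issue view "${ISSUE_NUMBER}" --json number,title,body,labels,url
+fi
+```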
diff --git a/.codex/prompts/issue-plan.md b/.codex/prompts/issue-plan.md index 8d4ed6db..bf2be370 100644 --- a/.codex/prompts/issue-plan.md +++ b/.codex/prompts/issue-plan.md @@ -1,6 +1,6 @@ --- -description: Plan issue(s) into bound solutions (writes solutions JSONL via ccw issue bind) -argument-hint: "[,,...] [--all-pending] [--batch-size 3]" +description: Plan issue(s) into bound solutions using subagent pattern (explore + plan closed-loop) +argument-hint: "[,,...] [--all-pending] [--batch-size 4]" --- # Issue Plan (Codex Version) @@ -9,7 +9,7 @@ argument-hint: "[,,...] [--all-pending] [--batch-size 3]" Create executable solution(s) for issue(s) and bind the selected solution to each issue using `ccw issue bind`. -This workflow is **planning + registration** (no implementation): it explores the codebase just enough to produce a high-quality task breakdown that can be executed later (e.g., by `issue-execute.md`). +This workflow uses **subagent pattern** for parallel batch processing: spawn planning agents per batch, wait for results, handle multi-solution selection. ## Core Guidelines @@ -17,29 +17,25 @@ This workflow is **planning + registration** (no implementation): it explores th | Operation | Correct | Incorrect | |-----------|---------|-----------| -| List issues (brief) | `ccw issue list --status pending --brief` | `Read('issues.jsonl')` | -| Read issue details | `ccw issue status --json` | `Read('issues.jsonl')` | +| List issues (brief) | `ccw issue list --status pending --brief` | Read issues.jsonl | +| Read issue details | `ccw issue status --json` | Read issues.jsonl | | Update status | `ccw issue update --status ...` | Direct file edit | | Bind solution | `ccw issue bind ` | Direct file edit | -**Output Options**: -- `--brief`: JSON with minimal fields (id, title, status, priority, tags) -- `--json`: Full JSON (for detailed processing) - **ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `solutions/*.jsonl` directly. ## Inputs - **Explicit issues**: comma-separated IDs, e.g. `ISS-123,ISS-124` - **All pending**: `--all-pending` → plan all issues in `registered` status -- **Batch size**: `--batch-size N` (default `3`) → max issues per batch +- **Batch size**: `--batch-size N` (default `4`) → max issues per subagent batch ## Output Requirements For each issue: -- Register at least one solution and bind one solution to the issue (updates `.workflow/issues/issues.jsonl` and appends to `.workflow/issues/solutions/{issue-id}.jsonl`). -- Ensure tasks conform to `.claude/workflows/cli-templates/schemas/solution-schema.json`. -- Each task includes quantified `acceptance.criteria` and concrete `acceptance.verification`. +- Register at least one solution and bind one solution to the issue +- Ensure tasks conform to `~/.claude/workflows/cli-templates/schemas/solution-schema.json` +- Each task includes quantified `acceptance.criteria` and concrete `acceptance.verification` Return a final summary JSON: ```json @@ -52,73 +48,166 @@ Return a final summary JSON: ## Workflow -### Step 1: Resolve issue list +### Step 1: Resolve Issue List -- If `--all-pending`: - - Run `ccw issue list --status registered --json` and plan all returned issues. 
-- Else: - - Parse IDs from user input (split by `,`), and ensure each issue exists: - - `ccw issue init --title "Issue "` (safe if already exists) - -### Step 2: Load issue details - -For each issue ID: -- `ccw issue status --json` -- Extract the issue title/context/labels and any discovery hints (affected files, snippets, etc. if present). - -### Step 3: Minimal exploration (evidence-based) - -- If issue context names specific files or symbols: open them first. -- Otherwise: - - Use `rg` to locate relevant code paths by keywords from the title/context. - - Read 3+ similar patterns before proposing refactors or API changes. - -### Step 4: Draft solutions and tasks (schema-driven) - -Default to **one** solution per issue unless there are genuinely different approaches. - -Task rules (from schema): -- `id`: `T1`, `T2`, ... -- `action`: one of `Create|Update|Implement|Refactor|Add|Delete|Configure|Test|Fix` -- `implementation`: step-by-step, executable instructions -- `test.commands`: include at least one command per task when feasible -- `acceptance.criteria`: testable statements -- `acceptance.verification`: concrete steps/commands mapping to criteria -- Prefer small, independently testable tasks; encode dependencies in `depends_on`. - -### Step 5: Register & bind solutions via CLI - -**Create solution** (via CLI endpoint): +**If `--all-pending`:** ```bash -ccw issue solution --data '{"description":"...", "approach":"...", "tasks":[...]}' -# Output: {"id":"SOL-{issue-id}-1", ...} +ccw issue list --status registered --json ``` -**CLI Features:** -| Feature | Description | -|---------|-------------| -| Auto-increment ID | `SOL-{issue-id}-{seq}` (e.g., `SOL-GH-123-1`) | -| Multi-solution | Appends to existing JSONL, supports multiple per issue | -| Trailing newline | Proper JSONL format, no corruption | +**Else (explicit IDs):** +```bash +# For each ID, ensure exists +ccw issue init --title "Issue " 2>/dev/null || true +ccw issue status --json +``` -**Binding:** -- **Single solution**: Auto-bind: `ccw issue bind ` -- **Multiple solutions**: Present alternatives in `pending_selection`, wait for user choice +### Step 2: Group Issues by Similarity -### Step 6: Detect cross-issue file conflicts (best-effort) +Group issues for batch processing (max 4 per batch): -Across the issues planned in this run: -- Build a set of touched files from each solution's `modification_points.file` (and/or task `scope` when explicit files are missing). -- If the same file appears in multiple issues, add it to `conflicts` with all involved issue IDs. -- Recommend a safe execution order (sequential) when conflicts exist. +```bash +# Extract issue metadata for grouping +ccw issue list --status registered --brief --json +``` -### Step 7: Update issue status +Group by: +- Shared tags +- Similar keywords in title +- Related components -After binding, update issue status to `planned`: +### Step 3: Spawn Planning Subagents (Parallel) + +For each batch, spawn a planning subagent: + +```javascript +// Subagent message structure +spawn_agent({ + message: ` +## TASK ASSIGNMENT + +### MANDATORY FIRST STEPS (Agent Execute) +1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first) +2. Read: .workflow/project-tech.json +3. Read: .workflow/project-guidelines.json +4. 
Read schema: ~/.claude/workflows/cli-templates/schemas/solution-schema.json + +--- + +Goal: Plan solutions for ${batch.length} issues with executable task breakdown + +Scope: +- CAN DO: Explore codebase, design solutions, create tasks +- CANNOT DO: Execute solutions, modify production code +- Directory: ${process.cwd()} + +Context: +- Issues: ${batch.map(i => `${i.id}: ${i.title}`).join('\n')} +- Fetch full details: ccw issue status --json + +Deliverables: +- For each issue: Write solution to .workflow/issues/solutions/{issue-id}.jsonl +- Single solution → auto-bind via ccw issue bind +- Multiple solutions → return in pending_selection + +Quality bar: +- Tasks have quantified acceptance.criteria +- Each task includes test.commands +- Solution follows schema exactly +` +}) +``` + +**Batch execution (parallel):** +```javascript +// Launch all batches in parallel +const agentIds = batches.map(batch => spawn_agent({ message: buildPrompt(batch) })) + +// Wait for all agents to complete +const results = wait({ ids: agentIds, timeout_ms: 900000 }) // 15 min + +// Collect results +const allBound = [] +const allPendingSelection = [] +const allConflicts = [] + +for (const id of agentIds) { + if (results.status[id].completed) { + const result = JSON.parse(results.status[id].completed) + allBound.push(...(result.bound || [])) + allPendingSelection.push(...(result.pending_selection || [])) + allConflicts.push(...(result.conflicts || [])) + } +} + +// Close all agents +agentIds.forEach(id => close_agent({ id })) +``` + +### Step 4: Handle Multi-Solution Selection + +If `pending_selection` is non-empty, present options: + +``` +Issue ISS-001 has multiple solutions: +1. SOL-ISS-001-1: Refactor with adapter pattern (3 tasks) +2. SOL-ISS-001-2: Direct implementation (2 tasks) + +Select solution (1-2): +``` + +Bind selected solution: +```bash +ccw issue bind ISS-001 SOL-ISS-001-1 +``` + +### Step 5: Handle Conflicts + +If conflicts detected: +- Low/Medium severity: Auto-resolve with recommended order +- High severity: Present to user for decision + +### Step 6: Update Issue Status + +After binding, update status: ```bash ccw issue update --status planned ``` +### Step 7: Output Summary + +```markdown +## Planning Complete + +**Planned**: 5 issues +**Bound Solutions**: 4 +**Pending Selection**: 1 + +### Bound Solutions +| Issue | Solution | Tasks | +|-------|----------|-------| +| ISS-001 | SOL-ISS-001-1 | 3 | +| ISS-002 | SOL-ISS-002-1 | 2 | + +### Pending Selection +- ISS-003: 2 solutions available (user selection required) + +### Conflicts Detected +- src/auth.ts touched by ISS-001, ISS-002 (resolved: sequential) + +**Next Step**: `/issue:queue` +``` + +## Subagent Role Reference + +Planning subagent uses role file at: `~/.codex/agents/issue-plan-agent.md` + +Role capabilities: +- Codebase exploration (rg, file reading) +- Solution design with task breakdown +- Schema validation +- Solution registration via CLI + ## Quality Checklist Before completing, verify: @@ -130,19 +219,28 @@ Before completing, verify: - [ ] Task acceptance criteria are quantified (not vague) - [ ] Conflicts detected and reported (if multiple issues touch same files) - [ ] Issue status updated to `planned` after binding +- [ ] All subagents closed after completion ## Error Handling | Error | Resolution | |-------|------------| | Issue not found | Auto-create via `ccw issue init` | +| Subagent timeout | Retry with increased timeout or smaller batch | | No solutions generated | Display error, suggest manual planning | | User cancels 
selection | Skip issue, continue with others | | File conflicts | Detect and suggest resolution order | -## Done Criteria +## Start Execution -- A bound solution exists for each issue unless explicitly deferred for user selection. -- All tasks validate against the solution schema fields (especially acceptance criteria + verification). -- The final summary JSON matches the required shape. +Begin by resolving issue list: +```bash +# Default to all pending +ccw issue list --status registered --brief --json + +# Or with explicit IDs +ccw issue status ISS-001 --json +``` + +Then group issues and spawn planning subagents. diff --git a/.codex/prompts/issue-queue.md b/.codex/prompts/issue-queue.md index 2003fee2..7f856e0f 100644 --- a/.codex/prompts/issue-queue.md +++ b/.codex/prompts/issue-queue.md @@ -1,15 +1,13 @@ --- -description: Form execution queue from bound solutions (orders solutions, detects conflicts, assigns groups) -argument-hint: "[--issue ] [--append ]" +description: Form execution queue from bound solutions using subagent for conflict analysis and ordering +argument-hint: "[--queues ] [--issue ] [--append ]" --- # Issue Queue (Codex Version) ## Goal -Create an ordered execution queue from all bound solutions. Analyze inter-solution file conflicts, calculate semantic priorities, and assign parallel/sequential execution groups. - -This workflow is **ordering only** (no execution): it reads bound solutions, detects conflicts, and produces a queue file that `issue-execute.md` can consume. +Create an ordered execution queue from all bound solutions. Uses **subagent pattern** to analyze inter-solution file conflicts, calculate semantic priorities, and assign parallel/sequential execution groups. **Design Principle**: Queue items are **solutions**, not individual tasks. Each executor receives a complete solution with all its tasks. @@ -19,22 +17,18 @@ This workflow is **ordering only** (no execution): it reads bound solutions, det | Operation | Correct | Incorrect | |-----------|---------|-----------| -| List issues (brief) | `ccw issue list --status planned --brief` | `Read('issues.jsonl')` | -| List queue (brief) | `ccw issue queue --brief` | `Read('queues/*.json')` | -| Read issue details | `ccw issue status --json` | `Read('issues.jsonl')` | -| Get next item | `ccw issue next --json` | `Read('queues/*.json')` | -| Update status | `ccw issue update --status ...` | Direct file edit | +| List issues (brief) | `ccw issue list --status planned --brief` | Read issues.jsonl | +| List queue (brief) | `ccw issue queue --brief` | Read queues/*.json | +| Read issue details | `ccw issue status --json` | Read issues.jsonl | +| Get next item | `ccw issue next --json` | Read queues/*.json | | Sync from queue | `ccw issue update --from-queue` | Direct file edit | -**Output Options**: -- `--brief`: JSON with minimal fields (id, status, counts) -- `--json`: Full JSON (for detailed processing) - **ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `queues/*.json` directly. 
## Inputs - **All planned**: Default behavior → queue all issues with `planned` status and bound solutions +- **Multiple queues**: `--queues ` → create N parallel queues - **Specific issue**: `--issue ` → queue only that issue's solution - **Append mode**: `--append ` → append issue to active queue (don't create new) @@ -58,27 +52,19 @@ This workflow is **ordering only** (no execution): it reads bound solutions, det ## Workflow -### Step 1: Generate Queue ID - -Generate queue ID ONCE at start, reuse throughout: +### Step 1: Generate Queue ID and Load Solutions ```bash -# Format: QUE-YYYYMMDD-HHMMSS (UTC) +# Generate queue ID QUEUE_ID="QUE-$(date -u +%Y%m%d-%H%M%S)" -``` -### Step 2: Load Planned Issues - -Get all issues with bound solutions: - -```bash +# Load planned issues with bound solutions ccw issue list --status planned --json ``` -For each issue in the result: -- Extract `id`, `bound_solution_id`, `priority` +For each issue, extract: +- `id`, `bound_solution_id`, `priority` - Read solution from `.workflow/issues/solutions/{issue-id}.jsonl` -- Find the bound solution by matching `solution.id === bound_solution_id` - Collect `files_touched` from all tasks' `modification_points.file` Build solution list: @@ -94,53 +80,166 @@ Build solution list: ] ``` -### Step 3: Detect File Conflicts +### Step 2: Spawn Queue Agent for Conflict Analysis -Build a file → solutions mapping: +Spawn subagent to analyze conflicts and order solutions: ```javascript -fileModifications = { - "src/auth.ts": ["SOL-ISS-001-1", "SOL-ISS-003-1"], - "src/api.ts": ["SOL-ISS-002-1"] +const agentId = spawn_agent({ + message: ` +## TASK ASSIGNMENT + +### MANDATORY FIRST STEPS (Agent Execute) +1. **Read role definition**: ~/.codex/agents/issue-queue-agent.md (MUST read first) +2. Read: .workflow/project-tech.json +3. Read: .workflow/project-guidelines.json + +--- + +Goal: Order ${solutions.length} solutions into execution queue with conflict resolution + +Scope: +- CAN DO: Analyze file conflicts, calculate priorities, assign groups +- CANNOT DO: Execute solutions, modify code +- Queue ID: ${QUEUE_ID} + +Context: +- Solutions: ${JSON.stringify(solutions, null, 2)} +- Project Root: ${process.cwd()} + +Deliverables: +1. Write queue JSON to: .workflow/issues/queues/${QUEUE_ID}.json +2. Update index: .workflow/issues/queues/index.json +3. Return summary JSON + +Quality bar: +- No circular dependencies in DAG +- Parallel groups have NO file overlaps +- Semantic priority calculated (0.0-1.0) +- All conflicts resolved with rationale +` +}) + +// Wait for agent completion +const result = wait({ ids: [agentId], timeout_ms: 600000 }) + +// Parse result +const summary = JSON.parse(result.status[agentId].completed) + +// Check for clarifications +if (summary.clarifications?.length > 0) { + // Handle high-severity conflicts requiring user input + for (const clarification of summary.clarifications) { + console.log(`Conflict: ${clarification.question}`) + console.log(`Options: ${clarification.options.join(', ')}`) + // Get user input and send back + send_input({ + id: agentId, + message: `Conflict ${clarification.conflict_id} resolved: ${userChoice}` + }) + wait({ ids: [agentId], timeout_ms: 300000 }) + } } + +// Close agent +close_agent({ id: agentId }) ``` -Conflicts exist when a file has multiple solutions. 
For each conflict: -- Record the file and involved solutions -- Will be resolved in Step 4 +### Step 3: Multi-Queue Support (if --queues > 1) -### Step 4: Resolve Conflicts & Build DAG +When creating multiple parallel queues: -**Resolution Rules (in priority order):** -1. Higher issue priority first: `critical > high > medium > low` -2. Foundation solutions first: fewer dependencies -3. More tasks = higher priority: larger impact +1. **Partition solutions** to minimize cross-queue file conflicts +2. **Spawn N agents in parallel** (one per queue) +3. **Wait for all agents** with batch wait -For each file conflict: -- Apply resolution rules to determine order -- Add dependency edge: later solution `depends_on` earlier solution -- Record rationale +```javascript +// Partition solutions by file overlap +const partitions = partitionSolutions(solutions, numQueues) -**Semantic Priority Formula:** -``` -Base: critical=0.9, high=0.7, medium=0.5, low=0.3 -Boost: task_count>=5 → +0.1, task_count>=3 → +0.05 -Final: clamp(base + boost, 0.0, 1.0) +// Spawn agents in parallel +const agentIds = partitions.map((partition, i) => + spawn_agent({ + message: buildQueuePrompt(partition, `${QUEUE_ID}-${i+1}`, i+1, numQueues) + }) +) + +// Batch wait for all agents +const results = wait({ ids: agentIds, timeout_ms: 600000 }) + +// Collect clarifications from all agents +const allClarifications = agentIds.flatMap((id, i) => + (results.status[id].clarifications || []).map(c => ({ ...c, queue_id: `${QUEUE_ID}-${i+1}`, agent_id: id })) +) + +// Handle clarifications, then close all agents +agentIds.forEach(id => close_agent({ id })) ``` -### Step 5: Assign Execution Groups +### Step 4: Update Issue Statuses -- **Parallel (P1, P2, ...)**: Solutions with NO file overlaps between them -- **Sequential (S1, S2, ...)**: Solutions that share files must run in order +**MUST use CLI command:** -Group assignment: -1. Start with all solutions in potential parallel group -2. For each file conflict, move later solution to sequential group -3. Assign group IDs: P1 for first parallel batch, S2 for first sequential, etc. +```bash +# Batch update from queue (recommended) +ccw issue update --from-queue ${QUEUE_ID} -### Step 6: Generate Queue Files +# Or individual update +ccw issue update --status queued +``` -**Queue file structure** (`.workflow/issues/queues/{QUEUE_ID}.json`): +### Step 5: Active Queue Check + +```bash +ccw issue queue list --brief +``` + +**Decision:** +- If no active queue: `ccw issue queue switch ${QUEUE_ID}` +- If active queue exists: Present options to user + +``` +Active queue exists. Choose action: +1. Merge into existing queue +2. Use new queue (keep existing in history) +3. 
Cancel (delete new queue) + +Select (1-3): +``` + +### Step 6: Output Summary + +```markdown +## Queue Formed: ${QUEUE_ID} + +**Solutions**: 5 +**Tasks**: 18 +**Execution Groups**: 3 + +### Execution Order +| # | Item | Issue | Tasks | Group | Files | +|---|------|-------|-------|-------|-------| +| 1 | S-1 | ISS-001 | 3 | P1 | src/auth.ts | +| 2 | S-2 | ISS-002 | 2 | P1 | src/api.ts | +| 3 | S-3 | ISS-003 | 4 | S2 | src/auth.ts | + +### Conflicts Resolved +- src/auth.ts: S-1 → S-3 (sequential, S-1 creates module) + +**Next Step**: `/issue:execute --queue ${QUEUE_ID}` +``` + +## Subagent Role Reference + +Queue agent uses role file at: `~/.codex/agents/issue-queue-agent.md` + +Role capabilities: +- File conflict detection (5 types) +- Dependency DAG construction +- Semantic priority calculation +- Execution group assignment + +## Queue File Schema ```json { @@ -161,83 +260,11 @@ Group assignment: "task_count": 3 } ], - "conflicts": [ - { - "type": "file_conflict", - "file": "src/auth.ts", - "solutions": ["S-1", "S-3"], - "resolution": "sequential", - "resolution_order": ["S-1", "S-3"], - "rationale": "S-1 creates auth module, S-3 extends it" - } - ], - "execution_groups": [ - { "id": "P1", "type": "parallel", "solutions": ["S-1", "S-2"], "solution_count": 2 }, - { "id": "S2", "type": "sequential", "solutions": ["S-3"], "solution_count": 1 } - ] + "conflicts": [...], + "execution_groups": [...] } ``` -**Update index** (`.workflow/issues/queues/index.json`): - -```json -{ - "active_queue_id": "QUE-20251228-120000", - "active_queue_ids": ["QUE-20251228-120000"], - "queues": [ - { - "id": "QUE-20251228-120000", - "status": "active", - "priority": 1, - "issue_ids": ["ISS-001", "ISS-002"], - "total_solutions": 3, - "completed_solutions": 0, - "created_at": "2025-12-28T12:00:00Z" - } - ] -} -``` - -## Multi-Queue Management - -Multiple queues can be active simultaneously. The system executes queues in priority order (lower = higher priority). - -**Activate multiple queues:** -```bash -ccw issue queue activate QUE-001,QUE-002,QUE-003 -``` - -**Set queue priority:** -```bash -ccw issue queue priority QUE-001 --priority 1 -ccw issue queue priority QUE-002 --priority 2 -``` - -**Execution behavior with multi-queue:** -- `ccw issue next` automatically selects from active queues in priority order -- Complete all items in Q1 before moving to Q2 (serialized execution) -- Use `--queue QUE-xxx` to target a specific queue - -### Step 7: Update Issue Statuses - -**MUST use CLI command** (NOT direct file operations): - -```bash -# Option 1: Batch update from queue (recommended) -ccw issue update --from-queue # Use active queue -ccw issue update --from-queue QUE-xxx # Use specific queue - -# Option 2: Individual issue update -ccw issue update --status queued -``` - -**⚠️ IMPORTANT**: Do NOT directly modify `issues.jsonl`. Always use CLI command to ensure proper validation and history tracking. - -## Queue Item ID Format - -- Solution items: `S-1`, `S-2`, `S-3`, ... -- Sequential numbering starting from 1 - ## Quality Checklist Before completing, verify: @@ -248,14 +275,7 @@ Before completing, verify: - [ ] Semantic priority calculated for each solution (0.0-1.0) - [ ] Execution groups assigned (P* for parallel, S* for sequential) - [ ] Issue statuses updated to `queued` -- [ ] Summary JSON returned with correct shape - -## Validation Rules - -1. **No cycles**: If resolution creates a cycle, abort and report -2. **Parallel safety**: Solutions in same P* group must have NO file overlaps -3. 
**Sequential order**: Solutions in S* group must be in correct dependency order -4. **Single queue ID**: Use the same queue ID throughout (generated in Step 1) +- [ ] All subagents closed after completion ## Error Handling @@ -264,17 +284,8 @@ Before completing, verify: | No planned issues | Return empty queue summary | | Circular dependency detected | Abort, report cycle details | | Missing solution file | Skip issue, log warning | -| Index file missing | Create new index | -| Index not updated | Auto-fix: Set active_queue_id to new queue | - -## Done Criteria - -- [ ] All planned issues with `bound_solution_id` are included -- [ ] Queue JSON written to `queues/{queue-id}.json` -- [ ] Index updated in `queues/index.json` with `active_queue_id` -- [ ] No circular dependencies in solution DAG -- [ ] Parallel groups have no file overlaps -- [ ] Issue statuses updated to `queued` +| Agent timeout | Retry with increased timeout | +| Clarification rejected | Abort queue formation | ## Start Execution @@ -284,5 +295,4 @@ Begin by listing planned issues: ccw issue list --status planned --json ``` -Then follow the workflow to generate the queue. - +Then extract solution data and spawn queue agent. diff --git a/.codex/skills/ccw-loop-b/README.md b/.codex/skills/ccw-loop-b/README.md new file mode 100644 index 00000000..971dc70d --- /dev/null +++ b/.codex/skills/ccw-loop-b/README.md @@ -0,0 +1,102 @@ +# CCW Loop-B (Hybrid Orchestrator Pattern) + +协调器 + 专用 worker 的迭代开发工作流。 + +## Overview + +CCW Loop-B 采用混合模式设计: +- **Coordinator**: 状态管理、worker 调度、结果汇聚 +- **Workers**: 专注各自领域(develop/debug/validate) + +## Installation + +``` +.codex/skills/ccw-loop-b/ ++-- SKILL.md # Main skill definition ++-- README.md # This file ++-- phases/ +| +-- orchestrator.md # Coordinator logic +| +-- state-schema.md # State structure ++-- specs/ + +-- action-catalog.md # Action catalog + +.codex/agents/ ++-- ccw-loop-b-init.md # Init worker ++-- ccw-loop-b-develop.md # Develop worker ++-- ccw-loop-b-debug.md # Debug worker ++-- ccw-loop-b-validate.md # Validate worker ++-- ccw-loop-b-complete.md # Complete worker +``` + +## Execution Modes + +| Mode | Description | Use Case | +|------|-------------|----------| +| `interactive` | 用户选择 action | 复杂任务,需要人工决策 | +| `auto` | 自动顺序执行 | 标准开发流程 | +| `parallel` | 并行多维度分析 | 需要快速全面评估 | + +## Usage + +```bash +# Interactive (default) +/ccw-loop-b TASK="Implement feature X" + +# Auto mode +/ccw-loop-b --mode=auto TASK="Fix bug Y" + +# Parallel analysis +/ccw-loop-b --mode=parallel TASK="Analyze module Z" + +# Resume +/ccw-loop-b --loop-id=loop-b-xxx +``` + +## Session Files + +``` +.loop/ ++-- {loopId}.json # Master state ++-- {loopId}.workers/ # Worker outputs (JSON) ++-- {loopId}.progress/ # Human-readable progress (MD) +``` + +## Core Pattern + +### Coordinator + Worker + +```javascript +// Coordinator spawns specialized worker +const worker = spawn_agent({ message: buildWorkerPrompt(action) }) + +// Wait for completion +const result = wait({ ids: [worker], timeout_ms: 600000 }) + +// Process result +const output = result.status[worker].completed +updateState(output) + +// Cleanup +close_agent({ id: worker }) +``` + +### Batch Wait (Parallel Mode) + +```javascript +// Spawn multiple workers +const workers = [ + spawn_agent({ message: developPrompt }), + spawn_agent({ message: debugPrompt }), + spawn_agent({ message: validatePrompt }) +] + +// Batch wait +const results = wait({ ids: workers, timeout_ms: 900000 }) + +// Merge results +const merged = mergeOutputs(results) +``` + +## License + 
+MIT diff --git a/.codex/skills/ccw-loop-b/SKILL.md b/.codex/skills/ccw-loop-b/SKILL.md new file mode 100644 index 00000000..0e19004e --- /dev/null +++ b/.codex/skills/ccw-loop-b/SKILL.md @@ -0,0 +1,322 @@ +--- +description: Hybrid orchestrator pattern for iterative development. Coordinator + specialized workers with batch wait support. Triggers on "ccw-loop-b". +argument-hint: TASK="" [--loop-id=] [--mode=] +--- + +# CCW Loop-B - Hybrid Orchestrator Pattern + +协调器 + 专用 worker 的迭代开发工作流。支持单 agent 深度交互、多 agent 并行、混合模式灵活切换。 + +## Arguments + +| Arg | Required | Description | +|-----|----------|-------------| +| TASK | No | Task description (for new loop) | +| --loop-id | No | Existing loop ID to continue | +| --mode | No | `interactive` (default) / `auto` / `parallel` | + +## Architecture + +``` ++------------------------------------------------------------+ +| Main Coordinator | +| 职责: 状态管理 + worker 调度 + 结果汇聚 + 用户交互 | ++------------------------------------------------------------+ + | + +--------------------+--------------------+ + | | | + v v v ++----------------+ +----------------+ +----------------+ +| Worker-Develop | | Worker-Debug | | Worker-Validate| +| 专注: 代码实现 | | 专注: 问题诊断 | | 专注: 测试验证 | ++----------------+ +----------------+ +----------------+ +``` + +## Execution Modes + +### Mode: Interactive (default) + +协调器展示菜单,用户选择 action,spawn 对应 worker 执行。 + +``` +Coordinator -> Show menu -> User selects -> spawn worker -> wait -> Display result -> Loop +``` + +### Mode: Auto + +自动按预设顺序执行,worker 完成后自动切换到下一阶段。 + +``` +Init -> Develop -> [if issues] Debug -> Validate -> [if fail] Loop back -> Complete +``` + +### Mode: Parallel + +并行 spawn 多个 worker 分析不同维度,batch wait 汇聚结果。 + +``` +Coordinator -> spawn [develop, debug, validate] in parallel -> wait({ ids: all }) -> Merge -> Decide +``` + +## Session Structure + +``` +.loop/ ++-- {loopId}.json # Master state ++-- {loopId}.workers/ # Worker outputs +| +-- develop.output.json +| +-- debug.output.json +| +-- validate.output.json ++-- {loopId}.progress/ # Human-readable progress + +-- develop.md + +-- debug.md + +-- validate.md + +-- summary.md +``` + +## Subagent API + +| API | 作用 | +|-----|------| +| `spawn_agent({ message })` | 创建 agent,返回 `agent_id` | +| `wait({ ids, timeout_ms })` | 等待结果(唯一取结果入口) | +| `send_input({ id, message })` | 继续交互 | +| `close_agent({ id })` | 关闭回收 | + +## Implementation + +### Coordinator Logic + +```javascript +// ==================== HYBRID ORCHESTRATOR ==================== + +// 1. Initialize +const loopId = args['--loop-id'] || generateLoopId() +const mode = args['--mode'] || 'interactive' +let state = readOrCreateState(loopId, taskDescription) + +// 2. 
+// 2. Mode selection
+switch (mode) {
+  case 'interactive':
+    await runInteractiveMode(loopId, state)
+    break
+
+  case 'auto':
+    await runAutoMode(loopId, state)
+    break
+
+  case 'parallel':
+    await runParallelMode(loopId, state)
+    break
+}
+```
+
+### Interactive Mode (single-agent interaction, workers spawned on demand)
+
+```javascript
+async function runInteractiveMode(loopId, state) {
+  while (state.status === 'running') {
+    // Show menu, get user choice
+    const action = await showMenuAndGetChoice(state)
+
+    if (action === 'exit') break
+
+    // Spawn specialized worker for the action
+    const workerId = spawn_agent({
+      message: buildWorkerPrompt(action, loopId, state)
+    })
+
+    // Wait for worker completion
+    const result = wait({ ids: [workerId], timeout_ms: 600000 })
+    const output = result.status[workerId].completed
+
+    // Update state and display result
+    state = updateState(loopId, action, output)
+    displayResult(output)
+
+    // Cleanup worker
+    close_agent({ id: workerId })
+  }
+}
+```
+
+### Auto Mode (sequential worker chain)
+
+```javascript
+async function runAutoMode(loopId, state) {
+  const actionSequence = ['init', 'develop', 'debug', 'validate', 'complete']
+  let currentIndex = state.skill_state?.action_index || 0
+
+  while (currentIndex < actionSequence.length && state.status === 'running') {
+    const action = actionSequence[currentIndex]
+
+    // Spawn worker
+    const workerId = spawn_agent({
+      message: buildWorkerPrompt(action, loopId, state)
+    })
+
+    const result = wait({ ids: [workerId], timeout_ms: 600000 })
+    const output = result.status[workerId].completed
+
+    // Parse worker result to determine next step
+    const workerResult = parseWorkerResult(output)
+
+    // Update state
+    state = updateState(loopId, action, output)
+
+    close_agent({ id: workerId })
+
+    // Determine next action
+    if (workerResult.loop_back_to) {
+      // Loop back to develop or debug
+      currentIndex = actionSequence.indexOf(workerResult.loop_back_to)
+    } else if (workerResult.status === 'failed') {
+      // Stop on failure
+      break
+    } else {
+      currentIndex++
+    }
+  }
+}
+```
+
+### Parallel Mode (batch spawn + wait)
+
+```javascript
+async function runParallelMode(loopId, state) {
+  // Spawn multiple workers in parallel
+  const workers = {
+    develop: spawn_agent({ message: buildWorkerPrompt('develop', loopId, state) }),
+    debug: spawn_agent({ message: buildWorkerPrompt('debug', loopId, state) }),
+    validate: spawn_agent({ message: buildWorkerPrompt('validate', loopId, state) })
+  }
+
+  // Batch wait for all workers
+  const results = wait({
+    ids: Object.values(workers),
+    timeout_ms: 900000  // 15 minutes for all
+  })
+
+  // Collect outputs
+  const outputs = {}
+  for (const [role, workerId] of Object.entries(workers)) {
+    outputs[role] = results.status[workerId].completed
+    close_agent({ id: workerId })
+  }
+
+  // Merge and analyze results
+  const mergedAnalysis = mergeWorkerOutputs(outputs)
+
+  // Update state with merged results
+  updateState(loopId, 'parallel-analysis', mergedAnalysis)
+
+  // Coordinator decides next action based on merged results
+  const decision = decideNextAction(mergedAnalysis)
+  return decision
+}
+```
+
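+`mergeWorkerOutputs` and `decideNextAction` above are left unspecified; one possible sketch is shown below. The decision rules are illustrative assumptions, and `parseWorkerResult` is the helper defined in phases/orchestrator.md.
+
+```javascript
+// Illustrative only: fold per-role outputs into one analysis list
+function mergeWorkerOutputs(outputs) {
+  return Object.entries(outputs).map(([role, output]) => ({
+    role,
+    result: parseWorkerResult(output)
+  }))
+}
+
+// Illustrative only: pick the next action from the merged analysis
+function decideNextAction(analysis) {
+  if (analysis.some(entry => entry.result.status === 'failed')) return 'debug'
+  if (analysis.every(entry => entry.result.status === 'success')) return 'complete'
+  return 'develop'
+}
+```
+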
+### Worker Prompt Builder
+
+```javascript
+function buildWorkerPrompt(action, loopId, state) {
+  const workerRoles = {
+    develop: '~/.codex/agents/ccw-loop-b-develop.md',
+    debug: '~/.codex/agents/ccw-loop-b-debug.md',
+    validate: '~/.codex/agents/ccw-loop-b-validate.md',
+    init: '~/.codex/agents/ccw-loop-b-init.md',
+    complete: '~/.codex/agents/ccw-loop-b-complete.md'
+  }
+
+  return `
+## TASK ASSIGNMENT
+
+### MANDATORY FIRST STEPS (Agent Execute)
+1. **Read role definition**: ${workerRoles[action]} (MUST read first)
+2. Read: .workflow/project-tech.json
+3. Read: .workflow/project-guidelines.json
+
+---
+
+## LOOP CONTEXT
+
+- **Loop ID**: ${loopId}
+- **Action**: ${action}
+- **State File**: .loop/${loopId}.json
+- **Output File**: .loop/${loopId}.workers/${action}.output.json
+- **Progress File**: .loop/${loopId}.progress/${action}.md
+
+## CURRENT STATE
+
+${JSON.stringify(state, null, 2)}
+
+## TASK DESCRIPTION
+
+${state.description}
+
+## EXPECTED OUTPUT
+
+\`\`\`
+WORKER_RESULT:
+- action: ${action}
+- status: success | failed | needs_input
+- summary:
+- files_changed: [list]
+- next_suggestion:
+- loop_back_to:
+
+DETAILED_OUTPUT:
+
+\`\`\`
+
+Execute the ${action} action now.
+`
+}
+```
+
+## Worker Roles
+
+| Worker | Role File | Focus |
+|--------|-----------|-------|
+| init | ccw-loop-b-init.md | Session initialization, task parsing |
+| develop | ccw-loop-b-develop.md | Code implementation, refactoring |
+| debug | ccw-loop-b-debug.md | Problem diagnosis, hypothesis testing |
+| validate | ccw-loop-b-validate.md | Test execution, coverage |
+| complete | ccw-loop-b-complete.md | Summary and wrap-up |
+
+## State Schema
+
+See [phases/state-schema.md](phases/state-schema.md)
+
+## Usage
+
+```bash
+# Interactive mode (default)
+/ccw-loop-b TASK="Implement user authentication"
+
+# Auto mode
+/ccw-loop-b --mode=auto TASK="Fix login bug"
+
+# Parallel analysis mode
+/ccw-loop-b --mode=parallel TASK="Analyze and improve payment module"
+
+# Resume existing loop
+/ccw-loop-b --loop-id=loop-b-20260122-abc123
+```
+
+## Error Handling
+
+| Situation | Action |
+|-----------|--------|
+| Worker timeout | Use `send_input` to ask the worker to converge |
+| Worker failed | Log the error; coordinator decides whether to retry |
+| Batch wait partial timeout | Continue with the results that completed |
+| State corrupted | Rebuild from the progress files |
+
+## Best Practices
+
+1. **Keep the coordinator lightweight**: it only schedules and manages state; the actual work goes to workers
+2. **Single responsibility per worker**: each worker focuses on one domain
+3. **Standardize results**: worker output follows the shared WORKER_RESULT format
+4. **Switch modes flexibly**: pick the mode that matches task complexity
+5. **Clean up promptly**: call close_agent after a worker finishes to release resources
diff --git a/.codex/skills/ccw-loop-b/phases/orchestrator.md b/.codex/skills/ccw-loop-b/phases/orchestrator.md
new file mode 100644
index 00000000..c65ce6ee
--- /dev/null
+++ b/.codex/skills/ccw-loop-b/phases/orchestrator.md
@@ -0,0 +1,257 @@
+# Orchestrator (Hybrid Pattern)
+
+The coordinator is responsible for state management, worker scheduling, and result aggregation.
+
+## Role
+
+```
+Read state -> Select mode -> Spawn workers -> Wait results -> Merge -> Update state -> Loop/Exit
+```
+
+## State Management
+
+### Read State
+
+```javascript
+function readState(loopId) {
+  const stateFile = `.loop/${loopId}.json`
+  return fs.existsSync(stateFile)
+    ? JSON.parse(Read(stateFile))
+    : null
+}
+```
+
+### Create State
+
+```javascript
+function createState(loopId, taskDescription, mode) {
+  const now = new Date().toISOString()
+
+  return {
+    loop_id: loopId,
+    title: taskDescription.substring(0, 100),
+    description: taskDescription,
+    mode: mode,
+    status: 'running',
+    current_iteration: 0,
+    max_iterations: 10,
+    created_at: now,
+    updated_at: now,
+    skill_state: {
+      phase: 'init',
+      action_index: 0,
+      workers_completed: [],
+      parallel_results: null
+    }
+  }
+}
+```
+
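+### Save State
+
+`saveState` is used by the mode handlers below but not defined in this file. A minimal sketch, assuming a `Write` helper that mirrors the `Read` call above, is:
+
+```javascript
+function saveState(loopId, state) {
+  // Persist the master state and refresh its timestamp (illustrative only)
+  state.updated_at = new Date().toISOString()
+  Write(`.loop/${loopId}.json`, JSON.stringify(state, null, 2))
+  return state
+}
+```
+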
+## Mode Handlers
+
+### Interactive Mode
+
+```javascript
+async function runInteractiveMode(loopId, state) {
+  while (state.status === 'running') {
+    // 1. Show menu
+    const action = await showMenu(state)
+    if (action === 'exit') break
+
+    // 2. Spawn worker
+    const worker = spawn_agent({
+      message: buildWorkerPrompt(action, loopId, state)
+    })
+
+    // 3. Wait for result
+    let result = wait({ ids: [worker], timeout_ms: 600000 })
+
+    // 4. Handle timeout: ask the worker to converge, then retry once
+    if (result.timed_out) {
+      send_input({ id: worker, message: 'Please converge and output WORKER_RESULT' })
+      result = wait({ ids: [worker], timeout_ms: 300000 })
+      if (result.timed_out) {
+        console.log('Worker timeout, skipping')
+        close_agent({ id: worker })
+        continue
+      }
+    }
+
+    // 5. Process output
+    const output = result.status[worker].completed
+    state = processWorkerOutput(loopId, action, output, state)
+
+    // 6. Cleanup
+    close_agent({ id: worker })
+
+    // 7. Display result
+    displayResult(output)
+  }
+}
+```
+
+### Auto Mode
+
+```javascript
+async function runAutoMode(loopId, state) {
+  const sequence = ['init', 'develop', 'debug', 'validate', 'complete']
+  let idx = state.skill_state?.action_index || 0
+
+  while (idx < sequence.length && state.status === 'running') {
+    const action = sequence[idx]
+
+    // Spawn and wait
+    const worker = spawn_agent({ message: buildWorkerPrompt(action, loopId, state) })
+    const result = wait({ ids: [worker], timeout_ms: 600000 })
+    const output = result.status[worker].completed
+    close_agent({ id: worker })
+
+    // Parse result
+    const workerResult = parseWorkerResult(output)
+    state = processWorkerOutput(loopId, action, output, state)
+
+    // Determine next
+    if (workerResult.loop_back_to) {
+      idx = sequence.indexOf(workerResult.loop_back_to)
+    } else if (workerResult.status === 'failed') {
+      break
+    } else {
+      idx++
+    }
+
+    // Update action index
+    state.skill_state.action_index = idx
+    saveState(loopId, state)
+  }
+}
+```
+
+### Parallel Mode
+
+```javascript
+async function runParallelMode(loopId, state) {
+  // Spawn all workers
+  const workers = {
+    develop: spawn_agent({ message: buildWorkerPrompt('develop', loopId, state) }),
+    debug: spawn_agent({ message: buildWorkerPrompt('debug', loopId, state) }),
+    validate: spawn_agent({ message: buildWorkerPrompt('validate', loopId, state) })
+  }
+
+  // Batch wait
+  const results = wait({
+    ids: Object.values(workers),
+    timeout_ms: 900000
+  })
+
+  // Collect outputs
+  const outputs = {}
+  for (const [role, id] of Object.entries(workers)) {
+    if (results.status[id].completed) {
+      outputs[role] = results.status[id].completed
+    }
+    close_agent({ id })
+  }
+
+  // Merge analysis
+  state.skill_state.parallel_results = outputs
+  saveState(loopId, state)
+
+  // Coordinator analyzes merged results
+  return analyzeAndDecide(outputs)
+}
+```
+
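+`processWorkerOutput` is called by the interactive and auto handlers above but not specified here. A possible sketch, assuming the worker output file follows the structure in phases/state-schema.md and reusing the `Write` and `saveState` helpers above, is:
+
+```javascript
+function processWorkerOutput(loopId, action, output, state) {
+  // Parse and persist the worker output for later inspection (illustrative only)
+  const parsed = parseWorkerResult(output)
+  Write(`.loop/${loopId}.workers/${action}.output.json`,
+        JSON.stringify({ ...parsed, timestamp: new Date().toISOString() }, null, 2))
+
+  // Record completion in the master state
+  state.skill_state.workers_completed.push(action)
+  state.current_iteration += 1
+  return saveState(loopId, state)
+}
+```
+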
+## Worker Prompt Template
+
+```javascript
+function buildWorkerPrompt(action, loopId, state) {
+  const roleFiles = {
+    init: '~/.codex/agents/ccw-loop-b-init.md',
+    develop: '~/.codex/agents/ccw-loop-b-develop.md',
+    debug: '~/.codex/agents/ccw-loop-b-debug.md',
+    validate: '~/.codex/agents/ccw-loop-b-validate.md',
+    complete: '~/.codex/agents/ccw-loop-b-complete.md'
+  }
+
+  return `
+## TASK ASSIGNMENT
+
+### MANDATORY FIRST STEPS
+1. **Read role definition**: ${roleFiles[action]}
+2. Read: .workflow/project-tech.json
+3. Read: .workflow/project-guidelines.json
+
+---
+
+## CONTEXT
+- Loop ID: ${loopId}
+- Action: ${action}
+- State: ${JSON.stringify(state, null, 2)}
+
+## TASK
+${state.description}
+
+## OUTPUT FORMAT
+\`\`\`
+WORKER_RESULT:
+- action: ${action}
+- status: success | failed | needs_input
+- summary:
+- files_changed: []
+- next_suggestion:
+- loop_back_to:
+
+DETAILED_OUTPUT:
+
+\`\`\`
+`
+}
+```
+
+## Result Processing
+
+```javascript
+function parseWorkerResult(output) {
+  const result = {
+    action: 'unknown',
+    status: 'unknown',
+    summary: '',
+    files_changed: [],
+    next_suggestion: null,
+    loop_back_to: null
+  }
+
+  const match = output.match(/WORKER_RESULT:\s*([\s\S]*?)(?:DETAILED_OUTPUT:|$)/)
+  if (match) {
+    const lines = match[1].split('\n')
+    for (const line of lines) {
+      const m = line.match(/^-\s*(\w+):\s*(.+)$/)
+      if (m) {
+        const [, key, value] = m
+        if (key === 'files_changed') {
+          try { result.files_changed = JSON.parse(value) } catch {}
+        } else {
+          result[key] = value.trim()
+        }
+      }
+    }
+  }
+
+  return result
+}
+```
+
+## Termination Conditions
+
+1. User exits (interactive)
+2. Sequence complete (auto)
+3. Worker failed with no recovery
+4. Max iterations reached
+5. API paused/stopped
+
+## Best Practices
+
+1. **Worker lifecycle**: spawn → wait → close; do not keep workers around
+2. **Persist results**: worker output is written to `.loop/{loopId}.workers/`
+3. **Sync state**: update the state after every worker completes
+4. **Timeout handling**: use send_input to request convergence; if it times out again, skip the worker
diff --git a/.codex/skills/ccw-loop-b/phases/state-schema.md b/.codex/skills/ccw-loop-b/phases/state-schema.md
new file mode 100644
index 00000000..d7581e96
--- /dev/null
+++ b/.codex/skills/ccw-loop-b/phases/state-schema.md
@@ -0,0 +1,181 @@
+# State Schema (CCW Loop-B)
+
+## Master State Structure
+
+```json
+{
+  "loop_id": "loop-b-20260122-abc123",
+  "title": "Implement user authentication",
+  "description": "Full task description here",
+  "mode": "interactive | auto | parallel",
+  "status": "running | paused | completed | failed",
+  "current_iteration": 3,
+  "max_iterations": 10,
+  "created_at": "2026-01-22T10:00:00.000Z",
+  "updated_at": "2026-01-22T10:30:00.000Z",
+
+  "skill_state": {
+    "phase": "develop | debug | validate | complete",
+    "action_index": 2,
+    "workers_completed": ["init", "develop"],
+    "parallel_results": null,
+    "pending_tasks": [],
+    "completed_tasks": [],
+    "findings": []
+  }
+}
+```
+
+## Field Descriptions
+
+### Core Fields (API Compatible)
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `loop_id` | string | Unique identifier |
+| `title` | string | Short title (max 100 chars) |
+| `description` | string | Full task description |
+| `mode` | enum | Execution mode |
+| `status` | enum | Current status |
+| `current_iteration` | number | Iteration counter |
+| `max_iterations` | number | Safety limit |
+| `created_at` | ISO string | Creation timestamp |
+| `updated_at` | ISO string | Last update timestamp |
+
+### Skill State Fields
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `phase` | enum | Current execution phase |
+| `action_index` | number | Position in action sequence (auto mode) |
+| `workers_completed` | array | List of completed worker actions |
+| `parallel_results` | object | Merged results from parallel mode |
+| `pending_tasks` | array | Tasks waiting to be executed |
+| `completed_tasks` | array | Tasks already done |
+| `findings` | array | Discoveries during execution |
+
+## Worker Output Structure
+
+Each worker writes to `.loop/{loopId}.workers/{action}.output.json`:
+
+```json
+{
+  "action":
"develop", + "status": "success", + "summary": "Implemented 3 functions", + "files_changed": ["src/auth.ts", "src/utils.ts"], + "next_suggestion": "validate", + "loop_back_to": null, + "timestamp": "2026-01-22T10:15:00.000Z", + "detailed_output": { + "tasks_completed": [ + { "id": "T1", "description": "Create auth module" } + ], + "metrics": { + "lines_added": 150, + "lines_removed": 20 + } + } +} +``` + +## Progress File Structure + +Human-readable progress in `.loop/{loopId}.progress/{action}.md`: + +```markdown +# Develop Progress + +## Session: loop-b-20260122-abc123 + +### Iteration 1 (2026-01-22 10:15) + +**Task**: Implement auth module + +**Changes**: +- Created `src/auth.ts` with login/logout functions +- Added JWT token handling in `src/utils.ts` + +**Status**: Success + +--- + +### Iteration 2 (2026-01-22 10:30) + +... +``` + +## Status Transitions + +``` + +--------+ + | init | + +--------+ + | + v ++------> +---------+ +| | develop | +| +---------+ +| | +| +--------+--------+ +| | | +| v v +| +-------+ +---------+ +| | debug |<------| validate| +| +-------+ +---------+ +| | | +| +--------+--------+ +| | +| v +| [needs fix?] +| yes | | no +| v v ++------------+ +----------+ + | complete | + +----------+ +``` + +## Parallel Results Schema + +When `mode === 'parallel'`: + +```json +{ + "parallel_results": { + "develop": { + "status": "success", + "summary": "...", + "suggestions": [] + }, + "debug": { + "status": "success", + "issues_found": [], + "suggestions": [] + }, + "validate": { + "status": "success", + "test_results": {}, + "coverage": {} + }, + "merged_at": "2026-01-22T10:45:00.000Z" + } +} +``` + +## Directory Structure + +``` +.loop/ ++-- loop-b-20260122-abc123.json # Master state ++-- loop-b-20260122-abc123.workers/ +| +-- init.output.json +| +-- develop.output.json +| +-- debug.output.json +| +-- validate.output.json +| +-- complete.output.json ++-- loop-b-20260122-abc123.progress/ + +-- develop.md + +-- debug.md + +-- validate.md + +-- summary.md +```