Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-02-10 02:24:35 +08:00)
Remove deprecated issue management skills: issue-discover, issue-new, issue-plan, and issue-queue. These skills have been deleted to streamline the codebase and improve maintainability.
@@ -1,365 +0,0 @@
|
||||
---
|
||||
name: issue-discover-by-prompt
|
||||
description: Discover issues from user prompt with iterative multi-agent exploration and cross-module comparison
|
||||
argument-hint: "<prompt> [--scope=src/**] [--depth=standard|deep] [--max-iterations=5]"
|
||||
---
|
||||
|
||||
# Issue Discovery by Prompt (Codex Version)
|
||||
|
||||
## Goal
|
||||
|
||||
Prompt-driven issue discovery with intelligent planning. Instead of fixed perspectives, this command:
|
||||
|
||||
1. **Analyzes user intent** to understand what to find
|
||||
2. **Plans exploration strategy** dynamically based on codebase structure
|
||||
3. **Executes iterative exploration** with feedback loops
|
||||
4. **Performs cross-module comparison** when detecting comparison intent
|
||||
|
||||
**Core Difference from `issue-discover.md`**:
|
||||
- `issue-discover`: Pre-defined perspectives (bug, security, etc.), parallel execution
|
||||
- `issue-discover-by-prompt`: User-driven prompt, planned strategy, iterative exploration
|
||||
|
||||
## Inputs
|
||||
|
||||
- **Prompt**: Natural language description of what to find
|
||||
- **Scope**: `--scope=src/**` - File pattern to explore (default: `**/*`)
|
||||
- **Depth**: `--depth=standard|deep` - standard (3 iterations) or deep (5+ iterations)
|
||||
- **Max Iterations**: `--max-iterations=N` (default: 5)
|
||||
|
||||
## Output Requirements
|
||||
|
||||
**Generate Files:**
|
||||
1. `.workflow/issues/discoveries/{discovery-id}/discovery-state.json` - Session state with iteration tracking
|
||||
2. `.workflow/issues/discoveries/{discovery-id}/iterations/{N}/{dimension}.json` - Per-iteration findings
|
||||
3. `.workflow/issues/discoveries/{discovery-id}/comparison-analysis.json` - Cross-dimension comparison (if applicable)
|
||||
4. `.workflow/issues/discoveries/{discovery-id}/discovery-issues.jsonl` - Generated issue candidates
|
||||
|
||||
**Return Summary:**
|
||||
```json
|
||||
{
|
||||
"discovery_id": "DBP-YYYYMMDD-HHmmss",
|
||||
"prompt": "Check if frontend API calls match backend implementations",
|
||||
"intent_type": "comparison",
|
||||
"dimensions": ["frontend-calls", "backend-handlers"],
|
||||
"total_iterations": 3,
|
||||
"total_findings": 24,
|
||||
"issues_generated": 12,
|
||||
"comparison_match_rate": 0.75
|
||||
}
|
||||
```
|
||||
|
||||
## Workflow
|
||||
|
||||
### Step 1: Initialize Discovery Session
|
||||
|
||||
```bash
|
||||
# Generate discovery ID
|
||||
DISCOVERY_ID="DBP-$(date -u +%Y%m%d-%H%M%S)"
|
||||
OUTPUT_DIR=".workflow/issues/discoveries/${DISCOVERY_ID}"
|
||||
|
||||
# Create directory structure
|
||||
mkdir -p "${OUTPUT_DIR}/iterations"
|
||||
```
|
||||
|
||||
Detect intent type from prompt:
|
||||
- `comparison`: Contains "match", "compare", "versus", "vs", "between"
|
||||
- `search`: Contains "find", "locate", "where"
|
||||
- `verification`: Contains "verify", "check", "ensure"
|
||||
- `audit`: Contains "audit", "review", "analyze"
|
||||
|
||||
### Step 2: Gather Context
|
||||
|
||||
Use `rg` and file exploration to understand codebase structure:
|
||||
|
||||
```bash
|
||||
# Find relevant modules based on prompt keywords
|
||||
rg -l "<keyword1>" --type ts | head -10
|
||||
rg -l "<keyword2>" --type ts | head -10
|
||||
|
||||
# Understand project structure
|
||||
ls -la src/
|
||||
cat .workflow/project-tech.json 2>/dev/null || echo "No project-tech.json"
|
||||
```
|
||||
|
||||
Build context package:
|
||||
```json
|
||||
{
|
||||
"prompt_keywords": ["frontend", "API", "backend"],
|
||||
"codebase_structure": { "modules": [...], "patterns": [...] },
|
||||
"relevant_modules": ["src/api/", "src/services/"]
|
||||
}
|
||||
```
|
||||
|
||||
### Step 3: Plan Exploration Strategy
|
||||
|
||||
Analyze the prompt and context to design exploration strategy.
|
||||
|
||||
**Output exploration plan:**
|
||||
```json
|
||||
{
|
||||
"intent_analysis": {
|
||||
"type": "comparison",
|
||||
"primary_question": "Do frontend API calls match backend implementations?",
|
||||
"sub_questions": ["Are endpoints aligned?", "Are payloads compatible?"]
|
||||
},
|
||||
"dimensions": [
|
||||
{
|
||||
"name": "frontend-calls",
|
||||
"description": "Client-side API calls and error handling",
|
||||
"search_targets": ["src/api/**", "src/hooks/**"],
|
||||
"focus_areas": ["fetch calls", "error boundaries", "response parsing"]
|
||||
},
|
||||
{
|
||||
"name": "backend-handlers",
|
||||
"description": "Server-side API implementations",
|
||||
"search_targets": ["src/server/**", "src/routes/**"],
|
||||
"focus_areas": ["endpoint handlers", "response schemas", "error responses"]
|
||||
}
|
||||
],
|
||||
"comparison_matrix": {
|
||||
"dimension_a": "frontend-calls",
|
||||
"dimension_b": "backend-handlers",
|
||||
"comparison_points": [
|
||||
{"aspect": "endpoints", "frontend_check": "fetch URLs", "backend_check": "route paths"},
|
||||
{"aspect": "methods", "frontend_check": "HTTP methods used", "backend_check": "methods accepted"},
|
||||
{"aspect": "payloads", "frontend_check": "request body structure", "backend_check": "expected schema"},
|
||||
{"aspect": "responses", "frontend_check": "response parsing", "backend_check": "response format"}
|
||||
]
|
||||
},
|
||||
"estimated_iterations": 3,
|
||||
"termination_conditions": ["All comparison points verified", "No new findings in last iteration"]
|
||||
}
|
||||
```
|
||||
|
||||
### Step 4: Iterative Exploration
|
||||
|
||||
Execute iterations until termination conditions are met:
|
||||
|
||||
```
|
||||
WHILE iteration < max_iterations AND shouldContinue:
|
||||
1. Plan iteration focus based on previous findings
|
||||
2. Explore each dimension
|
||||
3. Collect and analyze findings
|
||||
4. Cross-reference between dimensions
|
||||
5. Check convergence
|
||||
```
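
A minimal JavaScript sketch of this driver loop, with the per-step logic injected as callbacks (names and signatures are illustrative, not part of the command):

```javascript
// Sketch only: `steps` bundles the per-iteration logic described in 1-5 below.
function runExploration(plan, steps, maxIterations = 5) {
  let findings = [];
  for (let iteration = 1; iteration <= maxIterations; iteration++) {
    const focus = steps.planIteration(plan, findings);                   // 1. plan iteration focus
    const batch = plan.dimensions.flatMap(d => steps.explore(d, focus)); // 2. explore each dimension
    findings = findings.concat(batch);                                   // 3. collect findings
    steps.crossReference(batch, plan.dimensions);                        // 4. cross-reference dimensions
    if (steps.checkConvergence(batch, findings).converged) break;        // 5. stop once converged
  }
  return findings;
}
```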
|
||||
|
||||
**For each iteration:**
|
||||
|
||||
1. **Search for relevant code** using `rg`:
|
||||
```bash
|
||||
# Based on dimension focus areas
|
||||
rg "fetch\s*\(" --type ts -C 3 | head -50
|
||||
rg "app\.(get|post|put|delete)" --type ts -C 3 | head -50
|
||||
```
|
||||
|
||||
2. **Analyze and record findings**:
|
||||
```json
|
||||
{
|
||||
"dimension": "frontend-calls",
|
||||
"iteration": 1,
|
||||
"findings": [
|
||||
{
|
||||
"id": "F-001",
|
||||
"title": "Undefined endpoint in UserService",
|
||||
"category": "endpoint-mismatch",
|
||||
"file": "src/api/userService.ts",
|
||||
"line": 42,
|
||||
"snippet": "fetch('/api/users/profile')",
|
||||
"related_dimension": "backend-handlers",
|
||||
"confidence": 0.85
|
||||
}
|
||||
],
|
||||
"coverage": {
|
||||
"files_explored": 15,
|
||||
"areas_covered": ["fetch calls", "axios instances"],
|
||||
"areas_remaining": ["graphql queries"]
|
||||
},
|
||||
"leads": [
|
||||
{"description": "Check GraphQL mutations", "suggested_search": "mutation.*User"}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
3. **Cross-reference findings** between dimensions:
|
||||
```javascript
|
||||
// For each finding in dimension A, look for related code in dimension B
|
||||
if (finding.related_dimension) {
|
||||
searchForRelatedCode(finding, otherDimension);
|
||||
}
|
||||
```
|
||||
|
||||
4. **Check convergence**:
|
||||
```javascript
|
||||
const confidence = calculateConfidence(cumulativeFindings);
const convergence = {
  newDiscoveries: newFindings.length,
  confidence,
  converged: newFindings.length === 0 || confidence > 0.9
};
|
||||
```
|
||||
|
||||
### Step 5: Cross-Analysis (for comparison intent)
|
||||
|
||||
If intent is comparison, analyze findings across dimensions:
|
||||
|
||||
```javascript
|
||||
for (const point of comparisonMatrix.comparison_points) {
|
||||
  const aFindings = findings.filter(f =>
    f.dimension === comparisonMatrix.dimension_a && f.category.includes(point.aspect)
  );
  const bFindings = findings.filter(f =>
    f.dimension === comparisonMatrix.dimension_b && f.category.includes(point.aspect)
  );
|
||||
|
||||
// Find discrepancies
|
||||
const discrepancies = compareFindings(aFindings, bFindings, point);
|
||||
|
||||
// Calculate match rate
|
||||
const matchRate = calculateMatchRate(aFindings, bFindings);
|
||||
}
|
||||
```
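
`calculateMatchRate` is not defined in this document; a minimal interpretation, assuming a dimension-A finding counts as matched when dimension B has a finding with the same caller-chosen key (for example a normalized endpoint path), could be:

```javascript
// Assumption: "match rate" = share of dimension-A findings with a counterpart in dimension B.
function calculateMatchRate(aFindings, bFindings, keyOf = f => f.title) {
  if (aFindings.length === 0) return 1.0;
  const bKeys = new Set(bFindings.map(keyOf));
  const matched = aFindings.filter(f => bKeys.has(keyOf(f))).length;
  return matched / aFindings.length;
}
```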
|
||||
|
||||
Write to `comparison-analysis.json`:
|
||||
```json
|
||||
{
|
||||
"matrix": { "dimension_a": "...", "dimension_b": "...", "comparison_points": [...] },
|
||||
"results": [
|
||||
{
|
||||
"aspect": "endpoints",
|
||||
"dimension_a_count": 15,
|
||||
"dimension_b_count": 12,
|
||||
"discrepancies": [
|
||||
{"frontend": "/api/users/profile", "backend": "NOT_FOUND", "type": "missing_endpoint"}
|
||||
],
|
||||
"match_rate": 0.80
|
||||
}
|
||||
],
|
||||
"summary": {
|
||||
"total_discrepancies": 5,
|
||||
"overall_match_rate": 0.75,
|
||||
"critical_mismatches": ["endpoints", "payloads"]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Step 6: Generate Issues
|
||||
|
||||
Convert high-confidence findings to issues:
|
||||
|
||||
```bash
|
||||
# For each finding with confidence >= 0.7 or priority critical/high
|
||||
echo '{"id":"ISS-DBP-001","title":"Missing backend endpoint for /api/users/profile",...}' >> ${OUTPUT_DIR}/discovery-issues.jsonl
|
||||
```
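
A hedged JavaScript sketch of the same conversion (the threshold mirrors the comment above; the issue shape is illustrative and should follow the issue schema used elsewhere in the workflow):

```javascript
const fs = require('fs');

// Append qualifying findings to discovery-issues.jsonl as JSON Lines.
function emitIssues(findings, outputDir) {
  const selected = findings.filter(f =>
    f.confidence >= 0.7 || ['critical', 'high'].includes(f.priority)
  );
  const lines = selected.map((f, i) => JSON.stringify({
    id: `ISS-DBP-${String(i + 1).padStart(3, '0')}`,
    title: f.title,
    priority: f.priority || 'medium',
    source: 'discovery',
    context: `${f.file}:${f.line} - ${f.snippet}`
  }));
  if (lines.length) fs.appendFileSync(`${outputDir}/discovery-issues.jsonl`, lines.join('\n') + '\n');
}
```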
|
||||
|
||||
### Step 7: Update Final State
|
||||
|
||||
```json
|
||||
{
|
||||
"discovery_id": "DBP-...",
|
||||
"type": "prompt-driven",
|
||||
"prompt": "...",
|
||||
"intent_type": "comparison",
|
||||
"phase": "complete",
|
||||
"created_at": "...",
|
||||
"updated_at": "...",
|
||||
"iterations": [
|
||||
{"number": 1, "findings_count": 10, "new_discoveries": 10, "confidence": 0.6},
|
||||
{"number": 2, "findings_count": 18, "new_discoveries": 8, "confidence": 0.75},
|
||||
{"number": 3, "findings_count": 24, "new_discoveries": 6, "confidence": 0.85}
|
||||
],
|
||||
"results": {
|
||||
"total_iterations": 3,
|
||||
"total_findings": 24,
|
||||
"issues_generated": 12,
|
||||
"comparison_match_rate": 0.75
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Step 8: Output Summary
|
||||
|
||||
```markdown
|
||||
## Discovery Complete: DBP-...
|
||||
|
||||
**Prompt**: Check if frontend API calls match backend implementations
|
||||
**Intent**: comparison
|
||||
**Dimensions**: frontend-calls, backend-handlers
|
||||
|
||||
### Iteration Summary
|
||||
| # | Findings | New | Confidence |
|
||||
|---|----------|-----|------------|
|
||||
| 1 | 10 | 10 | 60% |
|
||||
| 2 | 18 | 8 | 75% |
|
||||
| 3 | 24 | 6 | 85% |
|
||||
|
||||
### Comparison Results
|
||||
- **Overall Match Rate**: 75%
|
||||
- **Total Discrepancies**: 5
|
||||
- **Critical Mismatches**: endpoints, payloads
|
||||
|
||||
### Issues Generated: 12
|
||||
- 2 Critical
|
||||
- 4 High
|
||||
- 6 Medium
|
||||
|
||||
### Next Steps
|
||||
- `/issue:plan DBP-001,DBP-002,...` to plan solutions
|
||||
- `ccw view` to review findings in dashboard
|
||||
```
|
||||
|
||||
## Quality Checklist
|
||||
|
||||
Before completing, verify:
|
||||
|
||||
- [ ] Intent type correctly detected from prompt
|
||||
- [ ] Dimensions dynamically generated based on prompt
|
||||
- [ ] Iterations executed until convergence or max limit
|
||||
- [ ] Cross-reference analysis performed (for comparison intent)
|
||||
- [ ] High-confidence findings converted to issues
|
||||
- [ ] Discovery state shows `phase: complete`
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Situation | Action |
|
||||
|-----------|--------|
|
||||
| No relevant code found | Report empty result, suggest broader scope |
|
||||
| Max iterations without convergence | Complete with current findings, note in summary |
|
||||
| Comparison dimension mismatch | Report which dimension has fewer findings |
|
||||
| No comparison points matched | Report as "No direct matches found" |
|
||||
|
||||
## Use Cases
|
||||
|
||||
| Scenario | Example Prompt |
|
||||
|----------|----------------|
|
||||
| API Contract | "Check if frontend calls match backend endpoints" |
|
||||
| Error Handling | "Find inconsistent error handling patterns" |
|
||||
| Migration Gap | "Compare old auth with new auth implementation" |
|
||||
| Feature Parity | "Verify mobile has all web features" |
|
||||
| Schema Drift | "Check if TypeScript types match API responses" |
|
||||
| Integration | "Find mismatches between service A and service B" |
|
||||
|
||||
## Start Discovery
|
||||
|
||||
Parse prompt and detect intent:
|
||||
|
||||
```bash
|
||||
PROMPT="${1}"
|
||||
SCOPE="${2:-**/*}"
|
||||
DEPTH="${3:-standard}"
|
||||
|
||||
# Detect intent keywords
|
||||
if echo "${PROMPT}" | grep -qiE '(match|compare|versus|vs|between)'; then
|
||||
INTENT="comparison"
|
||||
elif echo "${PROMPT}" | grep -qiE '(find|locate|where)'; then
|
||||
INTENT="search"
|
||||
elif echo "${PROMPT}" | grep -qiE '(verify|check|ensure)'; then
|
||||
INTENT="verification"
|
||||
else
|
||||
INTENT="audit"
|
||||
fi
|
||||
|
||||
echo "Intent detected: ${INTENT}"
|
||||
echo "Starting discovery with scope: ${SCOPE}"
|
||||
```
|
||||
|
||||
Then follow the workflow to explore and discover issues.
|
||||
@@ -1,262 +1,345 @@
|
||||
---
|
||||
name: issue-discover
|
||||
description: Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices)
|
||||
argument-hint: "<path-pattern> [--perspectives=bug,ux,...] [--external]"
|
||||
description: Unified issue discovery and creation. Create issues from GitHub/text, discover issues via multi-perspective analysis, or prompt-driven iterative exploration. Triggers on "issue:new", "issue:discover", "issue:discover-by-prompt", "create issue", "discover issues", "find issues".
|
||||
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep, mcp__ace-tool__search_context, mcp__exa__search
|
||||
---
|
||||
|
||||
# Issue Discovery (Codex Version)
|
||||
# Issue Discover
|
||||
|
||||
## Goal
|
||||
Unified issue discovery and creation skill covering three entry points: manual issue creation, perspective-based discovery, and prompt-driven exploration.
|
||||
|
||||
Multi-perspective issue discovery that explores code from different angles to identify potential bugs, UX improvements, test gaps, and other actionable items. Unlike code review (which assesses existing code quality), discovery focuses on **finding opportunities for improvement and potential problems**.
|
||||
## Architecture Overview
|
||||
|
||||
**Discovery Scope**: Specified modules/files only
|
||||
**Output Directory**: `.workflow/issues/discoveries/{discovery-id}/`
|
||||
**Available Perspectives**: bug, ux, test, quality, security, performance, maintainability, best-practices
|
||||
|
||||
## Inputs
|
||||
|
||||
- **Target Pattern**: File glob pattern (e.g., `src/auth/**`)
|
||||
- **Perspectives**: Comma-separated list via `--perspectives` (or interactive selection)
|
||||
- **External Research**: `--external` flag enables Exa research for security and best-practices
|
||||
|
||||
## Output Requirements
|
||||
|
||||
**Generate Files:**
|
||||
1. `.workflow/issues/discoveries/{discovery-id}/discovery-state.json` - Session state
|
||||
2. `.workflow/issues/discoveries/{discovery-id}/perspectives/{perspective}.json` - Per-perspective findings
|
||||
3. `.workflow/issues/discoveries/{discovery-id}/discovery-issues.jsonl` - Generated issue candidates
|
||||
4. `.workflow/issues/discoveries/{discovery-id}/summary.md` - Summary report
|
||||
|
||||
**Return Summary:**
|
||||
```json
|
||||
{
|
||||
"discovery_id": "DSC-YYYYMMDD-HHmmss",
|
||||
"target_pattern": "src/auth/**",
|
||||
"perspectives_analyzed": ["bug", "security", "test"],
|
||||
"total_findings": 15,
|
||||
"issues_generated": 8,
|
||||
"priority_distribution": { "critical": 1, "high": 3, "medium": 4 }
|
||||
}
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Issue Discover Orchestrator (SKILL.md) │
|
||||
│ → Action selection → Route to phase → Execute → Summary │
|
||||
└───────────────┬─────────────────────────────────────────────────┘
|
||||
│
|
||||
├─ AskUserQuestion: Select action
|
||||
│
|
||||
┌───────────┼───────────┬───────────┐
|
||||
↓ ↓ ↓ │
|
||||
┌─────────┐ ┌─────────┐ ┌─────────┐ │
|
||||
│ Phase 1 │ │ Phase 2 │ │ Phase 3 │ │
|
||||
│ Create │ │Discover │ │Discover │ │
|
||||
│ New │ │ Multi │ │by Prompt│ │
|
||||
└─────────┘ └─────────┘ └─────────┘ │
|
||||
↓ ↓ ↓ │
|
||||
Issue Discoveries Discoveries │
|
||||
(registered) (export) (export) │
|
||||
│ │ │ │
|
||||
└───────────┴───────────┘ │
|
||||
↓ │
|
||||
issue-resolve (plan/queue) │
|
||||
↓ │
|
||||
/issue:execute │
|
||||
```
|
||||
|
||||
## Workflow
|
||||
## Key Design Principles
|
||||
|
||||
### Step 1: Initialize Discovery Session
|
||||
1. **Action-Driven Routing**: AskUserQuestion selects action, then load single phase
|
||||
2. **Progressive Phase Loading**: Only read the selected phase document
|
||||
3. **CLI-First Data Access**: All issue CRUD via `ccw issue` CLI commands
|
||||
4. **Auto Mode Support**: `-y` flag skips action selection with auto-detection
|
||||
5. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait → close_agent
|
||||
6. **Role Path Loading**: Subagent roles loaded via path reference in MANDATORY FIRST STEPS
|
||||
|
||||
```bash
|
||||
# Generate discovery ID
|
||||
DISCOVERY_ID="DSC-$(date -u +%Y%m%d-%H%M%S)"
|
||||
OUTPUT_DIR=".workflow/issues/discoveries/${DISCOVERY_ID}"
|
||||
## Auto Mode
|
||||
|
||||
# Create directory structure
|
||||
mkdir -p "${OUTPUT_DIR}/perspectives"
|
||||
When `--yes` or `-y`: Skip action selection, auto-detect action from input type.
|
||||
|
||||
## Usage
|
||||
|
||||
```
|
||||
issue-discover <input>
|
||||
issue-discover [FLAGS] "<input>"
|
||||
|
||||
# Flags
|
||||
-y, --yes Skip all confirmations (auto mode)
|
||||
--action <type> Pre-select action: new|discover|discover-by-prompt
|
||||
|
||||
# Phase-specific flags
|
||||
--priority <1-5> Issue priority (new mode)
|
||||
--perspectives <list> Comma-separated perspectives (discover mode)
|
||||
--external Enable Exa research (discover mode)
|
||||
--scope <pattern> File scope (discover/discover-by-prompt mode)
|
||||
--depth <level> standard|deep (discover-by-prompt mode)
|
||||
--max-iterations <n> Max exploration iterations (discover-by-prompt mode)
|
||||
|
||||
# Examples
|
||||
issue-discover https://github.com/org/repo/issues/42 # Create from GitHub
|
||||
issue-discover "Login fails with special chars" # Create from text
|
||||
issue-discover --action discover src/auth/** # Multi-perspective discovery
|
||||
issue-discover --action discover src/api/** --perspectives=security,bug # Focused discovery
|
||||
issue-discover --action discover-by-prompt "Check API contracts" # Prompt-driven discovery
|
||||
issue-discover -y "auth broken" # Auto mode create
|
||||
```
|
||||
|
||||
Resolve target files:
|
||||
```bash
|
||||
# List files matching pattern
|
||||
find <target-pattern> -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \)
|
||||
## Execution Flow
|
||||
|
||||
```
|
||||
Input Parsing:
|
||||
└─ Parse flags (--action, -y, --perspectives, etc.) and positional args
|
||||
|
||||
Action Selection:
|
||||
├─ --action flag provided → Route directly
|
||||
├─ Auto-detect from input:
|
||||
│ ├─ GitHub URL or #number → Create New (Phase 1)
|
||||
│ ├─ Path pattern (src/**, *.ts) → Discover (Phase 2)
|
||||
│ ├─ Short text (< 80 chars) → Create New (Phase 1)
|
||||
│ └─ Long descriptive text (≥ 80 chars) → Discover by Prompt (Phase 3)
|
||||
└─ Otherwise → AskUserQuestion to select action
|
||||
|
||||
Phase Execution (load one phase):
|
||||
├─ Phase 1: Create New → phases/01-issue-new.md
|
||||
├─ Phase 2: Discover → phases/02-discover.md
|
||||
└─ Phase 3: Discover by Prompt → phases/03-discover-by-prompt.md
|
||||
|
||||
Post-Phase:
|
||||
└─ Summary + Next steps recommendation
|
||||
```
|
||||
|
||||
If no files found, abort with error message.
|
||||
### Phase Reference Documents
|
||||
|
||||
### Step 2: Select Perspectives
|
||||
| Phase | Document | Load When | Purpose |
|
||||
|-------|----------|-----------|---------|
|
||||
| Phase 1 | [phases/01-issue-new.md](phases/01-issue-new.md) | Action = Create New | Create issue from GitHub URL or text description |
|
||||
| Phase 2 | [phases/02-discover.md](phases/02-discover.md) | Action = Discover | Multi-perspective issue discovery (bug, security, test, etc.) |
|
||||
| Phase 3 | [phases/03-discover-by-prompt.md](phases/03-discover-by-prompt.md) | Action = Discover by Prompt | Prompt-driven iterative exploration with Gemini planning |
|
||||
|
||||
**If `--perspectives` provided:**
|
||||
- Parse comma-separated list
|
||||
- Validate against available perspectives
|
||||
## Core Rules
|
||||
|
||||
**If not provided (interactive):**
|
||||
- Present perspective groups:
|
||||
- Quick scan: bug, test, quality
|
||||
- Security audit: security, bug, quality
|
||||
- Full analysis: all perspectives
|
||||
- Use first group as default or wait for user input
|
||||
1. **Action Selection First**: Always determine action before loading any phase
|
||||
2. **Single Phase Load**: Only read the selected phase document, never load all phases
|
||||
3. **CLI Data Access**: Use `ccw issue` CLI for all issue operations, NEVER read files directly
|
||||
4. **Content Preservation**: Each phase contains complete execution logic from original commands
|
||||
5. **Auto-Detect Input**: Smart input parsing reduces need for explicit --action flag
|
||||
6. **⚠️ CRITICAL: DO NOT STOP**: Continuous multi-phase workflow. After completing each phase, immediately proceed to next
|
||||
7. **Progressive Phase Loading**: Read phase docs ONLY when that phase is about to execute
|
||||
8. **Explicit Lifecycle**: Always close_agent after wait completes to free resources
|
||||
|
||||
### Step 3: Analyze Each Perspective
|
||||
## Input Processing
|
||||
|
||||
For each selected perspective, explore target files and identify issues.
|
||||
### Auto-Detection Logic
|
||||
|
||||
**Perspective-Specific Focus:**
|
||||
```javascript
|
||||
function detectAction(input, flags) {
|
||||
// 1. Explicit --action flag
|
||||
if (flags.action) return flags.action;
|
||||
|
||||
| Perspective | Focus Areas | Priority Guide |
|
||||
|-------------|-------------|----------------|
|
||||
| **bug** | Null checks, edge cases, resource leaks, race conditions, boundary conditions, exception handling | Critical=data corruption/crash, High=malfunction, Medium=edge case |
|
||||
| **ux** | Error messages, loading states, feedback, accessibility, interaction patterns | Critical=inaccessible, High=confusing, Medium=inconsistent |
|
||||
| **test** | Missing unit tests, edge case coverage, integration gaps, assertion quality | Critical=no security tests, High=no core logic tests |
|
||||
| **quality** | Complexity, duplication, naming, documentation, code smells | Critical=unmaintainable, High=significant issues |
|
||||
| **security** | Input validation, auth/authz, injection, XSS/CSRF, data exposure | Critical=auth bypass/injection, High=missing authz |
|
||||
| **performance** | N+1 queries, memory leaks, caching, algorithm efficiency | Critical=memory leaks, High=N+1 queries |
|
||||
| **maintainability** | Coupling, interface design, tech debt, extensibility | Critical=forced changes, High=unclear boundaries |
|
||||
| **best-practices** | Framework conventions, language patterns, anti-patterns | Critical=bug-causing anti-patterns, High=convention violations |
|
||||
const trimmed = input.trim();
|
||||
|
||||
**For each perspective:**
|
||||
|
||||
1. Read target files and analyze for perspective-specific concerns
|
||||
2. Use `rg` to search for patterns indicating issues
|
||||
3. Record findings with:
|
||||
- `id`: Finding ID (e.g., `F-001`)
|
||||
- `title`: Brief description
|
||||
- `priority`: critical/high/medium/low
|
||||
- `category`: Specific category within perspective
|
||||
- `description`: Detailed explanation
|
||||
- `file`: File path
|
||||
- `line`: Line number
|
||||
- `snippet`: Code snippet
|
||||
- `suggested_issue`: Proposed issue text
|
||||
- `confidence`: 0.0-1.0
|
||||
|
||||
4. Write to `{OUTPUT_DIR}/perspectives/{perspective}.json`:
|
||||
```json
|
||||
{
|
||||
"perspective": "security",
|
||||
"analyzed_at": "2025-01-22T...",
|
||||
"files_analyzed": 15,
|
||||
"findings": [
|
||||
{
|
||||
"id": "F-001",
|
||||
"title": "Missing input validation",
|
||||
"priority": "high",
|
||||
"category": "input-validation",
|
||||
"description": "User input is passed directly to database query",
|
||||
"file": "src/auth/login.ts",
|
||||
"line": 42,
|
||||
"snippet": "db.query(`SELECT * FROM users WHERE name = '${input}'`)",
|
||||
"suggested_issue": "Add input sanitization to prevent SQL injection",
|
||||
"confidence": 0.95
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### Step 4: External Research (if --external)
|
||||
|
||||
For security and best-practices perspectives, use Exa to search for:
|
||||
- Industry best practices for the tech stack
|
||||
- Known vulnerability patterns
|
||||
- Framework-specific security guidelines
|
||||
|
||||
Write results to `{OUTPUT_DIR}/external-research.json`.
|
||||
|
||||
### Step 5: Aggregate and Prioritize
|
||||
|
||||
1. Load all perspective JSON files
|
||||
2. Deduplicate findings by file+line
|
||||
3. Calculate priority scores (see the sketch after this list):
|
||||
- critical: 1.0
|
||||
- high: 0.8
|
||||
- medium: 0.5
|
||||
- low: 0.2
|
||||
- Adjust by confidence
|
||||
|
||||
4. Sort by priority score descending
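
A minimal sketch of steps 2-4, assuming the score is the priority base weight multiplied by confidence:

```javascript
// Dedupe by file:line, score each finding, and sort by score descending.
const BASE = { critical: 1.0, high: 0.8, medium: 0.5, low: 0.2 };

function aggregate(findings) {
  const byLocation = new Map();
  for (const f of findings) {
    const key = `${f.file}:${f.line}`;
    if (!byLocation.has(key)) byLocation.set(key, f);   // keep the first finding per location
  }
  return [...byLocation.values()]
    .map(f => ({ ...f, priority_score: (BASE[f.priority] ?? 0.5) * (f.confidence ?? 1) }))
    .sort((a, b) => b.priority_score - a.priority_score);
}
```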
|
||||
|
||||
### Step 6: Generate Issues
|
||||
|
||||
Convert high-priority findings to issue format:
|
||||
|
||||
```bash
|
||||
# Append to discovery-issues.jsonl
|
||||
echo '{"id":"ISS-DSC-001","title":"...","priority":"high",...}' >> ${OUTPUT_DIR}/discovery-issues.jsonl
|
||||
```
|
||||
|
||||
Issue criteria (a filter sketch follows the list):
|
||||
- `priority` is critical or high
|
||||
- OR `priority_score >= 0.7`
|
||||
- OR `confidence >= 0.9` with medium priority
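
Expressed as a filter predicate (building on the `aggregate` sketch above; `allFindings` stands for the loaded per-perspective findings):

```javascript
// Direct transcription of the criteria above.
function qualifiesAsIssue(f) {
  return ['critical', 'high'].includes(f.priority)
    || f.priority_score >= 0.7
    || (f.confidence >= 0.9 && f.priority === 'medium');
}

const issueCandidates = aggregate(allFindings).filter(qualifiesAsIssue);
```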
|
||||
|
||||
### Step 7: Update Discovery State
|
||||
|
||||
Write final state to `{OUTPUT_DIR}/discovery-state.json`:
|
||||
```json
|
||||
{
|
||||
"discovery_id": "DSC-...",
|
||||
"target_pattern": "src/auth/**",
|
||||
"phase": "complete",
|
||||
"created_at": "...",
|
||||
"updated_at": "...",
|
||||
"perspectives": ["bug", "security", "test"],
|
||||
"results": {
|
||||
"total_findings": 15,
|
||||
"issues_generated": 8,
|
||||
"priority_distribution": {
|
||||
"critical": 1,
|
||||
"high": 3,
|
||||
"medium": 4
|
||||
}
|
||||
// 2. GitHub URL → new
|
||||
if (trimmed.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/) || trimmed.match(/^#\d+$/)) {
|
||||
return 'new';
|
||||
}
|
||||
|
||||
// 3. Path pattern (contains **, /, or --perspectives) → discover
|
||||
if (trimmed.match(/\*\*/) || trimmed.match(/^src\//) || flags.perspectives) {
|
||||
return 'discover';
|
||||
}
|
||||
|
||||
// 4. Short text (< 80 chars, no special patterns) → new
|
||||
if (trimmed.length > 0 && trimmed.length < 80 && !trimmed.includes('--')) {
|
||||
return 'new';
|
||||
}
|
||||
|
||||
// 5. Long descriptive text → discover-by-prompt
|
||||
if (trimmed.length >= 80) {
|
||||
return 'discover-by-prompt';
|
||||
}
|
||||
|
||||
// Cannot auto-detect → ask user
|
||||
return null;
|
||||
}
|
||||
```
|
||||
|
||||
### Step 8: Generate Summary
|
||||
### Action Selection (AskUserQuestion)
|
||||
|
||||
Write summary to `{OUTPUT_DIR}/summary.md`:
|
||||
```markdown
|
||||
# Discovery Summary: DSC-...
|
||||
```javascript
|
||||
// When action cannot be auto-detected
|
||||
const answer = AskUserQuestion({
|
||||
questions: [{
|
||||
question: "What would you like to do?",
|
||||
header: "Action",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{
|
||||
label: "Create New Issue (Recommended)",
|
||||
description: "Create issue from GitHub URL, text description, or structured input"
|
||||
},
|
||||
{
|
||||
label: "Discover Issues",
|
||||
description: "Multi-perspective discovery: bug, security, test, quality, performance, etc."
|
||||
},
|
||||
{
|
||||
label: "Discover by Prompt",
|
||||
description: "Describe what to find — Gemini plans the exploration strategy iteratively"
|
||||
}
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
**Target**: src/auth/**
|
||||
**Perspectives**: bug, security, test
|
||||
**Total Findings**: 15
|
||||
**Issues Generated**: 8
|
||||
|
||||
## Priority Breakdown
|
||||
- Critical: 1
|
||||
- High: 3
|
||||
- Medium: 4
|
||||
|
||||
## Top Findings
|
||||
|
||||
1. **[Critical] SQL Injection in login.ts:42**
|
||||
Category: security/input-validation
|
||||
...
|
||||
|
||||
2. **[High] Missing null check in auth.ts:128**
|
||||
Category: bug/null-check
|
||||
...
|
||||
|
||||
## Next Steps
|
||||
- Run `/issue:plan` to plan solutions for generated issues
|
||||
- Use `ccw view` to review findings in dashboard
|
||||
// Route based on selection
|
||||
const actionMap = {
|
||||
"Create New Issue": "new",
|
||||
"Discover Issues": "discover",
|
||||
"Discover by Prompt": "discover-by-prompt"
|
||||
};
|
||||
```
|
||||
|
||||
## Quality Checklist
|
||||
## Data Flow
|
||||
|
||||
Before completing, verify:
|
||||
```
|
||||
User Input (URL / text / path pattern / descriptive prompt)
|
||||
↓
|
||||
[Parse Flags + Auto-Detect Action]
|
||||
↓
|
||||
[Action Selection] ← AskUserQuestion (if needed)
|
||||
↓
|
||||
[Read Selected Phase Document]
|
||||
↓
|
||||
[Execute Phase Logic]
|
||||
↓
|
||||
[Summary + Next Steps]
|
||||
├─ After Create → Suggest issue-resolve (plan solution)
|
||||
└─ After Discover → Suggest export to issues, then issue-resolve
|
||||
```
|
||||
|
||||
- [ ] All target files analyzed for selected perspectives
|
||||
- [ ] Findings include file:line references
|
||||
- [ ] Priority assigned to all findings
|
||||
- [ ] Issues generated from high-priority findings
|
||||
- [ ] Discovery state shows `phase: complete`
|
||||
- [ ] Summary includes actionable next steps
|
||||
## Subagent API Reference
|
||||
|
||||
### spawn_agent
|
||||
|
||||
Create a new subagent with task assignment.
|
||||
|
||||
```javascript
|
||||
const agentId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
## TASK CONTEXT
|
||||
${taskContext}
|
||||
|
||||
## DELIVERABLES
|
||||
${deliverables}
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
### wait
|
||||
|
||||
Get results from subagent (only way to retrieve results).
|
||||
|
||||
```javascript
|
||||
const result = wait({
|
||||
ids: [agentId],
|
||||
timeout_ms: 600000 // 10 minutes
|
||||
})
|
||||
|
||||
if (result.timed_out) {
|
||||
// Handle timeout - can continue waiting or send_input to prompt completion
|
||||
}
|
||||
|
||||
// Check completion status
|
||||
if (result.status[agentId].completed) {
|
||||
const output = result.status[agentId].completed;
|
||||
}
|
||||
```
|
||||
|
||||
### send_input
|
||||
|
||||
Continue interaction with active subagent (for clarification or follow-up).
|
||||
|
||||
```javascript
|
||||
send_input({
|
||||
id: agentId,
|
||||
message: `
|
||||
## CLARIFICATION ANSWERS
|
||||
${answers}
|
||||
|
||||
## NEXT STEP
|
||||
Continue with plan generation.
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
### close_agent
|
||||
|
||||
Clean up subagent resources (irreversible).
|
||||
|
||||
```javascript
|
||||
close_agent({ id: agentId })
|
||||
```
|
||||
|
||||
## Core Guidelines
|
||||
|
||||
**Data Access Principle**: Issues files can grow very large. To avoid context overflow:
|
||||
|
||||
| Operation | Correct | Incorrect |
|
||||
|-----------|---------|-----------|
|
||||
| List issues (brief) | `ccw issue list --status pending --brief` | `Read('issues.jsonl')` |
|
||||
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
|
||||
| Create issue | `echo '...' \| ccw issue create` | Direct file write |
|
||||
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
|
||||
|
||||
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` directly.
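
For instance, using the `Bash(...)` convention this skill already uses for shell access (the issue ID is a placeholder):

```javascript
// Query and mutate issues exclusively through the ccw CLI.
const pendingBrief = Bash(`ccw issue list --status pending --brief`);
const issueDetail  = Bash(`ccw issue status ISS-20251229-001 --json`);
Bash(`echo '{"title":"...","context":"...","priority":3}' | ccw issue create`);
Bash(`ccw issue update ISS-20251229-001 --status in_progress`);
```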
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Situation | Action |
|
||||
|-----------|--------|
|
||||
| No files match pattern | Abort with clear error message |
|
||||
| Perspective analysis fails | Log error, continue with other perspectives |
|
||||
| No findings | Report "No issues found" (not an error) |
|
||||
| External research fails | Continue without external context |
|
||||
| Error | Resolution |
|
||||
|-------|------------|
|
||||
| No action detected | Show AskUserQuestion with all 3 options |
|
||||
| Invalid action type | Show available actions, re-prompt |
|
||||
| Phase execution fails | Report error, suggest manual intervention |
|
||||
| No files matched (discover) | Check target pattern, verify path exists |
|
||||
| Gemini planning failed (discover-by-prompt) | Retry with qwen fallback |
|
||||
| Agent lifecycle errors | Ensure close_agent in error paths to prevent resource leaks |
|
||||
|
||||
## Schema References
|
||||
## Post-Phase Next Steps
|
||||
|
||||
| Schema | Path | Purpose |
|
||||
|--------|------|---------|
|
||||
| Discovery State | `~/.claude/workflows/cli-templates/schemas/discovery-state-schema.json` | Session state |
|
||||
| Discovery Finding | `~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json` | Finding format |
|
||||
After successful phase execution, recommend next action:
|
||||
|
||||
## Start Discovery
|
||||
```javascript
|
||||
// After Create New (issue created)
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Issue created. What next?",
|
||||
header: "Next",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Plan Solution", description: "Generate solution via issue-resolve" },
|
||||
{ label: "Create Another", description: "Create more issues" },
|
||||
{ label: "View Issues", description: "Review all issues" },
|
||||
{ label: "Done", description: "Exit workflow" }
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
Begin by resolving target files:
|
||||
|
||||
```bash
|
||||
# Parse target pattern from arguments
|
||||
TARGET_PATTERN="${1:-src/**}"
|
||||
|
||||
# Count matching files
|
||||
find ${TARGET_PATTERN} -type f \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" \) | wc -l
|
||||
// After Discover / Discover by Prompt (discoveries generated)
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Discovery complete. What next?",
|
||||
header: "Next",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Export to Issues", description: "Convert discoveries to issues" },
|
||||
{ label: "Plan Solutions", description: "Plan solutions for exported issues via issue-resolve" },
|
||||
{ label: "Done", description: "Exit workflow" }
|
||||
]
|
||||
}]
|
||||
});
|
||||
```
|
||||
|
||||
Then proceed with perspective selection and analysis.
|
||||
## Related Skills & Commands
|
||||
|
||||
- `issue-resolve` - Plan solutions, convert artifacts, form queues, from brainstorm
|
||||
- `issue-manage` - Interactive issue CRUD operations
|
||||
- `/issue:execute` - Execute queue with DAG-based parallel orchestration
|
||||
- `ccw issue list` - List all issues
|
||||
- `ccw issue status <id>` - View issue details
|
||||
|
||||
.codex/skills/issue-discover/phases/01-issue-new.md (Normal file, 348 lines)
@@ -0,0 +1,348 @@
|
||||
# Phase 1: Create New Issue
|
||||
|
||||
> Source: `commands/issue/new.md`
|
||||
|
||||
## Overview
|
||||
|
||||
Create structured issue from GitHub URL or text description with clarity-based flow control.
|
||||
|
||||
**Core workflow**: Input Analysis → Clarity Detection → Data Extraction → Optional Clarification → GitHub Publishing → Create Issue
|
||||
|
||||
**Input sources**:
|
||||
- **GitHub URL** - `https://github.com/owner/repo/issues/123` or `#123`
|
||||
- **Structured text** - Text with expected/actual/affects keywords
|
||||
- **Vague text** - Short description that needs clarification
|
||||
|
||||
**Output**:
|
||||
- **Issue** (GH-xxx or ISS-YYYYMMDD-HHMMSS) - Registered issue ready for planning
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- `gh` CLI available (for GitHub URLs)
|
||||
- `ccw issue` CLI available
|
||||
|
||||
## Auto Mode
|
||||
|
||||
When `--yes` or `-y`: Skip clarification questions, create issue with inferred details.
|
||||
|
||||
## Arguments
|
||||
|
||||
| Argument | Required | Type | Default | Description |
|
||||
|----------|----------|------|---------|-------------|
|
||||
| input | Yes | String | - | GitHub URL, `#number`, or text description |
|
||||
| --priority | No | Integer | auto | Priority 1-5 (auto-inferred if omitted) |
|
||||
| -y, --yes | No | Flag | false | Skip all confirmations |
|
||||
|
||||
## Issue Structure
|
||||
|
||||
```typescript
|
||||
interface Issue {
|
||||
id: string; // GH-123 or ISS-YYYYMMDD-HHMMSS
|
||||
title: string;
|
||||
status: 'registered' | 'planned' | 'queued' | 'in_progress' | 'completed' | 'failed';
|
||||
priority: number; // 1 (critical) to 5 (low)
|
||||
context: string; // Problem description (single source of truth)
|
||||
source: 'github' | 'text' | 'discovery';
|
||||
source_url?: string;
|
||||
labels?: string[];
|
||||
|
||||
// GitHub binding (for non-GitHub sources that publish to GitHub)
|
||||
github_url?: string;
|
||||
github_number?: number;
|
||||
|
||||
// Optional structured fields
|
||||
expected_behavior?: string;
|
||||
actual_behavior?: string;
|
||||
affected_components?: string[];
|
||||
|
||||
// Feedback history
|
||||
feedback?: {
|
||||
type: 'failure' | 'clarification' | 'rejection';
|
||||
stage: string;
|
||||
content: string;
|
||||
created_at: string;
|
||||
}[];
|
||||
|
||||
bound_solution_id: string | null;
|
||||
created_at: string;
|
||||
updated_at: string;
|
||||
}
|
||||
```
|
||||
|
||||
## Execution Steps
|
||||
|
||||
### Step 1.1: Input Analysis & Clarity Detection
|
||||
|
||||
```javascript
|
||||
const input = userInput.trim();
|
||||
const flags = parseFlags(userInput);
|
||||
|
||||
// Detect input type and clarity
|
||||
const isGitHubUrl = input.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
|
||||
const isGitHubShort = input.match(/^#(\d+)$/);
|
||||
const hasStructure = input.match(/(expected|actual|affects|steps):/i);
|
||||
|
||||
// Clarity score: 0-3
|
||||
let clarityScore = 0;
|
||||
if (isGitHubUrl || isGitHubShort) clarityScore = 3; // GitHub = fully clear
|
||||
else if (hasStructure) clarityScore = 2; // Structured text = clear
|
||||
else if (input.length > 50) clarityScore = 1; // Long text = somewhat clear
|
||||
else clarityScore = 0; // Vague
|
||||
|
||||
let issueData = {};
|
||||
```
|
||||
|
||||
### Step 1.2: Data Extraction (GitHub or Text)
|
||||
|
||||
```javascript
|
||||
if (isGitHubUrl || isGitHubShort) {
|
||||
// GitHub - fetch via gh CLI
|
||||
const result = Bash(`gh issue view ${extractIssueRef(input)} --json number,title,body,labels,url`);
|
||||
const gh = JSON.parse(result);
|
||||
issueData = {
|
||||
id: `GH-${gh.number}`,
|
||||
title: gh.title,
|
||||
source: 'github',
|
||||
source_url: gh.url,
|
||||
labels: gh.labels.map(l => l.name),
|
||||
context: gh.body?.substring(0, 500) || gh.title,
|
||||
...parseMarkdownBody(gh.body)
|
||||
};
|
||||
} else {
|
||||
// Text description
|
||||
issueData = {
|
||||
id: `ISS-${new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14)}`,
|
||||
source: 'text',
|
||||
...parseTextDescription(input)
|
||||
};
|
||||
}
|
||||
```
|
||||
|
||||
### Step 1.3: Lightweight Context Hint (Conditional)
|
||||
|
||||
```javascript
|
||||
// ACE search ONLY for medium clarity (1-2) AND missing components
|
||||
// Skip for: GitHub (has context), vague (needs clarification first)
|
||||
if (clarityScore >= 1 && clarityScore <= 2 && !issueData.affected_components?.length) {
|
||||
const keywords = extractKeywords(issueData.context);
|
||||
|
||||
if (keywords.length >= 2) {
|
||||
try {
|
||||
const aceResult = mcp__ace-tool__search_context({
|
||||
project_root_path: process.cwd(),
|
||||
query: keywords.slice(0, 3).join(' ')
|
||||
});
|
||||
issueData.affected_components = aceResult.files?.slice(0, 3) || [];
|
||||
} catch {
|
||||
// ACE failure is non-blocking
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Step 1.4: Conditional Clarification (Only if Unclear)
|
||||
|
||||
```javascript
|
||||
// ONLY ask questions if clarity is low
|
||||
if (clarityScore < 2 && (!issueData.context || issueData.context.length < 20)) {
|
||||
const answer = AskUserQuestion({
|
||||
questions: [{
|
||||
question: 'Please describe the issue in more detail:',
|
||||
header: 'Clarify',
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: 'Provide details', description: 'Describe what, where, and expected behavior' }
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
if (answer.customText) {
|
||||
issueData.context = answer.customText;
|
||||
issueData.title = answer.customText.split(/[.\n]/)[0].substring(0, 60);
|
||||
issueData.feedback = [{
|
||||
type: 'clarification',
|
||||
stage: 'new',
|
||||
content: answer.customText,
|
||||
created_at: new Date().toISOString()
|
||||
}];
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Step 1.5: GitHub Publishing Decision (Non-GitHub Sources)
|
||||
|
||||
```javascript
|
||||
// For non-GitHub sources, ask if user wants to publish to GitHub
|
||||
let publishToGitHub = false;
|
||||
|
||||
if (issueData.source !== 'github') {
|
||||
const publishAnswer = AskUserQuestion({
|
||||
questions: [{
|
||||
question: 'Would you like to publish this issue to GitHub?',
|
||||
header: 'Publish',
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: 'Yes, publish to GitHub', description: 'Create issue on GitHub and link it' },
|
||||
{ label: 'No, keep local only', description: 'Store as local issue without GitHub sync' }
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
publishToGitHub = publishAnswer.answers?.['Publish']?.includes('Yes');
|
||||
}
|
||||
```
|
||||
|
||||
### Step 1.6: Create Issue
|
||||
|
||||
**Issue Creation** (via CLI endpoint):
|
||||
```bash
|
||||
# Option 1: Pipe input (recommended for complex JSON)
|
||||
echo '{"title":"...", "context":"...", "priority":3}' | ccw issue create
|
||||
|
||||
# Option 2: Heredoc (for multi-line JSON)
|
||||
ccw issue create << 'EOF'
|
||||
{"title":"...", "context":"含\"引号\"的内容", "priority":3}
|
||||
EOF
|
||||
```
|
||||
|
||||
**GitHub Publishing** (if user opted in):
|
||||
```javascript
|
||||
// Step 1: Create local issue FIRST
|
||||
const localIssue = createLocalIssue(issueData); // ccw issue create
|
||||
|
||||
// Step 2: Publish to GitHub if requested
|
||||
if (publishToGitHub) {
|
||||
const ghResult = Bash(`gh issue create --title "${issueData.title}" --body "${issueData.context}"`);
|
||||
const ghUrl = ghResult.match(/https:\/\/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/)?.[0];
|
||||
const ghNumber = parseInt(ghUrl?.match(/\/issues\/(\d+)/)?.[1]);
|
||||
|
||||
if (ghNumber) {
|
||||
Bash(`ccw issue update ${localIssue.id} --github-url "${ghUrl}" --github-number ${ghNumber}`);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Workflow:**
|
||||
```
|
||||
1. Create local issue (ISS-YYYYMMDD-NNN) → stored in .workflow/issues.jsonl
|
||||
2. If publishToGitHub:
|
||||
a. gh issue create → returns GitHub URL
|
||||
b. Update local issue with github_url + github_number binding
|
||||
3. Both local and GitHub issues exist, linked together
|
||||
```
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```
|
||||
Phase 1: Input Analysis
|
||||
└─ Detect clarity score (GitHub URL? Structured text? Keywords?)
|
||||
|
||||
Phase 2: Data Extraction (branched by clarity)
|
||||
┌────────────┬─────────────────┬──────────────┐
|
||||
│ Score 3 │ Score 1-2 │ Score 0 │
|
||||
│ GitHub │ Text + ACE │ Vague │
|
||||
├────────────┼─────────────────┼──────────────┤
|
||||
│ gh CLI │ Parse struct │ AskQuestion │
|
||||
│ → parse │ + quick hint │ (1 question) │
|
||||
│ │ (3 files max) │ → feedback │
|
||||
└────────────┴─────────────────┴──────────────┘
|
||||
|
||||
Phase 3: GitHub Publishing Decision (non-GitHub only)
|
||||
├─ Source = github: Skip (already from GitHub)
|
||||
└─ Source ≠ github: AskUserQuestion
|
||||
├─ Yes → publishToGitHub = true
|
||||
└─ No → publishToGitHub = false
|
||||
|
||||
Phase 4: Create Issue
|
||||
├─ Score ≥ 2: Direct creation
|
||||
└─ Score < 2: Confirm first → Create
|
||||
└─ If publishToGitHub: gh issue create → link URL
|
||||
|
||||
Note: Deep exploration & lifecycle deferred to /issue:plan
|
||||
```
|
||||
|
||||
## Helper Functions
|
||||
|
||||
```javascript
|
||||
function extractKeywords(text) {
|
||||
const stopWords = new Set(['the', 'a', 'an', 'is', 'are', 'was', 'were', 'not', 'with']);
|
||||
return text
|
||||
.toLowerCase()
|
||||
.split(/\W+/)
|
||||
.filter(w => w.length > 3 && !stopWords.has(w))
|
||||
.slice(0, 5);
|
||||
}
|
||||
|
||||
function parseTextDescription(text) {
|
||||
const result = { title: '', context: '' };
|
||||
const sentences = text.split(/\.(?=\s|$)/);
|
||||
|
||||
result.title = sentences[0]?.trim().substring(0, 60) || 'Untitled';
|
||||
result.context = text.substring(0, 500);
|
||||
|
||||
const expected = text.match(/expected:?\s*([^.]+)/i);
|
||||
const actual = text.match(/actual:?\s*([^.]+)/i);
|
||||
const affects = text.match(/affects?:?\s*([^.]+)/i);
|
||||
|
||||
if (expected) result.expected_behavior = expected[1].trim();
|
||||
if (actual) result.actual_behavior = actual[1].trim();
|
||||
if (affects) {
|
||||
result.affected_components = affects[1].split(/[,\s]+/).filter(c => c.includes('/') || c.includes('.'));
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
function parseMarkdownBody(body) {
|
||||
if (!body) return {};
|
||||
const result = {};
|
||||
|
||||
const problem = body.match(/##?\s*(problem|description)[:\s]*([\s\S]*?)(?=##|$)/i);
|
||||
const expected = body.match(/##?\s*expected[:\s]*([\s\S]*?)(?=##|$)/i);
|
||||
const actual = body.match(/##?\s*actual[:\s]*([\s\S]*?)(?=##|$)/i);
|
||||
|
||||
if (problem) result.context = problem[2].trim().substring(0, 500);
|
||||
if (expected) result.expected_behavior = expected[2].trim();
|
||||
if (actual) result.actual_behavior = actual[2].trim();
|
||||
|
||||
return result;
|
||||
}
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Message | Resolution |
|
||||
|-------|---------|------------|
|
||||
| GitHub fetch failed | gh CLI error | Check gh auth, verify URL |
|
||||
| Clarity too low | Input unclear | Ask clarification question |
|
||||
| Issue creation failed | CLI error | Verify ccw issue endpoint |
|
||||
| GitHub publish failed | gh issue create error | Create local-only, skip GitHub |
|
||||
|
||||
## Examples
|
||||
|
||||
### Clear Input (No Questions)
|
||||
|
||||
```bash
|
||||
issue-discover https://github.com/org/repo/issues/42
|
||||
# → Fetches, parses, creates immediately
|
||||
|
||||
issue-discover "Login fails with special chars. Expected: success. Actual: 500"
|
||||
# → Parses structure, creates immediately
|
||||
```
|
||||
|
||||
### Vague Input (1 Question)
|
||||
|
||||
```bash
|
||||
issue-discover "auth broken"
|
||||
# → Asks: "Please describe the issue in more detail"
|
||||
# → User provides details → saved to feedback[]
|
||||
# → Creates issue
|
||||
```
|
||||
|
||||
## Post-Phase Update
|
||||
|
||||
After issue creation:
|
||||
- Issue created with `status: registered`
|
||||
- Report: issue ID, title, source, affected components
|
||||
- Show GitHub URL (if published)
|
||||
- Recommend next step: `/issue:plan <id>` or `issue-resolve <id>`
|
||||
@@ -1,391 +0,0 @@
|
||||
---
|
||||
name: issue-new
|
||||
description: Create structured issue from GitHub URL or text description. Auto mode with --yes flag.
|
||||
argument-hint: "[--yes|-y] <GITHUB_URL | TEXT_DESCRIPTION> [--priority PRIORITY] [--labels LABELS]"
|
||||
---
|
||||
|
||||
# Issue New Command
|
||||
|
||||
## Core Principles
|
||||
|
||||
**Requirement Clarity Detection** → Ask only when needed
|
||||
**Flexible Parameter Input** → Support multiple formats and flags
|
||||
**Auto Mode Support** → `--yes`/`-y` skips confirmation questions
|
||||
|
||||
```
|
||||
Clear Input (GitHub URL, structured text) → Direct creation (no questions)
|
||||
Unclear Input (vague description) → Clarifying questions (unless --yes)
|
||||
Auto Mode (--yes or -y flag) → Skip all questions, use inference
|
||||
```
|
||||
|
||||
## Parameter Formats
|
||||
|
||||
```bash
|
||||
# GitHub URL (auto-detected)
|
||||
/prompts:issue-new https://github.com/owner/repo/issues/123
|
||||
/prompts:issue-new GH-123
|
||||
|
||||
# Text description with priority
|
||||
/prompts:issue-new "Login fails with special chars" --priority 1
|
||||
|
||||
# Auto mode - skip all questions
|
||||
/prompts:issue-new --yes "something broken"
|
||||
/prompts:issue-new -y https://github.com/owner/repo/issues/456
|
||||
|
||||
# With labels
|
||||
/prompts:issue-new "Database migration needed" --priority 2 --labels "enhancement,database"
|
||||
```
|
||||
|
||||
## Issue Structure
|
||||
|
||||
```typescript
|
||||
interface Issue {
|
||||
id: string; // GH-123 or ISS-YYYYMMDD-HHMMSS
|
||||
title: string;
|
||||
status: 'registered' | 'planned' | 'queued' | 'in_progress' | 'completed' | 'failed';
|
||||
priority: number; // 1 (critical) to 5 (low)
|
||||
context: string; // Problem description
|
||||
source: 'github' | 'text' | 'discovery';
|
||||
source_url?: string;
|
||||
labels?: string[];
|
||||
|
||||
// GitHub binding (for non-GitHub sources that publish to GitHub)
|
||||
github_url?: string;
|
||||
github_number?: number;
|
||||
|
||||
// Optional structured fields
|
||||
expected_behavior?: string;
|
||||
actual_behavior?: string;
|
||||
affected_components?: string[];
|
||||
|
||||
// Solution binding
|
||||
bound_solution_id: string | null;
|
||||
|
||||
// Timestamps
|
||||
created_at: string;
|
||||
updated_at: string;
|
||||
}
|
||||
```
|
||||
|
||||
## Inputs
|
||||
|
||||
- **GitHub URL**: `https://github.com/owner/repo/issues/123` or `#123`
|
||||
- **Text description**: Natural language description
|
||||
- **Priority flag**: `--priority 1-5` (optional, default: 3)
|
||||
|
||||
## Output Requirements
|
||||
|
||||
**Create Issue via CLI** (preferred method):
|
||||
```bash
|
||||
# Pipe input (recommended for complex JSON)
|
||||
echo '{"title":"...", "context":"...", "priority":3}' | ccw issue create
|
||||
|
||||
# Returns created issue JSON
|
||||
{"id":"ISS-20251229-001","title":"...","status":"registered",...}
|
||||
```
|
||||
|
||||
**Return Summary:**
|
||||
```json
|
||||
{
|
||||
"created": true,
|
||||
"id": "ISS-20251229-001",
|
||||
"title": "Login fails with special chars",
|
||||
"source": "text",
|
||||
"github_published": false,
|
||||
"next_step": "/issue:plan ISS-20251229-001"
|
||||
}
|
||||
```
|
||||
|
||||
## Workflow
|
||||
|
||||
### Phase 0: Parse Arguments & Flags
|
||||
|
||||
Extract parameters from user input:
|
||||
|
||||
```bash
|
||||
# Input examples (Codex placeholders)
|
||||
INPUT="$1" # GitHub URL or text description
|
||||
AUTO_MODE="$2" # Check for --yes or -y flag
|
||||
|
||||
# Parse flags (comma-separated in single argument)
|
||||
PRIORITY=$(echo "$INPUT" | grep -oP '(?<=--priority\s)\d+' || echo "3")
LABELS=$(echo "$INPUT" | grep -oP '(?<=--labels\s)\S+' || true)
AUTO_YES=$(echo "$INPUT" | grep -qE -- '(--yes|-y)( |$)' && echo "true" || echo "false")

# Extract main input (URL or text) - remove all flags
MAIN_INPUT=$(echo "$INPUT" | sed -E 's/ *--priority +[0-9]+//; s/ *--labels +[^ ]+//; s/ *--yes//; s/ *-y( |$)/ /' | xargs)
|
||||
```
|
||||
|
||||
### Phase 1: Analyze Input & Clarity Detection
|
||||
|
||||
```javascript
|
||||
const mainInput = userInput.trim();
|
||||
|
||||
// Detect input type and clarity
|
||||
const isGitHubUrl = mainInput.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
|
||||
const isGitHubShort = mainInput.match(/^GH-?\d+$/);
|
||||
const hasStructure = mainInput.match(/(expected|actual|affects|steps):/i);
|
||||
|
||||
// Clarity score: 0-3
|
||||
let clarityScore = 0;
|
||||
if (isGitHubUrl || isGitHubShort) clarityScore = 3; // GitHub = fully clear
|
||||
else if (hasStructure) clarityScore = 2; // Structured text = clear
|
||||
else if (mainInput.length > 50) clarityScore = 1; // Long text = somewhat clear
|
||||
else clarityScore = 0; // Vague
|
||||
|
||||
// Auto mode override: if --yes/-y flag, skip all questions
|
||||
const skipQuestions = process.env.AUTO_YES === 'true';
|
||||
```
|
||||
|
||||
### Phase 2: Extract Issue Data & Priority
|
||||
|
||||
**For GitHub URL/Short:**
|
||||
|
||||
```bash
|
||||
# Fetch issue details via gh CLI
|
||||
gh issue view <issue-ref> --json number,title,body,labels,url
|
||||
|
||||
# Parse response with priority override
|
||||
{
|
||||
"id": "GH-123",
|
||||
"title": "...",
|
||||
"priority": $PRIORITY || 3, # Use --priority flag if provided
|
||||
"source": "github",
|
||||
"source_url": "https://github.com/...",
|
||||
"labels": $LABELS || [...existing labels],
|
||||
"context": "..."
|
||||
}
|
||||
```
|
||||
|
||||
**For Text Description:**
|
||||
|
||||
```javascript
|
||||
// Generate issue ID
|
||||
const id = `ISS-${new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14)}`;
|
||||
|
||||
// Parse structured fields if present
|
||||
const expected = text.match(/expected:?\s*([^.]+)/i);
|
||||
const actual = text.match(/actual:?\s*([^.]+)/i);
|
||||
const affects = text.match(/affects?:?\s*([^.]+)/i);
|
||||
|
||||
// Build issue data with flags
|
||||
{
|
||||
"id": id,
|
||||
"title": text.split(/[.\n]/)[0].substring(0, 60),
|
||||
"priority": $PRIORITY || 3, # From --priority flag
|
||||
"labels": $LABELS?.split(',') || [], # From --labels flag
|
||||
"source": "text",
|
||||
"context": text.substring(0, 500),
|
||||
"expected_behavior": expected?.[1]?.trim(),
|
||||
"actual_behavior": actual?.[1]?.trim()
|
||||
}
|
||||
```
|
||||
|
||||
### Phase 3: Context Hint (Conditional)
|
||||
|
||||
For medium clarity (score 1-2) without affected components:
|
||||
|
||||
```bash
|
||||
# Use rg to find potentially related files
|
||||
rg -l "<keyword>" --type ts | head -5
|
||||
```
|
||||
|
||||
Add discovered files to `affected_components` (max 3 files).
|
||||
|
||||
**Note**: Skip this for GitHub issues (already have context) and vague inputs (needs clarification first).
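
A small sketch of this step in the skill's `Bash(...)` style, assuming an `extractKeywords` helper like the one defined in the companion phase document:

```javascript
// Only for medium clarity (score 1-2) when no affected components were parsed.
if (clarityScore >= 1 && clarityScore <= 2 && !issueData.affected_components?.length) {
  const keyword = extractKeywords(issueData.context)[0];          // first meaningful keyword
  if (keyword) {
    const hits = Bash(`rg -l "${keyword}" --type ts | head -5`)
      .split('\n').filter(Boolean);
    issueData.affected_components = hits.slice(0, 3);             // keep at most 3 files
  }
}
```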
|
||||
|
||||
### Phase 4: Conditional Clarification (Skip if Auto Mode)
|
||||
|
||||
**Only ask if**: clarity < 2 AND NOT in auto mode (skipQuestions = false)
|
||||
|
||||
If auto mode (`--yes`/`-y`), proceed directly to creation with inferred details.
|
||||
|
||||
Otherwise, present minimal clarification:
|
||||
|
||||
```
|
||||
Input unclear. Please describe:
|
||||
- What is the issue about?
|
||||
- Where does it occur?
|
||||
- What is the expected behavior?
|
||||
```
|
||||
|
||||
Wait for user response, then update issue data.
|
||||
|
||||
### Phase 5: GitHub Publishing Decision (Skip if Already GitHub)
|
||||
|
||||
For non-GitHub sources, determine if user wants to publish to GitHub:
|
||||
|
||||
For non-GitHub sources AND NOT auto mode, ask:

```
|
||||
Would you like to publish this issue to GitHub?
|
||||
1. Yes, publish to GitHub (create issue and link it)
|
||||
2. No, keep local only (store without GitHub sync)
|
||||
```
|
||||
|
||||
In auto mode: Default to NO (keep local only, unless explicitly requested with --publish flag).
|
||||
|
||||
### Phase 6: Create Issue
|
||||
|
||||
**Create via CLI:**
|
||||
|
||||
```bash
|
||||
# Build issue JSON
|
||||
ISSUE_JSON='{"title":"...","context":"...","priority":3,"source":"text"}'
|
||||
|
||||
# Create issue (auto-generates ID)
|
||||
echo "${ISSUE_JSON}" | ccw issue create
|
||||
```
|
||||
|
||||
**If publishing to GitHub:**
|
||||
|
||||
```bash
|
||||
# Create on GitHub first
|
||||
GH_URL=$(gh issue create --title "..." --body "..." | grep -oE 'https://github.com/[^ ]+')
|
||||
GH_NUMBER=$(echo $GH_URL | grep -oE '/issues/([0-9]+)$' | grep -oE '[0-9]+')
|
||||
|
||||
# Update local issue with binding
|
||||
ccw issue update ${ISSUE_ID} --github-url "${GH_URL}" --github-number ${GH_NUMBER}
|
||||
```
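If this binding is driven from the JavaScript orchestration layer instead of grep, the issue number can be derived from the returned URL the same way (a small sketch, not part of the command above):

```javascript
// Hypothetical: extract the GitHub issue number from the URL printed by `gh issue create`.
function githubIssueNumber(url) {
  const match = url.trim().match(/\/issues\/(\d+)$/);
  return match ? Number(match[1]) : null;
}
```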
|
||||
|
||||
### Phase 7: Output Result
|
||||
|
||||
```markdown
|
||||
## Issue Created
|
||||
|
||||
**ID**: ISS-20251229-001
|
||||
**Title**: Login fails with special chars
|
||||
**Source**: text
|
||||
**Priority**: 2 (High)
|
||||
|
||||
**Context**:
|
||||
500 error when password contains quotes
|
||||
|
||||
**Affected Components**:
|
||||
- src/auth/login.ts
|
||||
- src/utils/validation.ts
|
||||
|
||||
**GitHub**: Not published (local only)
|
||||
|
||||
**Next Step**: `/issue:plan ISS-20251229-001`
|
||||
```
|
||||
|
||||
## Quality Checklist
|
||||
|
||||
Before completing, verify:
|
||||
|
||||
- [ ] Issue ID generated correctly (GH-xxx or ISS-YYYYMMDD-HHMMSS)
|
||||
- [ ] Title extracted (max 60 chars)
|
||||
- [ ] Context captured (problem description)
|
||||
- [ ] Priority assigned (1-5)
|
||||
- [ ] Status set to `registered`
|
||||
- [ ] Created via `ccw issue create` CLI command
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Situation | Action |
|
||||
|-----------|--------|
|
||||
| GitHub URL not accessible | Report error, suggest text input |
|
||||
| gh CLI not available | Fall back to text-based creation |
|
||||
| Empty input | Prompt for description |
|
||||
| Very vague input | Ask clarifying questions |
|
||||
| Issue already exists | Report duplicate, show existing |
|
||||
|
||||
|
||||
## Start Execution
|
||||
|
||||
### Parameter Parsing (Phase 0)
|
||||
|
||||
```bash
|
||||
# Codex passes full input as $1
INPUT="$1"

# Flag defaults
AUTO_YES=false
PRIORITY=3
LABELS=""
MAIN_INPUT=""

# Split the input into words; flags may appear before or after the description
read -r -a WORDS <<< "$INPUT"

i=0
while (( i < ${#WORDS[@]} )); do
  case "${WORDS[$i]}" in
    -y|--yes)
      AUTO_YES=true
      ;;
    --priority)
      i=$((i + 1))
      PRIORITY="${WORDS[$i]}"
      ;;
    --labels)
      i=$((i + 1))
      LABELS="${WORDS[$i]}"
      ;;
    *)
      # Anything that is not a flag belongs to the main input (GitHub URL or description)
      MAIN_INPUT="${MAIN_INPUT}${WORDS[$i]} "
      ;;
  esac
  i=$((i + 1))
done

# Trim the trailing space
MAIN_INPUT="${MAIN_INPUT% }"
|
||||
```
|
||||
|
||||
### Execution Flow (All Phases)
|
||||
|
||||
```
|
||||
1. Parse Arguments (Phase 0)
|
||||
└─ Extract: AUTO_YES, PRIORITY, LABELS, MAIN_INPUT
|
||||
|
||||
2. Detect Input Type & Clarity (Phase 1)
|
||||
├─ GitHub URL/Short? → Score 3 (clear)
|
||||
├─ Structured text? → Score 2 (somewhat clear)
|
||||
├─ Long text? → Score 1 (vague)
|
||||
└─ Short text? → Score 0 (very vague)
|
||||
|
||||
3. Extract Issue Data (Phase 2)
|
||||
├─ If GitHub: gh CLI fetch + parse
|
||||
└─ If text: Parse structure + apply PRIORITY/LABELS flags
|
||||
|
||||
4. Context Hint (Phase 3, conditional)
|
||||
└─ Only for clarity 1-2 AND no components → ACE search (max 3 files)
|
||||
|
||||
5. Clarification (Phase 4, conditional)
|
||||
└─ If clarity < 2 AND NOT auto mode → Ask for details
|
||||
└─ If auto mode (AUTO_YES=true) → Skip, use inferred data
|
||||
|
||||
6. GitHub Publishing (Phase 5, conditional)
|
||||
├─ If source = github → Skip (already from GitHub)
|
||||
└─ If source != github:
|
||||
├─ If auto mode → Default NO (keep local)
|
||||
└─ If manual → Ask user preference
|
||||
|
||||
7. Create Issue (Phase 6)
|
||||
├─ Create local issue via ccw CLI
|
||||
└─ If publishToGitHub → gh issue create → link
|
||||
|
||||
8. Output Result (Phase 7)
|
||||
└─ Display: ID, title, source, GitHub status, next step
|
||||
```
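A minimal sketch of the type/clarity scoring used in step 2 (the GitHub patterns follow the formats above; the length threshold for "long text" is an assumption):

```javascript
// Hypothetical Phase 1 helper: classify input type and clarity score (0-3).
function detectClarity(input) {
  const text = input.trim();
  const isGitHub = /^https:\/\/github\.com\/.+\/issues\/\d+$/.test(text) || /^GH-\d+$/i.test(text);
  if (isGitHub) return { type: 'github', score: 3 };                                  // clear
  if (/expected:|actual:|affects?:/i.test(text)) return { type: 'text', score: 2 };   // structured text
  if (text.length > 80) return { type: 'text', score: 1 };                            // long but unstructured → vague
  return { type: 'text', score: 0 };                                                  // short text → very vague
}
```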
|
||||
|
||||
## Quick Examples
|
||||
|
||||
```bash
|
||||
# Auto mode - GitHub issue (no questions)
|
||||
/prompts:issue-new -y https://github.com/org/repo/issues/42
|
||||
|
||||
# Standard mode - text with priority
|
||||
/prompts:issue-new "Database connection timeout" --priority 1
|
||||
|
||||
# Auto mode - text with priority and labels
|
||||
/prompts:issue-new --yes "Add caching layer" --priority 2 --labels "enhancement,performance"
|
||||
|
||||
# GitHub short format
|
||||
/prompts:issue-new GH-123
|
||||
|
||||
# Complex text description
|
||||
/prompts:issue-new "User login fails. Expected: redirect to dashboard. Actual: 500 error"
|
||||
```
|
||||
@@ -1,247 +0,0 @@
|
||||
---
|
||||
name: issue-plan
|
||||
description: Plan issue(s) into bound solutions using subagent pattern (explore + plan closed-loop)
|
||||
argument-hint: "<issue-id>[,<issue-id>,...] [--all-pending] [--batch-size 4]"
|
||||
---
|
||||
|
||||
# Issue Plan (Codex Version)
|
||||
|
||||
## Goal
|
||||
|
||||
Create executable solution(s) for issue(s) and bind the selected solution to each issue using `ccw issue bind`.
|
||||
|
||||
This workflow uses **subagent pattern** for parallel batch processing: spawn planning agents per batch, wait for results, handle multi-solution selection.
|
||||
|
||||
## Core Guidelines
|
||||
|
||||
**⚠️ Data Access Principle**: Issues and solutions files can grow very large. To avoid context overflow:
|
||||
|
||||
| Operation | Correct | Incorrect |
|
||||
|-----------|---------|-----------|
|
||||
| List issues (brief) | `ccw issue list --status pending --brief` | Read issues.jsonl |
|
||||
| Read issue details | `ccw issue status <id> --json` | Read issues.jsonl |
|
||||
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
|
||||
| Bind solution | `ccw issue bind <id> <sol-id>` | Direct file edit |
|
||||
|
||||
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `solutions/*.jsonl` directly.
|
||||
|
||||
## Inputs
|
||||
|
||||
- **Explicit issues**: comma-separated IDs, e.g. `ISS-123,ISS-124`
|
||||
- **All pending**: `--all-pending` → plan all issues in `registered` status
|
||||
- **Batch size**: `--batch-size N` (default `4`) → max issues per subagent batch
|
||||
|
||||
## Output Requirements
|
||||
|
||||
For each issue:
|
||||
- Register at least one solution and bind one solution to the issue
|
||||
- Ensure tasks conform to `~/.claude/workflows/cli-templates/schemas/solution-schema.json`
|
||||
- Each task includes quantified `acceptance.criteria` and concrete `acceptance.verification`
|
||||
|
||||
Return a final summary JSON:
|
||||
```json
|
||||
{
|
||||
"bound": [{ "issue_id": "...", "solution_id": "...", "task_count": 0 }],
|
||||
"pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "task_count": 0, "description": "..." }] }],
|
||||
"conflicts": [{ "file": "...", "issues": ["..."] }]
|
||||
}
|
||||
```
|
||||
|
||||
## Workflow
|
||||
|
||||
### Step 1: Resolve Issue List
|
||||
|
||||
**If `--all-pending`:**
|
||||
```bash
|
||||
ccw issue list --status registered --json
|
||||
```
|
||||
|
||||
**Else (explicit IDs):**
|
||||
```bash
|
||||
# For each ID, ensure exists
|
||||
ccw issue init <issue-id> --title "Issue <issue-id>" 2>/dev/null || true
|
||||
ccw issue status <issue-id> --json
|
||||
```
|
||||
|
||||
### Step 2: Group Issues by Similarity
|
||||
|
||||
Group issues for batch processing (max 4 per batch):
|
||||
|
||||
```bash
|
||||
# Extract issue metadata for grouping
|
||||
ccw issue list --status registered --brief --json
|
||||
```
|
||||
|
||||
Group by:
|
||||
- Shared tags
|
||||
- Similar keywords in title
|
||||
- Related components
|
||||
|
||||
### Step 3: Spawn Planning Subagents (Parallel)
|
||||
|
||||
For each batch, spawn a planning subagent:
|
||||
|
||||
```javascript
|
||||
// Subagent message structure
|
||||
spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
4. Read schema: ~/.claude/workflows/cli-templates/schemas/solution-schema.json
|
||||
|
||||
---
|
||||
|
||||
Goal: Plan solutions for ${batch.length} issues with executable task breakdown
|
||||
|
||||
Scope:
|
||||
- CAN DO: Explore codebase, design solutions, create tasks
|
||||
- CANNOT DO: Execute solutions, modify production code
|
||||
- Directory: ${process.cwd()}
|
||||
|
||||
Context:
|
||||
- Issues: ${batch.map(i => `${i.id}: ${i.title}`).join('\n')}
|
||||
- Fetch full details: ccw issue status <id> --json
|
||||
|
||||
Deliverables:
|
||||
- For each issue: Write solution to .workflow/issues/solutions/{issue-id}.jsonl
|
||||
- Single solution → auto-bind via ccw issue bind
|
||||
- Multiple solutions → return in pending_selection
|
||||
|
||||
Quality bar:
|
||||
- Tasks have quantified acceptance.criteria
|
||||
- Each task includes test.commands
|
||||
- Solution follows schema exactly
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
**Batch execution (parallel):**
|
||||
```javascript
|
||||
// Launch all batches in parallel
|
||||
const agentIds = batches.map(batch => spawn_agent({ message: buildPrompt(batch) }))
|
||||
|
||||
// Wait for all agents to complete
|
||||
const results = wait({ ids: agentIds, timeout_ms: 900000 }) // 15 min
|
||||
|
||||
// Collect results
|
||||
const allBound = []
|
||||
const allPendingSelection = []
|
||||
const allConflicts = []
|
||||
|
||||
for (const id of agentIds) {
|
||||
if (results.status[id].completed) {
|
||||
const result = JSON.parse(results.status[id].completed)
|
||||
allBound.push(...(result.bound || []))
|
||||
allPendingSelection.push(...(result.pending_selection || []))
|
||||
allConflicts.push(...(result.conflicts || []))
|
||||
}
|
||||
}
|
||||
|
||||
// Close all agents
|
||||
agentIds.forEach(id => close_agent({ id }))
|
||||
```
|
||||
|
||||
### Step 4: Handle Multi-Solution Selection
|
||||
|
||||
If `pending_selection` is non-empty, present options:
|
||||
|
||||
```
|
||||
Issue ISS-001 has multiple solutions:
|
||||
1. SOL-ISS-001-1: Refactor with adapter pattern (3 tasks)
|
||||
2. SOL-ISS-001-2: Direct implementation (2 tasks)
|
||||
|
||||
Select solution (1-2):
|
||||
```
|
||||
|
||||
Bind selected solution:
|
||||
```bash
|
||||
ccw issue bind ISS-001 SOL-ISS-001-1
|
||||
```
|
||||
|
||||
### Step 5: Handle Conflicts
|
||||
|
||||
If conflicts detected:
|
||||
- Low/Medium severity: Auto-resolve with recommended order
|
||||
- High severity: Present to user for decision
|
||||
|
||||
### Step 6: Update Issue Status
|
||||
|
||||
After binding, update status:
|
||||
```bash
|
||||
ccw issue update <issue-id> --status planned
|
||||
```
|
||||
|
||||
### Step 7: Output Summary
|
||||
|
||||
```markdown
|
||||
## Planning Complete
|
||||
|
||||
**Planned**: 5 issues
|
||||
**Bound Solutions**: 4
|
||||
**Pending Selection**: 1
|
||||
|
||||
### Bound Solutions
|
||||
| Issue | Solution | Tasks |
|
||||
|-------|----------|-------|
|
||||
| ISS-001 | SOL-ISS-001-1 | 3 |
|
||||
| ISS-002 | SOL-ISS-002-1 | 2 |
|
||||
|
||||
### Pending Selection
|
||||
- ISS-003: 2 solutions available (user selection required)
|
||||
|
||||
### Conflicts Detected
|
||||
- src/auth.ts touched by ISS-001, ISS-002 (resolved: sequential)
|
||||
|
||||
**Next Step**: `/issue:queue`
|
||||
```
|
||||
|
||||
## Subagent Role Reference
|
||||
|
||||
Planning subagent uses role file at: `~/.codex/agents/issue-plan-agent.md`
|
||||
|
||||
Role capabilities:
|
||||
- Codebase exploration (rg, file reading)
|
||||
- Solution design with task breakdown
|
||||
- Schema validation
|
||||
- Solution registration via CLI
|
||||
|
||||
## Quality Checklist
|
||||
|
||||
Before completing, verify:
|
||||
|
||||
- [ ] All input issues have solutions in `solutions/{issue-id}.jsonl`
|
||||
- [ ] Single solution issues are auto-bound (`bound_solution_id` set)
|
||||
- [ ] Multi-solution issues returned in `pending_selection` for user choice
|
||||
- [ ] Each solution has executable tasks with `modification_points`
|
||||
- [ ] Task acceptance criteria are quantified (not vague)
|
||||
- [ ] Conflicts detected and reported (if multiple issues touch same files)
|
||||
- [ ] Issue status updated to `planned` after binding
|
||||
- [ ] All subagents closed after completion
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Resolution |
|
||||
|-------|------------|
|
||||
| Issue not found | Auto-create via `ccw issue init` |
|
||||
| Subagent timeout | Retry with increased timeout or smaller batch |
|
||||
| No solutions generated | Display error, suggest manual planning |
|
||||
| User cancels selection | Skip issue, continue with others |
|
||||
| File conflicts | Detect and suggest resolution order |
|
||||
|
||||
## Start Execution
|
||||
|
||||
Begin by resolving issue list:
|
||||
|
||||
```bash
|
||||
# Default to all pending
|
||||
ccw issue list --status registered --brief --json
|
||||
|
||||
# Or with explicit IDs
|
||||
ccw issue status ISS-001 --json
|
||||
```
|
||||
|
||||
Then group issues and spawn planning subagents.
|
||||
@@ -1,299 +0,0 @@
|
||||
---
|
||||
name: issue-queue
|
||||
description: Form execution queue from bound solutions using subagent for conflict analysis and ordering
|
||||
argument-hint: "[--queues <n>] [--issue <id>] [--append <id>]"
|
||||
---
|
||||
|
||||
# Issue Queue (Codex Version)
|
||||
|
||||
## Goal
|
||||
|
||||
Create an ordered execution queue from all bound solutions. Uses **subagent pattern** to analyze inter-solution file conflicts, calculate semantic priorities, and assign parallel/sequential execution groups.
|
||||
|
||||
**Design Principle**: Queue items are **solutions**, not individual tasks. Each executor receives a complete solution with all its tasks.
|
||||
|
||||
## Core Guidelines
|
||||
|
||||
**⚠️ Data Access Principle**: Issues and queue files can grow very large. To avoid context overflow:
|
||||
|
||||
| Operation | Correct | Incorrect |
|
||||
|-----------|---------|-----------|
|
||||
| List issues (brief) | `ccw issue list --status planned --brief` | Read issues.jsonl |
|
||||
| List queue (brief) | `ccw issue queue --brief` | Read queues/*.json |
|
||||
| Read issue details | `ccw issue status <id> --json` | Read issues.jsonl |
|
||||
| Get next item | `ccw issue next --json` | Read queues/*.json |
|
||||
| Sync from queue | `ccw issue update --from-queue` | Direct file edit |
|
||||
|
||||
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `queues/*.json` directly.
|
||||
|
||||
## Inputs
|
||||
|
||||
- **All planned**: Default behavior → queue all issues with `planned` status and bound solutions
|
||||
- **Multiple queues**: `--queues <n>` → create N parallel queues
|
||||
- **Specific issue**: `--issue <id>` → queue only that issue's solution
|
||||
- **Append mode**: `--append <id>` → append issue to active queue (don't create new)
|
||||
|
||||
## Output Requirements
|
||||
|
||||
**Generate Files (EXACTLY 2):**
|
||||
1. `.workflow/issues/queues/{queue-id}.json` - Full queue with solutions, conflicts, groups
|
||||
2. `.workflow/issues/queues/index.json` - Update with new queue entry
|
||||
|
||||
**Return Summary:**
|
||||
```json
|
||||
{
|
||||
"queue_id": "QUE-YYYYMMDD-HHMMSS",
|
||||
"total_solutions": 3,
|
||||
"total_tasks": 12,
|
||||
"execution_groups": [{ "id": "P1", "type": "parallel", "count": 2 }],
|
||||
"conflicts_resolved": 1,
|
||||
"issues_queued": ["ISS-xxx", "ISS-yyy"]
|
||||
}
|
||||
```
|
||||
|
||||
## Workflow
|
||||
|
||||
### Step 1: Generate Queue ID and Load Solutions
|
||||
|
||||
```bash
|
||||
# Generate queue ID
|
||||
QUEUE_ID="QUE-$(date -u +%Y%m%d-%H%M%S)"
|
||||
|
||||
# Load planned issues with bound solutions
|
||||
ccw issue list --status planned --json
|
||||
```
|
||||
|
||||
For each issue, extract:
|
||||
- `id`, `bound_solution_id`, `priority`
|
||||
- Read solution from `.workflow/issues/solutions/{issue-id}.jsonl`
|
||||
- Collect `files_touched` from all tasks' `modification_points.file`
|
||||
|
||||
Build solution list:
|
||||
```json
|
||||
[
|
||||
{
|
||||
"issue_id": "ISS-xxx",
|
||||
"solution_id": "SOL-xxx",
|
||||
"task_count": 3,
|
||||
"files_touched": ["src/auth.ts", "src/utils.ts"],
|
||||
"priority": "medium"
|
||||
}
|
||||
]
|
||||
```
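A minimal sketch of building that list, assuming each line of `solutions/{issue-id}.jsonl` is one solution object whose tasks carry `modification_points` entries with a `file` field (names as described above):

```javascript
// Hypothetical builder for one entry of the solution list handed to the queue agent.
const fs = require('node:fs');

function buildSolutionEntry(issue) {
  const path = `.workflow/issues/solutions/${issue.id}.jsonl`;
  const solutions = fs.readFileSync(path, 'utf8').split('\n').filter(Boolean).map(l => JSON.parse(l));
  const bound = solutions.find(s => s.id === issue.bound_solution_id) || solutions[0];

  const files = new Set();
  for (const task of bound.tasks || []) {
    for (const mp of task.modification_points || []) {
      if (mp.file) files.add(mp.file);
    }
  }

  return {
    issue_id: issue.id,
    solution_id: bound.id,
    task_count: (bound.tasks || []).length,
    files_touched: [...files],
    priority: issue.priority
  };
}
```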
|
||||
|
||||
### Step 2: Spawn Queue Agent for Conflict Analysis
|
||||
|
||||
Spawn subagent to analyze conflicts and order solutions:
|
||||
|
||||
```javascript
|
||||
const agentId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/issue-queue-agent.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
---
|
||||
|
||||
Goal: Order ${solutions.length} solutions into execution queue with conflict resolution
|
||||
|
||||
Scope:
|
||||
- CAN DO: Analyze file conflicts, calculate priorities, assign groups
|
||||
- CANNOT DO: Execute solutions, modify code
|
||||
- Queue ID: ${QUEUE_ID}
|
||||
|
||||
Context:
|
||||
- Solutions: ${JSON.stringify(solutions, null, 2)}
|
||||
- Project Root: ${process.cwd()}
|
||||
|
||||
Deliverables:
|
||||
1. Write queue JSON to: .workflow/issues/queues/${QUEUE_ID}.json
|
||||
2. Update index: .workflow/issues/queues/index.json
|
||||
3. Return summary JSON
|
||||
|
||||
Quality bar:
|
||||
- No circular dependencies in DAG
|
||||
- Parallel groups have NO file overlaps
|
||||
- Semantic priority calculated (0.0-1.0)
|
||||
- All conflicts resolved with rationale
|
||||
`
|
||||
})
|
||||
|
||||
// Wait for agent completion
|
||||
const result = wait({ ids: [agentId], timeout_ms: 600000 })
|
||||
|
||||
// Parse result
|
||||
const summary = JSON.parse(result.status[agentId].completed)
|
||||
|
||||
// Check for clarifications
|
||||
if (summary.clarifications?.length > 0) {
|
||||
// Handle high-severity conflicts requiring user input
|
||||
for (const clarification of summary.clarifications) {
|
||||
console.log(`Conflict: ${clarification.question}`)
|
||||
console.log(`Options: ${clarification.options.join(', ')}`)
|
||||
// Get user input and send back
|
||||
send_input({
|
||||
id: agentId,
|
||||
message: `Conflict ${clarification.conflict_id} resolved: ${userChoice}`
|
||||
})
|
||||
wait({ ids: [agentId], timeout_ms: 300000 })
|
||||
}
|
||||
}
|
||||
|
||||
// Close agent
|
||||
close_agent({ id: agentId })
|
||||
```
|
||||
|
||||
### Step 3: Multi-Queue Support (if --queues > 1)
|
||||
|
||||
When creating multiple parallel queues:
|
||||
|
||||
1. **Partition solutions** to minimize cross-queue file conflicts
|
||||
2. **Spawn N agents in parallel** (one per queue)
|
||||
3. **Wait for all agents** with batch wait
|
||||
|
||||
```javascript
|
||||
// Partition solutions by file overlap
|
||||
const partitions = partitionSolutions(solutions, numQueues)
|
||||
|
||||
// Spawn agents in parallel
|
||||
const agentIds = partitions.map((partition, i) =>
|
||||
spawn_agent({
|
||||
message: buildQueuePrompt(partition, `${QUEUE_ID}-${i+1}`, i+1, numQueues)
|
||||
})
|
||||
)
|
||||
|
||||
// Batch wait for all agents
|
||||
const results = wait({ ids: agentIds, timeout_ms: 600000 })
|
||||
|
||||
// Collect clarifications from all agents
|
||||
const allClarifications = agentIds.flatMap((id, i) =>
|
||||
(results.status[id].clarifications || []).map(c => ({ ...c, queue_id: `${QUEUE_ID}-${i+1}`, agent_id: id }))
|
||||
)
|
||||
|
||||
// Handle clarifications, then close all agents
|
||||
agentIds.forEach(id => close_agent({ id }))
|
||||
```
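`partitionSolutions` is referenced above but not defined in this document; a greedy sketch that keeps solutions sharing files in the same queue (assumes each solution carries a `files_touched` array):

```javascript
// Hypothetical partitioner: minimize cross-queue file overlap by co-locating conflicting solutions.
function partitionSolutions(solutions, numQueues) {
  const queues = Array.from({ length: numQueues }, () => ({ items: [], files: new Set() }));

  // Seed with the solutions touching the most files so they anchor queues early
  const ordered = [...solutions].sort((a, b) => b.files_touched.length - a.files_touched.length);

  for (const sol of ordered) {
    let best = queues[0];
    let bestScore = -Infinity;
    for (const q of queues) {
      const overlap = sol.files_touched.filter(f => q.files.has(f)).length;
      const score = overlap * 1000 - q.items.length; // prefer overlap, then balance queue size
      if (score > bestScore) { bestScore = score; best = q; }
    }
    best.items.push(sol);
    sol.files_touched.forEach(f => best.files.add(f));
  }

  return queues.map(q => q.items);
}
```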
|
||||
|
||||
### Step 4: Update Issue Statuses
|
||||
|
||||
**MUST use CLI command:**
|
||||
|
||||
```bash
|
||||
# Batch update from queue (recommended)
|
||||
ccw issue update --from-queue ${QUEUE_ID}
|
||||
|
||||
# Or individual update
|
||||
ccw issue update <issue-id> --status queued
|
||||
```
|
||||
|
||||
### Step 5: Active Queue Check
|
||||
|
||||
```bash
|
||||
ccw issue queue list --brief
|
||||
```
|
||||
|
||||
**Decision:**
|
||||
- If no active queue: `ccw issue queue switch ${QUEUE_ID}`
|
||||
- If active queue exists: Present options to user
|
||||
|
||||
```
|
||||
Active queue exists. Choose action:
|
||||
1. Merge into existing queue
|
||||
2. Use new queue (keep existing in history)
|
||||
3. Cancel (delete new queue)
|
||||
|
||||
Select (1-3):
|
||||
```
|
||||
|
||||
### Step 6: Output Summary
|
||||
|
||||
```markdown
|
||||
## Queue Formed: ${QUEUE_ID}
|
||||
|
||||
**Solutions**: 5
|
||||
**Tasks**: 18
|
||||
**Execution Groups**: 3
|
||||
|
||||
### Execution Order
|
||||
| # | Item | Issue | Tasks | Group | Files |
|
||||
|---|------|-------|-------|-------|-------|
|
||||
| 1 | S-1 | ISS-001 | 3 | P1 | src/auth.ts |
|
||||
| 2 | S-2 | ISS-002 | 2 | P1 | src/api.ts |
|
||||
| 3 | S-3 | ISS-003 | 4 | S2 | src/auth.ts |
|
||||
|
||||
### Conflicts Resolved
|
||||
- src/auth.ts: S-1 → S-3 (sequential, S-1 creates module)
|
||||
|
||||
**Next Step**: `/issue:execute --queue ${QUEUE_ID}`
|
||||
```
|
||||
|
||||
## Subagent Role Reference
|
||||
|
||||
Queue agent uses role file at: `~/.codex/agents/issue-queue-agent.md`
|
||||
|
||||
Role capabilities:
|
||||
- File conflict detection (5 types)
|
||||
- Dependency DAG construction
|
||||
- Semantic priority calculation
|
||||
- Execution group assignment
|
||||
|
||||
## Queue File Schema
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "QUE-20251228-120000",
|
||||
"status": "active",
|
||||
"issue_ids": ["ISS-001", "ISS-002"],
|
||||
"solutions": [
|
||||
{
|
||||
"item_id": "S-1",
|
||||
"issue_id": "ISS-001",
|
||||
"solution_id": "SOL-ISS-001-1",
|
||||
"status": "pending",
|
||||
"execution_order": 1,
|
||||
"execution_group": "P1",
|
||||
"depends_on": [],
|
||||
"semantic_priority": 0.8,
|
||||
"files_touched": ["src/auth.ts"],
|
||||
"task_count": 3
|
||||
}
|
||||
],
|
||||
"conflicts": [...],
|
||||
"execution_groups": [...]
|
||||
}
|
||||
```
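The quality checklist below requires an acyclic dependency graph; a minimal cycle check over `depends_on` edges (field names from the schema above):

```javascript
// Hypothetical validation: DFS coloring to detect circular dependencies among queue items.
function hasCycle(items) {
  const byId = new Map(items.map(i => [i.item_id, i]));
  const state = new Map(); // unset = unvisited, 1 = in progress, 2 = done

  const visit = (id) => {
    if (state.get(id) === 2) return false;
    if (state.get(id) === 1) return true; // back edge found → cycle
    state.set(id, 1);
    for (const dep of byId.get(id)?.depends_on || []) {
      if (visit(dep)) return true;
    }
    state.set(id, 2);
    return false;
  };

  return items.some(i => visit(i.item_id));
}
```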
|
||||
|
||||
## Quality Checklist
|
||||
|
||||
Before completing, verify:
|
||||
|
||||
- [ ] Exactly 2 files generated: queue JSON + index update
|
||||
- [ ] Queue has valid DAG (no circular dependencies)
|
||||
- [ ] All file conflicts resolved with rationale
|
||||
- [ ] Semantic priority calculated for each solution (0.0-1.0)
|
||||
- [ ] Execution groups assigned (P* for parallel, S* for sequential)
|
||||
- [ ] Issue statuses updated to `queued`
|
||||
- [ ] All subagents closed after completion
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Situation | Action |
|
||||
|-----------|--------|
|
||||
| No planned issues | Return empty queue summary |
|
||||
| Circular dependency detected | Abort, report cycle details |
|
||||
| Missing solution file | Skip issue, log warning |
|
||||
| Agent timeout | Retry with increased timeout |
|
||||
| Clarification rejected | Abort queue formation |
|
||||
|
||||
## Start Execution
|
||||
|
||||
Begin by listing planned issues:
|
||||
|
||||
```bash
|
||||
ccw issue list --status planned --json
|
||||
```
|
||||
|
||||
Then extract solution data and spawn queue agent.
|
||||
.codex/skills/issue-resolve/SKILL.md (new file, 343 lines)
@@ -0,0 +1,343 @@
|
||||
---
|
||||
name: issue-resolve
|
||||
description: Unified issue resolution pipeline with source selection. Plan issues via AI exploration, convert from artifacts, import from brainstorm sessions, or form execution queues. Triggers on "issue:plan", "issue:queue", "issue:convert-to-plan", "issue:from-brainstorm", "resolve issue", "plan issue", "queue issues", "convert plan to issue".
|
||||
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
|
||||
---
|
||||
|
||||
# Issue Resolve (Codex Version)
|
||||
|
||||
Unified issue resolution pipeline that orchestrates solution creation from multiple sources and queue formation for execution.
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Issue Resolve Orchestrator (SKILL.md) │
|
||||
│ → Source selection → Route to phase → Execute → Summary │
|
||||
└───────────────┬─────────────────────────────────────────────────┘
|
||||
│
|
||||
├─ AskUserQuestion: Select issue source
|
||||
│
|
||||
┌───────────┼───────────┬───────────┬───────────┐
|
||||
↓ ↓ ↓ ↓ │
|
||||
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
|
||||
│ Phase 1 │ │ Phase 2 │ │ Phase 3 │ │ Phase 4 │ │
|
||||
│ Explore │ │ Convert │ │ From │ │ Form │ │
|
||||
│ & Plan │ │Artifact │ │Brainstorm│ │ Queue │ │
|
||||
└─────────┘ └─────────┘ └─────────┘ └─────────┘ │
|
||||
↓ ↓ ↓ ↓ │
|
||||
Solutions Solutions Issue+Sol Exec Queue │
|
||||
(bound) (bound) (bound) (ordered) │
|
||||
│
|
||||
┌────────────────────────────────┘
|
||||
↓
|
||||
/issue:execute
|
||||
```
|
||||
|
||||
## Key Design Principles
|
||||
|
||||
1. **Source-Driven Routing**: AskUserQuestion selects workflow, then load single phase
|
||||
2. **Progressive Phase Loading**: Only read the selected phase document
|
||||
3. **CLI-First Data Access**: All issue/solution CRUD via `ccw issue` CLI commands
|
||||
4. **Auto Mode Support**: `-y` flag skips source selection (defaults to Explore & Plan)
|
||||
|
||||
## Subagent API Reference
|
||||
|
||||
### spawn_agent
|
||||
Create a new subagent with task assignment.
|
||||
|
||||
```javascript
|
||||
const agentId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
## TASK CONTEXT
|
||||
${taskContext}
|
||||
|
||||
## DELIVERABLES
|
||||
${deliverables}
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
### wait
|
||||
Get results from subagent (only way to retrieve results).
|
||||
|
||||
```javascript
|
||||
const result = wait({
|
||||
ids: [agentId],
|
||||
timeout_ms: 600000 // 10 minutes
|
||||
})
|
||||
|
||||
if (result.timed_out) {
|
||||
// Handle timeout - can continue waiting or send_input to prompt completion
|
||||
}
|
||||
```
|
||||
|
||||
### send_input
|
||||
Continue interaction with active subagent (for clarification or follow-up).
|
||||
|
||||
```javascript
|
||||
send_input({
|
||||
id: agentId,
|
||||
message: `
|
||||
## CLARIFICATION ANSWERS
|
||||
${answers}
|
||||
|
||||
## NEXT STEP
|
||||
Continue with updated analysis.
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
### close_agent
|
||||
Clean up subagent resources (irreversible).
|
||||
|
||||
```javascript
|
||||
close_agent({ id: agentId })
|
||||
```
|
||||
|
||||
## Auto Mode
|
||||
|
||||
When `--yes` or `-y`: Skip source selection, use Explore & Plan for issue IDs, or auto-detect source type for paths.
|
||||
|
||||
## Usage
|
||||
|
||||
```
|
||||
codex -p "@.codex/prompts/issue-resolve.md <task description or issue IDs>"
|
||||
codex -p "@.codex/prompts/issue-resolve.md [FLAGS] \"<input>\""
|
||||
|
||||
# Flags
|
||||
-y, --yes Skip all confirmations (auto mode)
|
||||
--source <type> Pre-select source: plan|convert|brainstorm|queue
|
||||
--batch-size <n> Max issues per agent batch (plan mode, default: 3)
|
||||
--issue <id> Bind to existing issue (convert mode)
|
||||
--supplement Add tasks to existing solution (convert mode)
|
||||
--queues <n> Number of parallel queues (queue mode, default: 1)
|
||||
|
||||
# Examples
|
||||
codex -p "@.codex/prompts/issue-resolve.md GH-123,GH-124" # Explore & plan issues
|
||||
codex -p "@.codex/prompts/issue-resolve.md --source plan --all-pending" # Plan all pending issues
|
||||
codex -p "@.codex/prompts/issue-resolve.md --source convert \".workflow/.lite-plan/my-plan\"" # Convert artifact
|
||||
codex -p "@.codex/prompts/issue-resolve.md --source brainstorm SESSION=\"BS-rate-limiting\"" # From brainstorm
|
||||
codex -p "@.codex/prompts/issue-resolve.md --source queue" # Form execution queue
|
||||
codex -p "@.codex/prompts/issue-resolve.md -y GH-123" # Auto mode, plan single issue
|
||||
```
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```
|
||||
Input Parsing:
|
||||
└─ Parse flags (--source, -y, --issue, etc.) and positional args
|
||||
|
||||
Source Selection:
|
||||
├─ --source flag provided → Route directly
|
||||
├─ Auto-detect from input:
|
||||
│ ├─ Issue IDs (GH-xxx, ISS-xxx) → Explore & Plan
|
||||
│ ├─ SESSION="..." → From Brainstorm
|
||||
│ ├─ File/folder path → Convert from Artifact
|
||||
│ └─ No input or --all-pending → Explore & Plan (all pending)
|
||||
└─ Otherwise → AskUserQuestion to select source
|
||||
|
||||
Phase Execution (load one phase):
|
||||
├─ Phase 1: Explore & Plan → phases/01-issue-plan.md
|
||||
├─ Phase 2: Convert Artifact → phases/02-convert-to-plan.md
|
||||
├─ Phase 3: From Brainstorm → phases/03-from-brainstorm.md
|
||||
└─ Phase 4: Form Queue → phases/04-issue-queue.md
|
||||
|
||||
Post-Phase:
|
||||
└─ Summary + Next steps recommendation
|
||||
```
|
||||
|
||||
### Phase Reference Documents
|
||||
|
||||
| Phase | Document | Load When | Purpose |
|
||||
|-------|----------|-----------|---------|
|
||||
| Phase 1 | [phases/01-issue-plan.md](phases/01-issue-plan.md) | Source = Explore & Plan | Batch plan issues via issue-plan-agent |
|
||||
| Phase 2 | [phases/02-convert-to-plan.md](phases/02-convert-to-plan.md) | Source = Convert Artifact | Convert lite-plan/session/markdown to solutions |
|
||||
| Phase 3 | [phases/03-from-brainstorm.md](phases/03-from-brainstorm.md) | Source = From Brainstorm | Convert brainstorm ideas to issue + solution |
|
||||
| Phase 4 | [phases/04-issue-queue.md](phases/04-issue-queue.md) | Source = Form Queue | Order bound solutions into execution queue |
|
||||
|
||||
## Core Rules
|
||||
|
||||
1. **Source Selection First**: Always determine source before loading any phase
|
||||
2. **Single Phase Load**: Only read the selected phase document, never load all phases
|
||||
3. **CLI Data Access**: Use `ccw issue` CLI for all issue/solution operations, NEVER read files directly
|
||||
4. **Content Preservation**: Each phase contains complete execution logic from original commands
|
||||
5. **Auto-Detect Input**: Smart input parsing reduces need for explicit --source flag
|
||||
6. **DO NOT STOP**: Continuous multi-phase workflow. After completing each phase, immediately proceed to next
|
||||
7. **Explicit Lifecycle**: Always close_agent after wait completes to free resources
|
||||
|
||||
## Input Processing
|
||||
|
||||
### Auto-Detection Logic
|
||||
|
||||
```javascript
|
||||
function detectSource(input, flags) {
|
||||
// 1. Explicit --source flag
|
||||
if (flags.source) return flags.source;
|
||||
|
||||
// 2. Auto-detect from input content
|
||||
const trimmed = input.trim();
|
||||
|
||||
// Issue IDs pattern (GH-xxx, ISS-xxx, comma-separated)
|
||||
if (trimmed.match(/^[A-Z]+-\d+/i) || trimmed.includes(',')) {
|
||||
return 'plan';
|
||||
}
|
||||
|
||||
// --all-pending or empty input → plan all pending
|
||||
if (flags.allPending || trimmed === '') {
|
||||
return 'plan';
|
||||
}
|
||||
|
||||
// SESSION="..." pattern → brainstorm
|
||||
if (trimmed.includes('SESSION=')) {
|
||||
return 'brainstorm';
|
||||
}
|
||||
|
||||
// File/folder path → convert
|
||||
if (trimmed.match(/\.(md|json)$/) || trimmed.includes('.workflow/')) {
|
||||
return 'convert';
|
||||
}
|
||||
|
||||
// Cannot auto-detect → ask user
|
||||
return null;
|
||||
}
|
||||
```
|
||||
|
||||
### Source Selection (AskUserQuestion)
|
||||
|
||||
```javascript
|
||||
// When source cannot be auto-detected
|
||||
const answer = AskUserQuestion({
|
||||
questions: [{
|
||||
question: "How would you like to create/manage issue solutions?",
|
||||
header: "Source",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{
|
||||
label: "Explore & Plan (Recommended)",
|
||||
description: "AI explores codebase and generates solutions for issues"
|
||||
},
|
||||
{
|
||||
label: "Convert from Artifact",
|
||||
description: "Convert existing lite-plan, workflow session, or markdown to solution"
|
||||
},
|
||||
{
|
||||
label: "From Brainstorm",
|
||||
description: "Convert brainstorm session ideas into issue with solution"
|
||||
},
|
||||
{
|
||||
label: "Form Execution Queue",
|
||||
description: "Order bound solutions into execution queue for /issue:execute"
|
||||
}
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
// Route based on the selected label (labels may carry suffixes like "(Recommended)", so match by prefix)
const sourceMap = {
  "Explore & Plan": "plan",
  "Convert from Artifact": "convert",
  "From Brainstorm": "brainstorm",
  "Form Execution Queue": "queue"
};
const selectedLabel = answer[Object.keys(answer)[0]];
const source = sourceMap[Object.keys(sourceMap).find(k => selectedLabel.startsWith(k))];
|
||||
```
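Routing then amounts to reading exactly one phase document (single phase load); the paths come from the Phase Reference table above:

```javascript
// Load only the selected phase document, then execute the logic it describes.
const phaseDocs = {
  plan: 'phases/01-issue-plan.md',
  convert: 'phases/02-convert-to-plan.md',
  brainstorm: 'phases/03-from-brainstorm.md',
  queue: 'phases/04-issue-queue.md'
};
Read(phaseDocs[source]);
```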
|
||||
|
||||
## Data Flow
|
||||
|
||||
```
|
||||
User Input (issue IDs / artifact path / session ID / flags)
|
||||
↓
|
||||
[Parse Flags + Auto-Detect Source]
|
||||
↓
|
||||
[Source Selection] ← AskUserQuestion (if needed)
|
||||
↓
|
||||
[Read Selected Phase Document]
|
||||
↓
|
||||
[Execute Phase Logic]
|
||||
↓
|
||||
[Summary + Next Steps]
|
||||
├─ After Plan/Convert/Brainstorm → Suggest /issue:queue or /issue:execute
|
||||
└─ After Queue → Suggest /issue:execute
|
||||
```
|
||||
|
||||
## Task Tracking Pattern
|
||||
|
||||
```javascript
|
||||
// Initialize plan with phase steps
|
||||
update_plan({
|
||||
explanation: "Issue resolve workflow started",
|
||||
plan: [
|
||||
{ step: "Select issue source", status: "completed" },
|
||||
{ step: "Execute: [selected phase name]", status: "in_progress" },
|
||||
{ step: "Summary & next steps", status: "pending" }
|
||||
]
|
||||
})
|
||||
```
|
||||
|
||||
Phase-specific sub-tasks are attached when the phase executes (see individual phase docs for details).
|
||||
|
||||
## Core Guidelines
|
||||
|
||||
**Data Access Principle**: Issues and solutions files can grow very large. To avoid context overflow:
|
||||
|
||||
| Operation | Correct | Incorrect |
|
||||
|-----------|---------|-----------|
|
||||
| List issues (brief) | `ccw issue list --status pending --brief` | `Read('issues.jsonl')` |
|
||||
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
|
||||
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
|
||||
| Bind solution | `ccw issue bind <id> <sol-id>` | Direct file edit |
|
||||
| Batch solutions | `ccw issue solutions --status planned --brief` | Loop individual queries |
|
||||
|
||||
**Output Options**:
|
||||
- `--brief`: JSON with minimal fields (orchestrator use)
|
||||
- `--json`: Full JSON (agent use only)
|
||||
|
||||
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `solutions/*.jsonl` directly.
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Resolution |
|
||||
|-------|------------|
|
||||
| No source detected | Show AskUserQuestion with all 4 options |
|
||||
| Invalid source type | Show available sources, re-prompt |
|
||||
| Phase execution fails | Report error, suggest manual intervention |
|
||||
| No pending issues (plan) | Suggest creating issues first |
|
||||
| No bound solutions (queue) | Suggest running plan/convert/brainstorm first |
|
||||
|
||||
## Post-Phase Next Steps
|
||||
|
||||
After successful phase execution, recommend next action:
|
||||
|
||||
```javascript
|
||||
// After Plan/Convert/Brainstorm (solutions created)
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Solutions created. What next?",
|
||||
header: "Next",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Form Queue", description: "Order solutions for execution (/issue:queue)" },
|
||||
{ label: "Plan More Issues", description: "Continue creating solutions" },
|
||||
{ label: "View Issues", description: "Review issue details" },
|
||||
{ label: "Done", description: "Exit workflow" }
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
// After Queue (queue formed)
|
||||
// → Suggest /issue:execute directly
|
||||
```
|
||||
|
||||
## Related Commands
|
||||
|
||||
- `issue-manage` - Interactive issue CRUD operations
|
||||
- `/issue:execute` - Execute queue with DAG-based parallel orchestration
|
||||
- `ccw issue list` - List all issues
|
||||
- `ccw issue status <id>` - View issue details
|
||||
.codex/skills/issue-resolve/phases/01-issue-plan.md (new file, 318 lines)
@@ -0,0 +1,318 @@
|
||||
# Phase 1: Explore & Plan
|
||||
|
||||
## Overview
|
||||
|
||||
Batch plan issue resolution using **issue-plan-agent** that combines exploration and planning into a single closed-loop workflow.
|
||||
|
||||
**Behavior:**
|
||||
- Single solution per issue → auto-bind
|
||||
- Multiple solutions → return for user selection
|
||||
- Agent handles file generation
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Issue IDs provided (comma-separated) or `--all-pending` flag
|
||||
- `ccw issue` CLI available
|
||||
- `.workflow/issues/` directory exists or will be created
|
||||
|
||||
## Auto Mode
|
||||
|
||||
When `--yes` or `-y`: Auto-bind solutions without confirmation, use recommended settings.
|
||||
|
||||
## Core Guidelines
|
||||
|
||||
**Data Access Principle**: Issues and solutions files can grow very large. To avoid context overflow:
|
||||
|
||||
| Operation | Correct | Incorrect |
|
||||
|-----------|---------|-----------|
|
||||
| List issues (brief) | `ccw issue list --status pending --brief` | `Read('issues.jsonl')` |
|
||||
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
|
||||
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
|
||||
| Bind solution | `ccw issue bind <id> <sol-id>` | Direct file edit |
|
||||
|
||||
**Output Options**:
|
||||
- `--brief`: JSON with minimal fields (id, title, status, priority, tags)
|
||||
- `--json`: Full JSON (agent use only)
|
||||
|
||||
**Orchestration vs Execution**:
|
||||
- **Command (orchestrator)**: Use `--brief` for minimal context
|
||||
- **Agent (executor)**: Fetch full details → `ccw issue status <id> --json`
|
||||
|
||||
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` or `solutions/*.jsonl` directly.
|
||||
|
||||
## Execution Steps
|
||||
|
||||
### Step 1.1: Issue Loading (Brief Info Only)
|
||||
|
||||
```javascript
|
||||
const batchSize = flags.batchSize || 3;
|
||||
let issues = []; // {id, title, tags} - brief info for grouping only
|
||||
|
||||
// Default to --all-pending if no input provided
|
||||
const useAllPending = flags.allPending || !userInput || userInput.trim() === '';
|
||||
|
||||
if (useAllPending) {
|
||||
// Get pending issues with brief metadata via CLI
|
||||
const result = Bash(`ccw issue list --status pending,registered --brief`).trim();
|
||||
const parsed = result ? JSON.parse(result) : [];
|
||||
issues = parsed.map(i => ({ id: i.id, title: i.title || '', tags: i.tags || [] }));
|
||||
|
||||
if (issues.length === 0) {
|
||||
console.log('No pending issues found.');
|
||||
return;
|
||||
}
|
||||
console.log(`Found ${issues.length} pending issues`);
|
||||
} else {
|
||||
// Parse comma-separated issue IDs, fetch brief metadata
|
||||
const ids = userInput.includes(',')
|
||||
? userInput.split(',').map(s => s.trim())
|
||||
: [userInput.trim()];
|
||||
|
||||
for (const id of ids) {
|
||||
Bash(`ccw issue init ${id} --title "Issue ${id}" 2>/dev/null || true`);
|
||||
const info = Bash(`ccw issue status ${id} --json`).trim();
|
||||
const parsed = info ? JSON.parse(info) : {};
|
||||
issues.push({ id, title: parsed.title || '', tags: parsed.tags || [] });
|
||||
}
|
||||
}
|
||||
// Note: Agent fetches full issue content via `ccw issue status <id> --json`
|
||||
|
||||
// Intelligent grouping: analyze issues by title/tags and group semantically similar ones
// Strategy: same module/component, related bugs, feature clusters
// Constraint: max batchSize issues per batch
// Minimal grouping sketch: bucket by first tag (fallback 'misc'), then chunk to batchSize
const byKey = new Map();
for (const issue of issues) {
  const key = (issue.tags[0] || 'misc').toLowerCase();
  if (!byKey.has(key)) byKey.set(key, []);
  byKey.get(key).push(issue);
}
const batches = [];
for (const group of byKey.values()) {
  for (let j = 0; j < group.length; j += batchSize) {
    batches.push(group.slice(j, j + batchSize));
  }
}
|
||||
|
||||
console.log(`Processing ${issues.length} issues in ${batches.length} batch(es)`);
|
||||
|
||||
update_plan({
|
||||
explanation: "Issue loading complete, starting batch planning",
|
||||
plan: batches.map((_, i) => ({
|
||||
step: `Plan batch ${i+1}`,
|
||||
status: 'pending'
|
||||
}))
|
||||
});
|
||||
```
|
||||
|
||||
### Step 1.2: Unified Explore + Plan (issue-plan-agent) - PARALLEL
|
||||
|
||||
```javascript
|
||||
Bash(`mkdir -p .workflow/issues/solutions`);
|
||||
const pendingSelections = []; // Collect multi-solution issues for user selection
|
||||
const agentResults = []; // Collect all agent results for conflict aggregation
|
||||
|
||||
// Build prompts for all batches
|
||||
const agentTasks = batches.map((batch, batchIndex) => {
|
||||
const issueList = batch.map(i => `- ${i.id}: ${i.title}${i.tags.length ? ` [${i.tags.join(', ')}]` : ''}`).join('\n');
|
||||
const batchIds = batch.map(i => i.id);
|
||||
|
||||
const issuePrompt = `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/issue-plan-agent.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
---
|
||||
|
||||
## Plan Issues
|
||||
|
||||
**Issues** (grouped by similarity):
|
||||
${issueList}
|
||||
|
||||
**Project Root**: ${process.cwd()}
|
||||
|
||||
### Project Context (MANDATORY)
|
||||
1. Read: .workflow/project-tech.json (technology stack, architecture)
|
||||
2. Read: .workflow/project-guidelines.json (constraints and conventions)
|
||||
|
||||
### Workflow
|
||||
1. Fetch issue details: ccw issue status <id> --json
|
||||
2. **Analyze failure history** (if issue.feedback exists):
|
||||
- Extract failure details from issue.feedback (type='failure', stage='execute')
|
||||
- Parse error_type, message, task_id, solution_id from content JSON
|
||||
- Identify failure patterns: repeated errors, root causes, blockers
|
||||
- **Constraint**: Avoid repeating failed approaches
|
||||
3. Load project context files
|
||||
4. Explore codebase (ACE semantic search)
|
||||
5. Plan solution with tasks (schema: solution-schema.json)
|
||||
- **If previous solution failed**: Reference failure analysis in solution.approach
|
||||
- Add explicit verification steps to prevent same failure mode
|
||||
6. **If github_url exists**: Add final task to comment on GitHub issue
|
||||
7. Write solution to: .workflow/issues/solutions/{issue-id}.jsonl
|
||||
8. **CRITICAL - Binding Decision**:
|
||||
- Single solution → **MUST execute**: ccw issue bind <issue-id> <solution-id>
|
||||
- Multiple solutions → Return pending_selection only (no bind)
|
||||
|
||||
### Failure-Aware Planning Rules
|
||||
- **Extract failure patterns**: Parse issue.feedback where type='failure' and stage='execute'
|
||||
- **Identify root causes**: Analyze error_type (test_failure, compilation, timeout, etc.)
|
||||
- **Design alternative approach**: Create solution that addresses root cause
|
||||
- **Add prevention steps**: Include explicit verification to catch same error earlier
|
||||
- **Document lessons**: Reference previous failures in solution.approach
|
||||
|
||||
### Rules
|
||||
- Solution ID format: SOL-{issue-id}-{uid} (uid: 4 random alphanumeric chars, e.g., a7x9)
|
||||
- Single solution per issue → auto-bind via ccw issue bind
|
||||
- Multiple solutions → register only, return pending_selection
|
||||
- Tasks must have quantified acceptance.criteria
|
||||
|
||||
### Return Summary
|
||||
{"bound":[{"issue_id":"...","solution_id":"...","task_count":N}],"pending_selection":[{"issue_id":"...","solutions":[{"id":"...","description":"...","task_count":N}]}]}
|
||||
`;
|
||||
|
||||
return { batchIndex, batchIds, issuePrompt, batch };
|
||||
});
|
||||
|
||||
// Launch agents in parallel (max 10 concurrent)
|
||||
const MAX_PARALLEL = 10;
|
||||
for (let i = 0; i < agentTasks.length; i += MAX_PARALLEL) {
|
||||
const chunk = agentTasks.slice(i, i + MAX_PARALLEL);
|
||||
const agentIds = [];
|
||||
|
||||
// Step 1: Spawn agents in parallel
|
||||
for (const { batchIndex, batchIds, issuePrompt, batch } of chunk) {
|
||||
updatePlanStep(`Plan batch ${batchIndex + 1}`, 'in_progress');
|
||||
const agentId = spawn_agent({
|
||||
message: issuePrompt
|
||||
});
|
||||
agentIds.push({ agentId, batchIndex });
|
||||
}
|
||||
|
||||
console.log(`Launched ${agentIds.length} agents (chunk ${Math.floor(i/MAX_PARALLEL) + 1}/${Math.ceil(agentTasks.length/MAX_PARALLEL)})...`);
|
||||
|
||||
// Step 2: Batch wait for all agents in this chunk
|
||||
const allIds = agentIds.map(a => a.agentId);
|
||||
const waitResult = wait({
|
||||
ids: allIds,
|
||||
timeout_ms: 600000 // 10 minutes
|
||||
});
|
||||
|
||||
if (waitResult.timed_out) {
|
||||
console.log('Some agents timed out, continuing with completed results');
|
||||
}
|
||||
|
||||
// Step 3: Collect results from completed agents
|
||||
for (const { agentId, batchIndex } of agentIds) {
|
||||
const agentStatus = waitResult.status[agentId];
|
||||
if (!agentStatus || !agentStatus.completed) {
|
||||
console.log(`Batch ${batchIndex + 1}: Agent did not complete, skipping`);
|
||||
updatePlanStep(`Plan batch ${batchIndex + 1}`, 'completed');
|
||||
continue;
|
||||
}
|
||||
|
||||
const result = agentStatus.completed;
|
||||
|
||||
// Extract JSON from potential markdown code blocks (agent may wrap in ```json...```)
|
||||
const jsonText = extractJsonFromMarkdown(result);
|
||||
let summary;
|
||||
try {
|
||||
summary = JSON.parse(jsonText);
|
||||
} catch (e) {
|
||||
console.log(`Batch ${batchIndex + 1}: Failed to parse agent result, skipping`);
|
||||
updatePlanStep(`Plan batch ${batchIndex + 1}`, 'completed');
|
||||
continue;
|
||||
}
|
||||
agentResults.push(summary); // Store for conflict aggregation
|
||||
|
||||
// Verify binding for bound issues (agent should have executed bind)
|
||||
for (const item of summary.bound || []) {
|
||||
const status = JSON.parse(Bash(`ccw issue status ${item.issue_id} --json`).trim());
|
||||
if (status.bound_solution_id === item.solution_id) {
|
||||
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
|
||||
} else {
|
||||
// Fallback: agent failed to bind, execute here
|
||||
Bash(`ccw issue bind ${item.issue_id} ${item.solution_id}`);
|
||||
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks) [recovered]`);
|
||||
}
|
||||
}
|
||||
// Collect pending selections
|
||||
for (const pending of summary.pending_selection || []) {
|
||||
pendingSelections.push(pending);
|
||||
}
|
||||
updatePlanStep(`Plan batch ${batchIndex + 1}`, 'completed');
|
||||
}
|
||||
|
||||
// Step 4: Batch cleanup - close all agents in this chunk
|
||||
allIds.forEach(id => close_agent({ id }));
|
||||
}
|
||||
```
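`extractJsonFromMarkdown` is used above but not defined in this phase; a minimal sketch that unwraps an optional fenced block before parsing:

```javascript
// Hypothetical helper: agents sometimes wrap their summary JSON in a fenced code block.
function extractJsonFromMarkdown(text) {
  const fenced = text.match(/`{3}(?:json)?\s*([\s\S]*?)`{3}/); // optional fenced wrapper
  if (fenced) return fenced[1].trim();
  // Otherwise fall back to the outermost {...} span
  const start = text.indexOf('{');
  const end = text.lastIndexOf('}');
  return start >= 0 && end > start ? text.slice(start, end + 1) : text.trim();
}
```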
|
||||
|
||||
### Step 1.3: Solution Selection (if pending)
|
||||
|
||||
```javascript
|
||||
// Handle multi-solution issues
|
||||
for (const pending of pendingSelections) {
|
||||
if (pending.solutions.length === 0) continue;
|
||||
|
||||
const options = pending.solutions.slice(0, 4).map(sol => ({
|
||||
label: `${sol.id} (${sol.task_count} tasks)`,
|
||||
description: sol.description || sol.approach || 'No description'
|
||||
}));
|
||||
|
||||
const answer = AskUserQuestion({
|
||||
questions: [{
|
||||
question: `Issue ${pending.issue_id}: which solution to bind?`,
|
||||
header: pending.issue_id,
|
||||
options: options,
|
||||
multiSelect: false
|
||||
}]
|
||||
});
|
||||
|
||||
const selected = answer[Object.keys(answer)[0]];
|
||||
if (!selected || selected === 'Other') continue;
|
||||
|
||||
const solId = selected.split(' ')[0];
|
||||
Bash(`ccw issue bind ${pending.issue_id} ${solId}`);
|
||||
console.log(`${pending.issue_id}: ${solId} bound`);
|
||||
}
|
||||
```
|
||||
|
||||
### Step 1.4: Summary
|
||||
|
||||
```javascript
|
||||
// Count planned issues via CLI
|
||||
const planned = JSON.parse(Bash(`ccw issue list --status planned --brief`) || '[]');
|
||||
const plannedCount = planned.length;
|
||||
|
||||
console.log(`
|
||||
## Done: ${issues.length} issues → ${plannedCount} planned
|
||||
|
||||
Next: \`/issue:queue\` → \`/issue:execute\`
|
||||
`);
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Resolution |
|
||||
|-------|------------|
|
||||
| Issue not found | Auto-create via `ccw issue init` |
|
||||
| ACE search fails | Agent falls back to ripgrep |
|
||||
| No solutions generated | Display error, suggest manual planning |
|
||||
| User cancels selection | Skip issue, continue with others |
|
||||
| File conflicts | Agent detects and suggests resolution order |
|
||||
|
||||
## Bash Compatibility
|
||||
|
||||
**Avoid**: `$(cmd)`, `$var`, `for` loops — will be escaped incorrectly
|
||||
|
||||
**Use**: Simple commands + `&&` chains, quote comma params `"pending,registered"`
|
||||
|
||||
## Quality Checklist
|
||||
|
||||
Before completing, verify:
|
||||
|
||||
- [ ] All input issues have solutions in `solutions/{issue-id}.jsonl`
|
||||
- [ ] Single solution issues are auto-bound (`bound_solution_id` set)
|
||||
- [ ] Multi-solution issues returned in `pending_selection` for user choice
|
||||
- [ ] Each solution has executable tasks with `modification_points`
|
||||
- [ ] Task acceptance criteria are quantified (not vague)
|
||||
- [ ] Conflicts detected and reported (if multiple issues touch same files)
|
||||
- [ ] Issue status updated to `planned` after binding
|
||||
- [ ] All spawned agents are properly closed via close_agent
|
||||
|
||||
## Post-Phase Update
|
||||
|
||||
After plan completion:
|
||||
- All processed issues should have `status: planned` and `bound_solution_id` set
|
||||
- Report: total issues processed, solutions bound, pending selections resolved
|
||||
- Recommend next step: Form execution queue via Phase 4
|
||||
.codex/skills/workflow-tdd-plan/SKILL.md (new file, 811 lines)
@@ -0,0 +1,811 @@
|
||||
---
|
||||
name: workflow-tdd-plan
|
||||
description: TDD workflow planning with Red-Green-Refactor task chain generation, test-first development structure, cycle tracking, and post-execution compliance verification. Triggers on "workflow:tdd-plan", "workflow:tdd-verify".
|
||||
allowed-tools: spawn_agent, wait, send_input, close_agent, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
|
||||
---
|
||||
|
||||
# Workflow TDD Plan
|
||||
|
||||
6-phase TDD planning workflow that orchestrates session discovery, context gathering, test coverage analysis, conflict resolution, and TDD task generation to produce implementation plans with Red-Green-Refactor cycles. Includes post-execution TDD compliance verification.
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
```
|
||||
┌──────────────────────────────────────────────────────────────────┐
|
||||
│ Workflow TDD Plan Orchestrator (SKILL.md) │
|
||||
│ → Pure coordinator: Execute phases, parse outputs, pass context │
|
||||
└───────────────┬──────────────────────────────────────────────────┘
|
||||
│
|
||||
┌────────────┼────────────┬────────────┬────────────┐
|
||||
↓ ↓ ↓ ↓ ↓
|
||||
┌────────┐ ┌────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
|
||||
│ Phase 1│ │ Phase 2│ │ Phase 3 │ │ Phase 4 │ │ Phase 5 │
|
||||
│Session │ │Context │ │Test Covg │ │Conflict │ │TDD Task │
|
||||
│Discover│ │Gather │ │Analysis │ │Resolve │ │Generate │
|
||||
│ (ext) │ │ (ext) │ │ (local) │ │(ext,cond)│ │ (local) │
|
||||
└────────┘ └────────┘ └──────────┘ └──────────┘ └──────────┘
|
||||
↓ ↓ ↓ ↓ ↓
|
||||
sessionId contextPath testContext resolved IMPL_PLAN.md
|
||||
conflict_risk artifacts task JSONs
|
||||
|
||||
Phase 6: TDD Structure Validation (inline in SKILL.md)
|
||||
|
||||
Post-execution verification:
|
||||
┌──────────────┐ ┌───────────────────┐
|
||||
│ TDD Verify │────→│ Coverage Analysis │
|
||||
│ (local) │ │ (local) │
|
||||
└──────────────┘ └───────────────────┘
|
||||
phases/03-tdd- phases/04-tdd-
|
||||
verify.md coverage-analysis.md
|
||||
```
|
||||
|
||||
## Key Design Principles
|
||||
|
||||
1. **Pure Orchestrator**: Execute phases in sequence, parse outputs, pass context between them
|
||||
2. **Auto-Continue**: All phases run autonomously without user intervention between phases
|
||||
3. **Subagent Lifecycle**: Explicit lifecycle management with spawn_agent → wait → close_agent
|
||||
4. **Progressive Phase Loading**: Phase docs are read on-demand, not all at once
|
||||
5. **Conditional Execution**: Phase 4 only executes when conflict_risk >= medium
|
||||
6. **TDD-First**: Every feature starts with a failing test (Red phase)
|
||||
7. **Role Path Loading**: Subagent roles loaded via path reference in MANDATORY FIRST STEPS
|
||||
|
||||
**CLI Tool Selection**: CLI tool usage is determined semantically from user's task description. Include "use Codex/Gemini/Qwen" in your request for CLI execution.
|
||||
|
||||
**Task Attachment Model**:
|
||||
- Skill execution **expands the workflow** by attaching sub-tasks to the current TodoWrite
|
||||
- When executing a sub-command, its internal tasks are attached to the orchestrator's TodoWrite
|
||||
- Orchestrator **executes these attached tasks** sequentially
|
||||
- After completion, attached tasks are **collapsed** back to high-level phase summary
|
||||
- This is **task expansion**, not external delegation
|
||||
|
||||
**Auto-Continue Mechanism**:
|
||||
- TodoList tracks current phase status and dynamically manages task attachment/collapse
|
||||
- When each phase finishes executing, automatically execute next pending phase
|
||||
- All phases run autonomously without user interaction
|
||||
- **CONTINUOUS EXECUTION** - Do not stop until all phases complete
|
||||
|
||||
## Auto Mode
|
||||
|
||||
When `--yes` or `-y`: Auto-continue all phases (skip confirmations), use recommended conflict resolutions, skip TDD clarifications.
|
||||
|
||||
## Subagent API Reference
|
||||
|
||||
### spawn_agent
|
||||
Create a new subagent with task assignment.
|
||||
|
||||
```javascript
|
||||
const agentId = spawn_agent({
|
||||
message: `
|
||||
## TASK ASSIGNMENT
|
||||
|
||||
### MANDATORY FIRST STEPS (Agent Execute)
|
||||
1. **Read role definition**: ~/.codex/agents/{agent-type}.md (MUST read first)
|
||||
2. Read: .workflow/project-tech.json
|
||||
3. Read: .workflow/project-guidelines.json
|
||||
|
||||
## TASK CONTEXT
|
||||
${taskContext}
|
||||
|
||||
## DELIVERABLES
|
||||
${deliverables}
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
### wait
|
||||
Get results from subagent (only way to retrieve results).
|
||||
|
||||
```javascript
|
||||
const result = wait({
|
||||
ids: [agentId],
|
||||
timeout_ms: 600000 // 10 minutes
|
||||
})
|
||||
|
||||
if (result.timed_out) {
|
||||
// Handle timeout - can continue waiting or send_input to prompt completion
|
||||
}
|
||||
```
|
||||
|
||||
### send_input
|
||||
Continue interaction with active subagent (for clarification or follow-up).
|
||||
|
||||
```javascript
|
||||
send_input({
|
||||
id: agentId,
|
||||
message: `
|
||||
## CLARIFICATION ANSWERS
|
||||
${answers}
|
||||
|
||||
## NEXT STEP
|
||||
Continue with plan generation.
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
### close_agent
|
||||
Clean up subagent resources (irreversible).
|
||||
|
||||
```javascript
|
||||
close_agent({ id: agentId })
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
```
|
||||
workflow-tdd-plan <task description>
|
||||
workflow-tdd-plan [-y|--yes] "<task description>"
|
||||
|
||||
# Flags
|
||||
-y, --yes Skip all confirmations (auto mode)
|
||||
|
||||
# Arguments
|
||||
<task description> Task description text, TDD-structured format, or path to .md file
|
||||
|
||||
# Examples
|
||||
workflow-tdd-plan "Build user authentication with tests" # Simple TDD task
|
||||
workflow-tdd-plan "Add JWT auth with email/password and token refresh" # Detailed task
|
||||
workflow-tdd-plan -y "Implement payment processing" # Auto mode
|
||||
workflow-tdd-plan "tdd-requirements.md" # From file
|
||||
```
|
||||
|
||||
## TDD Compliance Requirements
|
||||
|
||||
### The Iron Law
|
||||
|
||||
```
|
||||
NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST
|
||||
```
|
||||
|
||||
**Enforcement Method**:
|
||||
- Phase 5: `implementation_approach` includes test-first steps (Red → Green → Refactor)
|
||||
- Green phase: Includes test-fix-cycle configuration (max 3 iterations)
|
||||
- Auto-revert: Triggered when max iterations reached without passing tests
|
||||
|
||||
**Verification**: Phase 6 validates Red-Green-Refactor structure in all generated tasks
### TDD Compliance Checkpoint

| Checkpoint | Validation Phase | Evidence Required |
|------------|------------------|-------------------|
| Test-first structure | Phase 5 | `implementation_approach` has 3 steps |
| Red phase exists | Phase 6 | Step 1: `tdd_phase: "red"` |
| Green phase with test-fix | Phase 6 | Step 2: `tdd_phase: "green"` + test-fix-cycle |
| Refactor phase exists | Phase 6 | Step 3: `tdd_phase: "refactor"` |

### Core TDD Principles

**Red Flags - STOP and Reassess**:
- Code written before test
- Test passes immediately (no Red phase witnessed)
- Cannot explain why the test should fail
- "Just this once" rationalization
- "Tests after achieve the same goals" thinking

**Why Order Matters**:
- Tests written after code pass immediately → proves nothing
- Test-first forces edge case discovery before implementation
- Tests-after verify what was built, not what's required

## Core Rules

1. **Start Immediately**: First action is TodoWrite initialization; second action is executing Phase 1
2. **No Preliminary Analysis**: Do not read files before Phase 1
3. **Parse Every Output**: Extract the data required by the next phase
4. **Auto-Continue via TodoList**: Check TodoList status to execute the next pending phase automatically
5. **Track Progress**: Update TodoWrite dynamically with the task attachment/collapse pattern
6. **TDD Context**: All descriptions include the "TDD:" prefix
7. **Task Attachment Model**: Executing a skill **attaches** its sub-tasks to the current workflow. The orchestrator **executes** these attached tasks itself, then **collapses** them after completion
8. **CRITICAL: DO NOT STOP**: This is a continuous multi-phase workflow. After executing all attached tasks, immediately collapse them and execute the next phase
9. **Explicit Lifecycle**: Always close_agent after wait completes to free resources

## Execution Flow

```
Input Parsing:
└─ Convert user input to TDD-structured format (TDD:/GOAL/SCOPE/CONTEXT/TEST_FOCUS)

Phase 1: Session Discovery
└─ Ref: workflow-plan-execute/phases/01-session-discovery.md (external)
   └─ Output: sessionId (WFS-xxx)

Phase 2: Context Gathering
└─ Ref: workflow-plan-execute/phases/02-context-gathering.md (external)
   ├─ Tasks attached: Analyze structure → Identify integration → Generate package
   └─ Output: contextPath + conflict_risk

Phase 3: Test Coverage Analysis ← ATTACHED (3 tasks)
└─ Ref: phases/01-test-context-gather.md
   ├─ Phase 3.1: Detect test framework
   ├─ Phase 3.2: Analyze existing test coverage
   └─ Phase 3.3: Identify coverage gaps
      └─ Output: test-context-package.json ← COLLAPSED

Phase 4: Conflict Resolution (conditional)
└─ Decision (conflict_risk check):
   ├─ conflict_risk ≥ medium → Inline conflict resolution (within Phase 2)
   │  ├─ Tasks attached: Detect conflicts → Present to user → Apply strategies
   │  └─ Output: Modified brainstorm artifacts ← COLLAPSED
   └─ conflict_risk < medium → Skip to Phase 5

Phase 5: TDD Task Generation ← ATTACHED (3 tasks)
└─ Ref: phases/02-task-generate-tdd.md
   ├─ Phase 5.1: Discovery - analyze TDD requirements
   ├─ Phase 5.2: Planning - design Red-Green-Refactor cycles
   └─ Phase 5.3: Output - generate IMPL tasks with internal TDD phases
      └─ Output: IMPL-*.json, IMPL_PLAN.md ← COLLAPSED

Phase 6: TDD Structure Validation (inline)
└─ Internal validation + summary returned
   └─ Recommend: plan-verify (external)

Return:
└─ Summary with recommended next steps
```

### Phase Reference Documents

**Local phases** (read on-demand when phase executes):

| Phase | Document | Purpose |
|-------|----------|---------|
| Phase 3 | [phases/01-test-context-gather.md](phases/01-test-context-gather.md) | Test coverage context gathering via test-context-search-agent |
| Phase 5 | [phases/02-task-generate-tdd.md](phases/02-task-generate-tdd.md) | TDD task JSON generation via action-planning-agent |

**External phases** (from workflow-plan-execute skill):

| Phase | Document | Purpose |
|-------|----------|---------|
| Phase 1 | workflow-plan-execute/phases/01-session-discovery.md | Session creation/discovery |
| Phase 2 | workflow-plan-execute/phases/02-context-gathering.md | Project context collection + inline conflict resolution |

**Post-execution verification**:

| Phase | Document | Purpose |
|-------|----------|---------|
| TDD Verify | [phases/03-tdd-verify.md](phases/03-tdd-verify.md) | TDD compliance verification with quality gate |
| Coverage Analysis | [phases/04-tdd-coverage-analysis.md](phases/04-tdd-coverage-analysis.md) | Test coverage and cycle analysis (called by TDD Verify) |

## 6-Phase Execution

### Phase 1: Session Discovery

**Step 1.1: Execute** - Session discovery and initialization

Read and execute: `workflow-plan-execute/phases/01-session-discovery.md` with `--type tdd --auto "TDD: [structured-description]"`

**TDD Structured Format**:

```
TDD: [Feature Name]
GOAL: [Objective]
SCOPE: [Included/excluded]
CONTEXT: [Background]
TEST_FOCUS: [Test scenarios]
```

**Parse**: Extract sessionId

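A minimal extraction sketch, assuming the phase output mentions the session ID in the `WFS-xxx` form shown in the Execution Flow above (the output variable and character set are illustrative assumptions):

```javascript
// Pull the first WFS-style session ID out of the Phase 1 output text.
// `phaseOutput` is a placeholder for however the orchestrator captures phase output.
function extractSessionId(phaseOutput) {
  const match = phaseOutput.match(/WFS-[A-Za-z0-9._-]+/)
  if (!match) {
    throw new Error("Phase 1 output did not contain a sessionId (expected WFS-xxx)")
  }
  return match[0]
}
```
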
**TodoWrite**: Mark phase 1 completed, phase 2 in_progress

**After Phase 1**: Return to user showing Phase 1 results, then auto-continue to Phase 2

---

### Phase 2: Context Gathering

**Step 2.1: Execute** - Context gathering and analysis

Read and execute: `workflow-plan-execute/phases/02-context-gathering.md` with `--session [sessionId] "TDD: [structured-description]"`

**Use Same Structured Description**: Pass the same structured format from Phase 1

**Input**: `sessionId` from Phase 1

**Parse Output**:
- Extract: context-package.json path (store as `contextPath`)
- Typical pattern: `.workflow/active/[sessionId]/.process/context-package.json`

**Validation**:
- Context package path extracted
- File exists and is valid JSON

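A small validation sketch, reusing the `Read`/`file_exists` helpers used elsewhere in this skill and assuming a top-level `conflict_risk` field in the context package (the field location is inferred from the Execution Flow, not a confirmed schema):

```javascript
// Validate the Phase 2 context package and capture conflict_risk for Phase 4.
const contextPath = `.workflow/active/${sessionId}/.process/context-package.json`

if (!file_exists(contextPath)) {
  throw new Error(`context-package.json not found at ${contextPath}`)
}

const contextPackage = Read(contextPath) // must parse as valid JSON
const conflictRisk = contextPackage?.conflict_risk ?? "none" // assumed field location
console.log("Context package validated, conflict_risk:", conflictRisk)
```
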
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress

**After Phase 2**: Return to user showing Phase 2 results, then auto-continue to Phase 3

---

### Phase 3: Test Coverage Analysis

**Step 3.1: Execute** - Test coverage analysis and framework detection

Read and execute: `phases/01-test-context-gather.md` with `--session [sessionId]`

**Purpose**: Analyze the existing codebase for:
- Existing test patterns and conventions
- Current test coverage
- Related components and integration points
- Test framework detection

**Parse**: Extract testContextPath (`.workflow/active/[sessionId]/.process/test-context-package.json`)

**TodoWrite Update (Phase 3 - tasks attached)**:

```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 3: Test Coverage Analysis", "status": "in_progress", "activeForm": "Executing test coverage analysis"},
  {"content": " → Detect test framework and conventions", "status": "in_progress", "activeForm": "Detecting test framework"},
  {"content": " → Analyze existing test coverage", "status": "pending", "activeForm": "Analyzing test coverage"},
  {"content": " → Identify coverage gaps", "status": "pending", "activeForm": "Identifying coverage gaps"},
  {"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
  {"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```

**Note**: Executing the skill **attaches** test-context-gather's 3 tasks. The orchestrator **executes** these tasks itself.

**Next Action**: Tasks attached → **Execute Phase 3.1-3.3** sequentially

**TodoWrite Update (Phase 3 completed - tasks collapsed)**:

```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
  {"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
  {"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```

**After Phase 3**: Return to user showing test coverage results, then auto-continue to Phase 4/5

---

### Phase 4: Conflict Resolution (Optional)

**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"

**Step 4.1: Execute** - Conflict detection and resolution

Conflict resolution is handled inline within Phase 2 (context-gathering). When conflict_risk >= medium, Phase 2 automatically performs detection and resolution.

**Input**:
- sessionId from Phase 1
- contextPath from Phase 2
- conflict_risk from context-package.json

**Parse Output**:
- Extract: execution status (success/skipped/failed)
- Verify: conflict-resolution.json file path (if executed)

**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 5
- Display: "No significant conflicts detected, proceeding to TDD task generation"

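The branch itself is mechanical; a sketch of the decision, reusing the `conflictRisk` value captured after Phase 2 (the `runConflictResolution` helper is a placeholder for the attached Phase 4 tasks, not a real function in this skill):

```javascript
// Phase 4 gate: only medium/high risk triggers inline conflict resolution.
const needsConflictResolution = ["medium", "high"].includes(conflictRisk)

if (needsConflictResolution) {
  // Inline within Phase 2 artifacts: detect -> present to user -> apply strategies.
  runConflictResolution(sessionId, contextPath) // placeholder for the attached tasks
} else {
  console.log("No significant conflicts detected, proceeding to TDD task generation")
}
```
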
**TodoWrite Update (Phase 4 - tasks attached, if conflict_risk >= medium)**:

```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
  {"content": "Phase 4: Conflict Resolution", "status": "in_progress", "activeForm": "Executing conflict resolution"},
  {"content": " → Detect conflicts with CLI analysis", "status": "in_progress", "activeForm": "Detecting conflicts"},
  {"content": " → Log and analyze detected conflicts", "status": "pending", "activeForm": "Analyzing conflicts"},
  {"content": " → Apply resolution strategies", "status": "pending", "activeForm": "Applying resolution strategies"},
  {"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
  {"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```

**TodoWrite Update (Phase 4 completed - tasks collapsed)**:

```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
  {"content": "Phase 4: Conflict Resolution", "status": "completed", "activeForm": "Executing conflict resolution"},
  {"content": "Phase 5: TDD Task Generation", "status": "pending", "activeForm": "Executing TDD task generation"},
  {"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```

**After Phase 4**: Return to user showing conflict resolution results, then auto-continue to Phase 5

**Memory State Check**:
- Evaluate current context window usage and memory state
- If memory usage is high (>110K tokens or approaching context limits), run Step 4.5

**Step 4.5: Execute** - Memory compaction (external skill: compact)

- This optimizes memory before proceeding to Phase 5
- Memory compaction is particularly important after the analysis phases, which may generate extensive documentation

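As a rough sketch of that gate (both `getContextTokenUsage` and `runSkill` are hypothetical stand-ins; how usage is measured and how the external compact skill is invoked depend on the runtime):

```javascript
// Decide whether to run the external `compact` skill before Phase 5.
const MEMORY_COMPACTION_THRESHOLD = 110_000 // tokens

if (getContextTokenUsage() > MEMORY_COMPACTION_THRESHOLD) {
  runSkill("compact") // placeholder invocation of the external compact skill
}
```
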
---

### Phase 5: TDD Task Generation

**Step 5.1: Execute** - TDD task generation via action-planning-agent with Phase 0 user configuration

Read and execute: `phases/02-task-generate-tdd.md` with `--session [sessionId]`

**Note**: Phase 0 now includes:
- Supplementary materials collection (file paths or inline content)
- Execution method preference (Agent/Hybrid/CLI)
- CLI tool preference (Codex/Gemini/Qwen/Auto)
- These preferences are passed to the agent for task generation

**Parse**: Extract feature count, task count, CLI execution IDs assigned

**Validate**:
- IMPL_PLAN.md exists (unified plan with TDD Implementation Tasks section)
- IMPL-*.json files exist (one per feature, or container + subtasks for complex features)
- TODO_LIST.md exists with internal TDD phase indicators
- Each IMPL task includes:
  - `meta.tdd_workflow: true`
  - `meta.cli_execution_id: {session_id}-{task_id}`
  - `meta.cli_execution: { "strategy": "new|resume|fork|merge_fork", ... }`
  - `flow_control.implementation_approach` with exactly 3 steps (red/green/refactor)
  - Green phase includes test-fix-cycle configuration
  - `context.focus_paths`: absolute or clear relative paths
  - `flow_control.pre_analysis`: includes exploration integration_points analysis
- IMPL_PLAN.md contains workflow_type: "tdd" in frontmatter
- Task count <=18 (compliance with hard limit)

**Red Flag Detection** (Non-Blocking Warnings):
- Task count >18: `Warning: Task count exceeds hard limit - request re-scope`
- Missing cli_execution_id: `Warning: Task lacks CLI execution ID for resume support`
- Missing test-fix-cycle: `Warning: Green phase lacks auto-revert configuration`
- Generic task names: `Warning: Vague task names suggest unclear TDD cycles`
- Missing focus_paths: `Warning: Task lacks clear file scope for implementation`

**Action**: Log warnings to `.workflow/active/[sessionId]/.process/tdd-warnings.log` (non-blocking)

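A compact sketch of that scan, assuming the `Read` helper returns parsed JSON and that the `glob`/`appendFile` helpers and the `test_fix_cycle` field name are placeholders (the document names the test-fix-cycle configuration but not its exact key):

```javascript
// Scan generated IMPL task JSONs for the non-blocking red flags listed above
// and append any findings to tdd-warnings.log.
const taskFiles = glob(`.workflow/active/${sessionId}/.task/IMPL-*.json`) // placeholder glob helper
const warnings = []

for (const file of taskFiles) {
  const task = Read(file)
  if (!task?.meta?.cli_execution_id) {
    warnings.push(`Warning: Task lacks CLI execution ID for resume support (${file})`)
  }
  const green = task?.flow_control?.implementation_approach?.find(s => s.tdd_phase === "green")
  if (!green?.test_fix_cycle) { // field name assumed
    warnings.push(`Warning: Green phase lacks auto-revert configuration (${file})`)
  }
}
if (taskFiles.length > 18) {
  warnings.push("Warning: Task count exceeds hard limit - request re-scope")
}

appendFile(`.workflow/active/${sessionId}/.process/tdd-warnings.log`, warnings.join("\n")) // placeholder append helper
```
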
**TodoWrite Update (Phase 5 - tasks attached)**:

```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
  {"content": "Phase 5: TDD Task Generation", "status": "in_progress", "activeForm": "Executing TDD task generation"},
  {"content": " → Discovery - analyze TDD requirements", "status": "in_progress", "activeForm": "Analyzing TDD requirements"},
  {"content": " → Planning - design Red-Green-Refactor cycles", "status": "pending", "activeForm": "Designing TDD cycles"},
  {"content": " → Output - generate IMPL tasks with internal TDD phases", "status": "pending", "activeForm": "Generating TDD tasks"},
  {"content": "Phase 6: TDD Structure Validation", "status": "pending", "activeForm": "Validating TDD structure"}
]
```

**TodoWrite Update (Phase 5 completed - tasks collapsed)**:

```json
[
  {"content": "Phase 1: Session Discovery", "status": "completed", "activeForm": "Executing session discovery"},
  {"content": "Phase 2: Context Gathering", "status": "completed", "activeForm": "Executing context gathering"},
  {"content": "Phase 3: Test Coverage Analysis", "status": "completed", "activeForm": "Executing test coverage analysis"},
  {"content": "Phase 5: TDD Task Generation", "status": "completed", "activeForm": "Executing TDD task generation"},
  {"content": "Phase 6: TDD Structure Validation", "status": "in_progress", "activeForm": "Validating TDD structure"}
]
```

### Phase 6: TDD Structure Validation & Action Plan Verification (RECOMMENDED)

**Internal validation first, then recommend external verification**

**Internal Validation**:
1. Each task contains a complete TDD workflow (Red-Green-Refactor internally)
2. Task structure validation:
   - `meta.tdd_workflow: true` in all IMPL tasks
   - `meta.cli_execution_id` present (format: {session_id}-{task_id})
   - `meta.cli_execution` strategy assigned (new/resume/fork/merge_fork)
   - `flow_control.implementation_approach` has exactly 3 steps
   - Each step has the correct `tdd_phase`: "red", "green", "refactor"
   - `context.focus_paths` are absolute or clear relative paths
   - `flow_control.pre_analysis` includes exploration integration analysis
3. Dependency validation:
   - Sequential features: IMPL-N depends_on ["IMPL-(N-1)"] if needed
   - Complex features: IMPL-N.M depends_on ["IMPL-N.(M-1)"] for subtasks
   - CLI execution strategies correctly assigned based on dependency graph
4. Agent assignment: All IMPL tasks use @code-developer
5. Test-fix cycle: Green phase step includes test-fix-cycle logic with max_iterations
6. Task count: Total tasks <=18 (simple + subtasks hard limit)
7. User configuration:
   - Execution method choice reflected in task structure
   - CLI tool preference documented in implementation guidance (if CLI selected)

**Red Flag Checklist** (from TDD best practices):
- [ ] No tasks skip the Red phase (`tdd_phase: "red"` exists in step 1)
- [ ] Test files referenced in Red phase (explicit paths, not placeholders)
- [ ] Green phase has test-fix-cycle with `max_iterations` configured
- [ ] Refactor phase has clear completion criteria

**Non-Compliance Warning Format**:

```
Warning TDD Red Flag: [issue description]
  Task: [IMPL-N]
  Recommendation: [action to fix]
```

**Evidence Gathering** (Before Completion Claims):

```bash
# Verify session artifacts exist
ls -la .workflow/active/[sessionId]/{IMPL_PLAN.md,TODO_LIST.md}
ls -la .workflow/active/[sessionId]/.task/IMPL-*.json

# Count generated artifacts
echo "IMPL tasks: $(ls .workflow/active/[sessionId]/.task/IMPL-*.json 2>/dev/null | wc -l)"

# Sample task structure verification (first task)
jq '{id, tdd: .meta.tdd_workflow, cli_id: .meta.cli_execution_id, phases: [.flow_control.implementation_approach[].tdd_phase]}' \
  "$(ls .workflow/active/[sessionId]/.task/IMPL-*.json | head -1)"
```

**Evidence Required Before Summary**:

| Evidence Type | Verification Method | Pass Criteria |
|---------------|---------------------|---------------|
| File existence | `ls -la` artifacts | All files present |
| Task count | Count IMPL-*.json | Count matches claims (<=18) |
| TDD structure | jq sample extraction | Shows red/green/refactor + cli_execution_id |
| CLI execution IDs | jq extraction | All tasks have cli_execution_id assigned |
| Warning log | Check tdd-warnings.log | Logged (may be empty) |

**Return Summary**:

```
TDD Planning complete for session: [sessionId]

Features analyzed: [N]
Total tasks: [M] (1 task per simple feature + subtasks for complex features)

Task breakdown:
- Simple features: [K] tasks (IMPL-1 to IMPL-K)
- Complex features: [L] features with [P] subtasks
- Total task count: [M] (within 18-task hard limit)

Structure:
- IMPL-1: {Feature 1 Name} (Internal: Red → Green → Refactor)
- IMPL-2: {Feature 2 Name} (Internal: Red → Green → Refactor)
- IMPL-3: {Complex Feature} (Container)
  - IMPL-3.1: {Sub-feature A} (Internal: Red → Green → Refactor)
  - IMPL-3.2: {Sub-feature B} (Internal: Red → Green → Refactor)
[...]

Plans generated:
- Unified Implementation Plan: .workflow/active/[sessionId]/IMPL_PLAN.md
  (includes TDD Implementation Tasks section with workflow_type: "tdd")
- Task List: .workflow/active/[sessionId]/TODO_LIST.md
  (with internal TDD phase indicators and CLI execution strategies)
- Task JSONs: .workflow/active/[sessionId]/.task/IMPL-*.json
  (with cli_execution_id and execution strategies for resume support)

TDD Configuration:
- Each task contains complete Red-Green-Refactor cycle
- Green phase includes test-fix cycle (max 3 iterations)
- Auto-revert on max iterations reached
- CLI execution strategies: new/resume/fork/merge_fork based on dependency graph

User Configuration Applied:
- Execution Method: [agent|hybrid|cli]
- CLI Tool Preference: [codex|gemini|qwen|auto]
- Supplementary Materials: [included|none]
- Task generation follows cli-tools-usage.md guidelines

ACTION REQUIRED: Before execution, ensure you understand WHY each Red phase test is expected to fail.
This is crucial for valid TDD - if you don't know why the test fails, you can't verify it tests the right thing.

Recommended Next Steps:
1. plan-verify (external) --session [sessionId]        # Verify TDD plan quality and dependencies
2. workflow:execute (external) --session [sessionId]   # Start TDD execution with CLI strategies
3. phases/03-tdd-verify.md [sessionId]                 # Post-execution TDD compliance check

Quality Gate: Consider running plan-verify to validate TDD task structure, dependencies, and CLI execution strategies
```

## Input Processing

Convert user input to the TDD-structured format:

- **Simple text** → Add TDD context
- **Detailed text** → Extract components with TEST_FOCUS
- **File/Issue** → Read and structure with TDD

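A sketch of the simplest conversion path (plain text → structured format); the placeholder values and heuristics are illustrative, not the skill's actual parsing rules:

```javascript
// Wrap a plain task description in the TDD-structured format used by Phases 1-2.
// Real input processing would also handle detailed text and .md file inputs.
function toTddStructuredFormat(description) {
  return [
    `TDD: ${description}`,
    `GOAL: ${description}`,
    "SCOPE: [to be refined during context gathering]",
    "CONTEXT: [project background from .workflow/project-tech.json]",
    "TEST_FOCUS: [happy path + key edge cases]"
  ].join("\n")
}

// Example: toTddStructuredFormat("Build user authentication with tests")
```
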
## Data Flow

```
User Input (task description)
  ↓
[Convert to TDD Structured Format]
  ↓  TDD Structured Description:
  ↓    TDD: [Feature Name]
  ↓    GOAL: [objective]
  ↓    SCOPE: [boundaries]
  ↓    CONTEXT: [background]
  ↓    TEST_FOCUS: [test scenarios]
  ↓
Phase 1: session:start --type tdd --auto "TDD: structured-description"
  ↓  Output: sessionId
  ↓
Phase 2: context-gather --session sessionId "TDD: structured-description"
  ↓  Output: contextPath + conflict_risk
  ↓
Phase 3: test-context-gather --session sessionId
  ↓  Output: testContextPath (test-context-package.json)
  ↓
Phase 4: conflict-resolution [AUTO-TRIGGERED if conflict_risk >= medium]
  ↓  Output: Modified brainstorm artifacts
  ↓  Skip if conflict_risk is none/low → proceed directly to Phase 5
  ↓
Phase 5: task-generate-tdd --session sessionId
  ↓  Output: IMPL_PLAN.md, task JSONs, TODO_LIST.md
  ↓
Phase 6: Internal validation + summary
  ↓
Return summary to user
```

## TodoWrite Pattern

**Core Concept**: Dynamic task attachment and collapse for the TDD workflow, with test coverage analysis and Red-Green-Refactor cycle generation.

### Key Principles

1. **Task Attachment** (when a skill is executed):
   - The sub-command's internal tasks are **attached** to the orchestrator's TodoWrite
   - The first attached task is marked `in_progress`, the others `pending`
   - The orchestrator **executes** these attached tasks sequentially

2. **Task Collapse** (after sub-tasks complete):
   - Remove detailed sub-tasks from TodoWrite
   - **Collapse** to a high-level phase summary
   - Maintains a clean orchestrator-level view

3. **Continuous Execution**:
   - After collapse, automatically proceed to the next pending phase
   - No user intervention required between phases
   - TodoWrite dynamically reflects the current execution state

**Lifecycle Summary**: Initial pending tasks → Phase executed (tasks ATTACHED) → Sub-tasks executed sequentially → Phase completed (tasks COLLAPSED to summary) → Next phase begins (conditional Phase 4 if conflict_risk >= medium) → Repeat until all phases complete.

### TDD-Specific Features

- **Phase 3**: Test coverage analysis detects existing patterns and gaps
- **Phase 5**: Generated IMPL tasks contain internal Red-Green-Refactor cycles
- **Conditional Phase 4**: Conflict resolution only if conflict_risk >= medium

**Note**: See the individual phase descriptions (Phase 3, 4, 5) for detailed TodoWrite Update examples with full JSON structures.

## Execution Flow Diagram

```
TDD Workflow Orchestrator
│
├─ Phase 1: Session Discovery
│   └─ workflow-plan-execute/phases/01-session-discovery.md --auto
│       └─ Returns: sessionId
│
├─ Phase 2: Context Gathering
│   └─ workflow-plan-execute/phases/02-context-gathering.md
│       └─ Returns: context-package.json path
│
├─ Phase 3: Test Coverage Analysis ← ATTACHED (3 tasks)
│   └─ phases/01-test-context-gather.md
│       ├─ Phase 3.1: Detect test framework
│       ├─ Phase 3.2: Analyze existing test coverage
│       └─ Phase 3.3: Identify coverage gaps
│           └─ Returns: test-context-package.json ← COLLAPSED
│
├─ Phase 4: Conflict Resolution (conditional)
│   IF conflict_risk >= medium:
│     └─ Inline within Phase 2 context-gathering ← ATTACHED (3 tasks)
│         ├─ Phase 4.1: Detect conflicts with CLI
│         ├─ Phase 4.2: Log and analyze detected conflicts
│         └─ Phase 4.3: Apply resolution strategies
│             └─ Returns: conflict-resolution.json ← COLLAPSED
│   ELSE:
│     └─ Skip to Phase 5
│
├─ Phase 5: TDD Task Generation ← ATTACHED (3 tasks)
│   └─ phases/02-task-generate-tdd.md
│       ├─ Phase 5.1: Discovery - analyze TDD requirements
│       ├─ Phase 5.2: Planning - design Red-Green-Refactor cycles
│       └─ Phase 5.3: Output - generate IMPL tasks with internal TDD phases
│           └─ Returns: IMPL-*.json, IMPL_PLAN.md ← COLLAPSED
│              (Each IMPL task contains internal Red-Green-Refactor cycle)
│
└─ Phase 6: TDD Structure Validation
    └─ Internal validation + summary returned
        └─ Recommend: plan-verify (external)

Key Points:
• ← ATTACHED: Sub-tasks attached to orchestrator TodoWrite
• ← COLLAPSED: Sub-tasks executed and collapsed to phase summary
• TDD-specific: Each generated IMPL task contains complete Red-Green-Refactor cycle
```

## Error Handling

- **Parsing failure**: Retry once, then report
- **Validation failure**: Report missing/invalid data
- **Command failure**: Keep phase in_progress, report error
- **TDD validation failure**: Report incomplete chains or wrong dependencies
- **Subagent timeout**: Retry wait or send_input to prompt completion, then close_agent

### TDD Warning Patterns

| Pattern | Warning Message | Recommended Action |
|---------|----------------|-------------------|
| Task count >10 | High task count detected | Consider splitting into multiple sessions |
| Missing test-fix-cycle | Green phase lacks auto-revert | Add `max_iterations: 3` to task config |
| Red phase missing test path | Test file path not specified | Add explicit test file paths |
| Generic task names | Vague names like "Add feature" | Use specific behavior descriptions |
| No refactor criteria | Refactor phase lacks completion criteria | Define clear refactor scope |

### Non-Blocking Warning Policy

**All warnings are advisory** - they do not halt execution:
1. Warnings logged to `.process/tdd-warnings.log`
2. Summary displayed in Phase 6 output
3. User decides whether to address before execution

### Error Handling Quick Reference

| Error Type | Detection | Recovery Action |
|------------|-----------|-----------------|
| Parsing failure | Empty/malformed output | Retry once, then report |
| Missing context-package | File read error | Re-run context-gather (workflow-plan-execute/phases/02-context-gathering.md) |
| Invalid task JSON | jq parse error | Report malformed file path |
| Task count exceeds 18 | Count validation >=19 | Request re-scope, split into multiple sessions |
| Missing cli_execution_id | All tasks lack ID | Regenerate tasks with Phase 0 user config |
| Test-context missing | File not found | Re-run phases/01-test-context-gather.md |
| Phase timeout | No response | Retry phase, check CLI connectivity |
| CLI tool not available | Tool not in cli-tools.json | Fall back to alternative preferred tool |
| Subagent unresponsive | wait timed_out | send_input to prompt, or close_agent and spawn new |

## Post-Execution: TDD Verification

After the TDD tasks have been executed (via workflow:execute), run TDD compliance verification:

Read and execute: `phases/03-tdd-verify.md` with `--session [sessionId]`

This generates a comprehensive TDD_COMPLIANCE_REPORT.md with a quality gate recommendation.

## Related Skills

**Prerequisite**:
- None - TDD planning is self-contained (brainstorm can optionally be run beforehand)

**Called by This Skill** (6 phases):
- workflow-plan-execute/phases/01-session-discovery.md - Phase 1: Create or discover TDD workflow session
- workflow-plan-execute/phases/02-context-gathering.md - Phase 2: Gather project context and analyze codebase
- phases/01-test-context-gather.md - Phase 3: Analyze existing test patterns and coverage
- Inline conflict resolution within Phase 2 - Phase 4: Detect and resolve conflicts (conditional)
- compact (external skill) - Phase 4.5: Memory optimization (if context is approaching limits)
- phases/02-task-generate-tdd.md - Phase 5: Generate TDD tasks

**Follow-up**:
- plan-verify (external) - Recommended: Verify TDD plan quality and structure before execution
- workflow:status (external) - Review TDD task breakdown
- workflow:execute (external) - Begin TDD implementation
- phases/03-tdd-verify.md - Post-execution: Verify TDD compliance and generate quality report

## Next Steps Decision Table

| Situation | Recommended Action | Purpose |
|-----------|-------------------|---------|
| First time planning | Run plan-verify (external) | Validate task structure before execution |
| Warnings in tdd-warnings.log | Review log, refine tasks | Address Red Flags before proceeding |
| High task count warning | Consider new session | Split into focused sub-sessions |
| Ready to implement | Run workflow:execute (external) | Begin TDD Red-Green-Refactor cycles |
| After implementation | Run phases/03-tdd-verify.md | Generate TDD compliance report |
| Need to review tasks | Run workflow:status (external) | Inspect current task breakdown |

### TDD Workflow State Transitions

```
workflow-tdd-plan (this skill)
        ↓
[Planning Complete] ──→ plan-verify (external, recommended)
        ↓
[Verified/Ready] ─────→ workflow:execute (external)
        ↓
[Implementation] ─────→ phases/03-tdd-verify.md (post-execution)
        ↓
[Quality Report] ─────→ Done or iterate
```

240
.codex/skills/workflow-tdd-plan/phases/01-test-context-gather.md
Normal file
@@ -0,0 +1,240 @@

# Phase 1: Test Context Gather

## Overview

Orchestrator command that invokes `test-context-search-agent` to gather comprehensive test coverage context for test generation workflows. Generates a standardized `test-context-package.json` with coverage analysis, framework detection, and source implementation context.

## Core Philosophy

- **Agent Delegation**: Delegate all test coverage analysis to `test-context-search-agent` for autonomous execution
- **Detection-First**: Check for an existing test-context-package before executing
- **Coverage-First**: Analyze existing test coverage before planning new tests
- **Source Context Loading**: Import implementation summaries from the source session
- **Standardized Output**: Generate `.workflow/active/{test_session_id}/.process/test-context-package.json`
- **Explicit Lifecycle**: Always close_agent after wait completes to free resources

## Execution Process

```
Input Parsing:
├─ Parse flags: --session
└─ Validation: test_session_id REQUIRED

Step 1: Test-Context-Package Detection
└─ Decision (existing package):
   ├─ Valid package exists → Return existing (skip execution)
   └─ No valid package → Continue to Step 2

Step 2: Invoke Test-Context-Search Agent
├─ Phase 1: Session Validation & Source Context Loading
│  ├─ Detection: Check for existing test-context-package
│  ├─ Test session validation
│  └─ Source context loading (summaries, changed files)
├─ Phase 2: Test Coverage Analysis
│  ├─ Track 1: Existing test discovery
│  ├─ Track 2: Coverage gap analysis
│  └─ Track 3: Coverage statistics
└─ Phase 3: Framework Detection & Packaging
   ├─ Framework identification
   ├─ Convention analysis
   └─ Generate test-context-package.json

Step 3: Output Verification
└─ Verify test-context-package.json created
```

## Execution Flow

### Step 1: Test-Context-Package Detection

**Execute First** - Check if a valid package already exists:

```javascript
const testContextPath = `.workflow/active/${test_session_id}/.process/test-context-package.json`;

if (file_exists(testContextPath)) {
  const existing = Read(testContextPath);

  // Validate package belongs to current test session
  if (existing?.metadata?.test_session_id === test_session_id) {
    console.log("Valid test-context-package found for session:", test_session_id);
    console.log("Coverage Stats:", existing.test_coverage.coverage_stats);
    console.log("Framework:", existing.test_framework.framework);
    console.log("Missing Tests:", existing.test_coverage.missing_tests.length);
    return existing; // Skip execution, return existing
  } else {
    console.warn("Invalid test_session_id in existing package, re-generating...");
  }
}
```

### Step 2: Invoke Test-Context-Search Agent

**Only execute if Step 1 finds no valid package**

```javascript
// Spawn test-context-search-agent
const agentId = spawn_agent({
  message: `
## TASK ASSIGNMENT

### MANDATORY FIRST STEPS (Agent Execute)
1. **Read role definition**: ~/.codex/agents/test-context-search-agent.md (MUST read first)
2. Read: .workflow/project-tech.json
3. Read: .workflow/project-guidelines.json

---

## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution

## Session Information
- **Test Session ID**: ${test_session_id}
- **Output Path**: .workflow/active/${test_session_id}/.process/test-context-package.json

## Mission
Execute complete test-context-search-agent workflow for test generation planning:

### Phase 1: Session Validation & Source Context Loading
1. **Detection**: Check for existing test-context-package (early exit if valid)
2. **Test Session Validation**: Load test session metadata, extract source_session reference
3. **Source Context Loading**: Load source session implementation summaries, changed files, tech stack

### Phase 2: Test Coverage Analysis
Execute coverage discovery:
- **Track 1**: Existing test discovery (find *.test.*, *.spec.* files)
- **Track 2**: Coverage gap analysis (match implementation files to test files)
- **Track 3**: Coverage statistics (calculate percentages, identify gaps by module)

### Phase 3: Framework Detection & Packaging
1. Framework identification from package.json/requirements.txt
2. Convention analysis from existing test patterns
3. Generate and validate test-context-package.json

## Output Requirements
Complete test-context-package.json with:
- **metadata**: test_session_id, source_session_id, task_type, complexity
- **source_context**: implementation_summaries, tech_stack, project_patterns
- **test_coverage**: existing_tests[], missing_tests[], coverage_stats
- **test_framework**: framework, version, test_pattern, conventions
- **assets**: implementation_summary[], existing_test[], source_code[] with priorities
- **focus_areas**: Test generation guidance based on coverage gaps

## Quality Validation
Before completion verify:
- [ ] Valid JSON format with all required fields
- [ ] Source session context loaded successfully
- [ ] Test coverage gaps identified
- [ ] Test framework detected (or marked as 'unknown')
- [ ] Coverage percentage calculated correctly
- [ ] Missing tests catalogued with priority
- [ ] Execution time < 30 seconds (< 60s for large codebases)

Execute autonomously following agent documentation.
Report completion with coverage statistics.
`
});

// Wait for agent completion
const result = wait({
  ids: [agentId],
  timeout_ms: 300000 // 5 minutes
});

// Handle timeout
if (result.timed_out) {
  console.warn("Test context gathering timed out, sending prompt...");
  send_input({
    id: agentId,
    message: "Please complete test-context-package.json generation and report results."
  });
  const retryResult = wait({ ids: [agentId], timeout_ms: 120000 });
}

// Clean up agent resources
close_agent({ id: agentId });
```

### Step 3: Output Verification

After the agent completes, verify the output:

```javascript
// Verify file was created
const outputPath = `.workflow/active/${test_session_id}/.process/test-context-package.json`;
if (!file_exists(outputPath)) {
  throw new Error("Agent failed to generate test-context-package.json");
}

// Load and display summary
const testContext = Read(outputPath);
console.log("Test context package generated successfully");
console.log("Coverage:", testContext.test_coverage.coverage_stats.coverage_percentage + "%");
console.log("Tests to generate:", testContext.test_coverage.missing_tests.length);
```

## Parameter Reference

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `--session` | string | Yes | Test workflow session ID (e.g., WFS-test-auth) |

## Output Schema

Refer to `test-context-search-agent.md` Phase 3.2 for the complete `test-context-package.json` schema.

**Key Sections**:
- **metadata**: Test session info, source session reference, complexity
- **source_context**: Implementation summaries with changed files and tech stack
- **test_coverage**: Existing tests, missing tests with priorities, coverage statistics
- **test_framework**: Framework name, version, patterns, conventions
- **assets**: Categorized files with relevance (implementation_summary, existing_test, source_code)
- **focus_areas**: Test generation guidance based on analysis

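To make the shape concrete, a trimmed example of what such a package might contain is sketched below. The top-level keys follow the key sections above; the example values and any nested fields not listed there are illustrative assumptions, not the canonical schema.

```javascript
// Minimal illustrative test-context-package content (not the canonical schema).
const exampleTestContextPackage = {
  metadata: { test_session_id: "WFS-test-auth", source_session_id: "WFS-auth", task_type: "test-generation", complexity: "medium" },
  source_context: { implementation_summaries: ["src/auth/token.ts"], tech_stack: ["typescript", "jest"], project_patterns: [] },
  test_coverage: {
    existing_tests: ["src/auth/__tests__/login.test.ts"],
    missing_tests: [{ file: "src/auth/token.ts", priority: "high" }],
    coverage_stats: { coverage_percentage: 62 }
  },
  test_framework: { framework: "jest", version: "29.x", test_pattern: "**/*.test.ts", conventions: ["co-located __tests__"] },
  assets: { implementation_summary: [], existing_test: [], source_code: [] },
  focus_areas: ["token refresh edge cases"]
}
```
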
## Success Criteria

- Valid test-context-package.json generated in `.workflow/active/{test_session_id}/.process/`
- Source session context loaded successfully
- Test coverage gaps identified (>90% accuracy)
- Test framework detected and documented
- Execution completes within 30 seconds (60s for large codebases)
- All required schema fields present and valid
- Coverage statistics calculated correctly
- Agent reports completion with statistics

## Error Handling

| Error | Cause | Resolution |
|-------|-------|------------|
| Package validation failed | Invalid test_session_id in existing package | Re-run agent to regenerate |
| Source session not found | Invalid source_session reference | Verify test session metadata |
| No implementation summaries | Source session incomplete | Complete source session first |
| Agent execution timeout | Large codebase or slow analysis | Increase timeout, check file access |
| Missing required fields | Agent incomplete execution | Check agent logs, verify schema compliance |
| No test framework detected | Missing test dependencies | Agent marks as 'unknown', manual specification needed |

## Integration

### Called By
- SKILL.md (Phase 3: Test Coverage Analysis)

### Calls
- `test-context-search-agent` via spawn_agent - Autonomous test coverage analysis

## Notes

- **Detection-first**: Always check for an existing test-context-package before invoking the agent
- **No redundancy**: This command is a thin orchestrator; all logic lives in the agent
- **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, Go testing, etc.
- **Coverage focus**: The primary goal is identifying implementation files without tests
- **Explicit lifecycle**: Always close_agent after wait completes

---

## Post-Phase Update

After Phase 1 (Test Context Gather) completes:
- **Output Created**: `test-context-package.json` in `.workflow/active/{session}/.process/`
- **Data Available**: Test coverage stats, framework info, missing tests list
- **Next Action**: Continue to Phase 4 (Conflict Resolution, if conflict_risk >= medium) or Phase 5 (TDD Task Generation)
- **TodoWrite**: Collapse Phase 3 sub-tasks to "Phase 3: Test Coverage Analysis: completed"