---
name: issue-discover
description: Unified issue discovery and creation. Create issues from GitHub/text, discover issues via multi-perspective analysis, or prompt-driven iterative exploration. Triggers on "issue:new", "issue:discover", "issue:discover-by-prompt", "create issue", "discover issues", "find issues".
allowed-tools: Task, AskUserQuestion, TodoWrite, Read, Write, Edit, Bash, Glob, Grep, Skill, mcp__ace-tool__search_context, mcp__exa__search
---

# Issue Discover
|
||||
|
||||
Unified issue discovery and creation skill covering three entry points: manual issue creation, perspective-based discovery, and prompt-driven exploration.
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Issue Discover Orchestrator (SKILL.md) │
|
||||
│ → Action selection → Route to phase → Execute → Summary │
|
||||
└───────────────┬─────────────────────────────────────────────────┘
|
||||
│
|
||||
├─ AskUserQuestion: Select action
|
||||
│
|
||||
┌───────────┼───────────┬───────────┐
|
||||
↓ ↓ ↓ │
|
||||
┌─────────┐ ┌─────────┐ ┌─────────┐ │
|
||||
│ Phase 1 │ │ Phase 2 │ │ Phase 3 │ │
|
||||
│ Create │ │Discover │ │Discover │ │
|
||||
│ New │ │ Multi │ │by Prompt│ │
|
||||
└─────────┘ └─────────┘ └─────────┘ │
|
||||
↓ ↓ ↓ │
|
||||
Issue Discoveries Discoveries │
|
||||
(registered) (export) (export) │
|
||||
│ │ │ │
|
||||
└───────────┴───────────┘ │
|
||||
↓ │
|
||||
issue-resolve (plan/queue) │
|
||||
↓ │
|
||||
/issue:execute │
|
||||
```
|
||||
|
||||
## Key Design Principles
|
||||
|
||||
1. **Action-Driven Routing**: AskUserQuestion selects action, then load single phase
|
||||
2. **Progressive Phase Loading**: Only read the selected phase document
|
||||
3. **CLI-First Data Access**: All issue CRUD via `ccw issue` CLI commands
|
||||
4. **Auto Mode Support**: `-y` flag skips action selection with auto-detection
|
||||
|
||||
## Auto Mode
|
||||
|
||||
When `--yes` or `-y`: Skip action selection, auto-detect action from input type.
|
||||
|
||||
## Usage
|
||||
|
||||
```
|
||||
Skill(skill="issue-discover", args="<input>")
|
||||
Skill(skill="issue-discover", args="[FLAGS] \"<input>\"")
|
||||
|
||||
# Flags
|
||||
-y, --yes Skip all confirmations (auto mode)
|
||||
--action <type> Pre-select action: new|discover|discover-by-prompt
|
||||
|
||||
# Phase-specific flags
|
||||
--priority <1-5> Issue priority (new mode)
|
||||
--perspectives <list> Comma-separated perspectives (discover mode)
|
||||
--external Enable Exa research (discover mode)
|
||||
--scope <pattern> File scope (discover/discover-by-prompt mode)
|
||||
--depth <level> standard|deep (discover-by-prompt mode)
|
||||
--max-iterations <n> Max exploration iterations (discover-by-prompt mode)
|
||||
|
||||
# Examples
|
||||
Skill(skill="issue-discover", args="https://github.com/org/repo/issues/42") # Create from GitHub
|
||||
Skill(skill="issue-discover", args="\"Login fails with special chars\"") # Create from text
|
||||
Skill(skill="issue-discover", args="--action discover src/auth/**") # Multi-perspective discovery
|
||||
Skill(skill="issue-discover", args="--action discover src/api/** --perspectives=security,bug") # Focused discovery
|
||||
Skill(skill="issue-discover", args="--action discover-by-prompt \"Check API contracts\"") # Prompt-driven discovery
|
||||
Skill(skill="issue-discover", args="-y \"auth broken\"") # Auto mode create
|
||||
```
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```
|
||||
Input Parsing:
|
||||
└─ Parse flags (--action, -y, --perspectives, etc.) and positional args
|
||||
|
||||
Action Selection:
|
||||
├─ --action flag provided → Route directly
|
||||
├─ Auto-detect from input:
|
||||
│ ├─ GitHub URL or #number → Create New (Phase 1)
|
||||
│ ├─ Path pattern (src/**, *.ts) → Discover (Phase 2)
|
||||
│ ├─ Short text (< 80 chars) → Create New (Phase 1)
|
||||
│ └─ Long descriptive text (≥ 80 chars) → Discover by Prompt (Phase 3)
|
||||
└─ Otherwise → AskUserQuestion to select action
|
||||
|
||||
Phase Execution (load one phase):
|
||||
├─ Phase 1: Create New → phases/01-issue-new.md
|
||||
├─ Phase 2: Discover → phases/02-discover.md
|
||||
└─ Phase 3: Discover by Prompt → phases/03-discover-by-prompt.md
|
||||
|
||||
Post-Phase:
|
||||
└─ Summary + Next steps recommendation
|
||||
```
|
||||
|
||||
### Phase Reference Documents
|
||||
|
||||
| Phase | Document | Load When | Purpose |
|
||||
|-------|----------|-----------|---------|
|
||||
| Phase 1 | [phases/01-issue-new.md](phases/01-issue-new.md) | Action = Create New | Create issue from GitHub URL or text description |
|
||||
| Phase 2 | [phases/02-discover.md](phases/02-discover.md) | Action = Discover | Multi-perspective issue discovery (bug, security, test, etc.) |
|
||||
| Phase 3 | [phases/03-discover-by-prompt.md](phases/03-discover-by-prompt.md) | Action = Discover by Prompt | Prompt-driven iterative exploration with Gemini planning |
|
||||
|
||||
## Core Rules
|
||||
|
||||
1. **Action Selection First**: Always determine action before loading any phase
|
||||
2. **Single Phase Load**: Only read the selected phase document, never load all phases
|
||||
3. **CLI Data Access**: Use `ccw issue` CLI for all issue operations, NEVER read files directly
|
||||
4. **Content Preservation**: Each phase contains complete execution logic from original commands
|
||||
5. **Auto-Detect Input**: Smart input parsing reduces need for explicit --action flag
|
||||
|
||||
## Input Processing
|
||||
|
||||
### Auto-Detection Logic
|
||||
|
||||
```javascript
|
||||
function detectAction(input, flags) {
|
||||
// 1. Explicit --action flag
|
||||
if (flags.action) return flags.action;
|
||||
|
||||
const trimmed = input.trim();
|
||||
|
||||
// 2. GitHub URL → new
|
||||
if (trimmed.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/) || trimmed.match(/^#\d+$/)) {
|
||||
return 'new';
|
||||
}
|
||||
|
||||
// 3. Path pattern (contains **, /, or --perspectives) → discover
|
||||
if (trimmed.match(/\*\*/) || trimmed.match(/^src\//) || flags.perspectives) {
|
||||
return 'discover';
|
||||
}
|
||||
|
||||
// 4. Short text (< 80 chars, no special patterns) → new
|
||||
if (trimmed.length > 0 && trimmed.length < 80 && !trimmed.includes('--')) {
|
||||
return 'new';
|
||||
}
|
||||
|
||||
// 5. Long descriptive text → discover-by-prompt
|
||||
if (trimmed.length >= 80) {
|
||||
return 'discover-by-prompt';
|
||||
}
|
||||
|
||||
// Cannot auto-detect → ask user
|
||||
return null;
|
||||
}
|
||||
```
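
For illustration, here is how the heuristic above would route a few representative inputs (a sketch; the sample inputs are hypothetical):

```javascript
// Sketch: expected routing from detectAction() for representative inputs
detectAction("https://github.com/org/repo/issues/42", {}); // → 'new' (GitHub URL)
detectAction("#42", {});                                   // → 'new' (short issue reference)
detectAction("src/auth/**", {});                           // → 'discover' (path pattern)
detectAction("auth broken", {});                           // → 'new' (short text, < 80 chars)
detectAction("Check whether frontend API calls match the backend route definitions and payload schemas", {}); // → 'discover-by-prompt' (≥ 80 chars)
detectAction("", {});                                      // → null → AskUserQuestion
```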
|
||||
|
||||
### Action Selection (AskUserQuestion)
|
||||
|
||||
```javascript
|
||||
// When action cannot be auto-detected
|
||||
const answer = AskUserQuestion({
|
||||
questions: [{
|
||||
question: "What would you like to do?",
|
||||
header: "Action",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{
|
||||
label: "Create New Issue (Recommended)",
|
||||
description: "Create issue from GitHub URL, text description, or structured input"
|
||||
},
|
||||
{
|
||||
label: "Discover Issues",
|
||||
description: "Multi-perspective discovery: bug, security, test, quality, performance, etc."
|
||||
},
|
||||
{
|
||||
label: "Discover by Prompt",
|
||||
description: "Describe what to find — Gemini plans the exploration strategy iteratively"
|
||||
}
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
// Route based on selection
|
||||
const actionMap = {
|
||||
"Create New Issue": "new",
|
||||
"Discover Issues": "discover",
|
||||
"Discover by Prompt": "discover-by-prompt"
|
||||
};
|
||||
```
|
||||
|
||||
## Data Flow
|
||||
|
||||
```
|
||||
User Input (URL / text / path pattern / descriptive prompt)
|
||||
↓
|
||||
[Parse Flags + Auto-Detect Action]
|
||||
↓
|
||||
[Action Selection] ← AskUserQuestion (if needed)
|
||||
↓
|
||||
[Read Selected Phase Document]
|
||||
↓
|
||||
[Execute Phase Logic]
|
||||
↓
|
||||
[Summary + Next Steps]
|
||||
├─ After Create → Suggest issue-resolve (plan solution)
|
||||
└─ After Discover → Suggest export to issues, then issue-resolve
|
||||
```
|
||||
|
||||
## TodoWrite Pattern
|
||||
|
||||
```json
|
||||
[
|
||||
{"content": "Select action", "status": "completed"},
|
||||
{"content": "Execute: [selected phase name]", "status": "in_progress"},
|
||||
{"content": "Summary & next steps", "status": "pending"}
|
||||
]
|
||||
```
|
||||
|
||||
Phase-specific sub-tasks are attached when the phase executes (see individual phase docs for details).
|
||||
|
||||
## Core Guidelines
|
||||
|
||||
**Data Access Principle**: Issues files can grow very large. To avoid context overflow:
|
||||
|
||||
| Operation | Correct | Incorrect |
|
||||
|-----------|---------|-----------|
|
||||
| List issues (brief) | `ccw issue list --status pending --brief` | `Read('issues.jsonl')` |
|
||||
| Read issue details | `ccw issue status <id> --json` | `Read('issues.jsonl')` |
|
||||
| Create issue | `echo '...' \| ccw issue create` | Direct file write |
|
||||
| Update status | `ccw issue update <id> --status ...` | Direct file edit |
|
||||
|
||||
**ALWAYS** use CLI commands for CRUD operations. **NEVER** read entire `issues.jsonl` directly.
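
As a minimal sketch of this CLI-first pattern (using the `Bash` tool wrapper shown elsewhere in this skill; the issue ID is hypothetical):

```javascript
// Sketch: CLI-first issue access — never read issues.jsonl directly
const brief = Bash(`ccw issue list --status pending --brief`);                    // lightweight listing
const detail = JSON.parse(Bash(`ccw issue status ISS-20250101-120000 --json`));   // full record for one issue
Bash(`ccw issue update ISS-20250101-120000 --status in_progress`);                // status transition via CLI
```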
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Resolution |
|
||||
|-------|------------|
|
||||
| No action detected | Show AskUserQuestion with all 3 options |
|
||||
| Invalid action type | Show available actions, re-prompt |
|
||||
| Phase execution fails | Report error, suggest manual intervention |
|
||||
| No files matched (discover) | Check target pattern, verify path exists |
|
||||
| Gemini planning failed (discover-by-prompt) | Retry with qwen fallback |
|
||||
|
||||
## Post-Phase Next Steps
|
||||
|
||||
After successful phase execution, recommend next action:
|
||||
|
||||
```javascript
|
||||
// After Create New (issue created)
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Issue created. What next?",
|
||||
header: "Next",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Plan Solution", description: "Generate solution via issue-resolve" },
|
||||
{ label: "Create Another", description: "Create more issues" },
|
||||
{ label: "View Issues", description: "Review all issues" },
|
||||
{ label: "Done", description: "Exit workflow" }
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
// After Discover / Discover by Prompt (discoveries generated)
|
||||
AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Discovery complete. What next?",
|
||||
header: "Next",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Export to Issues", description: "Convert discoveries to issues" },
|
||||
{ label: "Plan Solutions", description: "Plan solutions for exported issues via issue-resolve" },
|
||||
{ label: "Done", description: "Exit workflow" }
|
||||
]
|
||||
}]
|
||||
});
|
||||
```
|
||||
|
||||
## Related Skills & Commands
|
||||
|
||||
- `issue-resolve` - Plan solutions, convert artifacts, and form execution queues (including from brainstorm output)
|
||||
- `issue-manage` - Interactive issue CRUD operations
|
||||
- `/issue:execute` - Execute queue with DAG-based parallel orchestration
|
||||
- `ccw issue list` - List all issues
|
||||
- `ccw issue status <id>` - View issue details
|
||||
# Phase 1: Create New Issue
|
||||
|
||||
> Source: `commands/issue/new.md`
|
||||
|
||||
## Overview
|
||||
|
||||
Create structured issue from GitHub URL or text description with clarity-based flow control.
|
||||
|
||||
**Core workflow**: Input Analysis → Clarity Detection → Data Extraction → Optional Clarification → GitHub Publishing → Create Issue
|
||||
|
||||
**Input sources**:
|
||||
- **GitHub URL** - `https://github.com/owner/repo/issues/123` or `#123`
|
||||
- **Structured text** - Text with expected/actual/affects keywords
|
||||
- **Vague text** - Short description that needs clarification
|
||||
|
||||
**Output**:
|
||||
- **Issue** (GH-xxx or ISS-YYYYMMDD-HHMMSS) - Registered issue ready for planning
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- `gh` CLI available (for GitHub URLs)
|
||||
- `ccw issue` CLI available
|
||||
|
||||
## Auto Mode
|
||||
|
||||
When `--yes` or `-y`: Skip clarification questions, create issue with inferred details.
|
||||
|
||||
## Arguments
|
||||
|
||||
| Argument | Required | Type | Default | Description |
|
||||
|----------|----------|------|---------|-------------|
|
||||
| input | Yes | String | - | GitHub URL, `#number`, or text description |
|
||||
| --priority | No | Integer | auto | Priority 1-5 (auto-inferred if omitted) |
|
||||
| -y, --yes | No | Flag | false | Skip all confirmations |
|
||||
|
||||
## Issue Structure
|
||||
|
||||
```typescript
|
||||
interface Issue {
|
||||
id: string; // GH-123 or ISS-YYYYMMDD-HHMMSS
|
||||
title: string;
|
||||
status: 'registered' | 'planned' | 'queued' | 'in_progress' | 'completed' | 'failed';
|
||||
priority: number; // 1 (critical) to 5 (low)
|
||||
context: string; // Problem description (single source of truth)
|
||||
source: 'github' | 'text' | 'discovery';
|
||||
source_url?: string;
|
||||
labels?: string[];
|
||||
|
||||
// GitHub binding (for non-GitHub sources that publish to GitHub)
|
||||
github_url?: string;
|
||||
github_number?: number;
|
||||
|
||||
// Optional structured fields
|
||||
expected_behavior?: string;
|
||||
actual_behavior?: string;
|
||||
affected_components?: string[];
|
||||
|
||||
// Feedback history
|
||||
feedback?: {
|
||||
type: 'failure' | 'clarification' | 'rejection';
|
||||
stage: string;
|
||||
content: string;
|
||||
created_at: string;
|
||||
}[];
|
||||
|
||||
bound_solution_id: string | null;
|
||||
created_at: string;
|
||||
updated_at: string;
|
||||
}
|
||||
```
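
For illustration, a minimal registered issue conforming to this structure (all values are hypothetical):

```javascript
// Hypothetical example of a freshly registered, text-sourced issue
const exampleIssue = {
  id: 'ISS-20250101-120000',
  title: 'Login fails with special chars',
  status: 'registered',
  priority: 3,
  context: 'Login fails when the password contains special characters. Expected: success. Actual: 500.',
  source: 'text',
  affected_components: ['src/auth/login.ts'],
  bound_solution_id: null,
  created_at: '2025-01-01T12:00:00Z',
  updated_at: '2025-01-01T12:00:00Z'
};
```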
|
||||
|
||||
## Execution Steps
|
||||
|
||||
### Step 1.1: Input Analysis & Clarity Detection
|
||||
|
||||
```javascript
|
||||
const input = userInput.trim();
|
||||
const flags = parseFlags(userInput);
|
||||
|
||||
// Detect input type and clarity
|
||||
const isGitHubUrl = input.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
|
||||
const isGitHubShort = input.match(/^#(\d+)$/);
|
||||
const hasStructure = input.match(/(expected|actual|affects|steps):/i);
|
||||
|
||||
// Clarity score: 0-3
|
||||
let clarityScore = 0;
|
||||
if (isGitHubUrl || isGitHubShort) clarityScore = 3; // GitHub = fully clear
|
||||
else if (hasStructure) clarityScore = 2; // Structured text = clear
|
||||
else if (input.length > 50) clarityScore = 1; // Long text = somewhat clear
|
||||
else clarityScore = 0; // Vague
|
||||
|
||||
let issueData = {};
|
||||
```
|
||||
|
||||
### Step 1.2: Data Extraction (GitHub or Text)
|
||||
|
||||
```javascript
|
||||
if (isGitHubUrl || isGitHubShort) {
|
||||
// GitHub - fetch via gh CLI
|
||||
const result = Bash(`gh issue view ${extractIssueRef(input)} --json number,title,body,labels,url`);
|
||||
const gh = JSON.parse(result);
|
||||
issueData = {
|
||||
id: `GH-${gh.number}`,
|
||||
title: gh.title,
|
||||
source: 'github',
|
||||
source_url: gh.url,
|
||||
labels: gh.labels.map(l => l.name),
|
||||
context: gh.body?.substring(0, 500) || gh.title,
|
||||
...parseMarkdownBody(gh.body)
|
||||
};
|
||||
} else {
|
||||
// Text description
|
||||
issueData = {
|
||||
id: `ISS-${new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14)}`,
|
||||
source: 'text',
|
||||
...parseTextDescription(input)
|
||||
};
|
||||
}
|
||||
```
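
The `extractIssueRef` helper used above is not defined in this phase; a minimal sketch, assuming `gh issue view` accepts either a bare issue number or a full issue URL:

```javascript
// Sketch of extractIssueRef(): normalize "#123" or a GitHub issue URL
// into a reference that `gh issue view` accepts
function extractIssueRef(input) {
  const short = input.trim().match(/^#(\d+)$/);
  if (short) return short[1];   // "#123" → "123"
  return input.trim();          // full URL is passed through unchanged
}
```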
|
||||
|
||||
### Step 1.3: Lightweight Context Hint (Conditional)
|
||||
|
||||
```javascript
|
||||
// ACE search ONLY for medium clarity (1-2) AND missing components
|
||||
// Skip for: GitHub (has context), vague (needs clarification first)
|
||||
if (clarityScore >= 1 && clarityScore <= 2 && !issueData.affected_components?.length) {
|
||||
const keywords = extractKeywords(issueData.context);
|
||||
|
||||
if (keywords.length >= 2) {
|
||||
try {
|
||||
const aceResult = mcp__ace-tool__search_context({
|
||||
project_root_path: process.cwd(),
|
||||
query: keywords.slice(0, 3).join(' ')
|
||||
});
|
||||
issueData.affected_components = aceResult.files?.slice(0, 3) || [];
|
||||
} catch {
|
||||
// ACE failure is non-blocking
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Step 1.4: Conditional Clarification (Only if Unclear)
|
||||
|
||||
```javascript
|
||||
// ONLY ask questions if clarity is low
|
||||
if (clarityScore < 2 && (!issueData.context || issueData.context.length < 20)) {
|
||||
const answer = AskUserQuestion({
|
||||
questions: [{
|
||||
question: 'Please describe the issue in more detail:',
|
||||
header: 'Clarify',
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: 'Provide details', description: 'Describe what, where, and expected behavior' }
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
if (answer.customText) {
|
||||
issueData.context = answer.customText;
|
||||
issueData.title = answer.customText.split(/[.\n]/)[0].substring(0, 60);
|
||||
issueData.feedback = [{
|
||||
type: 'clarification',
|
||||
stage: 'new',
|
||||
content: answer.customText,
|
||||
created_at: new Date().toISOString()
|
||||
}];
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Step 1.5: GitHub Publishing Decision (Non-GitHub Sources)
|
||||
|
||||
```javascript
|
||||
// For non-GitHub sources, ask if user wants to publish to GitHub
|
||||
let publishToGitHub = false;
|
||||
|
||||
if (issueData.source !== 'github') {
|
||||
const publishAnswer = AskUserQuestion({
|
||||
questions: [{
|
||||
question: 'Would you like to publish this issue to GitHub?',
|
||||
header: 'Publish',
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: 'Yes, publish to GitHub', description: 'Create issue on GitHub and link it' },
|
||||
{ label: 'No, keep local only', description: 'Store as local issue without GitHub sync' }
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
publishToGitHub = publishAnswer.answers?.['Publish']?.includes('Yes');
|
||||
}
|
||||
```
|
||||
|
||||
### Step 1.6: Create Issue
|
||||
|
||||
**Issue Creation** (via CLI endpoint):
|
||||
```bash
|
||||
# Option 1: Pipe input (recommended for complex JSON)
|
||||
echo '{"title":"...", "context":"...", "priority":3}' | ccw issue create
|
||||
|
||||
# Option 2: Heredoc (for multi-line JSON)
|
||||
ccw issue create << 'EOF'
|
||||
{"title":"...", "context":"含\"引号\"的内容", "priority":3}
|
||||
EOF
|
||||
```
|
||||
|
||||
**GitHub Publishing** (if user opted in):
|
||||
```javascript
|
||||
// Step 1: Create local issue FIRST
|
||||
const localIssue = createLocalIssue(issueData); // ccw issue create
|
||||
|
||||
// Step 2: Publish to GitHub if requested
|
||||
if (publishToGitHub) {
|
||||
const ghResult = Bash(`gh issue create --title "${issueData.title}" --body "${issueData.context}"`);
|
||||
const ghUrl = ghResult.match(/https:\/\/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/)?.[0];
|
||||
const ghNumber = parseInt(ghUrl?.match(/\/issues\/(\d+)/)?.[1]);
|
||||
|
||||
if (ghNumber) {
|
||||
Bash(`ccw issue update ${localIssue.id} --github-url "${ghUrl}" --github-number ${ghNumber}`);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Workflow:**
|
||||
```
|
||||
1. Create local issue (ISS-YYYYMMDD-HHMMSS) → stored in .workflow/issues.jsonl
|
||||
2. If publishToGitHub:
|
||||
a. gh issue create → returns GitHub URL
|
||||
b. Update local issue with github_url + github_number binding
|
||||
3. Both local and GitHub issues exist, linked together
|
||||
```
|
||||
|
||||
## Execution Flow
|
||||
|
||||
```
|
||||
Phase 1: Input Analysis
|
||||
└─ Detect clarity score (GitHub URL? Structured text? Keywords?)
|
||||
|
||||
Phase 2: Data Extraction (branched by clarity)
|
||||
┌────────────┬─────────────────┬──────────────┐
|
||||
│ Score 3 │ Score 1-2 │ Score 0 │
|
||||
│ GitHub │ Text + ACE │ Vague │
|
||||
├────────────┼─────────────────┼──────────────┤
|
||||
│ gh CLI │ Parse struct │ AskQuestion │
|
||||
│ → parse │ + quick hint │ (1 question) │
|
||||
│ │ (3 files max) │ → feedback │
|
||||
└────────────┴─────────────────┴──────────────┘
|
||||
|
||||
Phase 3: GitHub Publishing Decision (non-GitHub only)
|
||||
├─ Source = github: Skip (already from GitHub)
|
||||
└─ Source ≠ github: AskUserQuestion
|
||||
├─ Yes → publishToGitHub = true
|
||||
└─ No → publishToGitHub = false
|
||||
|
||||
Phase 4: Create Issue
|
||||
├─ Score ≥ 2: Direct creation
|
||||
└─ Score < 2: Confirm first → Create
|
||||
└─ If publishToGitHub: gh issue create → link URL
|
||||
|
||||
Note: Deep exploration & lifecycle deferred to /issue:plan
|
||||
```
|
||||
|
||||
## Helper Functions
|
||||
|
||||
```javascript
|
||||
function extractKeywords(text) {
|
||||
const stopWords = new Set(['the', 'a', 'an', 'is', 'are', 'was', 'were', 'not', 'with']);
|
||||
return text
|
||||
.toLowerCase()
|
||||
.split(/\W+/)
|
||||
.filter(w => w.length > 3 && !stopWords.has(w))
|
||||
.slice(0, 5);
|
||||
}
|
||||
|
||||
function parseTextDescription(text) {
|
||||
const result = { title: '', context: '' };
|
||||
const sentences = text.split(/\.(?=\s|$)/);
|
||||
|
||||
result.title = sentences[0]?.trim().substring(0, 60) || 'Untitled';
|
||||
result.context = text.substring(0, 500);
|
||||
|
||||
const expected = text.match(/expected:?\s*([^.]+)/i);
|
||||
const actual = text.match(/actual:?\s*([^.]+)/i);
|
||||
const affects = text.match(/affects?:?\s*([^.]+)/i);
|
||||
|
||||
if (expected) result.expected_behavior = expected[1].trim();
|
||||
if (actual) result.actual_behavior = actual[1].trim();
|
||||
if (affects) {
|
||||
result.affected_components = affects[1].split(/[,\s]+/).filter(c => c.includes('/') || c.includes('.'));
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
function parseMarkdownBody(body) {
|
||||
if (!body) return {};
|
||||
const result = {};
|
||||
|
||||
const problem = body.match(/##?\s*(problem|description)[:\s]*([\s\S]*?)(?=##|$)/i);
|
||||
const expected = body.match(/##?\s*expected[:\s]*([\s\S]*?)(?=##|$)/i);
|
||||
const actual = body.match(/##?\s*actual[:\s]*([\s\S]*?)(?=##|$)/i);
|
||||
|
||||
if (problem) result.context = problem[2].trim().substring(0, 500);
|
||||
if (expected) result.expected_behavior = expected[2].trim();
|
||||
if (actual) result.actual_behavior = actual[2].trim();
|
||||
|
||||
return result;
|
||||
}
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Message | Resolution |
|
||||
|-------|---------|------------|
|
||||
| GitHub fetch failed | gh CLI error | Check gh auth, verify URL |
|
||||
| Clarity too low | Input unclear | Ask clarification question |
|
||||
| Issue creation failed | CLI error | Verify ccw issue endpoint |
|
||||
| GitHub publish failed | gh issue create error | Create local-only, skip GitHub |
|
||||
|
||||
## Examples
|
||||
|
||||
### Clear Input (No Questions)
|
||||
|
||||
```bash
|
||||
Skill(skill="issue-lifecycle", args="https://github.com/org/repo/issues/42")
|
||||
# → Fetches, parses, creates immediately
|
||||
|
||||
Skill(skill="issue-lifecycle", args="\"Login fails with special chars. Expected: success. Actual: 500\"")
|
||||
# → Parses structure, creates immediately
|
||||
```
|
||||
|
||||
### Vague Input (1 Question)
|
||||
|
||||
```bash
|
||||
Skill(skill="issue-lifecycle", args="\"auth broken\"")
|
||||
# → Asks: "Please describe the issue in more detail"
|
||||
# → User provides details → saved to feedback[]
|
||||
# → Creates issue
|
||||
```
|
||||
|
||||
## Post-Phase Update
|
||||
|
||||
After issue creation:
|
||||
- Issue created with `status: registered`
|
||||
- Report: issue ID, title, source, affected components
|
||||
- Show GitHub URL (if published)
|
||||
- Recommend next step: `/issue:plan <id>` or `Skill(skill="issue-resolve", args="<id>")`
|
||||
# Phase 2: Discover Issues (Multi-Perspective)
|
||||
|
||||
> Source: `commands/issue/discover.md`
|
||||
|
||||
## Overview
|
||||
|
||||
Multi-perspective issue discovery orchestrator that explores code from different angles to identify potential bugs, UX improvements, test gaps, and other actionable items.
|
||||
|
||||
**Core workflow**: Initialize → Select Perspectives → Parallel Analysis → Aggregate → Generate Issues → User Action
|
||||
|
||||
**Discovery Scope**: Specified modules/files only
|
||||
**Output Directory**: `.workflow/issues/discoveries/{discovery-id}/`
|
||||
**Available Perspectives**: bug, ux, test, quality, security, performance, maintainability, best-practices
|
||||
**Exa Integration**: Auto-enabled for security and best-practices perspectives
|
||||
**CLI Tools**: Gemini → Qwen → Codex (fallback chain)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Target file/module pattern (e.g., `src/auth/**`)
|
||||
- `ccw issue` CLI available
|
||||
|
||||
## Auto Mode
|
||||
|
||||
When `--yes` or `-y`: Auto-select all perspectives, skip confirmations.
|
||||
|
||||
## Arguments
|
||||
|
||||
| Argument | Required | Type | Default | Description |
|
||||
|----------|----------|------|---------|-------------|
|
||||
| target | Yes | String | - | File/module glob pattern (e.g., `src/auth/**`) |
|
||||
| --perspectives | No | String | interactive | Comma-separated: bug,ux,test,quality,security,performance,maintainability,best-practices |
|
||||
| --external | No | Flag | false | Enable Exa research for all perspectives |
|
||||
| -y, --yes | No | Flag | false | Skip all confirmations |
|
||||
|
||||
## Perspectives
|
||||
|
||||
| Perspective | Focus | Categories | Exa |
|
||||
|-------------|-------|------------|-----|
|
||||
| **bug** | Potential Bugs | edge-case, null-check, resource-leak, race-condition, boundary, exception-handling | - |
|
||||
| **ux** | User Experience | error-message, loading-state, feedback, accessibility, interaction, consistency | - |
|
||||
| **test** | Test Coverage | missing-test, edge-case-test, integration-gap, coverage-hole, assertion-quality | - |
|
||||
| **quality** | Code Quality | complexity, duplication, naming, documentation, code-smell, readability | - |
|
||||
| **security** | Security Issues | injection, auth, encryption, input-validation, data-exposure, access-control | ✓ |
|
||||
| **performance** | Performance | n-plus-one, memory-usage, caching, algorithm, blocking-operation, resource | - |
|
||||
| **maintainability** | Maintainability | coupling, cohesion, tech-debt, extensibility, module-boundary, interface-design | - |
|
||||
| **best-practices** | Best Practices | convention, pattern, framework-usage, anti-pattern, industry-standard | ✓ |
|
||||
|
||||
## Execution Steps
|
||||
|
||||
### Step 2.1: Discovery & Initialization
|
||||
|
||||
```javascript
|
||||
// Parse target pattern and resolve files
|
||||
const resolvedFiles = await expandGlobPattern(targetPattern);
|
||||
if (resolvedFiles.length === 0) {
|
||||
throw new Error(`No files matched pattern: ${targetPattern}`);
|
||||
}
|
||||
|
||||
// Generate discovery ID
|
||||
const discoveryId = `DSC-${formatDate(new Date(), 'YYYYMMDD-HHmmss')}`;
|
||||
|
||||
// Create output directory
|
||||
const outputDir = `.workflow/issues/discoveries/${discoveryId}`;
|
||||
await mkdir(outputDir, { recursive: true });
|
||||
await mkdir(`${outputDir}/perspectives`, { recursive: true });
|
||||
|
||||
// Initialize unified discovery state
|
||||
await writeJson(`${outputDir}/discovery-state.json`, {
|
||||
discovery_id: discoveryId,
|
||||
target_pattern: targetPattern,
|
||||
phase: "initialization",
|
||||
created_at: new Date().toISOString(),
|
||||
updated_at: new Date().toISOString(),
|
||||
target: { files_count: { total: resolvedFiles.length }, project: {} },
|
||||
perspectives: [],
|
||||
external_research: { enabled: false, completed: false },
|
||||
results: { total_findings: 0, issues_generated: 0, priority_distribution: {} }
|
||||
});
|
||||
```
|
||||
|
||||
### Step 2.2: Interactive Perspective Selection
|
||||
|
||||
```javascript
|
||||
let selectedPerspectives = [];
|
||||
|
||||
if (args.perspectives) {
|
||||
selectedPerspectives = args.perspectives.split(',').map(p => p.trim());
|
||||
} else {
|
||||
// Interactive selection via AskUserQuestion
|
||||
const response = AskUserQuestion({
|
||||
questions: [{
|
||||
question: "Select primary discovery focus:",
|
||||
header: "Focus",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Bug + Test + Quality", description: "Quick scan: potential bugs, test gaps, code quality (Recommended)" },
|
||||
{ label: "Security + Performance", description: "System audit: security issues, performance bottlenecks" },
|
||||
{ label: "Maintainability + Best-practices", description: "Long-term health: coupling, tech debt, conventions" },
|
||||
{ label: "Full analysis", description: "All 8 perspectives (comprehensive, takes longer)" }
|
||||
]
|
||||
}]
|
||||
});
|
||||
selectedPerspectives = parseSelectedPerspectives(response);
|
||||
}
|
||||
```
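
`parseSelectedPerspectives` is referenced but not defined here; a sketch that maps the option labels above onto perspective lists (assumes the selected label is returned under the "Focus" header):

```javascript
// Sketch: map the selected focus label to concrete perspective names
function parseSelectedPerspectives(response) {
  const label = response.answers?.['Focus'] || '';
  if (label.startsWith('Bug')) return ['bug', 'test', 'quality'];
  if (label.startsWith('Security')) return ['security', 'performance'];
  if (label.startsWith('Maintainability')) return ['maintainability', 'best-practices'];
  // "Full analysis" → all 8 perspectives
  return ['bug', 'ux', 'test', 'quality', 'security', 'performance', 'maintainability', 'best-practices'];
}
```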
|
||||
|
||||
### Step 2.3: Parallel Perspective Analysis
|
||||
|
||||
Launch N agents in parallel (one per selected perspective):
|
||||
|
||||
```javascript
|
||||
const agentPromises = selectedPerspectives.map(perspective =>
|
||||
Task({
|
||||
subagent_type: "cli-explore-agent",
|
||||
run_in_background: false,
|
||||
description: `Discover ${perspective} issues`,
|
||||
prompt: buildPerspectivePrompt(perspective, discoveryId, resolvedFiles, outputDir)
|
||||
})
|
||||
);
|
||||
|
||||
const results = await Promise.all(agentPromises);
|
||||
```
|
||||
|
||||
### Step 2.4: Aggregation & Prioritization
|
||||
|
||||
```javascript
|
||||
// Load all perspective JSON files written by agents
|
||||
const allFindings = [];
|
||||
for (const perspective of selectedPerspectives) {
|
||||
const jsonPath = `${outputDir}/perspectives/${perspective}.json`;
|
||||
if (await fileExists(jsonPath)) {
|
||||
const data = await readJson(jsonPath);
|
||||
allFindings.push(...data.findings.map(f => ({ ...f, perspective })));
|
||||
}
|
||||
}
|
||||
|
||||
// Deduplicate and prioritize
|
||||
const prioritizedFindings = deduplicateAndPrioritize(allFindings);
|
||||
```
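
`deduplicateAndPrioritize` is left abstract here; a minimal sketch, assuming findings are duplicates when they share file, line, and title, and that priority orders critical > high > medium > low:

```javascript
// Sketch: deduplicate findings across perspectives, then sort by priority
function deduplicateAndPrioritize(findings) {
  const rank = { critical: 0, high: 1, medium: 2, low: 3 };
  const seen = new Map();
  for (const f of findings) {
    const key = `${f.file}:${f.line}:${f.title}`;
    const existing = seen.get(key);
    // When the same finding appears from multiple perspectives, keep the higher-priority copy
    if (!existing || rank[f.priority] < rank[existing.priority]) seen.set(key, f);
  }
  return [...seen.values()].sort((a, b) => rank[a.priority] - rank[b.priority]);
}
```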
|
||||
|
||||
### Step 2.5: Issue Generation & Summary
|
||||
|
||||
```javascript
|
||||
// Convert high-priority findings to issues
|
||||
const issueWorthy = prioritizedFindings.filter(f =>
|
||||
f.priority === 'critical' || f.priority === 'high' || f.priority_score >= 0.7
|
||||
);
|
||||
|
||||
// Write discovery-issues.jsonl
|
||||
await writeJsonl(`${outputDir}/discovery-issues.jsonl`, issues);
|
||||
|
||||
// Generate summary from agent returns
|
||||
await writeSummaryFromAgentReturns(outputDir, results, prioritizedFindings, issues);
|
||||
|
||||
// Update final state
|
||||
await updateDiscoveryState(outputDir, {
|
||||
phase: 'complete',
|
||||
updated_at: new Date().toISOString(),
|
||||
'results.issues_generated': issues.length
|
||||
});
|
||||
```
|
||||
|
||||
### Step 2.6: User Action Prompt
|
||||
|
||||
```javascript
|
||||
const hasHighPriority = issues.some(i => i.priority === 'critical' || i.priority === 'high');
|
||||
|
||||
const response = await AskUserQuestion({
|
||||
questions: [{
|
||||
question: `Discovery complete: ${issues.length} issues generated, ${prioritizedFindings.length} total findings. What next?`,
|
||||
header: "Next Step",
|
||||
multiSelect: false,
|
||||
options: hasHighPriority ? [
|
||||
{ label: "Export to Issues (Recommended)", description: `${issues.length} high-priority issues found - export to tracker` },
|
||||
{ label: "Open Dashboard", description: "Review findings in ccw view before exporting" },
|
||||
{ label: "Skip", description: "Complete discovery without exporting" }
|
||||
] : [
|
||||
{ label: "Open Dashboard (Recommended)", description: "Review findings in ccw view to decide which to export" },
|
||||
{ label: "Export to Issues", description: `Export ${issues.length} issues to tracker` },
|
||||
{ label: "Skip", description: "Complete discovery without exporting" }
|
||||
]
|
||||
}]
|
||||
});
|
||||
|
||||
if (response.answers?.['Next Step']?.startsWith("Export to Issues")) {
|
||||
await appendJsonl('.workflow/issues/issues.jsonl', issues);
|
||||
}
|
||||
```
|
||||
|
||||
## Agent Invocation Template
|
||||
|
||||
### Perspective Analysis Agent
|
||||
|
||||
```javascript
|
||||
Task({
|
||||
subagent_type: "cli-explore-agent",
|
||||
run_in_background: false,
|
||||
description: `Discover ${perspective} issues`,
|
||||
prompt: `
|
||||
## Task Objective
|
||||
Discover potential ${perspective} issues in specified module files.
|
||||
|
||||
## Discovery Context
|
||||
- Discovery ID: ${discoveryId}
|
||||
- Perspective: ${perspective}
|
||||
- Target Pattern: ${targetPattern}
|
||||
- Resolved Files: ${resolvedFiles.length} files
|
||||
- Output Directory: ${outputDir}
|
||||
|
||||
## MANDATORY FIRST STEPS
|
||||
1. Read discovery state: ${outputDir}/discovery-state.json
|
||||
2. Read schema: ~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json
|
||||
3. Analyze target files for ${perspective} concerns
|
||||
|
||||
## Output Requirements
|
||||
|
||||
**1. Write JSON file**: ${outputDir}/perspectives/${perspective}.json
|
||||
- Follow discovery-finding-schema.json exactly
|
||||
- Each finding: id, title, priority, category, description, file, line, snippet, suggested_issue, confidence
|
||||
|
||||
**2. Return summary** (DO NOT write report file):
|
||||
- Total findings, priority breakdown, key issues
|
||||
|
||||
## Perspective-Specific Guidance
|
||||
${getPerspectiveGuidance(perspective)}
|
||||
|
||||
## Success Criteria
|
||||
- [ ] JSON written to ${outputDir}/perspectives/${perspective}.json
|
||||
- [ ] Summary returned with findings count and key issues
|
||||
- [ ] Each finding includes actionable suggested_issue
|
||||
- [ ] Priority uses lowercase enum: critical/high/medium/low
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
### Exa Research Agent (for security and best-practices)
|
||||
|
||||
```javascript
|
||||
Task({
|
||||
subagent_type: "cli-explore-agent",
|
||||
run_in_background: false,
|
||||
description: `External research for ${perspective} via Exa`,
|
||||
prompt: `
|
||||
## Task Objective
|
||||
Research industry best practices for ${perspective} using Exa search
|
||||
|
||||
## Research Steps
|
||||
1. Read project tech stack: .workflow/project-tech.json
|
||||
2. Use Exa to search for best practices
|
||||
3. Synthesize findings relevant to this project
|
||||
|
||||
## Output Requirements
|
||||
**1. Write JSON file**: ${outputDir}/external-research.json
|
||||
**2. Return summary** (DO NOT write report file)
|
||||
|
||||
## Success Criteria
|
||||
- [ ] JSON written to ${outputDir}/external-research.json
|
||||
- [ ] Findings are relevant to project's tech stack
|
||||
`
|
||||
})
|
||||
```
|
||||
|
||||
## Perspective Guidance Reference
|
||||
|
||||
```javascript
|
||||
function getPerspectiveGuidance(perspective) {
|
||||
const guidance = {
|
||||
bug: `Focus: Null checks, edge cases, resource leaks, race conditions, boundary conditions, exception handling
|
||||
Priority: Critical=data corruption/crash, High=malfunction, Medium=edge case issues, Low=minor`,
|
||||
ux: `Focus: Error messages, loading states, feedback, accessibility, interaction patterns, form validation
|
||||
Priority: Critical=inaccessible, High=confusing, Medium=inconsistent, Low=cosmetic`,
|
||||
test: `Focus: Missing unit tests, edge case coverage, integration gaps, assertion quality, test isolation
|
||||
Priority: Critical=no security tests, High=no core logic tests, Medium=weak coverage, Low=minor gaps`,
|
||||
quality: `Focus: Complexity, duplication, naming, documentation, code smells, readability
|
||||
Priority: Critical=unmaintainable, High=significant issues, Medium=naming/docs, Low=minor refactoring`,
|
||||
security: `Focus: Input validation, auth/authz, injection, XSS/CSRF, data exposure, access control
|
||||
Priority: Critical=auth bypass/injection, High=missing authz, Medium=weak validation, Low=headers`,
|
||||
performance: `Focus: N+1 queries, memory leaks, caching, algorithm efficiency, blocking operations
|
||||
Priority: Critical=memory leaks, High=N+1/inefficient, Medium=missing cache, Low=minor optimization`,
|
||||
maintainability: `Focus: Coupling, interface design, tech debt, extensibility, module boundaries, configuration
|
||||
Priority: Critical=unrelated code changes, High=unclear boundaries, Medium=coupling, Low=refactoring`,
|
||||
'best-practices': `Focus: Framework conventions, language patterns, anti-patterns, deprecated APIs, coding standards
|
||||
Priority: Critical=anti-patterns causing bugs, High=convention violations, Medium=style, Low=cosmetic`
|
||||
};
|
||||
return guidance[perspective] || 'General code discovery analysis';
|
||||
}
|
||||
```
|
||||
|
||||
## Output File Structure
|
||||
|
||||
```
|
||||
.workflow/issues/discoveries/
|
||||
├── index.json # Discovery session index
|
||||
└── {discovery-id}/
|
||||
├── discovery-state.json # Unified state
|
||||
├── perspectives/
|
||||
│ └── {perspective}.json # Per-perspective findings
|
||||
├── external-research.json # Exa research results (if enabled)
|
||||
├── discovery-issues.jsonl # Generated candidate issues
|
||||
└── summary.md # Summary from agent returns
|
||||
```
|
||||
|
||||
## Schema References
|
||||
|
||||
| Schema | Path | Purpose |
|
||||
|--------|------|---------|
|
||||
| **Discovery State** | `~/.claude/workflows/cli-templates/schemas/discovery-state-schema.json` | Session state machine |
|
||||
| **Discovery Finding** | `~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json` | Perspective analysis results |
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Message | Resolution |
|
||||
|-------|---------|------------|
|
||||
| No files matched | Pattern empty | Check target pattern, verify path exists |
|
||||
| Agent failure | Perspective analysis error | Retry failed perspective, check agent logs |
|
||||
| No findings | All perspectives clean | Report clean status, no issues to generate |
|
||||
|
||||
## Examples
|
||||
|
||||
```bash
|
||||
# Quick scan with default perspectives
|
||||
Skill(skill="issue-lifecycle", args="--action discover src/auth/**")
|
||||
|
||||
# Security-focused audit
|
||||
Skill(skill="issue-lifecycle", args="--action discover src/payment/** --perspectives=security,bug")
|
||||
|
||||
# Full analysis with external research
|
||||
Skill(skill="issue-lifecycle", args="--action discover src/api/** --external")
|
||||
```
|
||||
|
||||
## Post-Phase Update
|
||||
|
||||
After discovery:
|
||||
- Findings aggregated with priority distribution
|
||||
- Issue candidates written to discovery-issues.jsonl
|
||||
- Report: total findings, issues generated, priority breakdown
|
||||
- Recommend next step: Export to issues → `/issue:plan` or `issue-resolve`
|
||||
# Phase 3: Discover by Prompt
|
||||
|
||||
> Source: `commands/issue/discover-by-prompt.md`
|
||||
|
||||
## Overview
|
||||
|
||||
Prompt-driven issue discovery with intelligent planning. Instead of fixed perspectives, this command analyzes user intent via Gemini, plans exploration strategy dynamically, and executes iterative multi-agent exploration with ACE semantic search.
|
||||
|
||||
**Core workflow**: Prompt Analysis → ACE Context → Gemini Planning → Iterative Exploration → Cross-Analysis → Issue Generation
|
||||
|
||||
**Core Difference from Phase 2 (Discover)**:
|
||||
- Phase 2: Pre-defined perspectives (bug, security, etc.), parallel execution
|
||||
- Phase 3: User-driven prompt, Gemini-planned strategy, iterative exploration
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- User prompt describing what to discover
|
||||
- `ccw cli` available (for Gemini planning)
|
||||
- `ccw issue` CLI available
|
||||
|
||||
## Auto Mode
|
||||
|
||||
When `--yes` or `-y`: Auto-continue all iterations, skip confirmations.
|
||||
|
||||
## Arguments
|
||||
|
||||
| Argument | Required | Type | Default | Description |
|
||||
|----------|----------|------|---------|-------------|
|
||||
| prompt | Yes | String | - | Natural language description of what to find |
|
||||
| --scope | No | String | `**/*` | File pattern to explore |
|
||||
| --depth | No | String | `standard` | `standard` (3 iterations) or `deep` (5+ iterations) |
|
||||
| --max-iterations | No | Integer | 5 | Maximum exploration iterations |
|
||||
| --plan-only | No | Flag | false | Stop after Gemini planning, show plan |
|
||||
| -y, --yes | No | Flag | false | Skip all confirmations |
|
||||
|
||||
## Use Cases
|
||||
|
||||
| Scenario | Example Prompt |
|
||||
|----------|----------------|
|
||||
| API Contract | "Check if frontend calls match backend endpoints" |
|
||||
| Error Handling | "Find inconsistent error handling patterns" |
|
||||
| Migration Gap | "Compare old auth with new auth implementation" |
|
||||
| Feature Parity | "Verify mobile has all web features" |
|
||||
| Schema Drift | "Check if TypeScript types match API responses" |
|
||||
| Integration | "Find mismatches between service A and service B" |
|
||||
|
||||
## Execution Steps
|
||||
|
||||
### Step 3.1: Prompt Analysis & Initialization
|
||||
|
||||
```javascript
|
||||
// Parse arguments
|
||||
const { prompt, scope, depth, maxIterations } = parseArgs(args);
|
||||
|
||||
// Generate discovery ID
|
||||
const discoveryId = `DBP-${formatDate(new Date(), 'YYYYMMDD-HHmmss')}`;
|
||||
|
||||
// Create output directory
|
||||
const outputDir = `.workflow/issues/discoveries/${discoveryId}`;
|
||||
await mkdir(outputDir, { recursive: true });
|
||||
await mkdir(`${outputDir}/iterations`, { recursive: true });
|
||||
|
||||
// Detect intent type from prompt
|
||||
const intentType = detectIntent(prompt);
|
||||
// Returns: 'comparison' | 'search' | 'verification' | 'audit'
|
||||
|
||||
// Initialize discovery state
|
||||
await writeJson(`${outputDir}/discovery-state.json`, {
|
||||
discovery_id: discoveryId,
|
||||
type: 'prompt-driven',
|
||||
prompt: prompt,
|
||||
intent_type: intentType,
|
||||
scope: scope || '**/*',
|
||||
depth: depth || 'standard',
|
||||
max_iterations: maxIterations || 5,
|
||||
phase: 'initialization',
|
||||
created_at: new Date().toISOString(),
|
||||
iterations: [],
|
||||
cumulative_findings: [],
|
||||
comparison_matrix: null
|
||||
});
|
||||
```
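
`detectIntent` is referenced but not shown; a keyword-based sketch that returns one of the four intent types listed above:

```javascript
// Sketch: classify the prompt into comparison / verification / audit / search
function detectIntent(prompt) {
  const p = prompt.toLowerCase();
  if (/\b(compare|match|mismatch|parity|drift|versus|vs)\b/.test(p)) return 'comparison';
  if (/\b(verify|check if|ensure|confirm)\b/.test(p)) return 'verification';
  if (/\b(audit|review all|scan)\b/.test(p)) return 'audit';
  return 'search';   // default: "find/locate" style prompts
}
```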
|
||||
|
||||
### Step 3.2: ACE Context Gathering
|
||||
|
||||
```javascript
|
||||
// Extract keywords from prompt for semantic search
|
||||
const keywords = extractKeywords(prompt);
|
||||
|
||||
// Use ACE to understand codebase structure
|
||||
const aceQueries = [
|
||||
`Project architecture and module structure for ${keywords.join(', ')}`,
|
||||
`Where are ${keywords[0]} implementations located?`,
|
||||
`How does ${keywords.slice(0, 2).join(' ')} work in this codebase?`
|
||||
];
|
||||
|
||||
const aceResults = [];
|
||||
for (const query of aceQueries) {
|
||||
const result = await mcp__ace-tool__search_context({
|
||||
project_root_path: process.cwd(),
|
||||
query: query
|
||||
});
|
||||
aceResults.push({ query, result });
|
||||
}
|
||||
|
||||
// Build context package for Gemini (kept in memory)
|
||||
const aceContext = {
|
||||
prompt_keywords: keywords,
|
||||
codebase_structure: aceResults[0].result,
|
||||
relevant_modules: aceResults.slice(1).map(r => r.result),
|
||||
detected_patterns: extractPatterns(aceResults)
|
||||
};
|
||||
```
|
||||
|
||||
**ACE Query Strategy by Intent Type**:
|
||||
|
||||
| Intent | ACE Queries |
|
||||
|--------|-------------|
|
||||
| **comparison** | "frontend API calls", "backend API handlers", "API contract definitions" |
|
||||
| **search** | "{keyword} implementations", "{keyword} usage patterns" |
|
||||
| **verification** | "expected behavior for {feature}", "test coverage for {feature}" |
|
||||
| **audit** | "all {category} patterns", "{category} security concerns" |
|
||||
|
||||
### Step 3.3: Gemini Strategy Planning
|
||||
|
||||
```javascript
|
||||
// Build Gemini planning prompt with ACE context
|
||||
const planningPrompt = `
|
||||
PURPOSE: Analyze discovery prompt and create exploration strategy based on codebase context
|
||||
TASK:
|
||||
• Parse user intent from prompt: "${prompt}"
|
||||
• Use codebase context to identify specific modules and files to explore
|
||||
• Create exploration dimensions with precise search targets
|
||||
• Define comparison matrix structure (if comparison intent)
|
||||
• Set success criteria and iteration strategy
|
||||
MODE: analysis
|
||||
CONTEXT: @${scope || '**/*'} | Discovery type: ${intentType}
|
||||
|
||||
## Codebase Context (from ACE semantic search)
|
||||
${JSON.stringify(aceContext, null, 2)}
|
||||
|
||||
EXPECTED: JSON exploration plan:
|
||||
{
|
||||
"intent_analysis": { "type": "${intentType}", "primary_question": "...", "sub_questions": [...] },
|
||||
"dimensions": [{ "name": "...", "description": "...", "search_targets": [...], "focus_areas": [...], "agent_prompt": "..." }],
|
||||
"comparison_matrix": { "dimension_a": "...", "dimension_b": "...", "comparison_points": [...] },
|
||||
"success_criteria": [...],
|
||||
"estimated_iterations": N,
|
||||
"termination_conditions": [...]
|
||||
}
|
||||
CONSTRAINTS: Use ACE context to inform targets | Focus on actionable plan
|
||||
`;
|
||||
|
||||
// Execute Gemini planning and capture output for parsing
const geminiResult = Bash({
  command: `ccw cli -p "${planningPrompt}" --tool gemini --mode analysis`,
  run_in_background: false,  // plan output is parsed immediately below
  timeout: 300000
});

// Parse and validate
const explorationPlan = await parseGeminiPlanOutput(geminiResult);
|
||||
```
|
||||
|
||||
**Gemini Planning Output Schema**:
|
||||
|
||||
```json
|
||||
{
|
||||
"intent_analysis": {
|
||||
"type": "comparison|search|verification|audit",
|
||||
"primary_question": "string",
|
||||
"sub_questions": ["string"]
|
||||
},
|
||||
"dimensions": [
|
||||
{
|
||||
"name": "frontend",
|
||||
"description": "Client-side API calls and error handling",
|
||||
"search_targets": ["src/api/**", "src/hooks/**"],
|
||||
"focus_areas": ["fetch calls", "error boundaries", "response parsing"],
|
||||
"agent_prompt": "Explore frontend API consumption patterns..."
|
||||
}
|
||||
],
|
||||
"comparison_matrix": {
|
||||
"dimension_a": "frontend",
|
||||
"dimension_b": "backend",
|
||||
"comparison_points": [
|
||||
{"aspect": "endpoints", "frontend_check": "fetch URLs", "backend_check": "route paths"},
|
||||
{"aspect": "methods", "frontend_check": "HTTP methods used", "backend_check": "methods accepted"},
|
||||
{"aspect": "payloads", "frontend_check": "request body structure", "backend_check": "expected schema"},
|
||||
{"aspect": "responses", "frontend_check": "response parsing", "backend_check": "response format"},
|
||||
{"aspect": "errors", "frontend_check": "error handling", "backend_check": "error responses"}
|
||||
]
|
||||
},
|
||||
"success_criteria": ["All API endpoints mapped", "Discrepancies identified with file:line"],
|
||||
"estimated_iterations": 3,
|
||||
"termination_conditions": ["All comparison points verified", "Confidence > 0.8"]
|
||||
}
|
||||
```
|
||||
|
||||
### Step 3.4: Iterative Agent Exploration (with ACE)
|
||||
|
||||
```javascript
|
||||
let iteration = 0;
|
||||
let cumulativeFindings = [];
|
||||
let sharedContext = { aceDiscoveries: [], crossReferences: [] };
|
||||
let shouldContinue = true;
|
||||
|
||||
while (shouldContinue && iteration < maxIterations) {
|
||||
iteration++;
|
||||
const iterationDir = `${outputDir}/iterations/${iteration}`;
|
||||
await mkdir(iterationDir, { recursive: true });
|
||||
|
||||
// ACE-assisted iteration planning
|
||||
const iterationAceQueries = iteration === 1
|
||||
? explorationPlan.dimensions.map(d => d.focus_areas[0])
|
||||
: deriveQueriesFromFindings(cumulativeFindings);
|
||||
|
||||
const iterationAceResults = [];
|
||||
for (const query of iterationAceQueries) {
|
||||
const result = await mcp__ace-tool__search_context({
|
||||
project_root_path: process.cwd(),
|
||||
query: `${query} in ${explorationPlan.scope}`
|
||||
});
|
||||
iterationAceResults.push({ query, result });
|
||||
}
|
||||
|
||||
sharedContext.aceDiscoveries.push(...iterationAceResults);
|
||||
|
||||
// Plan this iteration
|
||||
const iterationPlan = planIteration(iteration, explorationPlan, cumulativeFindings, iterationAceResults);
|
||||
|
||||
// Launch dimension agents with ACE context
|
||||
const agentPromises = iterationPlan.dimensions.map(dimension =>
|
||||
Task({
|
||||
subagent_type: "cli-explore-agent",
|
||||
run_in_background: false,
|
||||
description: `Explore ${dimension.name} (iteration ${iteration})`,
|
||||
prompt: buildDimensionPromptWithACE(dimension, iteration, cumulativeFindings, iterationAceResults, iterationDir)
|
||||
})
|
||||
);
|
||||
|
||||
const iterationResults = await Promise.all(agentPromises);
|
||||
|
||||
// Collect and analyze iteration findings
|
||||
const iterationFindings = await collectIterationFindings(iterationDir, iterationPlan.dimensions);
|
||||
|
||||
// Cross-reference findings between dimensions
|
||||
if (iterationPlan.dimensions.length > 1) {
|
||||
const crossRefs = findCrossReferences(iterationFindings, iterationPlan.dimensions);
|
||||
sharedContext.crossReferences.push(...crossRefs);
|
||||
}
|
||||
|
||||
cumulativeFindings.push(...iterationFindings);
|
||||
|
||||
// Decide whether to continue
|
||||
const convergenceCheck = checkConvergence(iterationFindings, cumulativeFindings, explorationPlan);
|
||||
shouldContinue = !convergenceCheck.converged;
|
||||
|
||||
// Update state
|
||||
await updateDiscoveryState(outputDir, {
|
||||
iterations: [...state.iterations, {
|
||||
number: iteration,
|
||||
findings_count: iterationFindings.length,
|
||||
ace_queries: iterationAceQueries.length,
|
||||
cross_references: sharedContext.crossReferences.length,
|
||||
new_discoveries: convergenceCheck.newDiscoveries,
|
||||
confidence: convergenceCheck.confidence,
|
||||
continued: shouldContinue
|
||||
}],
|
||||
cumulative_findings: cumulativeFindings
|
||||
});
|
||||
}
|
||||
```
|
||||
|
||||
**Iteration Loop**:
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────┐
|
||||
│ Iteration Loop │
|
||||
├─────────────────────────────────────────────────────────────┤
|
||||
│ 1. Plan: What to explore this iteration │
|
||||
│ └─ Based on: previous findings + unexplored areas │
|
||||
│ │
|
||||
│ 2. Execute: Launch agents for this iteration │
|
||||
│ └─ Each agent: explore → collect → return summary │
|
||||
│ │
|
||||
│ 3. Analyze: Process iteration results │
|
||||
│ └─ New findings? Gaps? Contradictions? │
|
||||
│ │
|
||||
│ 4. Decide: Continue or terminate │
|
||||
│ └─ Terminate if: max iterations OR convergence OR │
|
||||
│ high confidence on all questions │
|
||||
└─────────────────────────────────────────────────────────────┘
|
||||
```
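
`checkConvergence` is left abstract; a sketch consistent with the termination conditions above (little new discovered in the last iteration, or overall confidence above the plan's threshold):

```javascript
// Sketch: decide whether another exploration iteration is worthwhile
function checkConvergence(iterationFindings, cumulativeFindings, explorationPlan) {
  const newDiscoveries = iterationFindings.length;
  const confidences = cumulativeFindings.map(f => f.confidence ?? 0);
  const confidence = confidences.length
    ? confidences.reduce((a, b) => a + b, 0) / confidences.length
    : 0;
  // explorationPlan could carry plan-specific thresholds; 0.8 mirrors the example plan above
  const converged = newDiscoveries === 0 || confidence > 0.8;
  return { converged, newDiscoveries, confidence };
}
```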
|
||||
|
||||
### Step 3.5: Cross-Analysis & Synthesis
|
||||
|
||||
```javascript
|
||||
// For comparison intent, perform cross-analysis
|
||||
if (intentType === 'comparison' && explorationPlan.comparison_matrix) {
|
||||
const comparisonResults = [];
|
||||
|
||||
for (const point of explorationPlan.comparison_matrix.comparison_points) {
|
||||
const dimensionAFindings = cumulativeFindings.filter(f =>
|
||||
f.related_dimension === explorationPlan.comparison_matrix.dimension_a &&
|
||||
f.category.includes(point.aspect)
|
||||
);
|
||||
|
||||
const dimensionBFindings = cumulativeFindings.filter(f =>
|
||||
f.related_dimension === explorationPlan.comparison_matrix.dimension_b &&
|
||||
f.category.includes(point.aspect)
|
||||
);
|
||||
|
||||
const discrepancies = findDiscrepancies(dimensionAFindings, dimensionBFindings, point);
|
||||
|
||||
comparisonResults.push({
|
||||
aspect: point.aspect,
|
||||
dimension_a_count: dimensionAFindings.length,
|
||||
dimension_b_count: dimensionBFindings.length,
|
||||
discrepancies: discrepancies,
|
||||
match_rate: calculateMatchRate(dimensionAFindings, dimensionBFindings)
|
||||
});
|
||||
}
|
||||
|
||||
await writeJson(`${outputDir}/comparison-analysis.json`, {
|
||||
matrix: explorationPlan.comparison_matrix,
|
||||
results: comparisonResults,
|
||||
summary: {
|
||||
total_discrepancies: comparisonResults.reduce((sum, r) => sum + r.discrepancies.length, 0),
|
||||
overall_match_rate: average(comparisonResults.map(r => r.match_rate)),
|
||||
critical_mismatches: comparisonResults.filter(r => r.match_rate < 0.5)
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
const prioritizedFindings = prioritizeFindings(cumulativeFindings, explorationPlan);
|
||||
```
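
`calculateMatchRate` and `findDiscrepancies` are not defined in this phase; a minimal sketch of the match-rate side, keyed (hypothetically) by file path:

```javascript
// Sketch: fraction of dimension-A findings with a counterpart in dimension B
function calculateMatchRate(dimensionAFindings, dimensionBFindings) {
  if (dimensionAFindings.length === 0) return 1;   // nothing to match against
  const bFiles = new Set(dimensionBFindings.map(f => f.file));
  const matched = dimensionAFindings.filter(f => bFiles.has(f.file)).length;
  return matched / dimensionAFindings.length;
}
```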
|
||||
|
||||
### Step 3.6: Issue Generation & Summary
|
||||
|
||||
```javascript
|
||||
// Convert high-confidence findings to issues
|
||||
const issueWorthy = prioritizedFindings.filter(f =>
|
||||
f.confidence >= 0.7 || f.priority === 'critical' || f.priority === 'high'
|
||||
);
|
||||
|
||||
const issues = issueWorthy.map(finding => ({
|
||||
id: `ISS-${discoveryId}-${finding.id}`,
|
||||
title: finding.title,
|
||||
description: finding.description,
|
||||
source: { discovery_id: discoveryId, finding_id: finding.id, dimension: finding.related_dimension },
|
||||
file: finding.file,
|
||||
line: finding.line,
|
||||
priority: finding.priority,
|
||||
category: finding.category,
|
||||
confidence: finding.confidence,
|
||||
status: 'discovered',
|
||||
created_at: new Date().toISOString()
|
||||
}));
|
||||
|
||||
await writeJsonl(`${outputDir}/discovery-issues.jsonl`, issues);
|
||||
|
||||
// Update final state
|
||||
await updateDiscoveryState(outputDir, {
|
||||
phase: 'complete',
|
||||
updated_at: new Date().toISOString(),
|
||||
results: {
|
||||
total_iterations: iteration,
|
||||
total_findings: cumulativeFindings.length,
|
||||
issues_generated: issues.length,
|
||||
comparison_match_rate: comparisonResults
|
||||
? average(comparisonResults.map(r => r.match_rate))
|
||||
: null
|
||||
}
|
||||
});
|
||||
|
||||
// Prompt user for next action
|
||||
await AskUserQuestion({
|
||||
questions: [{
|
||||
question: `Discovery complete: ${issues.length} issues from ${cumulativeFindings.length} findings across ${iteration} iterations. What next?`,
|
||||
header: "Next Step",
|
||||
multiSelect: false,
|
||||
options: [
|
||||
{ label: "Export to Issues (Recommended)", description: `Export ${issues.length} issues for planning` },
|
||||
{ label: "Review Details", description: "View comparison analysis and iteration details" },
|
||||
{ label: "Run Deeper", description: "Continue with more iterations" },
|
||||
{ label: "Skip", description: "Complete without exporting" }
|
||||
]
|
||||
}]
|
||||
});
|
||||
```
|
||||
|
||||
## Dimension Agent Prompt Template
|
||||
|
||||
```javascript
|
||||
function buildDimensionPromptWithACE(dimension, iteration, previousFindings, aceResults, outputDir) {
|
||||
const relevantAceResults = aceResults.filter(r =>
|
||||
r.query.includes(dimension.name) || dimension.focus_areas.some(fa => r.query.includes(fa))
|
||||
);
|
||||
|
||||
return `
|
||||
## Task Objective
|
||||
Explore ${dimension.name} dimension for issue discovery (Iteration ${iteration})
|
||||
|
||||
## Context
|
||||
- Dimension: ${dimension.name}
|
||||
- Description: ${dimension.description}
|
||||
- Search Targets: ${dimension.search_targets.join(', ')}
|
||||
- Focus Areas: ${dimension.focus_areas.join(', ')}
|
||||
|
||||
## ACE Semantic Search Results (Pre-gathered)
|
||||
${JSON.stringify(relevantAceResults.map(r => ({ query: r.query, files: r.result.slice(0, 5) })), null, 2)}
|
||||
|
||||
**Use ACE for deeper exploration**: mcp__ace-tool__search_context available.
|
||||
|
||||
${iteration > 1 ? `
|
||||
## Previous Findings to Build Upon
|
||||
${summarizePreviousFindings(previousFindings, dimension.name)}
|
||||
|
||||
## This Iteration Focus
|
||||
- Explore areas not yet covered
|
||||
- Verify/deepen previous findings
|
||||
- Follow leads from previous discoveries
|
||||
` : ''}
|
||||
|
||||
## MANDATORY FIRST STEPS
|
||||
1. Read schema: ~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json
|
||||
2. Review ACE results above for starting points
|
||||
3. Explore files identified by ACE
|
||||
|
||||
## Exploration Instructions
|
||||
${dimension.agent_prompt}
|
||||
|
||||
## Output Requirements
|
||||
**1. Write JSON file**: ${outputDir}/${dimension.name}.json
|
||||
- findings: [{id, title, category, description, file, line, snippet, confidence, related_dimension}]
|
||||
- coverage: {files_explored, areas_covered, areas_remaining}
|
||||
- leads: [{description, suggested_search}]
|
||||
- ace_queries_used: [{query, result_count}]
|
||||
|
||||
**2. Return summary**: Total findings, key discoveries, recommended next areas
|
||||
`;
|
||||
}
|
||||
```
|
||||
|
||||
## Output File Structure
|
||||
|
||||
```
|
||||
.workflow/issues/discoveries/
|
||||
└── {DBP-YYYYMMDD-HHmmss}/
|
||||
├── discovery-state.json # Session state with iteration tracking
|
||||
├── iterations/
|
||||
│ ├── 1/
|
||||
│ │ └── {dimension}.json # Dimension findings
|
||||
│ ├── 2/
|
||||
│ │ └── {dimension}.json
|
||||
│ └── ...
|
||||
├── comparison-analysis.json # Cross-dimension comparison (if applicable)
|
||||
└── discovery-issues.jsonl # Generated issue candidates
|
||||
```
|
||||
|
||||
## Configuration Options
|
||||
|
||||
| Flag | Default | Description |
|
||||
|------|---------|-------------|
|
||||
| `--scope` | `**/*` | File pattern to explore |
|
||||
| `--depth` | `standard` | `standard` (3 iterations) or `deep` (5+ iterations) |
|
||||
| `--max-iterations` | 5 | Maximum exploration iterations |
|
||||
| `--tool` | `gemini` | Planning tool (gemini/qwen) |
|
||||
| `--plan-only` | `false` | Stop after Gemini planning, show plan |
|
||||
|
||||
## Schema References
|
||||
|
||||
| Schema | Path | Used By |
|
||||
|--------|------|---------|
|
||||
| **Discovery State** | `discovery-state-schema.json` | Orchestrator (state tracking) |
|
||||
| **Discovery Finding** | `discovery-finding-schema.json` | Dimension agents (output) |
|
||||
| **Exploration Plan** | `exploration-plan-schema.json` | Gemini output validation (memory only) |
|
||||
|
||||
## Error Handling
|
||||
|
||||
| Error | Message | Resolution |
|
||||
|-------|---------|------------|
|
||||
| Gemini planning failed | CLI error | Retry with qwen fallback |
|
||||
| ACE search failed | No results | Fall back to file glob patterns |
|
||||
| No findings after iterations | Convergence at 0 | Report clean status |
|
||||
| Agent timeout | Exploration too large | Narrow scope, reduce iterations |
|
||||
|
||||
## Examples
|
||||
|
||||
```bash
|
||||
# Single module deep dive
|
||||
Skill(skill="issue-discover", args="--action discover-by-prompt \"Find all potential issues in auth\" --scope=src/auth/**")
|
||||
|
||||
# API contract comparison
|
||||
Skill(skill="issue-discover", args="--action discover-by-prompt \"Check if API calls match implementations\" --scope=src/**")
|
||||
|
||||
# Plan only mode
|
||||
Skill(skill="issue-discover", args="--action discover-by-prompt \"Find inconsistent patterns\" --plan-only")
|
||||
```
|
||||
|
||||
## Post-Phase Update
|
||||
|
||||
After prompt-driven discovery:
|
||||
- Findings aggregated across iterations with confidence scores
|
||||
- Comparison analysis generated (if comparison intent)
|
||||
- Issue candidates written to discovery-issues.jsonl
|
||||
- Report: total iterations, findings, issues, match rate
|
||||
- Recommend next step: Export → issue-resolve (plan solutions)