Mirror of https://github.com/catlog22/Claude-Code-Workflow.git (synced 2026-02-06 01:54:11 +08:00)
Compare commits (109 commits)
@@ -3,4 +3,31 @@
- **CLI Tools Usage**: @~/.claude/workflows/cli-tools-usage.md
- **Coding Philosophy**: @~/.claude/workflows/coding-philosophy.md
- **Context Requirements**: @~/.claude/workflows/context-tools.md
- **File Modification**: @~/.claude/workflows/file-modification.md
- **CLI Endpoints Config**: @.claude/cli-tools.json

## CLI Endpoints

**Strictly follow the @.claude/cli-tools.json configuration.**

Available CLI endpoints are dynamically defined by the config file:
- Built-in tools and their enable/disable status
- Custom API endpoints registered via the Dashboard
- Managed through the CCW Dashboard Status page

## Tool Execution

### Agent Calls
- **Always use `run_in_background: false`** for Task tool agent calls: `Task({ subagent_type: "xxx", prompt: "...", run_in_background: false })` to ensure synchronous execution and immediate result visibility
- **TaskOutput usage**: Only use `TaskOutput({ task_id: "xxx", block: false })` plus a sleep loop to poll completion status. NEVER read intermediate output during agent/CLI execution - wait for the final result only

### CLI Tool Calls (ccw cli)
- **Always use `run_in_background: true`** for the Bash tool when calling ccw cli:

```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
```

- **After a CLI call**: If there are no other tasks, stop immediately - let the CLI execute in the background; do NOT poll with TaskOutput

## Code Diagnostics

- **Prefer `mcp__ide__getDiagnostics`** for code error checking over shell-based TypeScript compilation
.claude/agents/issue-plan-agent.md (new file, 276 lines)
@@ -0,0 +1,276 @@
---
name: issue-plan-agent
description: |
  Closed-loop issue planning agent combining ACE exploration and solution generation.
  Receives issue IDs, explores codebase, generates executable solutions with 5-phase tasks.

  Examples:
  - Context: Single issue planning
    user: "Plan GH-123"
    assistant: "I'll fetch issue details, explore codebase, and generate solution"
  - Context: Batch planning
    user: "Plan GH-123,GH-124,GH-125"
    assistant: "I'll plan 3 issues, detect conflicts, and register solutions"
color: green
---

## Overview

**Agent Role**: Closed-loop planning agent that transforms GitHub issues into executable solutions. Receives issue IDs from the command layer, fetches details via CLI, explores the codebase with ACE, and produces validated solutions with a 5-phase task lifecycle.

**Core Capabilities**:
- ACE semantic search for intelligent code discovery
- Batch processing (1-3 issues per invocation)
- 5-phase task lifecycle (analyze → implement → test → optimize → commit)
- Cross-issue conflict detection
- Dependency DAG validation
- Auto-bind for a single solution; return for selection on multiple

**Key Principle**: Generate tasks conforming to the schema, with quantified acceptance criteria.

---

## 1. Input & Execution

### 1.1 Input Context

```javascript
{
  issue_ids: string[],    // Issue IDs only (e.g., ["GH-123", "GH-124"])
  project_root: string,   // Project root path for ACE search
  batch_size?: number,    // Max issues per batch (default: 3)
}
```

**Note**: The agent receives IDs only. Fetch details via `ccw issue status <id> --json`.

### 1.2 Execution Flow

```
Phase 1: Issue Understanding (5%)
  ↓ Fetch details, extract requirements, determine complexity
Phase 2: ACE Exploration (30%)
  ↓ Semantic search, pattern discovery, dependency mapping
Phase 3: Solution Planning (50%)
  ↓ Task decomposition, 5-phase lifecycle, acceptance criteria
Phase 4: Validation & Output (15%)
  ↓ DAG validation, conflict detection, solution registration
```

#### Phase 1: Issue Understanding

**Step 1**: Fetch issue details via CLI
```bash
ccw issue status <issue-id> --json
```

**Step 2**: Analyze and classify
```javascript
function analyzeIssue(issue) {
  return {
    issue_id: issue.id,
    requirements: extractRequirements(issue.context),
    scope: inferScope(issue.title, issue.context),
    complexity: determineComplexity(issue) // Low | Medium | High
  }
}
```

**Complexity Rules**:

| Complexity | Files | Tasks |
|------------|-------|-------|
| Low | 1-2 | 1-3 |
| Medium | 3-5 | 3-6 |
| High | 6+ | 5-10 |
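The complexity rules above reduce to a simple file-count mapping. A minimal sketch (the real `determineComplexity` receives the whole issue and may weigh requirements and scope; this encodes only the Files column of the table):

```javascript
// Classify issue complexity from the number of affected files,
// following the Complexity Rules table. Task budgets are noted in
// comments; they are guidance, not enforced here.
function determineComplexity(fileCount) {
  if (fileCount <= 2) return "Low";    // 1-2 files → 1-3 tasks
  if (fileCount <= 5) return "Medium"; // 3-5 files → 3-6 tasks
  return "High";                       // 6+ files → 5-10 tasks
}
```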
#### Phase 2: ACE Exploration

**Primary**: ACE semantic search
```javascript
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: `Find code related to: ${issue.title}. Keywords: ${extractKeywords(issue)}`
})
```

**Exploration Checklist**:
- [ ] Identify relevant files (direct matches)
- [ ] Find related patterns (similar implementations)
- [ ] Map integration points
- [ ] Discover dependencies
- [ ] Locate test patterns

**Fallback Chain**: ACE → smart_search → Grep → rg → Glob

| Tool | When to Use |
|------|-------------|
| `mcp__ace-tool__search_context` | Semantic search (primary) |
| `mcp__ccw-tools__smart_search` | Symbol/pattern search |
| `Grep` | Exact regex matching |
| `rg` / `grep` | CLI fallback |
| `Glob` | File path discovery |
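The fallback chain can be sketched as trying each search function in order until one returns results. This is an illustration only: the `tools` entries stand in for the MCP/CLI calls in the table, which are invoked through their own tool interfaces rather than as plain functions.

```javascript
// Walk a prioritized list of search functions (ACE first, Glob last).
// Each function takes a query and returns an array of matches or
// throws if the tool is unavailable. First non-empty result wins.
function searchWithFallback(query, tools) {
  for (const tool of tools) {
    try {
      const results = tool(query);
      if (results && results.length > 0) return { tool: tool.name, results };
    } catch (_) {
      // Tool unavailable or errored - fall through to the next one.
    }
  }
  return { tool: null, results: [] };
}
```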
#### Phase 3: Solution Planning

**Multi-Solution Generation**:

Generate multiple candidate solutions when:
- Issue complexity is HIGH
- Multiple valid implementation approaches exist
- There are trade-offs between approaches (performance vs. simplicity, etc.)

| Condition | Solutions |
|-----------|-----------|
| Low complexity, single approach | 1 solution, auto-bind |
| Medium complexity, clear path | 1-2 solutions |
| High complexity, multiple approaches | 2-3 solutions, user selection |

**Solution Evaluation** (for each candidate):
```javascript
{
  analysis: {
    risk: "low|medium|high",       // Implementation risk
    impact: "low|medium|high",     // Scope of changes
    complexity: "low|medium|high"  // Technical complexity
  },
  score: 0.0-1.0                   // Overall quality score (higher = recommended)
}
```

**Selection Flow**:
1. Generate all candidate solutions
2. Evaluate and score each
3. Single solution → auto-bind
4. Multiple solutions → return `pending_selection` for user choice

**Task Decomposition** following the schema:
```javascript
function decomposeTasks(issue, exploration) {
  return groups.map(group => ({
    id: `T${taskId++}`,                    // Pattern: ^T[0-9]+$
    title: group.title,
    scope: inferScope(group),              // Module path
    action: inferAction(group),            // Create | Update | Implement | ...
    description: group.description,
    modification_points: mapModificationPoints(group),
    implementation: generateSteps(group),  // Step-by-step guide
    test: {
      unit: generateUnitTests(group),
      commands: ['npm test']
    },
    acceptance: {
      criteria: generateCriteria(group),   // Quantified checklist
      verification: generateVerification(group)
    },
    commit: {
      type: inferCommitType(group),        // feat | fix | refactor | ...
      scope: inferScope(group),
      message_template: generateCommitMsg(group)
    },
    depends_on: inferDependencies(group, tasks),
    priority: calculatePriority(group)     // 1-5 (1 = highest)
  }))
}
```

#### Phase 4: Validation & Output

**Validation**:
- DAG validation (no circular dependencies)
- Task validation (all 5 phases present)
- Conflict detection (cross-issue file modifications)
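The DAG validation step can be sketched with Kahn's algorithm: if a topological sort cannot visit every task, the `depends_on` edges contain a cycle. A minimal sketch assuming tasks shaped like the decomposition output above (`{ id, depends_on }`):

```javascript
// Returns true when the task dependency graph is a valid DAG
// (i.e., no circular dependencies). Kahn's algorithm: repeatedly
// remove zero-in-degree nodes; leftovers indicate a cycle.
function isValidDag(tasks) {
  const inDegree = new Map(tasks.map(t => [t.id, 0]));
  const dependents = new Map(tasks.map(t => [t.id, []]));
  for (const t of tasks) {
    for (const dep of t.depends_on || []) {
      inDegree.set(t.id, inDegree.get(t.id) + 1);
      dependents.get(dep).push(t.id);
    }
  }
  const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id);
  let visited = 0;
  while (queue.length) {
    const id = queue.shift();
    visited++;
    for (const next of dependents.get(id)) {
      inDegree.set(next, inDegree.get(next) - 1);
      if (inDegree.get(next) === 0) queue.push(next);
    }
  }
  return visited === tasks.length; // fewer visited ⇒ a cycle exists
}
```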
**Solution Registration** (CRITICAL: check the solution count first):
```javascript
for (const issue of issues) {
  const solutions = generatedSolutions[issue.id];

  if (solutions.length === 1) {
    // Single solution → auto-bind
    Bash(`ccw issue bind ${issue.id} --solution ${solutions[0].file}`);
    bound.push({ issue_id: issue.id, solution_id: solutions[0].id, task_count: solutions[0].tasks.length });
  } else {
    // Multiple solutions → DO NOT BIND, return for user selection
    pending_selection.push({
      issue_id: issue.id,
      solutions: solutions.map(s => ({ id: s.id, description: s.description, task_count: s.tasks.length }))
    });
  }
}
```

---

## 2. Output Requirements

### 2.1 Generate Files (Primary)

**Solution file per issue**:
```
.workflow/issues/solutions/{issue-id}.jsonl
```

Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
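Parsing the JSONL format above is a one-liner per line: split on newlines, skip blanks, `JSON.parse` the rest. A minimal sketch operating on file text (how the agent actually loads these files is not specified here):

```javascript
// Parse .jsonl text: one solution object per non-empty line.
function parseSolutionsJsonl(text) {
  return text
    .split("\n")
    .filter(line => line.trim().length > 0)
    .map(line => JSON.parse(line));
}
```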
### 2.2 Binding

| Scenario | Action |
|----------|--------|
| Single solution | `ccw issue bind <id> --solution <file>` (auto) |
| Multiple solutions | Register only, return for selection |

### 2.3 Return Summary

```json
{
  "bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
  "pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "SOL-001", "description": "...", "task_count": N }] }],
  "conflicts": [{ "file": "...", "issues": [...] }]
}
```

---

## 3. Quality Standards

### 3.1 Acceptance Criteria

| Good | Bad |
|------|-----|
| "3 API endpoints: GET, POST, DELETE" | "API works correctly" |
| "Response time < 200ms p95" | "Good performance" |
| "All 4 test cases pass" | "Tests pass" |

### 3.2 Validation Checklist

- [ ] ACE search performed for each issue
- [ ] All modification_points verified against the codebase
- [ ] Tasks have 2+ implementation steps
- [ ] All 5 lifecycle phases present
- [ ] Quantified acceptance criteria with verification
- [ ] Dependencies form a valid DAG
- [ ] Commits follow Conventional Commits

### 3.3 Guidelines

**ALWAYS**:
1. Read the schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
2. Use ACE semantic search as the PRIMARY exploration tool
3. Fetch issue details via `ccw issue status <id> --json`
4. Quantify acceptance.criteria with testable conditions
5. Validate the DAG before output
6. Evaluate each solution with `analysis` and `score`
7. Single solution → auto-bind; multiple → return `pending_selection`
8. For HIGH complexity: generate 2-3 candidate solutions

**NEVER**:
1. Execute implementation (return the plan only)
2. Use vague criteria ("works correctly", "good performance")
3. Create circular dependencies
4. Generate more than 10 tasks per issue
5. **Bind when multiple solutions exist** - MUST check `solutions.length === 1` before calling `ccw issue bind`

**OUTPUT**:
1. Register solutions via `ccw issue bind <id> --solution <file>`
2. Return JSON with `bound`, `pending_selection`, `conflicts`
3. Solutions written to `.workflow/issues/solutions/{issue-id}.jsonl`
.claude/agents/issue-queue-agent.md (new file, 254 lines)
@@ -0,0 +1,254 @@
---
name: issue-queue-agent
description: |
  Solution ordering agent for queue formation with dependency analysis and conflict resolution.
  Receives solutions from bound issues, resolves inter-solution conflicts, produces an ordered execution queue.

  Examples:
  - Context: Single issue queue
    user: "Order solutions for GH-123"
    assistant: "I'll analyze dependencies and generate execution queue"
  - Context: Multi-issue queue with conflicts
    user: "Order solutions for GH-123, GH-124"
    assistant: "I'll detect file conflicts between solutions, resolve ordering, and assign groups"
color: orange
---

## Overview

**Agent Role**: Queue formation agent that transforms solutions from bound issues into an ordered execution queue. Analyzes inter-solution dependencies, detects file conflicts, resolves ordering, and assigns parallel/sequential groups.

**Core Capabilities**:
- Inter-solution dependency DAG construction
- File conflict detection between solutions (based on files_touched intersection)
- Conflict resolution with semantic ordering rules
- Priority calculation (0.0-1.0) per solution
- Parallel/sequential group assignment for solutions

**Key Principle**: Queue items are **solutions**, NOT individual tasks. Each executor receives a complete solution with all its tasks.

---

## 1. Input & Execution

### 1.1 Input Context

```javascript
{
  solutions: [{
    issue_id: string,        // e.g., "ISS-20251227-001"
    solution_id: string,     // e.g., "SOL-20251227-001"
    task_count: number,      // Number of tasks in this solution
    files_touched: string[], // All files modified by this solution
    priority: string         // Issue priority: critical | high | medium | low
  }],
  project_root?: string,
  rebuild?: boolean
}
```

**Note**: The agent generates a unique `item_id` (pattern: `S-{N}`) for queue output.

### 1.2 Execution Flow

```
Phase 1: Solution Analysis (20%)
  ↓ Parse solutions, collect files_touched, build DAG
Phase 2: Conflict Detection (30%)
  ↓ Identify file overlaps between solutions
Phase 3: Conflict Resolution (25%)
  ↓ Apply ordering rules, update DAG
Phase 4: Ordering & Grouping (25%)
  ↓ Topological sort, assign parallel/sequential groups
```

---

## 2. Processing Logic

### 2.1 Dependency Graph

```javascript
function buildDependencyGraph(solutions) {
  const graph = new Map()
  const fileModifications = new Map()

  for (const sol of solutions) {
    graph.set(sol.solution_id, { ...sol, inDegree: 0, outEdges: [] })

    for (const file of sol.files_touched || []) {
      if (!fileModifications.has(file)) fileModifications.set(file, [])
      fileModifications.get(file).push(sol.solution_id)
    }
  }

  return { graph, fileModifications }
}
```
### 2.2 Conflict Detection

A conflict exists when multiple solutions modify the same file:
```javascript
function detectConflicts(fileModifications, graph) {
  return [...fileModifications.entries()]
    .filter(([_, solutions]) => solutions.length > 1)
    .map(([file, solutions]) => ({
      type: 'file_conflict',
      file,
      solutions,
      resolved: false
    }))
}
```

### 2.3 Resolution Rules

| Priority | Rule | Example |
|----------|------|---------|
| 1 | Higher issue priority first | critical > high > medium > low |
| 2 | Foundation solutions first | Solutions with fewer dependencies |
| 3 | More tasks = higher priority | Solutions with larger impact |
| 4 | Create before extend | S1 creates a module → S2 extends it |
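Rules 1-3 compose naturally into a sort comparator. A minimal sketch (rule 4, create-before-extend, needs semantic analysis of the solutions and is deliberately left out; `depends_on` as a field on input solutions is an assumption for illustration):

```javascript
// Order solutions by the resolution rules: higher issue priority
// first, then fewer dependencies ("foundation" solutions), then
// more tasks (larger impact). Stable for ties.
const PRIORITY_RANK = { critical: 0, high: 1, medium: 2, low: 3 };

function orderSolutions(solutions) {
  return [...solutions].sort((a, b) =>
    (PRIORITY_RANK[a.priority] - PRIORITY_RANK[b.priority]) ||
    ((a.depends_on || []).length - (b.depends_on || []).length) ||
    (b.task_count - a.task_count)
  );
}
```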
### 2.4 Semantic Priority

**Base Priority Mapping** (issue priority → base score):

| Priority | Base Score | Meaning |
|----------|------------|---------|
| critical | 0.9 | Highest |
| high | 0.7 | High |
| medium | 0.5 | Medium |
| low | 0.3 | Low |

**Task-count Boost** (applied to the base score):

| Factor | Boost |
|--------|-------|
| task_count >= 5 | +0.1 |
| task_count >= 3 | +0.05 |
| Foundation scope | +0.1 |
| Fewer dependencies | +0.05 |

**Formula**: `semantic_priority = clamp(baseScore + sum(boosts), 0.0, 1.0)`
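The formula encodes directly. A minimal sketch with two stated assumptions: the two task-count rows are treated as tiers (a 5-task solution gets +0.1, not +0.15), and the foundation/dependency boosts are passed as booleans since their detection logic lives elsewhere in the agent:

```javascript
// semantic_priority = clamp(baseScore + sum(boosts), 0.0, 1.0)
const BASE_SCORE = { critical: 0.9, high: 0.7, medium: 0.5, low: 0.3 };

function semanticPriority(sol, { isFoundation = false, fewDeps = false } = {}) {
  let score = BASE_SCORE[sol.priority] ?? 0.5;
  if (sol.task_count >= 5) score += 0.1;       // assumed tiered, not cumulative
  else if (sol.task_count >= 3) score += 0.05;
  if (isFoundation) score += 0.1;
  if (fewDeps) score += 0.05;
  return Math.min(1.0, Math.max(0.0, score));  // clamp to [0, 1]
}
```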
### 2.5 Group Assignment

- **Parallel (P*)**: Solutions with no file overlaps between them
- **Sequential (S*)**: Solutions that share files must run in order
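A greedy sketch of parallel-group packing: walk solutions in execution order and place each into the first group whose members share no files with it, otherwise open a new group. This is a simplification: real output also marks groups sequential and records `depends_on`, which this sketch omits.

```javascript
// Pack solutions into parallel groups with no file overlaps.
// Solutions that conflict with every existing group start a new
// group, which effectively serializes them across groups.
function assignGroups(solutions) {
  const groups = [];
  for (const sol of solutions) {
    const files = new Set(sol.files_touched);
    const fits = groups.find(g =>
      g.members.every(m => m.files_touched.every(f => !files.has(f)))
    );
    if (fits) fits.members.push(sol);
    else groups.push({ id: `P${groups.length + 1}`, members: [sol] });
  }
  return groups.map(g => ({ id: g.id, solutions: g.members.map(m => m.solution_id) }));
}
```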
---

## 3. Output Requirements

### 3.1 Generate Files (Primary)

**Queue files**:
```
.workflow/issues/queues/{queue-id}.json   # Full queue with solutions, conflicts, groups
.workflow/issues/queues/index.json        # Update with new queue entry
```

Queue ID: use the Queue ID provided in the prompt (do NOT generate a new one).
Queue item ID format: `S-N` (S-1, S-2, S-3, ...)

### 3.2 Queue File Schema

```json
{
  "id": "QUE-20251227-143000",
  "status": "active",
  "solutions": [
    {
      "item_id": "S-1",
      "issue_id": "ISS-20251227-003",
      "solution_id": "SOL-20251227-003",
      "status": "pending",
      "execution_order": 1,
      "execution_group": "P1",
      "depends_on": [],
      "semantic_priority": 0.8,
      "assigned_executor": "codex",
      "files_touched": ["src/auth.ts", "src/utils.ts"],
      "task_count": 3
    }
  ],
  "conflicts": [
    {
      "type": "file_conflict",
      "file": "src/auth.ts",
      "solutions": ["S-1", "S-3"],
      "resolution": "sequential",
      "resolution_order": ["S-1", "S-3"],
      "rationale": "S-1 creates auth module, S-3 extends it"
    }
  ],
  "execution_groups": [
    { "id": "P1", "type": "parallel", "solutions": ["S-1", "S-2"], "solution_count": 2 },
    { "id": "S2", "type": "sequential", "solutions": ["S-3"], "solution_count": 1 }
  ]
}
```

### 3.3 Return Summary

```json
{
  "queue_id": "QUE-20251227-143000",
  "total_solutions": N,
  "total_tasks": N,
  "execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
  "conflicts_resolved": N,
  "issues_queued": ["ISS-xxx", "ISS-yyy"]
}
```

---

## 4. Quality Standards

### 4.1 Validation Checklist

- [ ] No circular dependencies between solutions
- [ ] All file conflicts resolved
- [ ] Solutions in the same parallel group have NO file overlaps
- [ ] Semantic priority calculated for all solutions
- [ ] Dependencies ordered correctly
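The parallel-group item of the checklist is mechanically verifiable against the queue file schema. A minimal sketch assuming the `solutions` and `execution_groups` shapes shown in section 3.2:

```javascript
// Verify that no two solutions inside the same parallel group
// touch the same file. Sequential groups are skipped by design.
function parallelGroupsAreDisjoint(queue) {
  const filesById = new Map(queue.solutions.map(s => [s.item_id, s.files_touched]));
  for (const group of queue.execution_groups) {
    if (group.type !== "parallel") continue;
    const seen = new Set();
    for (const id of group.solutions) {
      for (const file of filesById.get(id) || []) {
        if (seen.has(file)) return false; // overlap inside a parallel group
        seen.add(file);
      }
    }
  }
  return true;
}
```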
### 4.2 Error Handling

| Scenario | Action |
|----------|--------|
| Circular dependency | Abort, report cycles |
| Resolution creates cycle | Flag for manual resolution |
| Missing solution reference | Skip and warn |
| Empty solution list | Return empty queue |

### 4.3 Guidelines

**ALWAYS**:
1. Build the dependency graph before ordering
2. Detect file overlaps between solutions
3. Apply resolution rules consistently
4. Calculate semantic priority for all solutions
5. Include a rationale for conflict resolutions
6. Validate ordering before output

**NEVER**:
1. Execute solutions (ordering only)
2. Ignore circular dependencies
3. Skip conflict detection
4. Output an invalid DAG
5. Merge conflicting solutions into a parallel group
6. Split tasks from their solution

**OUTPUT** (STRICT - only these 2 files):
```
.workflow/issues/queues/{Queue ID}.json   # Use Queue ID from prompt
.workflow/issues/queues/index.json        # Update existing index
```
- Use the Queue ID provided in the prompt; do NOT generate a new one
- Write ONLY the 2 files listed above, NO other files
- Final return: PURE JSON summary (no markdown, no prose):
```json
{"queue_id":"QUE-xxx","total_solutions":N,"total_tasks":N,"execution_groups":[...],"conflicts_resolved":N,"issues_queued":["ISS-xxx"]}
```
.claude/cli-tools.json (new file, 47 lines)
@@ -0,0 +1,47 @@
{
  "version": "1.0.0",
  "tools": {
    "gemini": {
      "enabled": true,
      "isBuiltin": true,
      "command": "gemini",
      "description": "Google AI for code analysis"
    },
    "qwen": {
      "enabled": true,
      "isBuiltin": true,
      "command": "qwen",
      "description": "Alibaba AI assistant"
    },
    "codex": {
      "enabled": true,
      "isBuiltin": true,
      "command": "codex",
      "description": "OpenAI code generation"
    },
    "claude": {
      "enabled": true,
      "isBuiltin": true,
      "command": "claude",
      "description": "Anthropic AI assistant"
    }
  },
  "customEndpoints": [],
  "defaultTool": "gemini",
  "settings": {
    "promptFormat": "plain",
    "smartContext": {
      "enabled": false,
      "maxFiles": 10
    },
    "nativeResume": true,
    "recursiveQuery": true,
    "cache": {
      "injectionMode": "auto",
      "defaultPrefix": "",
      "defaultSuffix": ""
    },
    "codeIndexMcp": "ace"
  },
  "$schema": "./cli-tools.schema.json"
}
.claude/commands/issue/discover.md (new file, 427 lines)
@@ -0,0 +1,427 @@
---
name: issue:discover
description: Discover potential issues from multiple perspectives (bug, UX, test, quality, security, performance, maintainability, best-practices) using CLI explore. Supports Exa external research for the security and best-practices perspectives.
argument-hint: "<path-pattern> [--perspectives=bug,ux,...] [--external]"
allowed-tools: SlashCommand(*), TodoWrite(*), Read(*), Bash(*), Task(*), AskUserQuestion(*), Glob(*), Grep(*)
---

# Issue Discovery Command

## Quick Start

```bash
# Discover issues in a specific module (interactive perspective selection)
/issue:discover src/auth/**

# Discover with specific perspectives
/issue:discover src/payment/** --perspectives=bug,security,test

# Discover with external research for all perspectives
/issue:discover src/api/** --external

# Discover in multiple modules
/issue:discover src/auth/**,src/payment/**
```

**Discovery Scope**: Specified modules/files only
**Output Directory**: `.workflow/issues/discoveries/{discovery-id}/`
**Available Perspectives**: bug, ux, test, quality, security, performance, maintainability, best-practices
**Exa Integration**: Auto-enabled for the security and best-practices perspectives
**CLI Tools**: Gemini → Qwen → Codex (fallback chain)

## What & Why

### Core Concept
Multi-perspective issue discovery orchestrator that explores code from different angles to identify potential bugs, UX improvements, test gaps, and other actionable items. Unlike code review (which assesses existing code quality), discovery focuses on **finding opportunities for improvement and potential problems**.

**vs Code Review**:
- **Code Review** (`review-module-cycle`): Evaluates code quality against standards
- **Issue Discovery** (`issue:discover`): Finds actionable issues, bugs, and improvement opportunities

### Value Proposition
1. **Proactive Issue Detection**: Find problems before they become bugs
2. **Multi-Perspective Analysis**: Each perspective surfaces different types of issues
3. **External Benchmarking**: Compare against industry best practices via Exa
4. **Direct Issue Integration**: Discoveries can be exported to the issue tracker
5. **Dashboard Management**: View, filter, and export discoveries via the CCW dashboard

## How It Works

### Execution Flow

```
Phase 1: Discovery & Initialization
└─ Parse target pattern, create session, initialize output structure

Phase 2: Interactive Perspective Selection
└─ AskUserQuestion for perspective selection (or use --perspectives)

Phase 3: Parallel Perspective Analysis
├─ Launch N @cli-explore-agent instances (one per perspective)
├─ Security & best-practices auto-trigger Exa research
├─ Agent writes perspective JSON, returns summary
└─ Update discovery-progress.json

Phase 4: Aggregation & Prioritization
├─ Collect agent return summaries
├─ Load perspective JSON files
├─ Merge findings, deduplicate by file+line
└─ Calculate priority scores

Phase 5: Issue Generation & Summary
├─ Convert high-priority discoveries to issue format
├─ Write to discovery-issues.jsonl
├─ Generate a single summary.md from agent returns
└─ Update discovery-state.json to complete
```
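The Phase 4 merge step (deduplicate by file+line, then prioritize) can be sketched as follows. Field names mirror the perspective JSON described in this command but are assumptions about its exact shape; the first perspective to report a location wins the dedupe:

```javascript
// Merge findings from all perspectives: collapse duplicates that
// share file + line, then sort by severity so higher-priority
// findings lead the report.
const SEVERITY_RANK = { critical: 3, high: 2, medium: 1, low: 0 };

function deduplicateAndPrioritize(findings) {
  const byLocation = new Map();
  for (const f of findings) {
    const key = `${f.file}:${f.line}`;
    if (!byLocation.has(key)) byLocation.set(key, f); // first reporter wins
  }
  return [...byLocation.values()].sort(
    (a, b) => (SEVERITY_RANK[b.severity] ?? 0) - (SEVERITY_RANK[a.severity] ?? 0)
  );
}
```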
## Perspectives

### Available Perspectives

| Perspective | Focus | Categories | Exa |
|-------------|-------|------------|-----|
| **bug** | Potential Bugs | edge-case, null-check, resource-leak, race-condition, boundary, exception-handling | - |
| **ux** | User Experience | error-message, loading-state, feedback, accessibility, interaction, consistency | - |
| **test** | Test Coverage | missing-test, edge-case-test, integration-gap, coverage-hole, assertion-quality | - |
| **quality** | Code Quality | complexity, duplication, naming, documentation, code-smell, readability | - |
| **security** | Security Issues | injection, auth, encryption, input-validation, data-exposure, access-control | ✓ |
| **performance** | Performance | n-plus-one, memory-usage, caching, algorithm, blocking-operation, resource | - |
| **maintainability** | Maintainability | coupling, cohesion, tech-debt, extensibility, module-boundary, interface-design | - |
| **best-practices** | Best Practices | convention, pattern, framework-usage, anti-pattern, industry-standard | ✓ |

### Interactive Perspective Selection

When no `--perspectives` flag is provided, the command uses AskUserQuestion:

```javascript
AskUserQuestion({
  questions: [{
    question: "Select primary discovery focus:",
    header: "Focus",
    multiSelect: false,
    options: [
      { label: "Bug + Test + Quality", description: "Quick scan: potential bugs, test gaps, code quality (Recommended)" },
      { label: "Security + Performance", description: "System audit: security issues, performance bottlenecks" },
      { label: "Maintainability + Best-practices", description: "Long-term health: coupling, tech debt, conventions" },
      { label: "Full analysis", description: "All 8 perspectives (comprehensive, takes longer)" }
    ]
  }]
})
```

**Recommended Combinations**:
- Quick scan: bug, test, quality
- Full analysis: all perspectives
- Security audit: security, bug, quality
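The selected label has to be expanded into a perspective list before the agents launch. A hypothetical sketch of that mapping (the label strings come from the options above; the exact expansion, and the assumption that the AskUserQuestion response reduces to a single label, are illustrative, not the command's actual implementation):

```javascript
// Map an AskUserQuestion focus label to its perspective list.
// Unknown labels map to an empty list so the caller can re-prompt.
const LABEL_TO_PERSPECTIVES = {
  "Bug + Test + Quality": ["bug", "test", "quality"],
  "Security + Performance": ["security", "performance"],
  "Maintainability + Best-practices": ["maintainability", "best-practices"],
  "Full analysis": ["bug", "ux", "test", "quality", "security",
                    "performance", "maintainability", "best-practices"],
};

function parseSelectedPerspectives(label) {
  return LABEL_TO_PERSPECTIVES[label] || [];
}
```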
## Core Responsibilities

### Orchestrator

**Phase 1: Discovery & Initialization**

```javascript
// Step 1: Parse target pattern and resolve files
const resolvedFiles = await expandGlobPattern(targetPattern);
if (resolvedFiles.length === 0) {
  throw new Error(`No files matched pattern: ${targetPattern}`);
}

// Step 2: Generate discovery ID
const discoveryId = `DSC-${formatDate(new Date(), 'YYYYMMDD-HHmmss')}`;

// Step 3: Create output directory
const outputDir = `.workflow/issues/discoveries/${discoveryId}`;
await mkdir(outputDir, { recursive: true });
await mkdir(`${outputDir}/perspectives`, { recursive: true });

// Step 4: Initialize unified discovery state (merged state+progress)
await writeJson(`${outputDir}/discovery-state.json`, {
  discovery_id: discoveryId,
  target_pattern: targetPattern,
  phase: "initialization",
  created_at: new Date().toISOString(),
  updated_at: new Date().toISOString(),
  target: { files_count: { total: resolvedFiles.length }, project: {} },
  perspectives: [], // filled after selection: [{name, status, findings}]
  external_research: { enabled: false, completed: false },
  results: { total_findings: 0, issues_generated: 0, priority_distribution: {} }
});
```

**Phase 2: Perspective Selection**

```javascript
// Check for --perspectives flag
let selectedPerspectives = [];

if (args.perspectives) {
  selectedPerspectives = args.perspectives.split(',').map(p => p.trim());
} else {
  // Interactive selection via AskUserQuestion
  const response = await AskUserQuestion({...});
  selectedPerspectives = parseSelectedPerspectives(response);
}

// Validate and update state (top-level `perspectives` array in discovery-state.json)
await updateDiscoveryState(outputDir, {
  perspectives: selectedPerspectives.map(name => ({ name, status: 'pending', findings: 0 })),
  phase: 'parallel'
});
```
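Validating the `--perspectives` flag value can be sketched as follows (the known-perspective list mirrors the guidance reference later in this document; `parseSelectedPerspectives` itself is assumed):

```javascript
// Sketch: validate a --perspectives flag value against the known set.
const KNOWN_PERSPECTIVES = [
  'bug', 'ux', 'test', 'quality', 'security',
  'performance', 'maintainability', 'best-practices'
];

function parsePerspectivesFlag(flagValue) {
  const requested = flagValue.split(',').map(p => p.trim()).filter(Boolean);
  const unknown = requested.filter(p => !KNOWN_PERSPECTIVES.includes(p));
  if (unknown.length > 0) {
    throw new Error(`Unknown perspectives: ${unknown.join(', ')}`);
  }
  return requested;
}
```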

**Phase 3: Parallel Perspective Analysis**

Launch N agents in parallel (one per selected perspective):

```javascript
// Launch agents in parallel - agents write JSON and return summary
const agentPromises = selectedPerspectives.map(perspective =>
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Discover ${perspective} issues`,
    prompt: buildPerspectivePrompt(perspective, discoveryId, resolvedFiles, outputDir)
  })
);

// Wait for all agents - collect their return summaries
const results = await Promise.all(agentPromises);
// results contain agent summaries for the final report
```
**Phase 4: Aggregation & Prioritization**

```javascript
// Load all perspective JSON files written by agents
const allFindings = [];
for (const perspective of selectedPerspectives) {
  const jsonPath = `${outputDir}/perspectives/${perspective}.json`;
  if (await fileExists(jsonPath)) {
    const data = await readJson(jsonPath);
    allFindings.push(...data.findings.map(f => ({ ...f, perspective })));
  }
}

// Deduplicate and prioritize
const prioritizedFindings = deduplicateAndPrioritize(allFindings);

// Update unified state
await updateDiscoveryState(outputDir, {
  phase: 'aggregation',
  'results.total_findings': prioritizedFindings.length,
  'results.priority_distribution': countByPriority(prioritizedFindings)
});
```
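`deduplicateAndPrioritize` and `countByPriority` are left abstract above; one plausible sketch (assumed behavior, not the actual implementation) keys duplicates on file, line, and title, then sorts by priority rank:

```javascript
// Sketch of the aggregation helpers used above: dedupe findings by
// file:line:title, then sort by priority, and count per priority level.
const PRIORITY_RANK = { critical: 0, high: 1, medium: 2, low: 3 };

function deduplicateAndPrioritize(findings) {
  const seen = new Map();
  for (const f of findings) {
    const key = `${f.file}:${f.line}:${f.title}`;
    if (!seen.has(key)) seen.set(key, f); // keep the first reporting perspective
  }
  return [...seen.values()].sort(
    (a, b) => (PRIORITY_RANK[a.priority] ?? 9) - (PRIORITY_RANK[b.priority] ?? 9)
  );
}

function countByPriority(findings) {
  const counts = {};
  for (const f of findings) counts[f.priority] = (counts[f.priority] || 0) + 1;
  return counts;
}
```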

**Phase 5: Issue Generation & Summary**

```javascript
// Convert high-priority findings to issues
const issueWorthy = prioritizedFindings.filter(f =>
  f.priority === 'critical' || f.priority === 'high' || f.priority_score >= 0.7
);
const issues = issueWorthy.map(f => convertFindingToIssue(f));

// Write discovery-issues.jsonl
await writeJsonl(`${outputDir}/discovery-issues.jsonl`, issues);

// Generate single summary.md from agent return summaries
// Orchestrator briefly summarizes what agents returned (NO detailed reports)
await writeSummaryFromAgentReturns(outputDir, results, prioritizedFindings, issues);

// Update final state
await updateDiscoveryState(outputDir, {
  phase: 'complete',
  updated_at: new Date().toISOString(),
  'results.issues_generated': issues.length
});
```

### Output File Structure

```
.workflow/issues/discoveries/
├── index.json                    # Discovery session index
└── {discovery-id}/
    ├── discovery-state.json      # Unified state (merged state+progress)
    ├── perspectives/
    │   └── {perspective}.json    # Per-perspective findings
    ├── external-research.json    # Exa research results (if enabled)
    ├── discovery-issues.jsonl    # Generated candidate issues
    └── summary.md                # Single summary (from agent returns)
```

### Schema References

**External Schema Files** (agent MUST read and follow exactly):

| Schema | Path | Purpose |
|--------|------|---------|
| **Discovery State** | `~/.claude/workflows/cli-templates/schemas/discovery-state-schema.json` | Session state machine |
| **Discovery Finding** | `~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json` | Perspective analysis results |

### Agent Invocation Template

**Perspective Analysis Agent**:

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: `Discover ${perspective} issues`,
  prompt: `
## Task Objective
Discover potential ${perspective} issues in specified module files.

## Discovery Context
- Discovery ID: ${discoveryId}
- Perspective: ${perspective}
- Target Pattern: ${targetPattern}
- Resolved Files: ${resolvedFiles.length} files
- Output Directory: ${outputDir}

## MANDATORY FIRST STEPS
1. Read discovery state: ${outputDir}/discovery-state.json
2. Read schema: ~/.claude/workflows/cli-templates/schemas/discovery-finding-schema.json
3. Analyze target files for ${perspective} concerns

## Output Requirements

**1. Write JSON file**: ${outputDir}/perspectives/${perspective}.json
- Follow discovery-finding-schema.json exactly
- Each finding: id, title, priority, category, description, file, line, snippet, suggested_issue, confidence

**2. Return summary** (DO NOT write report file):
- Return a brief text summary of findings
- Include: total findings, priority breakdown, key issues
- This summary will be used by orchestrator for final report

## Perspective-Specific Guidance
${getPerspectiveGuidance(perspective)}

## Success Criteria
- [ ] JSON written to ${outputDir}/perspectives/${perspective}.json
- [ ] Summary returned with findings count and key issues
- [ ] Each finding includes actionable suggested_issue
- [ ] Priority uses lowercase enum: critical/high/medium/low
`
})
```

**Exa Research Agent** (for security and best-practices):

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  description: `External research for ${perspective} via Exa`,
  prompt: `
## Task Objective
Research industry best practices for ${perspective} using Exa search

## Research Steps
1. Read project tech stack: .workflow/project-tech.json
2. Use Exa to search for best practices
3. Synthesize findings relevant to this project

## Output Requirements

**1. Write JSON file**: ${outputDir}/external-research.json
- Include sources, key_findings, gap_analysis, recommendations

**2. Return summary** (DO NOT write report file):
- Brief summary of external research findings
- Key recommendations for the project

## Success Criteria
- [ ] JSON written to ${outputDir}/external-research.json
- [ ] Summary returned with key recommendations
- [ ] Findings are relevant to project's tech stack
`
})
```

### Perspective Guidance Reference

```javascript
function getPerspectiveGuidance(perspective) {
  const guidance = {
    bug: `
Focus: Null checks, edge cases, resource leaks, race conditions, boundary conditions, exception handling
Priority: Critical=data corruption/crash, High=malfunction, Medium=edge case issues, Low=minor
`,
    ux: `
Focus: Error messages, loading states, feedback, accessibility, interaction patterns, form validation
Priority: Critical=inaccessible, High=confusing, Medium=inconsistent, Low=cosmetic
`,
    test: `
Focus: Missing unit tests, edge case coverage, integration gaps, assertion quality, test isolation
Priority: Critical=no security tests, High=no core logic tests, Medium=weak coverage, Low=minor gaps
`,
    quality: `
Focus: Complexity, duplication, naming, documentation, code smells, readability
Priority: Critical=unmaintainable, High=significant issues, Medium=naming/docs, Low=minor refactoring
`,
    security: `
Focus: Input validation, auth/authz, injection, XSS/CSRF, data exposure, access control
Priority: Critical=auth bypass/injection, High=missing authz, Medium=weak validation, Low=headers
`,
    performance: `
Focus: N+1 queries, memory leaks, caching, algorithm efficiency, blocking operations
Priority: Critical=memory leaks, High=N+1/inefficient, Medium=missing cache, Low=minor optimization
`,
    maintainability: `
Focus: Coupling, interface design, tech debt, extensibility, module boundaries, configuration
Priority: Critical=unrelated code changes, High=unclear boundaries, Medium=coupling, Low=refactoring
`,
    'best-practices': `
Focus: Framework conventions, language patterns, anti-patterns, deprecated APIs, coding standards
Priority: Critical=anti-patterns causing bugs, High=convention violations, Medium=style, Low=cosmetic
`
  };
  return guidance[perspective] || 'General code discovery analysis';
}
```

## Dashboard Integration

### Viewing Discoveries

Open the CCW dashboard to manage discoveries:

```bash
ccw view
```

Navigate to **Issues > Discovery** to:
- View all discovery sessions
- Filter findings by perspective and priority
- Preview finding details
- Select and export findings as issues

### Exporting to Issues

From the dashboard, select findings and click "Export as Issues" to:
1. Convert discoveries to standard issue format
2. Append to `.workflow/issues/issues.jsonl`
3. Set status to `registered`
4. Continue with the `/issue:plan` workflow

## Related Commands

```bash
# After discovery, plan solutions for exported issues
/issue:plan DSC-001,DSC-002,DSC-003

# Or use interactive management
/issue:manage
```

## Best Practices

1. **Start Focused**: Begin with specific modules rather than the entire codebase
2. **Use Quick Scan First**: Start with bug, test, quality for fast results
3. **Review Before Export**: Not all discoveries warrant issues - use the dashboard to filter
4. **Combine Perspectives**: Run related perspectives together (e.g., security + bug)
5. **Enable Exa for New Tech**: When using unfamiliar frameworks, enable external research

.claude/commands/issue/execute.md (new file, 294 lines)

---
name: execute
description: Execute queue with codex using DAG-based parallel orchestration (solution-level)
argument-hint: "[--parallel <n>] [--executor codex|gemini|agent]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---

# Issue Execute Command (/issue:execute)

## Overview

Minimal orchestrator that dispatches **solution IDs** to executors. Each executor receives a complete solution with all its tasks.

**Design Principles:**
- `queue dag` → returns parallel batches with solution IDs (S-1, S-2, ...)
- `detail <id>` → READ-ONLY solution fetch (returns full solution with all tasks)
- `done <id>` → update solution completion status
- No race conditions: status changes only via `done`
- **Executor handles all tasks within a solution sequentially**

## Usage

```bash
/issue:execute [FLAGS]

# Examples
/issue:execute                    # Execute with default parallelism
/issue:execute --parallel 4       # Execute up to 4 solutions in parallel
/issue:execute --executor agent   # Use agent instead of codex

# Flags
--parallel <n>       Max parallel executors (default: 3)
--executor <type>    Force executor: codex|gemini|agent (default: codex)
--dry-run            Show DAG and batches without executing
```

## Execution Flow

```
Phase 1: Get DAG
└─ ccw issue queue dag → { parallel_batches: [["S-1","S-2"], ["S-3"]] }

Phase 2: Dispatch Parallel Batch
├─ For each solution ID in batch (parallel):
│  ├─ Executor calls: ccw issue detail <id> (READ-ONLY)
│  ├─ Executor gets FULL SOLUTION with all tasks
│  ├─ Executor implements all tasks sequentially (T1 → T2 → T3)
│  ├─ Executor tests + commits per task
│  └─ Executor calls: ccw issue done <id>
└─ Wait for batch completion

Phase 3: Next Batch
└─ ccw issue queue dag → check for newly-ready solutions
```

## Implementation

### Phase 1: Get DAG

```javascript
// Get dependency graph and parallel batches
const dagJson = Bash(`ccw issue queue dag`).trim();
const dag = JSON.parse(dagJson);

if (dag.error || dag.ready_count === 0) {
  console.log(dag.error || 'No solutions ready for execution');
  console.log('Use /issue:queue to form a queue first');
  return;
}

console.log(`
## Queue DAG (Solution-Level)

- Total Solutions: ${dag.total}
- Ready: ${dag.ready_count}
- Completed: ${dag.completed_count}
- Parallel in batch 1: ${dag.parallel_batches[0]?.length || 0}
`);

// Dry run mode
if (flags.dryRun) {
  console.log('### Parallel Batches:\n');
  dag.parallel_batches.forEach((batch, i) => {
    console.log(`Batch ${i + 1}: ${batch.join(', ')}`);
  });
  return;
}
```

### Phase 2: Dispatch Parallel Batch

```javascript
const parallelLimit = flags.parallel || 3;
const executor = flags.executor || 'codex';

// Process first batch (all solutions can run in parallel)
const batch = dag.parallel_batches[0] || [];

// Initialize TodoWrite
TodoWrite({
  todos: batch.map(id => ({
    content: `Execute solution ${id}`,
    status: 'pending',
    activeForm: `Executing solution ${id}`
  }))
});

// Dispatch all in parallel (up to limit)
const chunks = [];
for (let i = 0; i < batch.length; i += parallelLimit) {
  chunks.push(batch.slice(i, i + parallelLimit));
}

for (const chunk of chunks) {
  console.log(`\n### Executing Solutions: ${chunk.join(', ')}`);

  // Launch all in parallel
  const executions = chunk.map(solutionId => {
    updateTodo(solutionId, 'in_progress');
    return dispatchExecutor(solutionId, executor);
  });

  await Promise.all(executions);
  chunk.forEach(id => updateTodo(id, 'completed'));
}
```

### Executor Dispatch

```javascript
function dispatchExecutor(solutionId, executorType) {
  // Executor fetches FULL SOLUTION via READ-ONLY detail command
  // Executor handles all tasks within solution sequentially
  // Then reports completion via done command
  const prompt = `
## Execute Solution ${solutionId}

### Step 1: Get Solution (read-only)
\`\`\`bash
ccw issue detail ${solutionId}
\`\`\`

### Step 2: Execute All Tasks Sequentially
The detail command returns a FULL SOLUTION with all tasks.
Execute each task in order (T1 → T2 → T3 → ...):

For each task:
1. Follow task.implementation steps
2. Run task.test commands
3. Verify task.acceptance criteria
4. Commit using task.commit specification

### Step 3: Report Completion
When ALL tasks in solution are done:
\`\`\`bash
ccw issue done ${solutionId} --result '{"summary": "...", "files_modified": [...], "tasks_completed": N}'
\`\`\`

If any task failed:
\`\`\`bash
ccw issue done ${solutionId} --fail --reason "Task TX failed: ..."
\`\`\`
`;

  if (executorType === 'codex') {
    return Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool codex --mode write --id exec-${solutionId}`,
      { timeout: 7200000, run_in_background: true } // 2hr for full solution
    );
  } else if (executorType === 'gemini') {
    return Bash(
      `ccw cli -p "${escapePrompt(prompt)}" --tool gemini --mode write --id exec-${solutionId}`,
      { timeout: 3600000, run_in_background: true }
    );
  } else {
    return Task({
      subagent_type: 'code-developer',
      run_in_background: false,
      description: `Execute solution ${solutionId}`,
      prompt: prompt
    });
  }
}
```
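`escapePrompt` is referenced above but not defined here; a conservative sketch for a prompt interpolated inside a double-quoted shell string (assumed behavior, not the actual ccw helper):

```javascript
// Sketch of the escapePrompt helper assumed above: escape the characters
// that are special inside a double-quoted shell string.
function escapePrompt(text) {
  return text
    .replace(/\\/g, '\\\\')  // backslashes first, so later escapes survive
    .replace(/"/g, '\\"')
    .replace(/\$/g, '\\$')
    .replace(/`/g, '\\`');
}
```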

### Phase 3: Check Next Batch

```javascript
// Refresh DAG after batch completes
const refreshedDag = JSON.parse(Bash(`ccw issue queue dag`).trim());

console.log(`
## Batch Complete

- Solutions Completed: ${refreshedDag.completed_count}/${refreshedDag.total}
- Next ready: ${refreshedDag.ready_count}
`);

if (refreshedDag.ready_count > 0) {
  console.log('Run `/issue:execute` again for next batch.');
}
```

## Parallel Execution Model

```
┌─────────────────────────────────────────────────────────────┐
│                        Orchestrator                         │
├─────────────────────────────────────────────────────────────┤
│ 1. ccw issue queue dag                                      │
│    → { parallel_batches: [["S-1","S-2"], ["S-3"]] }         │
│                                                             │
│ 2. Dispatch batch 1 (parallel):                             │
│    ┌──────────────────────┐  ┌──────────────────────┐       │
│    │ Executor 1           │  │ Executor 2           │       │
│    │ detail S-1           │  │ detail S-2           │       │
│    │ → gets full solution │  │ → gets full solution │       │
│    │ [T1→T2→T3 sequential]│  │ [T1→T2 sequential]   │       │
│    │ done S-1             │  │ done S-2             │       │
│    └──────────────────────┘  └──────────────────────┘       │
│                                                             │
│ 3. ccw issue queue dag (refresh)                            │
│    → S-3 now ready (S-1 completed, file conflict resolved)  │
└─────────────────────────────────────────────────────────────┘
```

**Why this works for parallel:**
- `detail <id>` is READ-ONLY → no race conditions
- Each executor handles **all tasks within a solution** sequentially
- `done <id>` updates only its own solution status
- `queue dag` recalculates ready solutions after each batch
- Solutions in the same batch have NO file conflicts

## CLI Endpoint Contract

### `ccw issue queue dag`
Returns dependency graph with parallel batches (solution-level):
```json
{
  "queue_id": "QUE-...",
  "total": 3,
  "ready_count": 2,
  "completed_count": 0,
  "nodes": [
    { "id": "S-1", "issue_id": "ISS-xxx", "status": "pending", "ready": true, "task_count": 3 },
    { "id": "S-2", "issue_id": "ISS-yyy", "status": "pending", "ready": true, "task_count": 2 },
    { "id": "S-3", "issue_id": "ISS-zzz", "status": "pending", "ready": false, "depends_on": ["S-1"] }
  ],
  "parallel_batches": [["S-1", "S-2"], ["S-3"]]
}
```

### `ccw issue detail <item_id>`
Returns FULL SOLUTION with all tasks (READ-ONLY):
```json
{
  "item_id": "S-1",
  "issue_id": "ISS-xxx",
  "solution_id": "SOL-xxx",
  "status": "pending",
  "solution": {
    "id": "SOL-xxx",
    "approach": "...",
    "tasks": [
      { "id": "T1", "title": "...", "implementation": [...], "test": {...} },
      { "id": "T2", "title": "...", "implementation": [...], "test": {...} },
      { "id": "T3", "title": "...", "implementation": [...], "test": {...} }
    ],
    "exploration_context": { "relevant_files": [...] }
  },
  "execution_hints": { "executor": "codex", "estimated_minutes": 180 }
}
```

### `ccw issue done <item_id>`
Marks solution completed/failed, updates queue state, checks for queue completion.
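The `parallel_batches` field can be derived from the `depends_on` edges by repeatedly extracting the ready set; a sketch of that derivation (illustrative, not the actual ccw implementation):

```javascript
// Sketch: derive parallel batches from solution nodes with depends_on edges.
// Each batch holds every node whose dependencies are already completed.
function parallelBatches(nodes) {
  const done = new Set();
  const remaining = new Map(nodes.map(n => [n.id, n.depends_on || []]));
  const batches = [];
  while (remaining.size > 0) {
    const ready = [...remaining]
      .filter(([, deps]) => deps.every(d => done.has(d)))
      .map(([id]) => id);
    if (ready.length === 0) throw new Error('Dependency cycle detected');
    batches.push(ready);
    for (const id of ready) { done.add(id); remaining.delete(id); }
  }
  return batches;
}
```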

## Error Handling

| Error | Resolution |
|-------|------------|
| No queue | Run /issue:queue first |
| No ready solutions | Dependencies blocked, check DAG |
| Executor timeout | Solution not marked done, can retry |
| Solution failure | Use `ccw issue retry` to reset |
| Partial task failure | Executor reports which task failed via `done --fail` |

## Related Commands

- `/issue:plan` - Plan issues with solutions
- `/issue:queue` - Form execution queue
- `ccw issue queue dag` - View dependency graph
- `ccw issue detail <id>` - View task details
- `ccw issue retry` - Reset failed tasks

.claude/commands/issue/manage.md (new file, 113 lines)

---
name: manage
description: Interactive issue management (CRUD) via ccw cli endpoints with menu-driven interface
argument-hint: "[issue-id] [--action list|view|edit|delete|bulk]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), AskUserQuestion(*), Task(*)
---

# Issue Manage Command (/issue:manage)

## Overview

Interactive menu-driven interface for issue management using `ccw issue` CLI endpoints:
- **List**: Browse and filter issues
- **View**: Detailed issue inspection
- **Edit**: Modify issue fields
- **Delete**: Remove issues
- **Bulk**: Batch operations on multiple issues

## CLI Endpoints Reference

```bash
# Core endpoints (ccw issue)
ccw issue list                      # List all issues
ccw issue list <id> --json          # Get issue details
ccw issue status <id>               # Detailed status
ccw issue init <id> --title "..."   # Create issue
ccw issue task <id> --title "..."   # Add task
ccw issue bind <id> <solution-id>   # Bind solution

# Queue management
ccw issue queue                     # List current queue
ccw issue queue add <id>            # Add to queue
ccw issue queue list                # Queue history
ccw issue queue switch <queue-id>   # Switch queue
ccw issue queue archive             # Archive queue
ccw issue queue delete <queue-id>   # Delete queue
ccw issue next                      # Get next task
ccw issue done <queue-id>           # Mark completed
ccw issue complete <item-id>        # (legacy alias for done)
```

## Usage

```bash
# Interactive mode (menu-driven)
/issue:manage

# Direct to specific issue
/issue:manage GH-123

# Direct action
/issue:manage --action list
/issue:manage GH-123 --action edit
```

## Implementation

This command delegates to the `issue-manage` skill for detailed implementation.

### Entry Point

```javascript
const issueId = parseIssueId(userInput);
const action = flags.action;

// Show main menu if no action specified
if (!action) {
  await showMainMenu(issueId);
} else {
  await executeAction(action, issueId);
}
```
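`parseFlags` and `parseIssueId` are assumed above; flag extraction for the `--action edit` style shown here could be sketched like this (hypothetical helper, not the actual skill code):

```javascript
// Hypothetical parseFlags sketch: collects --name and --name value pairs
// from the raw command input into a flags object.
function parseFlags(input) {
  const flags = {};
  const re = /--([a-z][\w-]*)(?:\s+([^\s-][^\s]*))?/gi;
  let m;
  while ((m = re.exec(input)) !== null) {
    flags[m[1]] = m[2] !== undefined ? m[2] : true; // bare flags become booleans
  }
  return flags;
}
```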

### Main Menu Flow

1. **Dashboard**: Fetch issues summary via `ccw issue list --json`
2. **Menu**: Present action options via AskUserQuestion
3. **Route**: Execute selected action (List/View/Edit/Delete/Bulk)
4. **Loop**: Return to menu after each action

### Available Actions

| Action | Description | CLI Command |
|--------|-------------|-------------|
| List | Browse with filters | `ccw issue list --json` |
| View | Detail view | `ccw issue status <id> --json` |
| Edit | Modify fields | Update `issues.jsonl` |
| Delete | Remove issue | Clean up all related files |
| Bulk | Batch operations | Multi-select + batch update |

## Data Files

| File | Purpose |
|------|---------|
| `.workflow/issues/issues.jsonl` | Issue records |
| `.workflow/issues/solutions/<id>.jsonl` | Solutions per issue |
| `.workflow/issues/queue.json` | Execution queue |

## Error Handling

| Error | Resolution |
|-------|------------|
| No issues found | Suggest creating with /issue:new |
| Issue not found | Show available issues, ask for correction |
| Invalid selection | Show error, re-prompt |
| Write failure | Check permissions, show error |

## Related Commands

- `/issue:new` - Create structured issue
- `/issue:plan` - Plan solution for issue
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute queued tasks

.claude/commands/issue/new.md (new file, 463 lines)

---
name: new
description: Create structured issue from GitHub URL or text description, extracting key elements into issues.jsonl
argument-hint: "<github-url | text-description> [--priority 1-5] [--labels label1,label2]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), WebFetch(*), AskUserQuestion(*)
---

# Issue New Command (/issue:new)

## Overview

Creates a new structured issue from either:
1. **GitHub Issue URL** - Fetches and parses issue content via `gh` CLI
2. **Text Description** - Parses natural language into structured fields

Outputs a well-formed issue entry to `.workflow/issues/issues.jsonl`.

## Issue Structure (Closed-Loop)

```typescript
interface Issue {
  id: string;                     // GH-123 or ISS-YYYYMMDD-HHMMSS
  title: string;                  // Issue title (clear, concise)
  status: 'registered';           // Initial status
  priority: number;               // 1 (critical) to 5 (low)
  context: string;                // Problem description
  source: 'github' | 'text' | 'discovery'; // Input source type
  source_url?: string;            // GitHub URL if applicable
  labels?: string[];              // Categorization labels

  // Structured extraction
  problem_statement: string;      // What is the problem?
  expected_behavior?: string;     // What should happen?
  actual_behavior?: string;       // What actually happens?
  affected_components?: string[]; // Files/modules affected
  reproduction_steps?: string[];  // Steps to reproduce

  // Discovery context (when source='discovery')
  discovery_context?: {
    discovery_id: string;         // Source discovery session
    perspective: string;          // bug, test, quality, etc.
    category: string;             // Finding category
    file: string;                 // Primary affected file
    line: number;                 // Line number
    snippet?: string;             // Code snippet
    confidence: number;           // Agent confidence (0-1)
    suggested_fix?: string;       // Suggested remediation
  };

  // Closed-loop requirements (guide plan generation)
  lifecycle_requirements: {
    test_strategy: 'unit' | 'integration' | 'e2e' | 'manual' | 'auto';
    regression_scope: 'affected' | 'related' | 'full'; // Which tests to run
    acceptance_type: 'automated' | 'manual' | 'both';  // How to verify
    commit_strategy: 'per-task' | 'squash' | 'atomic'; // Commit granularity
  };

  // Metadata
  bound_solution_id: null;
  solution_count: 0;
  created_at: string;
  updated_at: string;
}
```

## Lifecycle Requirements

The `lifecycle_requirements` field guides downstream commands (`/issue:plan`, `/issue:execute`):

| Field | Options | Purpose |
|-------|---------|---------|
| `test_strategy` | `unit`, `integration`, `e2e`, `manual`, `auto` | Which test types to generate |
| `regression_scope` | `affected`, `related`, `full` | Which tests to run for regression |
| `acceptance_type` | `automated`, `manual`, `both` | How to verify completion |
| `commit_strategy` | `per-task`, `squash`, `atomic` | Commit granularity |

> **Note**: Task structure (SolutionTask) is defined in `/issue:plan` - see `.claude/commands/issue/plan.md`

## Usage

```bash
# From GitHub URL
/issue:new https://github.com/owner/repo/issues/123

# From text description
/issue:new "Login fails when password contains special characters. Expected: successful login. Actual: 500 error. Affects src/auth/*"

# With options
/issue:new <url-or-text> --priority 2 --labels "bug,auth"
```

## Implementation

### Phase 1: Input Detection

```javascript
const input = userInput.trim();
const flags = parseFlags(userInput); // --priority, --labels

// Detect input type
const isGitHubUrl = input.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
const isGitHubShort = input.match(/^#(\d+)$/); // #123 format

let issueData = {};

if (isGitHubUrl || isGitHubShort) {
  // GitHub issue - fetch via gh CLI
  issueData = await fetchGitHubIssue(input);
} else {
  // Text description - parse structure
  issueData = await parseTextDescription(input);
}
```

### Phase 2: GitHub Issue Fetching
|
||||
|
||||
```javascript
|
||||
async function fetchGitHubIssue(urlOrNumber) {
|
||||
let issueRef;
|
||||
|
  if (urlOrNumber.startsWith('http')) {
    // Validate owner/repo/number in the URL (gh issue view accepts the URL directly)
    const match = urlOrNumber.match(/github\.com\/([\w-]+)\/([\w.-]+)\/issues\/(\d+)/);
    if (!match) throw new Error('Invalid GitHub URL');
    issueRef = urlOrNumber;
  } else {
    // #123 format - use current repo
    issueRef = urlOrNumber.replace('#', '');
  }

  // Fetch via gh CLI
  const result = Bash(`gh issue view ${issueRef} --json number,title,body,labels,state,url`);
  const ghIssue = JSON.parse(result);

  // Parse body for structure
  const parsed = parseIssueBody(ghIssue.body);

  return {
    id: `GH-${ghIssue.number}`,
    title: ghIssue.title,
    source: 'github',
    source_url: ghIssue.url,
    labels: ghIssue.labels.map(l => l.name),
    context: ghIssue.body,
    ...parsed
  };
}

function parseIssueBody(body) {
  // Extract structured sections from markdown body
  const sections = {};

  // Problem/Description
  const problemMatch = body.match(/##?\s*(problem|description|issue)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (problemMatch) sections.problem_statement = problemMatch[2].trim();

  // Expected behavior
  const expectedMatch = body.match(/##?\s*(expected|should)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (expectedMatch) sections.expected_behavior = expectedMatch[2].trim();

  // Actual behavior
  const actualMatch = body.match(/##?\s*(actual|current)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (actualMatch) sections.actual_behavior = actualMatch[2].trim();

  // Steps to reproduce
  const stepsMatch = body.match(/##?\s*(steps|reproduce)[:\s]*([\s\S]*?)(?=##|$)/i);
  if (stepsMatch) {
    const stepsText = stepsMatch[2].trim();
    sections.reproduction_steps = stepsText
      .split('\n')
      .filter(line => line.match(/^\s*[\d\-\*]/))
      .map(line => line.replace(/^\s*[\d\.\-\*]+\s*/, '').trim());
  }

  // Affected components (from file references)
  const fileMatches = body.match(/`[^`]*\.(ts|js|tsx|jsx|py|go|rs)[^`]*`/g);
  if (fileMatches) {
    sections.affected_components = [...new Set(fileMatches.map(f => f.replace(/`/g, '')))];
  }

  // Fallback: use entire body as problem statement
  if (!sections.problem_statement) {
    sections.problem_statement = body.substring(0, 500);
  }

  return sections;
}
```
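
The heading-based section extraction above can be exercised in isolation; a minimal sketch (the sample issue body is hypothetical):

```javascript
// Demo of the heading regex used by parseIssueBody: lazily capture
// everything after a matched heading up to the next "##" or end of body.
const body = [
  '## Problem',
  'Login fails after token refresh.',
  '## Expected',
  'Session stays valid.'
].join('\n');

const problem = body.match(/##?\s*(problem|description|issue)[:\s]*([\s\S]*?)(?=##|$)/i);
const expected = body.match(/##?\s*(expected|should)[:\s]*([\s\S]*?)(?=##|$)/i);

console.log(problem[2].trim());  // section text up to the next heading
console.log(expected[2].trim()); // trailing section runs to end of body
```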

### Phase 3: Text Description Parsing

```javascript
async function parseTextDescription(text) {
  // Generate unique ID in the documented ISS-YYYYMMDD-HHMMSS form
  const ts = new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14);
  const id = `ISS-${ts.slice(0, 8)}-${ts.slice(8)}`;

  // Extract structured elements using patterns
  const result = {
    id,
    source: 'text',
    title: '',
    problem_statement: '',
    expected_behavior: null,
    actual_behavior: null,
    affected_components: [],
    reproduction_steps: []
  };

  // Pattern: "Title. Description. Expected: X. Actual: Y. Affects: files"
  const sentences = text.split(/\.(?=\s|$)/);

  // First sentence as title
  result.title = sentences[0]?.trim() || 'Untitled Issue';

  // Look for keywords
  for (const sentence of sentences) {
    const s = sentence.trim();

    if (s.match(/^expected:?\s*/i)) {
      result.expected_behavior = s.replace(/^expected:?\s*/i, '');
    } else if (s.match(/^actual:?\s*/i)) {
      result.actual_behavior = s.replace(/^actual:?\s*/i, '');
    } else if (s.match(/^affects?:?\s*/i)) {
      const components = s.replace(/^affects?:?\s*/i, '').split(/[,\s]+/);
      result.affected_components = components.filter(c => c.includes('/') || c.includes('.'));
    } else if (s.match(/^steps?:?\s*/i)) {
      result.reproduction_steps = s.replace(/^steps?:?\s*/i, '').split(/[,;]/);
    } else if (!result.problem_statement && s.length > 10) {
      result.problem_statement = s;
    }
  }

  // Fallback problem statement
  if (!result.problem_statement) {
    result.problem_statement = text.substring(0, 300);
  }

  return result;
}
```
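
The sentence-boundary split above only breaks on a period followed by whitespace or end of input, so dotted file names survive intact. A quick check against the example input from the Examples section:

```javascript
// Demo of the sentence split used by parseTextDescription.
const text = 'API rate limiting not working. Expected: 429 after 100 requests. Actual: No limit. Affects src/middleware/rate-limit.ts';

// Split on periods followed by whitespace or end-of-string;
// the "." in "rate-limit.ts" is NOT a split point.
const sentences = text.split(/\.(?=\s|$)/).map(s => s.trim()).filter(Boolean);

console.log(sentences[0]); // "API rate limiting not working"
console.log(sentences.find(s => /^expected:?/i.test(s)));
```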

### Phase 4: Lifecycle Configuration

```javascript
// Ask for lifecycle requirements (or use smart defaults)
const lifecycleAnswer = AskUserQuestion({
  questions: [
    {
      question: 'Test strategy for this issue?',
      header: 'Test',
      multiSelect: false,
      options: [
        { label: 'auto', description: 'Auto-detect based on affected files (Recommended)' },
        { label: 'unit', description: 'Unit tests only' },
        { label: 'integration', description: 'Integration tests' },
        { label: 'e2e', description: 'End-to-end tests' },
        { label: 'manual', description: 'Manual testing only' }
      ]
    },
    {
      question: 'Regression scope?',
      header: 'Regression',
      multiSelect: false,
      options: [
        { label: 'affected', description: 'Only affected module tests (Recommended)' },
        { label: 'related', description: 'Affected + dependent modules' },
        { label: 'full', description: 'Full test suite' }
      ]
    },
    {
      question: 'Commit strategy?',
      header: 'Commit',
      multiSelect: false,
      options: [
        { label: 'per-task', description: 'One commit per task (Recommended)' },
        { label: 'atomic', description: 'Single commit for entire issue' },
        { label: 'squash', description: 'Squash at the end' }
      ]
    }
  ]
});

const lifecycle = {
  test_strategy: lifecycleAnswer.test || 'auto',
  regression_scope: lifecycleAnswer.regression || 'affected',
  acceptance_type: 'automated',
  commit_strategy: lifecycleAnswer.commit || 'per-task'
};

issueData.lifecycle_requirements = lifecycle;
```

### Phase 5: User Confirmation

```javascript
// Show parsed data and ask for confirmation
console.log(`
## Parsed Issue

**ID**: ${issueData.id}
**Title**: ${issueData.title}
**Source**: ${issueData.source}${issueData.source_url ? ` (${issueData.source_url})` : ''}

### Problem Statement
${issueData.problem_statement}

${issueData.expected_behavior ? `### Expected Behavior\n${issueData.expected_behavior}\n` : ''}
${issueData.actual_behavior ? `### Actual Behavior\n${issueData.actual_behavior}\n` : ''}
${issueData.affected_components?.length ? `### Affected Components\n${issueData.affected_components.map(c => `- ${c}`).join('\n')}\n` : ''}
${issueData.reproduction_steps?.length ? `### Reproduction Steps\n${issueData.reproduction_steps.map((s, i) => `${i+1}. ${s}`).join('\n')}\n` : ''}

### Lifecycle Configuration
- **Test Strategy**: ${lifecycle.test_strategy}
- **Regression Scope**: ${lifecycle.regression_scope}
- **Commit Strategy**: ${lifecycle.commit_strategy}
`);

// Ask user to confirm or edit
const answer = AskUserQuestion({
  questions: [{
    question: 'Create this issue?',
    header: 'Confirm',
    multiSelect: false,
    options: [
      { label: 'Create', description: 'Save issue to issues.jsonl' },
      { label: 'Edit Title', description: 'Modify the issue title' },
      { label: 'Edit Priority', description: 'Change priority (1-5)' },
      { label: 'Cancel', description: 'Discard and exit' }
    ]
  }]
});

if (answer.includes('Cancel')) {
  console.log('Issue creation cancelled.');
  return;
}

if (answer.includes('Edit Title')) {
  const titleAnswer = AskUserQuestion({
    questions: [{
      question: 'Enter new title:',
      header: 'Title',
      multiSelect: false,
      options: [
        { label: issueData.title.substring(0, 40), description: 'Keep current' }
      ]
    }]
  });
  // Handle custom input via "Other"
  if (titleAnswer.customText) {
    issueData.title = titleAnswer.customText;
  }
}
```

### Phase 6: Write to JSONL

```javascript
// Construct final issue object
const priority = flags.priority ? parseInt(flags.priority) : 3;
const labels = flags.labels ? flags.labels.split(',').map(l => l.trim()) : [];

const newIssue = {
  id: issueData.id,
  title: issueData.title,
  status: 'registered',
  priority,
  context: issueData.problem_statement,
  source: issueData.source,
  source_url: issueData.source_url || null,
  labels: [...(issueData.labels || []), ...labels],

  // Structured fields
  problem_statement: issueData.problem_statement,
  expected_behavior: issueData.expected_behavior || null,
  actual_behavior: issueData.actual_behavior || null,
  affected_components: issueData.affected_components || [],
  reproduction_steps: issueData.reproduction_steps || [],

  // Closed-loop lifecycle requirements
  lifecycle_requirements: issueData.lifecycle_requirements || {
    test_strategy: 'auto',
    regression_scope: 'affected',
    acceptance_type: 'automated',
    commit_strategy: 'per-task'
  },

  // Metadata
  bound_solution_id: null,
  solution_count: 0,
  created_at: new Date().toISOString(),
  updated_at: new Date().toISOString()
};

// Ensure directory exists
Bash('mkdir -p .workflow/issues');

// Append to issues.jsonl; escape single quotes so the JSON payload
// cannot break out of the shell's single-quoted argument
const issuesPath = '.workflow/issues/issues.jsonl';
const jsonLine = JSON.stringify(newIssue).replace(/'/g, `'\\''`);
Bash(`printf '%s\\n' '${jsonLine}' >> "${issuesPath}"`);

console.log(`
## Issue Created

**ID**: ${newIssue.id}
**Title**: ${newIssue.title}
**Priority**: ${newIssue.priority}
**Labels**: ${newIssue.labels.join(', ') || 'none'}
**Source**: ${newIssue.source}

### Next Steps
1. Plan solution: \`/issue:plan ${newIssue.id}\`
2. View details: \`ccw issue status ${newIssue.id}\`
3. Manage issues: \`/issue:manage\`
`);
```

## Examples

### GitHub Issue

```bash
/issue:new https://github.com/myorg/myrepo/issues/42 --priority 2

# Output:
## Issue Created
**ID**: GH-42
**Title**: Fix memory leak in WebSocket handler
**Priority**: 2
**Labels**: bug, performance
**Source**: github (https://github.com/myorg/myrepo/issues/42)
```

### Text Description

```bash
/issue:new "API rate limiting not working. Expected: 429 after 100 requests. Actual: No limit. Affects src/middleware/rate-limit.ts"

# Output:
## Issue Created
**ID**: ISS-20251227-142530
**Title**: API rate limiting not working
**Priority**: 3
**Labels**: none
**Source**: text
```

## Error Handling

| Error | Resolution |
|-------|------------|
| Invalid GitHub URL | Show format hint, ask for correction |
| gh CLI not available | Fall back to WebFetch for public issues |
| Empty description | Prompt user for required fields |
| Duplicate issue ID | Auto-increment or suggest merge |
| Parse failure | Show raw input, ask for manual structuring |

## Related Commands

- `/issue:plan` - Plan solution for issue
- `/issue:manage` - Interactive issue management
- `ccw issue list` - List all issues
- `ccw issue status <id>` - View issue details
.claude/commands/issue/plan.md (new file, 327 lines)

---
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "<issue-id>[,<issue-id>,...] | --all-pending [--batch-size 3]"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---

# Issue Plan Command (/issue:plan)

## Overview

Unified planning command using **issue-plan-agent** that combines exploration and planning into a single closed-loop workflow.

## Output Requirements

**Generate Files:**
1. `.workflow/issues/solutions/{issue-id}.jsonl` - Solution with tasks for each issue

**Return Summary:**
```json
{
  "bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
  "pending_selection": [{ "issue_id": "...", "solutions": [...] }],
  "conflicts": [{ "file": "...", "issues": [...] }]
}
```

**Completion Criteria:**
- [ ] Solution file generated for each issue
- [ ] Single solution → auto-bound via `ccw issue bind`
- [ ] Multiple solutions → returned for user selection
- [ ] Tasks conform to schema: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
- [ ] Each task has quantified `acceptance.criteria`

## Core Capabilities

- **Closed-loop agent**: issue-plan-agent combines explore + plan
- Batch processing: 1 agent processes 1-3 issues
- ACE semantic search integrated into planning
- Solution with executable tasks and delivery criteria
- Automatic solution registration and binding

## Storage Structure (Flat JSONL)

```
.workflow/issues/
├── issues.jsonl          # All issues (one per line)
├── queue.json            # Execution queue
└── solutions/
    ├── {issue-id}.jsonl  # Solutions for issue (one per line)
    └── ...
```
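
A minimal sketch of reading this flat JSONL layout (the `readJsonl` helper is illustrative, not part of the ccw CLI):

```javascript
// Parse a JSONL string: one JSON object per non-empty line.
// A malformed line is skipped rather than aborting the whole read,
// so one corrupted append does not take down the issue store.
function readJsonl(text) {
  return text
    .split('\n')
    .filter(line => line.trim())
    .flatMap(line => {
      try { return [JSON.parse(line)]; }
      catch { return []; } // tolerate a corrupted line
    });
}

const sample = '{"id":"GH-42","status":"registered"}\n\n{"id":"ISS-1","status":"planned"}';
const issues = readJsonl(sample);
console.log(issues.length);    // 2
console.log(issues[1].status); // "planned"
```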

## Usage

```bash
/issue:plan <issue-id>[,<issue-id>,...] [FLAGS]

# Examples
/issue:plan GH-123                 # Single issue
/issue:plan GH-123,GH-124,GH-125   # Batch (up to 3)
/issue:plan --all-pending          # All pending issues

# Flags
--batch-size <n>   Max issues per agent batch (default: 3)
```

## Execution Process

```
Phase 1: Issue Loading
├─ Parse input (single, comma-separated, or --all-pending)
├─ Fetch issue metadata (ID, title, tags)
├─ Validate issues exist (create if needed)
└─ Group by similarity (shared tags or title keywords, max 3 per batch)

Phase 2: Unified Explore + Plan (issue-plan-agent)
├─ Launch issue-plan-agent per batch
├─ Agent performs:
│  ├─ ACE semantic search for each issue
│  ├─ Codebase exploration (files, patterns, dependencies)
│  ├─ Solution generation with task breakdown
│  └─ Conflict detection across issues
└─ Output: solution JSON per issue

Phase 3: Solution Registration & Binding
├─ Append solutions to solutions/{issue-id}.jsonl
├─ Single solution per issue → auto-bind
├─ Multiple candidates → AskUserQuestion to select
└─ Update issues.jsonl with bound_solution_id

Phase 4: Summary
├─ Display bound solutions
├─ Show task counts per issue
└─ Display next steps (/issue:queue)
```

## Implementation

### Phase 1: Issue Loading (ID + Title + Tags)

```javascript
const batchSize = flags.batchSize || 3;
let issues = []; // {id, title, tags}

if (flags.allPending) {
  // Get pending issues with metadata via CLI (JSON output)
  const result = Bash(`ccw issue list --status pending,registered --json`).trim();
  const parsed = result ? JSON.parse(result) : [];
  issues = parsed.map(i => ({ id: i.id, title: i.title || '', tags: i.tags || [] }));

  if (issues.length === 0) {
    console.log('No pending issues found.');
    return;
  }
  console.log(`Found ${issues.length} pending issues`);
} else {
  // Parse comma-separated issue IDs, fetch metadata
  const ids = userInput.includes(',')
    ? userInput.split(',').map(s => s.trim())
    : [userInput.trim()];

  for (const id of ids) {
    Bash(`ccw issue init ${id} --title "Issue ${id}" 2>/dev/null || true`);
    const info = Bash(`ccw issue status ${id} --json`).trim();
    const parsed = info ? JSON.parse(info) : {};
    issues.push({ id, title: parsed.title || '', tags: parsed.tags || [] });
  }
}

// Intelligent grouping by similarity (tags → title keywords)
function groupBySimilarity(issues, maxSize) {
  const batches = [];
  const used = new Set();

  for (const issue of issues) {
    if (used.has(issue.id)) continue;

    const batch = [issue];
    used.add(issue.id);
    const issueTags = new Set(issue.tags);
    const issueWords = new Set(issue.title.toLowerCase().split(/\s+/));

    // Find similar issues
    for (const other of issues) {
      if (used.has(other.id) || batch.length >= maxSize) continue;

      // Similarity: shared tags or shared title keywords
      const sharedTags = other.tags.filter(t => issueTags.has(t)).length;
      const otherWords = other.title.toLowerCase().split(/\s+/);
      const sharedWords = otherWords.filter(w => issueWords.has(w) && w.length > 3).length;

      if (sharedTags > 0 || sharedWords >= 2) {
        batch.push(other);
        used.add(other.id);
      }
    }
    batches.push(batch);
  }
  return batches;
}

const batches = groupBySimilarity(issues, batchSize);
console.log(`Processing ${issues.length} issues in ${batches.length} batch(es) (grouped by similarity)`);

TodoWrite({
  todos: batches.map((_, i) => ({
    content: `Plan batch ${i+1}`,
    status: 'pending',
    activeForm: `Planning batch ${i+1}`
  }))
});
```
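
The pairing rule inside `groupBySimilarity` (at least one shared tag, or at least two shared title words longer than three characters) can be sketched in isolation; the sample issues below are hypothetical:

```javascript
// Standalone check of the similarity rule used by groupBySimilarity.
function similar(a, b) {
  const tags = new Set(a.tags);
  const words = new Set(a.title.toLowerCase().split(/\s+/));
  const sharedTags = b.tags.filter(t => tags.has(t)).length;
  const sharedWords = b.title.toLowerCase().split(/\s+/)
    .filter(w => words.has(w) && w.length > 3).length;
  return sharedTags > 0 || sharedWords >= 2;
}

const a = { title: 'Fix login token refresh', tags: ['auth'] };
const b = { title: 'Login fails after token expiry', tags: [] };
const c = { title: 'Update docs', tags: ['docs'] };

console.log(similar(a, b)); // true  (shares "login" and "token")
console.log(similar(a, c)); // false
```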

### Phase 2: Unified Explore + Plan (issue-plan-agent) - PARALLEL

```javascript
Bash(`mkdir -p .workflow/issues/solutions`);
const pendingSelections = []; // Collect multi-solution issues for user selection

// Build prompts for all batches
const agentTasks = batches.map((batch, batchIndex) => {
  const issueList = batch.map(i => `- ${i.id}: ${i.title}${i.tags.length ? ` [${i.tags.join(', ')}]` : ''}`).join('\n');
  const batchIds = batch.map(i => i.id);

  const issuePrompt = `
## Plan Issues

**Issues** (grouped by similarity):
${issueList}

**Project Root**: ${process.cwd()}

### Project Context (MANDATORY - Read Both Files First)
1. Read: .workflow/project-tech.json (technology stack, architecture, key components)
2. Read: .workflow/project-guidelines.json (user-defined constraints and conventions)

**CRITICAL**: All solution tasks MUST comply with constraints in project-guidelines.json

### Steps
1. Fetch: \`ccw issue status <id> --json\`
2. Load project context (project-tech.json + project-guidelines.json)
3. **If source=discovery**: Use discovery_context (file, line, snippet, suggested_fix) as planning hints
4. Explore (ACE) → Plan solution (respecting guidelines)
5. Register & bind: \`ccw issue bind <id> --solution <file>\`

### Generate Files
\`.workflow/issues/solutions/{issue-id}.jsonl\` - Solution with tasks (schema: cat .claude/workflows/cli-templates/schemas/solution-schema.json)

### Binding Rules
- **Single solution**: Auto-bind via \`ccw issue bind <id> --solution <file>\`
- **Multiple solutions**: Register only, return for user selection

### Return Summary
\`\`\`json
{
  "bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
  "pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "description": "...", "task_count": N }] }],
  "conflicts": [{ "file": "...", "issues": [...] }]
}
\`\`\`
`;

  return { batchIndex, batchIds, issuePrompt, batch };
});

// Launch agents in parallel (max 10 concurrent)
const MAX_PARALLEL = 10;
for (let i = 0; i < agentTasks.length; i += MAX_PARALLEL) {
  const chunk = agentTasks.slice(i, i + MAX_PARALLEL);
  const taskIds = [];

  // Launch chunk in parallel
  for (const { batchIndex, batchIds, issuePrompt, batch } of chunk) {
    updateTodo(`Plan batch ${batchIndex + 1}`, 'in_progress');
    const taskId = Task(
      subagent_type="issue-plan-agent",
      run_in_background=true,
      description=`Explore & plan ${batch.length} issues: ${batchIds.join(', ')}`,
      prompt=issuePrompt
    );
    taskIds.push({ taskId, batchIndex });
  }

  console.log(`Launched ${taskIds.length} agents (batch ${i/MAX_PARALLEL + 1}/${Math.ceil(agentTasks.length/MAX_PARALLEL)})...`);

  // Collect results from this chunk
  for (const { taskId, batchIndex } of taskIds) {
    const result = TaskOutput(task_id=taskId, block=true);
    const summary = JSON.parse(result);

    for (const item of summary.bound || []) {
      console.log(`✓ ${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
    }
    // Collect and notify pending selections
    for (const pending of summary.pending_selection || []) {
      console.log(`⏳ ${pending.issue_id}: ${pending.solutions.length} solutions → awaiting selection`);
      pendingSelections.push(pending);
    }
    if (summary.conflicts?.length > 0) {
      console.log(`⚠ Conflicts: ${summary.conflicts.map(c => c.file).join(', ')}`);
    }
    updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
  }
}
```

### Phase 3: Multi-Solution Selection (MANDATORY when pendingSelections > 0)

```javascript
// MUST trigger user selection when multiple solutions exist
if (pendingSelections.length > 0) {
  console.log(`\n## User Selection Required: ${pendingSelections.length} issue(s) have multiple solutions\n`);

  const answer = AskUserQuestion({
    questions: pendingSelections.map(({ issue_id, solutions }) => ({
      question: `Select solution for ${issue_id}:`,
      header: issue_id,
      multiSelect: false,
      options: solutions.map(s => ({
        label: `${s.id} (${s.task_count} tasks)`,
        description: s.description
      }))
    }))
  });

  // Bind user-selected solutions
  for (const { issue_id } of pendingSelections) {
    const selectedId = extractSelectedSolutionId(answer, issue_id);
    if (selectedId) {
      Bash(`ccw issue bind ${issue_id} ${selectedId}`);
      console.log(`✓ ${issue_id}: ${selectedId} bound`);
    }
  }
}
```

### Phase 4: Summary

```javascript
// Count planned issues via CLI
const plannedIds = Bash(`ccw issue list --status planned --ids`).trim();
const plannedCount = plannedIds ? plannedIds.split('\n').length : 0;

console.log(`
## Done: ${issues.length} issues → ${plannedCount} planned

Next: \`/issue:queue\` → \`/issue:execute\`
`);
```

## Error Handling

| Error | Resolution |
|-------|------------|
| Issue not found | Auto-create in issues.jsonl |
| ACE search fails | Agent falls back to ripgrep |
| No solutions generated | Display error, suggest manual planning |
| User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order |

## Related Commands

- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue list` - List all issues
- `ccw issue status` - View issue and solution details
.claude/commands/issue/queue.md (new file, 368 lines)

---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent (solution-level)
argument-hint: "[--rebuild] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---

# Issue Queue Command (/issue:queue)

## Overview

Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves **inter-solution** conflicts, and creates an ordered execution queue at **solution level**.

**Design Principle**: Queue items are **solutions**, not individual tasks. Each executor receives a complete solution with all its tasks.

## Output Requirements

**Generate Files:**
1. `.workflow/issues/queues/{queue-id}.json` - Full queue with solutions, conflicts, groups
2. `.workflow/issues/queues/index.json` - Update with new queue entry

**Return Summary:**
```json
{
  "queue_id": "QUE-20251227-143000",
  "total_solutions": N,
  "total_tasks": N,
  "execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
  "conflicts_resolved": N,
  "issues_queued": ["ISS-xxx", "ISS-yyy"]
}
```

**Completion Criteria:**
- [ ] Queue JSON generated with valid DAG (no cycles between solutions)
- [ ] All inter-solution file conflicts resolved with rationale
- [ ] Semantic priority calculated for each solution
- [ ] Execution groups assigned (parallel P* / sequential S*)
- [ ] Issue statuses updated to `queued` via `ccw issue update`

## Core Capabilities

- **Agent-driven**: issue-queue-agent handles all ordering logic
- **Solution-level granularity**: Queue items are solutions, not tasks
- Inter-solution dependency DAG (based on file conflicts)
- File conflict detection between solutions
- Semantic priority calculation per solution (0.0-1.0)
- Parallel/Sequential group assignment for solutions

## Storage Structure (Queue History)

```
.workflow/issues/
├── issues.jsonl          # All issues (one per line)
├── queues/               # Queue history directory
│   ├── index.json        # Queue index (active + history)
│   ├── {queue-id}.json   # Individual queue files
│   └── ...
└── solutions/
    ├── {issue-id}.jsonl  # Solutions for issue
    └── ...
```

### Queue Index Schema

```json
{
  "active_queue_id": "QUE-20251227-143000",
  "queues": [
    {
      "id": "QUE-20251227-143000",
      "status": "active",
      "issue_ids": ["ISS-xxx", "ISS-yyy"],
      "total_solutions": 3,
      "completed_solutions": 1,
      "created_at": "2025-12-27T14:30:00Z"
    }
  ]
}
```

### Queue File Schema (Solution-Level)

```json
{
  "id": "QUE-20251227-143000",
  "status": "active",
  "solutions": [
    {
      "item_id": "S-1",
      "issue_id": "ISS-20251227-003",
      "solution_id": "SOL-20251227-003",
      "status": "pending",
      "execution_order": 1,
      "execution_group": "P1",
      "depends_on": [],
      "semantic_priority": 0.8,
      "assigned_executor": "codex",
      "files_touched": ["src/auth.ts", "src/utils.ts"],
      "task_count": 3
    },
    {
      "item_id": "S-2",
      "issue_id": "ISS-20251227-001",
      "solution_id": "SOL-20251227-001",
      "status": "pending",
      "execution_order": 2,
      "execution_group": "P1",
      "depends_on": [],
      "semantic_priority": 0.7,
      "assigned_executor": "codex",
      "files_touched": ["src/api.ts"],
      "task_count": 2
    },
    {
      "item_id": "S-3",
      "issue_id": "ISS-20251227-002",
      "solution_id": "SOL-20251227-002",
      "status": "pending",
      "execution_order": 3,
      "execution_group": "S2",
      "depends_on": ["S-1"],
      "semantic_priority": 0.5,
      "assigned_executor": "codex",
      "files_touched": ["src/auth.ts"],
      "task_count": 4
    }
  ],
  "conflicts": [
    {
      "type": "file_conflict",
      "file": "src/auth.ts",
      "solutions": ["S-1", "S-3"],
      "resolution": "sequential",
      "resolution_order": ["S-1", "S-3"],
      "rationale": "S-1 creates auth module, S-3 extends it"
    }
  ],
  "execution_groups": [
    { "id": "P1", "type": "parallel", "solutions": ["S-1", "S-2"] },
    { "id": "S2", "type": "sequential", "solutions": ["S-3"] }
  ]
}
```
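
The `conflicts` entries above follow from pairwise `files_touched` intersection; a minimal sketch of that check (the `detectConflicts` helper is illustrative, not the agent's actual implementation):

```javascript
// Detect file conflicts between solutions: two solutions conflict
// when their files_touched sets intersect.
function detectConflicts(solutions) {
  const conflicts = [];
  for (let i = 0; i < solutions.length; i++) {
    for (let j = i + 1; j < solutions.length; j++) {
      const files = new Set(solutions[i].files_touched);
      const shared = solutions[j].files_touched.filter(f => files.has(f));
      for (const file of shared) {
        conflicts.push({
          type: 'file_conflict',
          file,
          solutions: [solutions[i].item_id, solutions[j].item_id],
          resolution: 'sequential'
        });
      }
    }
  }
  return conflicts;
}

const conflicts = detectConflicts([
  { item_id: 'S-1', files_touched: ['src/auth.ts', 'src/utils.ts'] },
  { item_id: 'S-2', files_touched: ['src/api.ts'] },
  { item_id: 'S-3', files_touched: ['src/auth.ts'] }
]);
console.log(conflicts); // one conflict on src/auth.ts between S-1 and S-3
```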

## Usage

```bash
/issue:queue [FLAGS]

# Examples
/issue:queue                   # Form NEW queue from all bound solutions
/issue:queue --issue GH-123    # Form queue for specific issue only
/issue:queue --append GH-124   # Append to active queue
/issue:queue --list            # List all queues (history)
/issue:queue --switch QUE-xxx  # Switch active queue
/issue:queue --archive         # Archive completed active queue

# Flags
--issue <id>    Form queue for specific issue only
--append <id>   Append issue to active queue (don't create new)

# CLI subcommands (ccw issue queue ...)
ccw issue queue list               List all queues with status
ccw issue queue switch <queue-id>  Switch active queue
ccw issue queue archive            Archive current queue
ccw issue queue delete <queue-id>  Delete queue from history
```

## Execution Process

```
Phase 1: Solution Loading
├─ Load issues.jsonl
├─ Filter issues with bound_solution_id
├─ Read solutions/{issue-id}.jsonl for each issue
├─ Find bound solution by ID
├─ Collect files_touched from all tasks in solution
└─ Build solution objects (NOT individual tasks)

Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
├─ Launch issue-queue-agent with all solutions
├─ Agent performs:
│  ├─ Detect file overlaps between solutions
│  ├─ Build dependency DAG from file conflicts
│  ├─ Detect circular dependencies
│  ├─ Resolve conflicts using priority rules
│  ├─ Calculate semantic priority per solution
│  └─ Assign execution groups (parallel/sequential)
└─ Output: queue JSON with ordered solutions (S-1, S-2, ...)

Phase 5: Queue Output
├─ Write queue.json with solutions array
├─ Update issue statuses in issues.jsonl
└─ Display queue summary
```
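
The "Detect circular dependencies" step can be sketched as a depth-first search over the `depends_on` edges (a sketch, not the agent's actual implementation):

```javascript
// DFS cycle check over depends_on edges; returns true when the
// dependency graph contains a cycle (the queue must then be rejected).
function hasCycle(solutions) {
  const deps = new Map(solutions.map(s => [s.item_id, s.depends_on || []]));
  const state = new Map(); // undefined = unvisited, 1 = in progress, 2 = done

  function visit(id) {
    if (state.get(id) === 1) return true; // back edge → cycle
    if (state.get(id) === 2) return false;
    state.set(id, 1);
    for (const dep of deps.get(id) || []) {
      if (visit(dep)) return true;
    }
    state.set(id, 2);
    return false;
  }
  return solutions.some(s => visit(s.item_id));
}

console.log(hasCycle([
  { item_id: 'S-1', depends_on: [] },
  { item_id: 'S-2', depends_on: ['S-1'] }
])); // false

console.log(hasCycle([
  { item_id: 'S-1', depends_on: ['S-2'] },
  { item_id: 'S-2', depends_on: ['S-1'] }
])); // true
```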

## Implementation

### Phase 1: Solution Loading

**NOTE**: Execute code directly. DO NOT pre-read solution files - Bash cat handles all reading.

```javascript
// Load issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}" 2>/dev/null || echo ''`)
  .split('\n')
  .filter(line => line.trim())
  .map(line => JSON.parse(line));

// Filter issues with bound solutions
const plannedIssues = allIssues.filter(i =>
  i.status === 'planned' && i.bound_solution_id
);

if (plannedIssues.length === 0) {
  console.log('No issues with bound solutions found.');
  console.log('Run /issue:plan first to create and bind solutions.');
  return;
}

// Load bound solutions (not individual tasks)
const allSolutions = [];
for (const issue of plannedIssues) {
  const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
  const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
    .split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line));

  // Find bound solution
  const boundSol = solutions.find(s => s.id === issue.bound_solution_id);

  if (!boundSol) {
    console.log(`⚠ Bound solution ${issue.bound_solution_id} not found for ${issue.id}`);
    continue;
  }

  // Collect all files touched by this solution
  const filesTouched = new Set();
  for (const task of boundSol.tasks || []) {
    for (const mp of task.modification_points || []) {
      filesTouched.add(mp.file);
    }
  }

  allSolutions.push({
    issue_id: issue.id,
    solution_id: issue.bound_solution_id,
    task_count: boundSol.tasks?.length || 0,
    files_touched: Array.from(filesTouched),
    priority: issue.priority || 'medium'
  });
}

console.log(`Loaded ${allSolutions.length} solutions from ${plannedIssues.length} issues`);
```

### Phase 2-4: Agent-Driven Queue Formation

```javascript
// Generate queue-id ONCE here, pass to agent
const now = new Date();
const queueId = `QUE-${now.toISOString().replace(/[-:T]/g, '').slice(0, 14)}`;

// Build minimal prompt - agent orders SOLUTIONS, not tasks
const agentPrompt = `
## Order Solutions

**Queue ID**: ${queueId}
**Solutions**: ${allSolutions.length} from ${plannedIssues.length} issues
**Project Root**: ${process.cwd()}

### Input (Solution-Level)
\`\`\`json
${JSON.stringify(allSolutions, null, 2)}
\`\`\`

### Steps
1. Parse solutions: Extract solution IDs, files_touched, task_count, priority
2. Detect conflicts: Find file overlaps between solutions (files_touched intersection)
3. Build DAG: Create dependency edges where solutions share files
4. Detect cycles: Verify no circular dependencies (abort if found)
5. Resolve conflicts: Apply ordering rules based on action types
6. Calculate priority: Compute semantic priority (0.0-1.0) per solution
7. Assign groups: Parallel (P*) for no-conflict, Sequential (S*) for conflicts
8. Generate queue: Write queue JSON with ordered solutions
9. Update index: Update queues/index.json with new queue entry

### Rules
- **Solution Granularity**: Queue items are solutions, NOT individual tasks
- **DAG Validity**: Output must be valid DAG with no circular dependencies
- **Conflict Detection**: Two solutions conflict if files_touched intersect
- **Ordering Priority**:
  1. Higher issue priority first (critical > high > medium > low)
  2. Fewer dependencies first (foundation solutions)
  3. More tasks = higher priority (larger impact)
- **Parallel Safety**: Solutions in same parallel group must have NO file overlaps
- **Queue Item ID Format**: \`S-N\` (S-1, S-2, S-3, ...)
- **Queue ID**: Use the provided Queue ID (passed above), do NOT generate new one

### Generate Files (STRICT - only these 2)
1. \`.workflow/issues/queues/{Queue ID}.json\` - Use Queue ID from above
2. \`.workflow/issues/queues/index.json\` - Update existing index

Write ONLY these 2 files, using the provided Queue ID.

### Return Summary
\`\`\`json
{
  "queue_id": "QUE-YYYYMMDDHHMMSS",
  "total_solutions": N,
  "total_tasks": N,
  "execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
  "conflicts_resolved": N,
  "issues_queued": ["ISS-xxx"]
}
\`\`\`
`;

const result = Task(
  subagent_type="issue-queue-agent",
  run_in_background=false,
  description=`Order ${allSolutions.length} solutions`,
  prompt=agentPrompt
);

const summary = JSON.parse(result);
```
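The conflict and grouping steps in the prompt above can be sketched concretely. A minimal sketch, assuming the solution objects shown in the Input section (`files_touched`, `task_count`, `priority`): it implements only the file-overlap check from the "Conflict Detection" rule and a single parallel/sequential split, not the full DAG construction of steps 3-4.

```javascript
// Rank order for the "Higher issue priority first" rule
const PRIORITY = { critical: 3, high: 2, medium: 1, low: 0 };

function formGroups(solutions) {
  // Two solutions conflict if their files_touched intersect
  const overlaps = (a, b) =>
    a.files_touched.some(f => b.files_touched.includes(f));

  const parallel = [];
  const sequential = [];
  for (const sol of solutions) {
    const conflicting = solutions.some(o => o !== sol && overlaps(o, sol));
    (conflicting ? sequential : parallel).push(sol);
  }
  // Order conflicting solutions: higher priority first, then more tasks
  sequential.sort(
    (a, b) => PRIORITY[b.priority] - PRIORITY[a.priority] || b.task_count - a.task_count
  );
  return [
    { id: 'P1', type: 'parallel', items: parallel },     // safe to run together
    { id: 'S1', type: 'sequential', items: sequential }  // must run in order
  ];
}
```

A real queue agent would split conflicting solutions into several sequential chains rather than one S1 group; this sketch only illustrates the overlap test and the ordering rules.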

### Phase 5: Summary & Status Update

```javascript
// Agent already generated queue files, use summary
console.log(`
## Queue Formed: ${summary.queue_id}

**Solutions**: ${summary.total_solutions}
**Tasks**: ${summary.total_tasks}
**Issues**: ${summary.issues_queued.join(', ')}
**Groups**: ${summary.execution_groups.map(g => `${g.id}(${g.count})`).join(', ')}
**Conflicts Resolved**: ${summary.conflicts_resolved}

Next: \`/issue:execute\`
`);

// Update issue statuses via CLI (use `update` for pure field changes)
// Note: `queue add` has its own logic; here we only need status update
for (const issueId of summary.issues_queued) {
  Bash(`ccw issue update ${issueId} --status queued`);
}
```

## Error Handling

| Error | Resolution |
|-------|------------|
| No bound solutions | Display message, suggest /issue:plan |
| Circular dependency | List cycles, abort queue formation |
| Unresolved conflicts | Agent resolves using ordering rules |
| Invalid task reference | Skip and warn |
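The "Circular dependency" check above can be illustrated with a standard depth-first search. A sketch, assuming a `deps` map from queue item id to its dependency ids (the adjacency shape is an assumption; the agent derives these edges from file overlaps):

```javascript
// Returns one cycle as an array of ids (first === last), or null for a valid DAG.
function findCycle(deps) {
  const state = {};           // undefined = unvisited, 1 = on stack, 2 = done
  const stack = [];
  function visit(node) {
    if (state[node] === 1) {
      // Node is already on the current path: close and return the cycle
      return stack.slice(stack.indexOf(node)).concat(node);
    }
    if (state[node] === 2) return null;
    state[node] = 1;
    stack.push(node);
    for (const dep of deps[node] || []) {
      const cycle = visit(dep);
      if (cycle) return cycle;
    }
    stack.pop();
    state[node] = 2;
    return null;
  }
  for (const node of Object.keys(deps)) {
    const cycle = visit(node);
    if (cycle) return cycle;  // caller lists the cycle and aborts formation
  }
  return null;
}
```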

## Related Commands

- `/issue:plan` - Plan issues and bind solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue queue list` - View current queue
@@ -5,7 +5,7 @@ argument-hint: "[--dry-run] [\"focus area\"]"
 allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Glob(*), Bash(*), Write(*)
 ---

-# Clean Command (/clean)
+# Clean Command (/workflow:clean)

 ## Overview

@@ -20,9 +20,9 @@ Intelligent cleanup command that explores the codebase to identify the developme
 ## Usage

 ```bash
-/clean # Full intelligent cleanup (explore → analyze → confirm → execute)
-/clean --dry-run # Explore and analyze only, no execution
-/clean "auth module" # Focus cleanup on specific area
+/workflow:clean # Full intelligent cleanup (explore → analyze → confirm → execute)
+/workflow:clean --dry-run # Explore and analyze only, no execution
+/workflow:clean "auth module" # Focus cleanup on specific area
 ```

 ## Execution Process

@@ -321,7 +321,7 @@ if (flags.includes('--dry-run')) {
 **Dry-run mode**: No changes made.
 Manifest saved to: ${sessionFolder}/cleanup-manifest.json

-To execute cleanup: /clean
+To execute cleanup: /workflow:clean
 `)
 return
 }
1467 .claude/commands/workflow/docs/analyze.md (new file; diff suppressed because it is too large)
1265 .claude/commands/workflow/docs/copyright.md (new file; diff suppressed because it is too large)
@@ -410,7 +410,6 @@ Task(subagent_type="{meta.agent}",
 1. Read complete task JSON: {session.task_json_path}
 2. Load context package: {session.context_package_path}

-Follow complete execution guidelines in @.claude/agents/{meta.agent}.md

 **Session Paths**:
 - Workflow Dir: {session.workflow_dir}

@@ -10,7 +10,11 @@ examples:
 # Workflow Init Command (/workflow:init)

 ## Overview
-Initialize `.workflow/project.json` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.
+Initialize `.workflow/project-tech.json` and `.workflow/project-guidelines.json` with comprehensive project understanding by delegating analysis to **cli-explore-agent**.
+
+**Dual File System**:
+- `project-tech.json`: Auto-generated technical analysis (stack, architecture, components)
+- `project-guidelines.json`: User-maintained rules and constraints (created as scaffold)

 **Note**: This command may be called by other workflow commands. Upon completion, return immediately to continue the calling workflow without interrupting the task flow.

@@ -27,7 +31,7 @@ Input Parsing:
 └─ Parse --regenerate flag → regenerate = true | false

 Decision:
-├─ EXISTS + no --regenerate → Exit: "Already initialized"
+├─ BOTH_EXIST + no --regenerate → Exit: "Already initialized"
 ├─ EXISTS + --regenerate → Backup existing → Continue analysis
 └─ NOT_FOUND → Continue analysis

@@ -37,11 +41,14 @@ Analysis Flow:
 │ ├─ Structural scan (get_modules_by_depth.sh, find, wc)
 │ ├─ Semantic analysis (Gemini CLI)
 │ ├─ Synthesis and merge
-│ └─ Write .workflow/project.json
+│ └─ Write .workflow/project-tech.json
+├─ Create guidelines scaffold (if not exists)
+│ └─ Write .workflow/project-guidelines.json (empty structure)
 └─ Display summary

 Output:
-└─ .workflow/project.json (+ .backup if regenerate)
+├─ .workflow/project-tech.json (+ .backup if regenerate)
+└─ .workflow/project-guidelines.json (scaffold if new)
 ```

 ## Implementation

@@ -56,13 +63,18 @@ const regenerate = $ARGUMENTS.includes('--regenerate')
 **Check existing state**:

 ```bash
-bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
+bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
+bash(test -f .workflow/project-guidelines.json && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
 ```

-**If EXISTS and no --regenerate**: Exit early
+**If BOTH_EXIST and no --regenerate**: Exit early
 ```
-Project already initialized at .workflow/project.json
-Use /workflow:init --regenerate to rebuild
+Project already initialized:
+- Tech analysis: .workflow/project-tech.json
+- Guidelines: .workflow/project-guidelines.json
+
+Use /workflow:init --regenerate to rebuild tech analysis
+Use /workflow:session:solidify to add guidelines
 Use /workflow:status --project to view state
 ```

@@ -78,7 +90,7 @@ bash(mkdir -p .workflow)

 **For --regenerate**: Backup and preserve existing data
 ```bash
-bash(cp .workflow/project.json .workflow/project.json.backup)
+bash(cp .workflow/project-tech.json .workflow/project-tech.json.backup)
 ```

 **Delegate analysis to agent**:

@@ -89,20 +101,17 @@ Task(
 run_in_background=false,
 description="Deep project analysis",
 prompt=`
-Analyze project for workflow initialization and generate .workflow/project.json.
+Analyze project for workflow initialization and generate .workflow/project-tech.json.

 ## MANDATORY FIRST STEPS
-1. Execute: cat ~/.claude/workflows/cli-templates/schemas/project-json-schema.json (get schema reference)
+1. Execute: cat ~/.claude/workflows/cli-templates/schemas/project-tech-schema.json (get schema reference)
 2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)

 ## Task
-Generate complete project.json with:
-- project_name: ${projectName}
-- initialized_at: current ISO timestamp
-- overview: {description, technology_stack, architecture, key_components}
-- features: ${regenerate ? 'preserve from backup' : '[] (empty)'}
-- development_index: ${regenerate ? 'preserve from backup' : '{feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}'}
-- statistics: ${regenerate ? 'preserve from backup' : '{total_features: 0, total_sessions: 0, last_updated}'}
+Generate complete project-tech.json with:
+- project_metadata: {name: ${projectName}, root_path: ${projectRoot}, initialized_at, updated_at}
+- technology_analysis: {description, languages, frameworks, build_tools, test_frameworks, architecture, key_components, dependencies}
+- development_status: ${regenerate ? 'preserve from backup' : '{completed_features: [], development_index: {feature: [], enhancement: [], bugfix: [], refactor: [], docs: []}, statistics: {total_features: 0, total_sessions: 0, last_updated}}'}
+- _metadata: {initialized_by: "cli-explore-agent", analysis_timestamp, analysis_mode}

 ## Analysis Requirements

@@ -123,8 +132,8 @@ Generate complete project.json with:
 1. Structural scan: get_modules_by_depth.sh, find, wc -l
 2. Semantic analysis: Gemini for patterns/architecture
 3. Synthesis: Merge findings
-4. ${regenerate ? 'Merge with preserved features/development_index/statistics from .workflow/project.json.backup' : ''}
-5. Write JSON: Write('.workflow/project.json', jsonContent)
+4. ${regenerate ? 'Merge with preserved development_status from .workflow/project-tech.json.backup' : ''}
+5. Write JSON: Write('.workflow/project-tech.json', jsonContent)
 6. Report: Return brief completion summary

 Project root: ${projectRoot}
@@ -132,29 +141,66 @@ Project root: ${projectRoot}
 )
 ```
+### Step 3.5: Create Guidelines Scaffold (if not exists)
+
+```javascript
+// Only create if not exists (never overwrite user guidelines)
+if (!file_exists('.workflow/project-guidelines.json')) {
+  const guidelinesScaffold = {
+    conventions: {
+      coding_style: [],
+      naming_patterns: [],
+      file_structure: [],
+      documentation: []
+    },
+    constraints: {
+      architecture: [],
+      tech_stack: [],
+      performance: [],
+      security: []
+    },
+    quality_rules: [],
+    learnings: [],
+    _metadata: {
+      created_at: new Date().toISOString(),
+      version: "1.0.0"
+    }
+  };
+
+  Write('.workflow/project-guidelines.json', JSON.stringify(guidelinesScaffold, null, 2));
+}
+```
+
 ### Step 4: Display Summary

 ```javascript
-const projectJson = JSON.parse(Read('.workflow/project.json'));
+const projectTech = JSON.parse(Read('.workflow/project-tech.json'));
+const guidelinesExists = file_exists('.workflow/project-guidelines.json');

 console.log(`
 ✓ Project initialized successfully

 ## Project Overview
-Name: ${projectJson.project_name}
-Description: ${projectJson.overview.description}
+Name: ${projectTech.project_metadata.name}
+Description: ${projectTech.technology_analysis.description}

 ### Technology Stack
-Languages: ${projectJson.overview.technology_stack.languages.map(l => l.name).join(', ')}
-Frameworks: ${projectJson.overview.technology_stack.frameworks.join(', ')}
+Languages: ${projectTech.technology_analysis.languages.map(l => l.name).join(', ')}
+Frameworks: ${projectTech.technology_analysis.frameworks.join(', ')}

 ### Architecture
-Style: ${projectJson.overview.architecture.style}
-Components: ${projectJson.overview.key_components.length} core modules
+Style: ${projectTech.technology_analysis.architecture.style}
+Components: ${projectTech.technology_analysis.key_components.length} core modules

 ---
-Project state: .workflow/project.json
-${regenerate ? 'Backup: .workflow/project.json.backup' : ''}
+Files created:
+- Tech analysis: .workflow/project-tech.json
+- Guidelines: .workflow/project-guidelines.json ${guidelinesExists ? '(scaffold)' : ''}
+${regenerate ? '- Backup: .workflow/project-tech.json.backup' : ''}
+
+Next steps:
+- Use /workflow:session:solidify to add project guidelines
+- Use /workflow:plan to start planning
 `);
 ```

@@ -181,6 +181,8 @@ Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase fro
 1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
 2. Run: rg -l "{error_keyword_from_bug}" --type ts (locate relevant files)
 3. Execute: cat ~/.claude/workflows/cli-templates/schemas/diagnosis-json-schema.json (get output schema reference)
+4. Read: .workflow/project-tech.json (technology stack and architecture context)
+5. Read: .workflow/project-guidelines.json (user-defined constraints and conventions)

 ## Diagnosis Strategy (${angle} focus)

@@ -409,6 +411,12 @@ Generate fix plan and write fix-plan.json.
 ## Output Schema Reference
 Execute: cat ~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json (get schema reference before generating plan)

+## Project Context (MANDATORY - Read Both Files)
+1. Read: .workflow/project-tech.json (technology stack, architecture, key components)
+2. Read: .workflow/project-guidelines.json (user-defined constraints and conventions)
+
+**CRITICAL**: All fix tasks MUST comply with constraints in project-guidelines.json
+
 ## Bug Description
 ${bug_description}

@@ -15,7 +15,7 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
 - Intelligent task analysis with automatic exploration detection
 - Dynamic code exploration (cli-explore-agent) when codebase understanding needed
 - Interactive clarification after exploration to gather missing information
-- Adaptive planning strategy (direct Claude vs cli-lite-planning-agent) based on complexity
+- Adaptive planning: Low complexity → Direct Claude; Medium/High → cli-lite-planning-agent
 - Two-step confirmation: plan display → multi-dimensional input collection
 - Execution dispatch with complete context handoff to lite-execute

@@ -38,7 +38,7 @@ Phase 1: Task Analysis & Exploration
 ├─ Parse input (description or .md file)
 ├─ intelligent complexity assessment (Low/Medium/High)
 ├─ Exploration decision (auto-detect or --explore flag)
-├─ ⚠️ Context protection: If file reading ≥50k chars → force cli-explore-agent
+├─ Context protection: If file reading ≥50k chars → force cli-explore-agent
 └─ Decision:
    ├─ needsExploration=true → Launch parallel cli-explore-agents (1-4 based on complexity)
    └─ needsExploration=false → Skip to Phase 2/3

@@ -140,11 +140,17 @@ function selectAngles(taskDescription, count) {

 const selectedAngles = selectAngles(task_description, complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1))

+// Planning strategy determination
+const planningStrategy = complexity === 'Low'
+  ? 'Direct Claude Planning'
+  : 'cli-lite-planning-agent'
+
 console.log(`
 ## Exploration Plan

 Task Complexity: ${complexity}
 Selected Angles: ${selectedAngles.join(', ')}
+Planning Strategy: ${planningStrategy}

 Launching ${selectedAngles.length} parallel explorations...
 `)

@@ -178,6 +184,8 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
 1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
 2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
 3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
+4. Read: .workflow/project-tech.json (technology stack and architecture context)
+5. Read: .workflow/project-guidelines.json (user-defined constraints and conventions)

 ## Exploration Strategy (${angle} focus)

@@ -358,10 +366,7 @@ if (dedupedClarifications.length > 0) {
 ```javascript
 // Assignment rules (highest priority first):
 // 1. Explicit user request: "analyze with gemini..." → gemini, "implement with codex..." → codex
-// 2. Inferred from task type:
-//    - analyze|review|evaluate|explore → gemini
-//    - implement|create|modify|fix → codex (complex) or agent (simple)
-// 3. Default → agent
+// 2. Default → agent

 const executorAssignments = {} // { taskId: { executor: 'gemini'|'codex'|'agent', reason: string } }
 plan.tasks.forEach(task => {

@@ -413,6 +418,12 @@ Generate implementation plan and write plan.json.
 ## Output Schema Reference
 Execute: cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json (get schema reference before generating plan)

+## Project Context (MANDATORY - Read Both Files)
+1. Read: .workflow/project-tech.json (technology stack, architecture, key components)
+2. Read: .workflow/project-guidelines.json (user-defined constraints and conventions)
+
+**CRITICAL**: All generated tasks MUST comply with constraints in project-guidelines.json
+
 ## Task Description
 ${task_description}

@@ -409,6 +409,8 @@ Task(
 2. Get target files: Read resolved_files from review-state.json
 3. Validate file access: bash(ls -la ${targetFiles.join(' ')})
 4. Execute: cat ~/.claude/workflows/cli-templates/schemas/review-dimension-results-schema.json (get output schema reference)
+5. Read: .workflow/project-tech.json (technology stack and architecture context)
+6. Read: .workflow/project-guidelines.json (user-defined constraints and conventions to validate against)

 ## Review Context
 - Review Type: module (independent)

@@ -511,6 +513,8 @@ Task(
 3. Identify related code: bash(grep -r "import.*${basename(file)}" ${projectDir}/src --include="*.ts")
 4. Read test files: bash(find ${projectDir}/tests -name "*${basename(file, '.ts')}*" -type f)
 5. Execute: cat ~/.claude/workflows/cli-templates/schemas/review-deep-dive-results-schema.json (get output schema reference)
+6. Read: .workflow/project-tech.json (technology stack and architecture context)
+7. Read: .workflow/project-guidelines.json (user-defined constraints for remediation compliance)

 ## CLI Configuration
 - Tool Priority: gemini → qwen → codex

@@ -420,6 +420,8 @@ Task(
 3. Get changed files: bash(cd ${workflowDir} && git log --since="${sessionCreatedAt}" --name-only --pretty=format: | sort -u)
 4. Read review state: ${reviewStateJsonPath}
 5. Execute: cat ~/.claude/workflows/cli-templates/schemas/review-dimension-results-schema.json (get output schema reference)
+6. Read: .workflow/project-tech.json (technology stack and architecture context)
+7. Read: .workflow/project-guidelines.json (user-defined constraints and conventions to validate against)

 ## Session Context
 - Session ID: ${sessionId}

@@ -522,6 +524,8 @@ Task(
 3. Identify related code: bash(grep -r "import.*${basename(file)}" ${workflowDir}/src --include="*.ts")
 4. Read test files: bash(find ${workflowDir}/tests -name "*${basename(file, '.ts')}*" -type f)
 5. Execute: cat ~/.claude/workflows/cli-templates/schemas/review-deep-dive-results-schema.json (get output schema reference)
+6. Read: .workflow/project-tech.json (technology stack and architecture context)
+7. Read: .workflow/project-guidelines.json (user-defined constraints for remediation compliance)

 ## CLI Configuration
 - Tool Priority: gemini → qwen → codex

@@ -139,7 +139,7 @@ After bash validation, the model takes control to:
 ccw cli -p "
 PURPOSE: Security audit of completed implementation
 TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
-CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
+CONTEXT: @.summaries/IMPL-*.md,../.. @../../project-tech.json @../../project-guidelines.json
 EXPECTED: Security findings report with severity levels
 RULES: Focus on OWASP Top 10, authentication, authorization, data validation, injection risks
 " --tool gemini --mode write --cd .workflow/active/${sessionId}

@@ -151,7 +151,7 @@ After bash validation, the model takes control to:
 ccw cli -p "
 PURPOSE: Architecture compliance review
 TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
-CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
+CONTEXT: @.summaries/IMPL-*.md,../.. @../../project-tech.json @../../project-guidelines.json
 EXPECTED: Architecture assessment with recommendations
 RULES: Check for patterns, separation of concerns, modularity, scalability
 " --tool qwen --mode write --cd .workflow/active/${sessionId}

@@ -163,7 +163,7 @@ After bash validation, the model takes control to:
 ccw cli -p "
 PURPOSE: Code quality and best practices review
 TASK: Assess code readability, maintainability, adherence to best practices
-CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
+CONTEXT: @.summaries/IMPL-*.md,../.. @../../project-tech.json @../../project-guidelines.json
 EXPECTED: Quality assessment with improvement suggestions
 RULES: Check for code smells, duplication, complexity, naming conventions
 " --tool gemini --mode write --cd .workflow/active/${sessionId}

@@ -185,7 +185,7 @@ After bash validation, the model takes control to:
 ccw cli -p "
 PURPOSE: Verify all requirements and acceptance criteria are met
 TASK: Cross-check implementation summaries against original requirements
-CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
+CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../project-tech.json @../../project-guidelines.json
 EXPECTED:
 - Requirements coverage matrix
 - Acceptance criteria verification

299 .claude/commands/workflow/session/solidify.md (new file)
@@ -0,0 +1,299 @@
---
name: solidify
description: Crystallize session learnings and user-defined constraints into permanent project guidelines
argument-hint: "[--type <convention|constraint|learning>] [--category <category>] \"rule or insight\""
examples:
  - /workflow:session:solidify "Use functional components for all React code" --type convention
  - /workflow:session:solidify "No direct DB access from controllers" --type constraint --category architecture
  - /workflow:session:solidify "Cache invalidation requires event sourcing" --type learning --category architecture
  - /workflow:session:solidify --interactive
---

# Session Solidify Command (/workflow:session:solidify)

## Overview

Crystallizes ephemeral session context (insights, decisions, constraints) into permanent project guidelines stored in `.workflow/project-guidelines.json`. This ensures valuable learnings persist across sessions and inform future planning.

## Use Cases

1. **During Session**: Capture important decisions as they're made
2. **After Session**: Reflect on lessons learned before archiving
3. **Proactive**: Add team conventions or architectural rules

## Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `rule` | string | ✅ (unless --interactive) | The rule, convention, or insight to solidify |
| `--type` | enum | ❌ | Type: `convention`, `constraint`, `learning` (default: auto-detect) |
| `--category` | string | ❌ | Category for organization (see categories below) |
| `--interactive` | flag | ❌ | Launch guided wizard for adding rules |

### Type Categories

**convention** → Coding style preferences (goes to `conventions` section)
- Subcategories: `coding_style`, `naming_patterns`, `file_structure`, `documentation`

**constraint** → Hard rules that must not be violated (goes to `constraints` section)
- Subcategories: `architecture`, `tech_stack`, `performance`, `security`

**learning** → Session-specific insights (goes to `learnings` array)
- Subcategories: `architecture`, `performance`, `security`, `testing`, `process`, `other`

## Execution Process

```
Input Parsing:
├─ Parse: rule text (required unless --interactive)
├─ Parse: --type (convention|constraint|learning)
├─ Parse: --category (subcategory)
└─ Parse: --interactive (flag)

Step 1: Ensure Guidelines File Exists
└─ If not exists → Create with empty structure

Step 2: Auto-detect Type (if not specified)
└─ Analyze rule text for keywords

Step 3: Validate and Format Entry
└─ Build entry object based on type

Step 4: Update Guidelines File
└─ Add entry to appropriate section

Step 5: Display Confirmation
└─ Show what was added and where
```
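The flow above can be condensed into one end-to-end sketch. This is a hypothetical helper, not the command itself: the keyword sets are trimmed and the category choice is simplified to a single default per type, where the real Step 2 below uses fuller detection.

```javascript
// Detect the guideline type, then append the rule to the matching section
// of an in-memory guidelines object (Steps 2-4, simplified).
function solidify(guidelines, rule, type) {
  if (!type) {
    if (/\b(no|never|must not|forbidden)\b/i.test(rule)) type = 'constraint';
    else if (/\b(learned|discovered|found that)\b/i.test(rule)) type = 'learning';
    else type = 'convention';
  }
  if (type === 'learning') {
    guidelines.learnings.push({
      date: new Date().toISOString().split('T')[0],
      insight: rule
    });
  } else {
    const section = type === 'constraint' ? guidelines.constraints : guidelines.conventions;
    const category = type === 'constraint' ? 'architecture' : 'coding_style'; // simplified default
    section[category] = section[category] || [];
    if (!section[category].includes(rule)) section[category].push(rule); // idempotent add
  }
  return { type, guidelines };
}
```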

## Implementation

### Step 1: Ensure Guidelines File Exists

```bash
bash(test -f .workflow/project-guidelines.json && echo "EXISTS" || echo "NOT_FOUND")
```

**If NOT_FOUND**, create scaffold:

```javascript
const scaffold = {
  conventions: {
    coding_style: [],
    naming_patterns: [],
    file_structure: [],
    documentation: []
  },
  constraints: {
    architecture: [],
    tech_stack: [],
    performance: [],
    security: []
  },
  quality_rules: [],
  learnings: [],
  _metadata: {
    created_at: new Date().toISOString(),
    version: "1.0.0"
  }
};

Write('.workflow/project-guidelines.json', JSON.stringify(scaffold, null, 2));
```

### Step 2: Auto-detect Type (if not specified)

```javascript
function detectType(ruleText) {
  const text = ruleText.toLowerCase();

  // Constraint indicators
  if (/\b(no|never|must not|forbidden|prohibited|always must)\b/.test(text)) {
    return 'constraint';
  }

  // Learning indicators
  if (/\b(learned|discovered|realized|found that|turns out)\b/.test(text)) {
    return 'learning';
  }

  // Default to convention
  return 'convention';
}

function detectCategory(ruleText, type) {
  const text = ruleText.toLowerCase();

  if (type === 'constraint' || type === 'learning') {
    if (/\b(architecture|layer|module|dependency|circular)\b/.test(text)) return 'architecture';
    if (/\b(security|auth|permission|sanitize|xss|sql)\b/.test(text)) return 'security';
    if (/\b(performance|cache|lazy|async|sync|slow)\b/.test(text)) return 'performance';
    if (/\b(test|coverage|mock|stub)\b/.test(text)) return 'testing';
  }

  if (type === 'convention') {
    if (/\b(name|naming|prefix|suffix|camel|pascal)\b/.test(text)) return 'naming_patterns';
    if (/\b(file|folder|directory|structure|organize)\b/.test(text)) return 'file_structure';
    if (/\b(doc|comment|jsdoc|readme)\b/.test(text)) return 'documentation';
    return 'coding_style';
  }

  return type === 'constraint' ? 'tech_stack' : 'other';
}
```
|
||||
|
||||
### Step 3: Build Entry
|
||||
|
||||
```javascript
|
||||
function buildEntry(rule, type, category, sessionId) {
|
||||
if (type === 'learning') {
|
||||
return {
|
||||
date: new Date().toISOString().split('T')[0],
|
||||
session_id: sessionId || null,
|
||||
insight: rule,
|
||||
category: category,
|
||||
context: null
|
||||
};
|
||||
}
|
||||
|
||||
// For conventions and constraints, just return the rule string
|
||||
return rule;
|
||||
}
|
||||
```
|
||||
|
||||
### Step 4: Update Guidelines File
|
||||
|
||||
```javascript
|
||||
const guidelines = JSON.parse(Read('.workflow/project-guidelines.json'));
|
||||
|
||||
if (type === 'convention') {
|
||||
if (!guidelines.conventions[category]) {
|
||||
guidelines.conventions[category] = [];
|
||||
}
|
||||
if (!guidelines.conventions[category].includes(rule)) {
|
||||
guidelines.conventions[category].push(rule);
|
||||
}
|
||||
} else if (type === 'constraint') {
|
||||
if (!guidelines.constraints[category]) {
|
||||
guidelines.constraints[category] = [];
|
||||
}
|
||||
if (!guidelines.constraints[category].includes(rule)) {
|
||||
guidelines.constraints[category].push(rule);
|
||||
}
|
||||
} else if (type === 'learning') {
|
||||
guidelines.learnings.push(buildEntry(rule, type, category, sessionId));
|
||||
}
|
||||
|
||||
guidelines._metadata.updated_at = new Date().toISOString();
|
||||
guidelines._metadata.last_solidified_by = sessionId;
|
||||
|
||||
Write('.workflow/project-guidelines.json', JSON.stringify(guidelines, null, 2));
|
||||
```
|
||||
|
||||
### Step 5: Display Confirmation
|
||||
|
||||
```
|
||||
✓ Guideline solidified
|
||||
|
||||
Type: ${type}
|
||||
Category: ${category}
|
||||
Rule: "${rule}"
|
||||
|
||||
Location: .workflow/project-guidelines.json → ${type}s.${category}
|
||||
|
||||
Total ${type}s in ${category}: ${count}
|
||||
```

## Interactive Mode

When `--interactive` flag is provided:

```javascript
AskUserQuestion({
  questions: [
    {
      question: "What type of guideline are you adding?",
      header: "Type",
      multiSelect: false,
      options: [
        { label: "Convention", description: "Coding style preference (e.g., use functional components)" },
        { label: "Constraint", description: "Hard rule that must not be violated (e.g., no direct DB access)" },
        { label: "Learning", description: "Insight from this session (e.g., cache invalidation needs events)" }
      ]
    }
  ]
});

// Follow-up based on type selection...
```

## Examples

### Add a Convention
```bash
/workflow:session:solidify "Use async/await instead of callbacks" --type convention --category coding_style
```

Result in `project-guidelines.json`:
```json
{
  "conventions": {
    "coding_style": ["Use async/await instead of callbacks"]
  }
}
```

### Add an Architectural Constraint
```bash
/workflow:session:solidify "No direct DB access from controllers" --type constraint --category architecture
```

Result:
```json
{
  "constraints": {
    "architecture": ["No direct DB access from controllers"]
  }
}
```

### Capture a Session Learning
```bash
/workflow:session:solidify "Cache invalidation requires event sourcing for consistency" --type learning
```

Result:
```json
{
  "learnings": [
    {
      "date": "2024-12-28",
      "session_id": "WFS-auth-feature",
      "insight": "Cache invalidation requires event sourcing for consistency",
      "category": "architecture"
    }
  ]
}
```

## Integration with Planning

The `project-guidelines.json` is consumed by:

1. **`/workflow:tools:context-gather`**: Loads guidelines into context-package.json
2. **`/workflow:plan`**: Passes guidelines to task generation agent
3. **`task-generate-agent`**: Includes guidelines as "CRITICAL CONSTRAINTS" in system prompt

This ensures all future planning respects solidified rules without users needing to re-state them.

## Error Handling

- **Duplicate Rule**: Warn and skip if the exact rule already exists
- **Invalid Category**: Suggest valid categories for the type
- **File Corruption**: Back up the existing file before modification

## Related Commands

- `/workflow:session:start` - Start a session (may prompt for solidify at end)
- `/workflow:session:complete` - Complete session (prompts for learnings to solidify)
- `/workflow:init` - Creates project-guidelines.json scaffold if missing

@@ -38,26 +38,29 @@ ERROR: Invalid session type. Valid types: workflow, review, tdd, test, docs

 ## Step 0: Initialize Project State (First-time Only)

-**Executed before all modes** - Ensures project-level state file exists by calling `/workflow:init`.
+**Executed before all modes** - Ensures project-level state files exist by calling `/workflow:init`.

 ### Check and Initialize
 ```bash
-# Check if project state exists
-bash(test -f .workflow/project.json && echo "EXISTS" || echo "NOT_FOUND")
+# Check if project state exists (both files required)
+bash(test -f .workflow/project-tech.json && echo "TECH_EXISTS" || echo "TECH_NOT_FOUND")
+bash(test -f .workflow/project-guidelines.json && echo "GUIDELINES_EXISTS" || echo "GUIDELINES_NOT_FOUND")
 ```

-**If NOT_FOUND**, delegate to `/workflow:init`:
+**If either NOT_FOUND**, delegate to `/workflow:init`:
 ```javascript
 // Call workflow:init for intelligent project analysis
 SlashCommand({command: "/workflow:init"});

 // Wait for init completion
-// project.json will be created with comprehensive project overview
+// project-tech.json and project-guidelines.json will be created
 ```

 **Output**:
-- If EXISTS: `PROJECT_STATE: initialized`
-- If NOT_FOUND: Calls `/workflow:init` → creates `.workflow/project.json` with full project analysis
+- If BOTH_EXIST: `PROJECT_STATE: initialized`
+- If NOT_FOUND: Calls `/workflow:init` → creates:
+  - `.workflow/project-tech.json` with full technical analysis
+  - `.workflow/project-guidelines.json` with empty scaffold

 **Note**: `/workflow:init` uses cli-explore-agent to build comprehensive project understanding (technology stack, architecture, key components). This step runs once per project. Subsequent executions skip initialization.

@@ -124,6 +124,9 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`

 ## Analysis Steps

+### 0. Load Output Schema (MANDATORY)
+Execute: cat ~/.claude/workflows/cli-templates/schemas/conflict-resolution-schema.json
+
 ### 1. Load Context
 - Read existing files from conflict_detection.existing_files
 - Load plan from .workflow/active/{session_id}/.process/context-package.json

@@ -171,123 +174,14 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`

 ⚠️ Output to conflict-resolution.json (generated in Phase 4)

-Return JSON format for programmatic processing:
+**Schema Reference**: Execute \`cat ~/.claude/workflows/cli-templates/schemas/conflict-resolution-schema.json\` to get full schema

-\`\`\`json
-{
-  "conflicts": [
-    {
-      "id": "CON-001",
-      "brief": "一行中文冲突摘要",
-      "severity": "Critical|High|Medium",
-      "category": "Architecture|API|Data|Dependency|ModuleOverlap",
-      "affected_files": [
-        ".workflow/active/{session}/.brainstorm/guidance-specification.md",
-        ".workflow/active/{session}/.brainstorm/system-architect/analysis.md"
-      ],
-      "description": "详细描述冲突 - 什么不兼容",
-      "impact": {
-        "scope": "影响的模块/组件",
-        "compatibility": "Yes|No|Partial",
-        "migration_required": true|false,
-        "estimated_effort": "人天估计"
-      },
-      "overlap_analysis": {
-        "// NOTE": "仅当 category=ModuleOverlap 时需要此字段",
-        "new_module": {
-          "name": "新模块名称",
-          "scenarios": ["场景1", "场景2", "场景3"],
-          "responsibilities": "职责描述"
-        },
-        "existing_modules": [
-          {
-            "file": "src/existing/module.ts",
-            "name": "现有模块名称",
-            "scenarios": ["场景A", "场景B"],
-            "overlap_scenarios": ["重叠场景1", "重叠场景2"],
-            "responsibilities": "现有模块职责"
-          }
-        ]
-      },
-      "strategies": [
-        {
-          "name": "策略名称(中文)",
-          "approach": "实现方法简述",
-          "complexity": "Low|Medium|High",
-          "risk": "Low|Medium|High",
-          "effort": "时间估计",
-          "pros": ["优点1", "优点2"],
-          "cons": ["缺点1", "缺点2"],
-          "clarification_needed": [
-            "// NOTE: 仅当需要用户进一步澄清时需要此字段(尤其是 ModuleOverlap)",
-            "新模块的核心职责边界是什么?",
-            "如何与现有模块 X 协作?",
-            "哪些场景应该由新模块处理?"
-          ],
-          "modifications": [
-            {
-              "file": ".workflow/active/{session}/.brainstorm/guidance-specification.md",
-              "section": "## 2. System Architect Decisions",
-              "change_type": "update",
-              "old_content": "原始内容片段(用于定位)",
-              "new_content": "修改后的内容",
-              "rationale": "为什么这样改"
-            },
-            {
-              "file": ".workflow/active/{session}/.brainstorm/system-architect/analysis.md",
-              "section": "## Design Decisions",
-              "change_type": "update",
-              "old_content": "原始内容片段",
-              "new_content": "修改后的内容",
-              "rationale": "修改理由"
-            }
-          ]
-        },
-        {
-          "name": "策略2名称",
-          "approach": "...",
-          "complexity": "Medium",
-          "risk": "Low",
-          "effort": "1-2天",
-          "pros": ["优点"],
-          "cons": ["缺点"],
-          "modifications": [...]
-        }
-      ],
-      "recommended": 0,
-      "modification_suggestions": [
-        "建议1:具体的修改方向或注意事项",
-        "建议2:可能需要考虑的边界情况",
-        "建议3:相关的最佳实践或模式"
-      ]
-    }
-  ],
-  "summary": {
-    "total": 2,
-    "critical": 1,
-    "high": 1,
-    "medium": 0
-  }
-}
-\`\`\`

-⚠️ CRITICAL Requirements for modifications field:
-- old_content: Must be exact text from target file (20-100 chars for unique match)
-- new_content: Complete replacement text (maintains formatting)
-- change_type: "update" (replace), "add" (insert), "remove" (delete)
-- file: Full path relative to project root
-- section: Markdown heading for context (helps locate position)
+Return JSON following the schema above. Key requirements:
+- Minimum 2 strategies per conflict, max 4
+- All text in Chinese for user-facing fields (brief, name, pros, cons)
+- modification_suggestions: 2-5 actionable suggestions for custom handling (Chinese)

-Quality Standards:
-- Each strategy must have actionable modifications
-- old_content must be precise enough for Edit tool matching
-- new_content preserves markdown formatting and structure
-- Recommended strategy (index) based on lowest complexity + risk
-- modification_suggestions must be specific, actionable, and context-aware
-- Each suggestion should address a specific aspect (compatibility, migration, testing, etc.)
-- All text in Chinese for user-facing fields (brief, name, pros, cons, modification_suggestions)
-- modifications.old_content: 20-100 chars for unique Edit tool matching
-- modifications.new_content: preserves markdown formatting
-- modification_suggestions: 2-5 actionable suggestions for custom handling
 `)
 ```

@@ -312,143 +206,85 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`

 8. Return execution log path
 ```

-### Phase 3: Iterative User Interaction with Clarification Loop
+### Phase 3: User Interaction Loop

 **Execution Flow**:
-```
-FOR each conflict (逐个处理,无数量限制):
-  clarified = false
-  round = 0
-  userClarifications = []
+```javascript
+FOR each conflict:
+  round = 0, clarified = false, userClarifications = []

-  WHILE (!clarified && round < 10):
-    round++
+  WHILE (!clarified && round++ < 10):
+    // 1. Display conflict info (text output for context)
+    displayConflictSummary(conflict)  // id, brief, severity, overlap_analysis if ModuleOverlap

-    // 1. Display conflict (包含所有关键字段)
-    - category, id, brief, severity, description
-    - IF ModuleOverlap: 展示 overlap_analysis
-      * new_module: {name, scenarios, responsibilities}
-      * existing_modules[]: {file, name, scenarios, overlap_scenarios, responsibilities}
+    // 2. Strategy selection via AskUserQuestion
+    AskUserQuestion({
+      questions: [{
+        question: formatStrategiesForDisplay(conflict.strategies),
+        header: "策略选择",
+        multiSelect: false,
+        options: [
+          ...conflict.strategies.map((s, i) => ({
+            label: `${s.name}${i === conflict.recommended ? ' (推荐)' : ''}`,
+            description: `${s.complexity}复杂度 | ${s.risk}风险${s.clarification_needed?.length ? ' | ⚠️需澄清' : ''}`
+          })),
+          { label: "自定义修改", description: `建议: ${conflict.modification_suggestions?.slice(0,2).join('; ')}` }
+        ]
+      }]
+    })

-    // 2. Display strategies (2-4个策略 + 自定义选项)
-    - FOR each strategy: {name, approach, complexity, risk, effort, pros, cons}
-      * IF clarification_needed: 展示待澄清问题列表
-    - 自定义选项: {suggestions: modification_suggestions[]}
+    // 3. Handle selection
+    if (userChoice === "自定义修改") {
+      customConflicts.push({ id, brief, category, suggestions, overlap_analysis })
+      break
+    }

-    // 3. User selects strategy
-    userChoice = readInput()
+    selectedStrategy = findStrategyByName(userChoice)

-    IF userChoice == "自定义":
-      customConflicts.push({id, brief, category, suggestions, overlap_analysis})
-      clarified = true
-      BREAK
+    // 4. Clarification (if needed) - batched max 4 per call
+    if (selectedStrategy.clarification_needed?.length > 0) {
+      for (batch of chunk(selectedStrategy.clarification_needed, 4)) {
+        AskUserQuestion({
+          questions: batch.map((q, i) => ({
+            question: q, header: `澄清${i+1}`, multiSelect: false,
+            options: [{ label: "详细说明", description: "提供答案" }]
+          }))
+        })
+        userClarifications.push(...collectAnswers(batch))
+      }

-    selectedStrategy = strategies[userChoice]

-    // 4. Clarification loop
-    IF selectedStrategy.clarification_needed.length > 0:
-      // 收集澄清答案
-      FOR each question:
-        answer = readInput()
-        userClarifications.push({question, answer})

-      // Agent 重新分析
-      reanalysisResult = Task(cli-execution-agent, prompt={
-        冲突信息: {id, brief, category, 策略}
-        用户澄清: userClarifications[]
-        场景分析: overlap_analysis (if ModuleOverlap)

-        输出: {
-          uniqueness_confirmed: bool,
-          rationale: string,
-          updated_strategy: {name, approach, complexity, risk, effort, modifications[]},
-          remaining_questions: [] (如果仍有歧义)
-        }
+      // 5. Agent re-analysis
+      reanalysisResult = Task({
+        subagent_type: "cli-execution-agent",
+        run_in_background: false,
+        prompt: `Conflict: ${conflict.id}, Strategy: ${selectedStrategy.name}
+          User Clarifications: ${JSON.stringify(userClarifications)}
+          Output: { uniqueness_confirmed, rationale, updated_strategy, remaining_questions }`
+      })

-      IF reanalysisResult.uniqueness_confirmed:
-        selectedStrategy = updated_strategy
-        selectedStrategy.clarifications = userClarifications
+      if (reanalysisResult.uniqueness_confirmed) {
+        selectedStrategy = { ...reanalysisResult.updated_strategy, clarifications: userClarifications }
         clarified = true
-      ELSE:
-        // 更新澄清问题,继续下一轮
-        selectedStrategy.clarification_needed = remaining_questions
-      ELSE:
+      } else {
+        selectedStrategy.clarification_needed = reanalysisResult.remaining_questions
+      }
+    } else {
+      clarified = true
+    }

-    resolvedConflicts.push({conflict, strategy: selectedStrategy})
+    if (clarified) resolvedConflicts.push({ conflict, strategy: selectedStrategy })
 END WHILE
 END FOR

 // Build output
 selectedStrategies = resolvedConflicts.map(r => ({
-  conflict_id, strategy, clarifications[]
+  conflict_id: r.conflict.id, strategy: r.strategy, clarifications: r.strategy.clarifications || []
 }))
 ```

 **Key Data Structures**:

 ```javascript
 // Custom conflict tracking
 customConflicts[] = {
   id, brief, category,
   suggestions: modification_suggestions[],
   overlap_analysis: { new_module{}, existing_modules[] }  // ModuleOverlap only
 }

 // Agent re-analysis prompt output
 {
   uniqueness_confirmed: bool,
   rationale: string,
   updated_strategy: {
     name, approach, complexity, risk, effort,
     modifications: [{file, section, change_type, old_content, new_content, rationale}]
   },
   remaining_questions: string[]
 }
 ```

 **Text Output Example** (展示关键字段):

 ```markdown
 ============================================================
 冲突 1/3 - 第 1 轮
 ============================================================
 【ModuleOverlap】CON-001: 新增用户认证服务与现有模块功能重叠
 严重程度: High | 描述: 计划中的 UserAuthService 与现有 AuthManager 场景重叠

 --- 场景重叠分析 ---
 新模块: UserAuthService | 场景: 登录, Token验证, 权限, MFA
 现有模块: AuthManager (src/auth/AuthManager.ts) | 重叠: 登录, Token验证

 --- 解决策略 ---
 1) 合并 (Low复杂度 | Low风险 | 2-3天)
    ⚠️ 需澄清: AuthManager是否能承担MFA?

 2) 拆分边界 (Medium复杂度 | Medium风险 | 4-5天)
    ⚠️ 需澄清: 基础/高级认证边界? Token验证归谁?

 3) 自定义修改
    建议: 评估扩展性; 策略模式分离; 定义接口边界

 请选择 (1-3): > 2

 --- 澄清问答 (第1轮) ---
 Q: 基础/高级认证边界?
 A: 基础=密码登录+token验证, 高级=MFA+OAuth+SSO

 Q: Token验证归谁?
 A: 统一由 AuthManager 负责

 🔄 重新分析...
 ✅ 唯一性已确认 | 理由: 边界清晰 - AuthManager(基础+token), UserAuthService(MFA+OAuth+SSO)

 ============================================================
 冲突 2/3 - 第 1 轮 [下一个冲突]
 ============================================================
 ```

-**Loop Characteristics**: 逐个处理 | 无限轮次(max 10) | 动态问题生成 | Agent重新分析判断唯一性 | ModuleOverlap场景边界澄清
+**Key Points**:
+- AskUserQuestion: max 4 questions/call, batch if more
+- Strategy options: 2-4 strategies + "自定义修改"
+- Clarification loop: max 10 rounds, agent判断 uniqueness_confirmed
+- Custom conflicts: 记录 overlap_analysis 供后续手动处理
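
The batching rule above (max 4 questions per AskUserQuestion call) relies on a `chunk` helper that is not defined in this document; a minimal sketch:

```javascript
// Minimal chunk helper assumed by the clarification batching above.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

console.log(chunk(['q1', 'q2', 'q3', 'q4', 'q5'], 4));
// → [['q1','q2','q3','q4'], ['q5']]
```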

### Phase 4: Apply Modifications

@@ -15,7 +15,6 @@ allowed-tools: Task(*), Read(*), Glob(*)

 Orchestrator command that invokes `context-search-agent` to gather comprehensive project context for implementation planning. Generates standardized `context-package.json` with codebase analysis, dependencies, and conflict detection.

 **Agent**: `context-search-agent` (`.claude/agents/context-search-agent.md`)

 ## Core Philosophy

@@ -237,7 +236,10 @@ Task(
 Execute complete context-search-agent workflow for implementation planning:

 ### Phase 1: Initialization & Pre-Analysis
-1. **Project State Loading**: Read and parse `.workflow/project.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components. If file doesn't exist, proceed with fresh analysis.
+1. **Project State Loading**:
+   - Read and parse `.workflow/project-tech.json`. Use its `technology_analysis` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components.
+   - Read and parse `.workflow/project-guidelines.json`. Load `conventions`, `constraints`, and `learnings` into a `project_guidelines` section.
+   - If files don't exist, proceed with fresh analysis.
 2. **Detection**: Check for existing context-package (early exit if valid)
 3. **Foundation**: Initialize CodexLens, get project structure, load docs
 4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state

@@ -252,17 +254,19 @@ Execute all discovery tracks:

 ### Phase 3: Synthesis, Assessment & Packaging
 1. Apply relevance scoring and build dependency graph
-2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project.json`** for architecture and tech stack unless code analysis reveals it's outdated.
-3. **Populate `project_context`**: Directly use the `overview` from `project.json` to fill the `project_context` section of the output `context-package.json`. Include description, technology_stack, architecture, and key_components.
-4. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
-5. Perform conflict detection with risk assessment
-6. **Inject historical conflicts** from archive analysis into conflict_detection
-7. Generate and validate context-package.json
+2. **Synthesize 4-source data**: Merge findings from all sources (archive > docs > code > web). **Prioritize the context from `project-tech.json`** for architecture and tech stack unless code analysis reveals it's outdated.
+3. **Populate `project_context`**: Directly use the `technology_analysis` from `project-tech.json` to fill the `project_context` section. Include description, technology_stack, architecture, and key_components.
+4. **Populate `project_guidelines`**: Load conventions, constraints, and learnings from `project-guidelines.json` into a dedicated section.
+5. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
+6. Perform conflict detection with risk assessment
+7. **Inject historical conflicts** from archive analysis into conflict_detection
+8. Generate and validate context-package.json

 ## Output Requirements
 Complete context-package.json with:
 - **metadata**: task_description, keywords, complexity, tech_stack, session_id
-- **project_context**: description, technology_stack, architecture, key_components (sourced from `project.json` overview)
+- **project_context**: description, technology_stack, architecture, key_components (sourced from `project-tech.json`)
+- **project_guidelines**: {conventions, constraints, quality_rules, learnings} (sourced from `project-guidelines.json`)
 - **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
 - **dependencies**: {internal[], external[]} with dependency graph
 - **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
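
A minimal sketch of how the `project_guidelines` section described above could be assembled from the parsed guidelines file (function name is illustrative; it assumes the file has already been read and parsed):

```javascript
// Illustrative: map the parsed guidelines file into the context-package section.
function buildProjectGuidelinesSection(guidelines) {
  return {
    conventions: guidelines.conventions || {},
    constraints: guidelines.constraints || {},
    quality_rules: guidelines.quality_rules || {},
    learnings: guidelines.learnings || []
  };
}

const demo = buildProjectGuidelinesSection({
  conventions: { coding_style: ['Use async/await instead of callbacks'] },
  learnings: []
});
console.log(Object.keys(demo)); // [ 'conventions', 'constraints', 'quality_rules', 'learnings' ]
```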

@@ -315,7 +319,8 @@ Refer to `context-search-agent.md` Phase 3.7 for complete `context-package.json`

 **Key Sections**:
 - **metadata**: Session info, keywords, complexity, tech stack
-- **project_context**: Architecture patterns, conventions, tech stack (populated from `project.json` overview)
+- **project_context**: Architecture patterns, conventions, tech stack (populated from `project-tech.json`)
+- **project_guidelines**: Conventions, constraints, quality rules, learnings (populated from `project-guidelines.json`)
 - **assets**: Categorized files with relevance scores (documentation, source_code, config, tests)
 - **dependencies**: Internal and external dependency graphs
 - **brainstorm_artifacts**: Brainstorm documents with full content (if exists)

@@ -430,7 +435,7 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {
 ## Notes

 - **Detection-first**: Always check for existing package before invoking agent
-- **Project.json integration**: Agent reads `.workflow/project.json` as primary source for project context, avoiding redundant analysis
-- **Agent autonomy**: Agent handles all discovery logic per `.claude/agents/context-search-agent.md`
+- **Dual project file integration**: Agent reads both `.workflow/project-tech.json` (tech analysis) and `.workflow/project-guidelines.json` (user constraints) as primary sources
+- **Guidelines injection**: Project guidelines are included in context-package to ensure task generation respects user-defined constraints
 - **No redundancy**: This command is a thin orchestrator, all logic in agent
 - **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call

@@ -354,19 +354,20 @@ Generate task JSON files for ${module.name} module within workflow session

-IMPORTANT: This is PLANNING ONLY - generate task JSONs, NOT implementing code.
+IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md by Phase 3 Coordinator.

-CRITICAL: Follow progressive loading strategy in agent specification
+CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)

 ## MODULE SCOPE
 - Module: ${module.name} (${module.type})
 - Focus Paths: ${module.paths.join(', ')}
 - Task ID Prefix: IMPL-${module.prefix}
-- Task Limit: ≤9 tasks
-- Other Modules: ${otherModules.join(', ')}
+- Task Limit: ≤9 tasks (hard limit for this module)
+- Other Modules: ${otherModules.join(', ')} (reference only, do NOT generate tasks for them)

 ## SESSION PATHS
 Input:
 - Session Metadata: .workflow/active/{session-id}/workflow-session.json
 - Context Package: .workflow/active/{session-id}/.process/context-package.json

 Output:
 - Task Dir: .workflow/active/{session-id}/.task/

@@ -374,21 +375,93 @@ Output:
 Session ID: {session-id}
 MCP Capabilities: {exa_code, exa_web, code_index}

+## USER CONFIGURATION (from Phase 0)
+Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
+Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
+Supplementary Materials: ${userConfig.supplementaryMaterials}
+
+## CLI TOOL SELECTION
+Based on userConfig.executionMethod:
+- "agent": No command field in implementation_approach steps
+- "hybrid": Add command field to complex steps only (agent handles simple steps)
+- "cli": Add command field to ALL implementation_approach steps
+
+CLI Resume Support (MANDATORY for all CLI commands):
+- Use --resume parameter to continue from previous task execution
+- Read previous task's cliExecutionId from session state
+- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
+
+## EXPLORATION CONTEXT (from context-package.exploration_results)
+- Load exploration_results from context-package.json
+- Filter for ${module.name} module: Use aggregated_insights.critical_files matching ${module.paths.join(', ')}
+- Apply module-relevant constraints from aggregated_insights.constraints
+- Reference aggregated_insights.all_patterns applicable to ${module.name}
+- Use aggregated_insights.all_integration_points for precise modification locations within module scope
+- Use conflict_indicators for risk-aware task sequencing
+
+## CONFLICT RESOLUTION CONTEXT (if exists)
+- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
+- If exists, load .process/conflict-resolution.json:
+  - Apply planning_constraints relevant to ${module.name} as task constraints
+  - Reference resolved_conflicts affecting ${module.name} for implementation approach alignment
+  - Handle custom_conflicts with explicit task notes

 ## CROSS-MODULE DEPENDENCIES
-- Use placeholder: depends_on: ["CROSS::{module}::{pattern}"]
-- Example: depends_on: ["CROSS::B::api-endpoint"]
+- For dependencies ON other modules: Use placeholder depends_on: ["CROSS::{module}::{pattern}"]
+- Example: depends_on: ["CROSS::B::api-endpoint"] (this module depends on B's api-endpoint task)
 - Phase 3 Coordinator resolves to actual task IDs
+- For dependencies FROM other modules: Document in task context as "provides_for" annotation

 ## EXPECTED DELIVERABLES
 Task JSON Files (.task/IMPL-${module.prefix}*.json):
-- 6-field schema per agent specification
+- 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
 - Task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
 - Quantified requirements with explicit counts
 - Artifacts integration from context package (filtered for ${module.name})
 - **focus_paths enhanced with exploration critical_files (module-scoped)**
 - Flow control with pre_analysis steps (include exploration integration_points analysis)
+- **CLI Execution IDs and strategies (MANDATORY)**
 - Focus ONLY on ${module.name} module scope

+## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
+Each task JSON MUST include:
+- **cli_execution_id**: Unique ID for CLI execution (format: `{session_id}-IMPL-${module.prefix}{seq}`)
+- **cli_execution**: Strategy object based on depends_on:
+  - No deps → `{ "strategy": "new" }`
+  - 1 dep (single child) → `{ "strategy": "resume", "resume_from": "parent-cli-id" }`
+  - 1 dep (multiple children) → `{ "strategy": "fork", "resume_from": "parent-cli-id" }`
+  - N deps → `{ "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }`
+  - Cross-module dep → `{ "strategy": "cross_module_fork", "resume_from": "CROSS::{module}::{pattern}" }`
+
+**CLI Execution Strategy Rules**:
+1. **new**: Task has no dependencies - starts fresh CLI conversation
+2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
+3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
+4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation
+5. **cross_module_fork**: Task depends on task from another module - Phase 3 resolves placeholder
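
The five strategy rules above can be sketched as a selection function (illustrative only; `childCountOf` stands in for a lookup over the full task dependency graph, which the generating agent is assumed to have):

```javascript
// Illustrative sketch of the CLI execution strategy rules.
// childCountOf(parentId) returns how many tasks list parentId in depends_on.
function selectCliStrategy(task, childCountOf) {
  const deps = task.depends_on || [];
  if (deps.length === 0) return { strategy: 'new' };                       // rule 1
  if (deps.length > 1) return { strategy: 'merge_fork', merge_from: deps }; // rule 4
  const parent = deps[0];
  if (parent.startsWith('CROSS::')) {                                      // rule 5
    return { strategy: 'cross_module_fork', resume_from: parent };
  }
  return childCountOf(parent) > 1
    ? { strategy: 'fork', resume_from: parent }                            // rule 3
    : { strategy: 'resume', resume_from: parent };                         // rule 2
}
```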

+**Execution Command Patterns**:
+- new: `ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]`
+- resume: `ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write`
+- fork: `ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write`
+- merge_fork: `ccw cli -p "[prompt]" --resume [merge_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write`
+- cross_module_fork: (Phase 3 resolves placeholder, then uses fork pattern)
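
A sketch assembling the `ccw` command string from the patterns above (function name is illustrative; it assumes the `cli_execution_id` and `cli_execution` fields defined earlier):

```javascript
// Illustrative: build the ccw invocation for a task per the command patterns above.
function buildCliCommand(task, prompt, tool) {
  const exec = task.cli_execution;
  switch (exec.strategy) {
    case 'new':
      return `ccw cli -p "${prompt}" --tool ${tool} --mode write --id ${task.cli_execution_id}`;
    case 'resume':
      return `ccw cli -p "${prompt}" --resume ${exec.resume_from} --tool ${tool} --mode write`;
    case 'fork':
      return `ccw cli -p "${prompt}" --resume ${exec.resume_from} --id ${task.cli_execution_id} --tool ${tool} --mode write`;
    case 'merge_fork':
      return `ccw cli -p "${prompt}" --resume ${exec.merge_from.join(',')} --id ${task.cli_execution_id} --tool ${tool} --mode write`;
    default:
      // cross_module_fork: CROSS:: placeholder must be resolved by Phase 3 first
      throw new Error(`Unresolved strategy: ${exec.strategy}`);
  }
}
```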

+## QUALITY STANDARDS
+Hard Constraints:
+- Task count <= 9 for this module (hard limit - coordinate with Phase 3 if exceeded)
+- All requirements quantified (explicit counts and enumerated lists)
+- Acceptance criteria measurable (include verification commands)
+- Artifact references mapped from context package (module-scoped filter)
+- Focus paths use absolute paths or clear relative paths from project root
+- Cross-module dependencies use CROSS:: placeholder format
+
 ## SUCCESS CRITERIA
 - Task JSONs saved to .task/ with IMPL-${module.prefix}* naming
-- Cross-module dependencies use CROSS:: placeholder format
-- Return task count and brief summary
+- All task JSONs include cli_execution_id and cli_execution strategy
+- Cross-module dependencies use CROSS:: placeholder format consistently
+- Focus paths scoped to ${module.paths.join(', ')} only
+- Return: task count, task IDs, dependency summary (internal + cross-module)
 `
 )
 );
@@ -239,15 +239,6 @@ If conflict_risk was medium/high, modifications have been applied to:

 **Agent Configuration Reference**: All TDD task generation rules, quantification requirements, Red-Green-Refactor cycle structure, quality standards, and execution details are defined in action-planning-agent.

-Refer to: @.claude/agents/action-planning-agent.md for:
-- TDD Task Decomposition Standards
-- Red-Green-Refactor Cycle Requirements
-- Quantification Requirements (MANDATORY)
-- 5-Field Task JSON Schema
-- IMPL_PLAN.md Structure (TDD variant)
-- TODO_LIST.md Format
-- TDD Execution Flow & Quality Validation

 ### TDD-Specific Requirements Summary

 #### Task Structure Philosophy

@@ -14,7 +14,7 @@ allowed-tools: Task(*), Read(*), Glob(*)

 Orchestrator command that invokes `test-context-search-agent` to gather comprehensive test coverage context for test generation workflows. Generates standardized `test-context-package.json` with coverage analysis, framework detection, and source implementation context.

 **Agent**: `test-context-search-agent` (`.claude/agents/test-context-search-agent.md`)

 ## Core Philosophy

@@ -89,7 +89,6 @@ Task(
 run_in_background=false,
 description="Gather test coverage context",
 prompt=`
 You are executing as test-context-search-agent (.claude/agents/test-context-search-agent.md).

 ## Execution Mode
 **PLAN MODE** (Comprehensive) - Full Phase 1-3 execution

@@ -229,7 +228,7 @@ Refer to `test-context-search-agent.md` Phase 3.2 for complete `test-context-pac
 ## Notes

 - **Detection-first**: Always check for existing test-context-package before invoking agent
 - **Agent autonomy**: Agent handles all coverage analysis logic per `.claude/agents/test-context-search-agent.md`
 - **No redundancy**: This command is a thin orchestrator, all logic in agent
 - **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, Go testing, etc.
 - **Coverage focus**: Primary goal is identifying implementation files without tests

@@ -107,8 +107,6 @@ CRITICAL:
 - Follow the progressive loading strategy defined in your agent specification (load context incrementally from memory-first approach)

 ## AGENT CONFIGURATION REFERENCE
 All test task generation rules, schemas, and quality standards are defined in your agent specification:
 @.claude/agents/action-planning-agent.md

 Refer to your specification for:
 - Test Task JSON Schema (6-field structure with test-specific metadata)
||||
.claude/skills/_shared/mermaid-utils.md (new file, 584 lines)
@@ -0,0 +1,584 @@
# Mermaid Utilities Library

Shared utilities for generating and validating Mermaid diagrams across all analysis skills.

## Sanitization Functions

### sanitizeId

Convert any text to a valid Mermaid node ID.

```javascript
/**
 * Sanitize text to a valid Mermaid node ID
 * - Only alphanumeric and underscore allowed
 * - Cannot start with a number
 * - Truncates to 50 chars max
 *
 * @param {string} text - Input text
 * @returns {string} - Valid Mermaid ID
 */
function sanitizeId(text) {
  if (!text) return '_empty';
  return text
    .replace(/[^a-zA-Z0-9_\u4e00-\u9fa5]/g, '_') // Allow Chinese chars
    .replace(/^[0-9]/, '_$&')                    // Prefix leading digit with _
    .replace(/_+/g, '_')                         // Collapse multiple _
    .substring(0, 50);                           // Limit length
}

// Examples:
// sanitizeId("User-Service") → "User_Service"
// sanitizeId("3rdParty")     → "_3rdParty"
// sanitizeId("用户服务")     → "用户服务"
```

### escapeLabel

Escape special characters for Mermaid labels.

```javascript
/**
 * Escape special characters in Mermaid labels
 * Uses HTML entity encoding for problematic chars
 *
 * @param {string} text - Label text
 * @returns {string} - Escaped label
 */
function escapeLabel(text) {
  if (!text) return '';
  return text
    .replace(/"/g, "'")      // Avoid quote issues
    .replace(/\(/g, '#40;')  // (
    .replace(/\)/g, '#41;')  // )
    .replace(/\{/g, '#123;') // {
    .replace(/\}/g, '#125;') // }
    .replace(/\[/g, '#91;')  // [
    .replace(/\]/g, '#93;')  // ]
    .replace(/</g, '#60;')   // <
    .replace(/>/g, '#62;')   // >
    .replace(/\|/g, '#124;') // |
    .substring(0, 80);       // Limit length
}

// Examples:
// escapeLabel("Process(data)")  → "Process#40;data#41;"
// escapeLabel("Check {valid?}") → "Check #123;valid?#125;"
```

### sanitizeType

Sanitize type names for class diagrams.

```javascript
/**
 * Sanitize type names for Mermaid classDiagram
 * Removes generics syntax that causes issues
 *
 * @param {string} type - Type name
 * @returns {string} - Sanitized type
 */
function sanitizeType(type) {
  if (!type) return 'any';
  return type
    .replace(/<[^>]*>/g, '')      // Remove generics <T>
    .replace(/\s*\|\s*/g, ' or ') // Union types (consume surrounding spaces)
    .replace(/\s*&\s*/g, ' and ') // Intersection types
    .replace(/\[\]/g, 'Array')    // Array notation
    .substring(0, 30);
}

// Examples:
// sanitizeType("Array<string>")   → "Array"
// sanitizeType("string | number") → "string or number"
```

## Diagram Generation Functions

### generateFlowchartNode

Generate a flowchart node with the proper shape.

```javascript
/**
 * Generate flowchart node with shape
 *
 * @param {string} id - Node ID
 * @param {string} label - Display label
 * @param {string} type - Node type: start|end|process|decision|io|subroutine
 * @returns {string} - Mermaid node definition
 */
function generateFlowchartNode(id, label, type = 'process') {
  const safeId = sanitizeId(id);
  const safeLabel = escapeLabel(label);

  const shapes = {
    start: `${safeId}(["${safeLabel}"])`,      // Stadium shape
    end: `${safeId}(["${safeLabel}"])`,        // Stadium shape
    process: `${safeId}["${safeLabel}"]`,      // Rectangle
    decision: `${safeId}{"${safeLabel}"}`,     // Diamond
    io: `${safeId}[/"${safeLabel}"/]`,         // Parallelogram
    subroutine: `${safeId}[["${safeLabel}"]]`, // Subroutine
    database: `${safeId}[("${safeLabel}")]`,   // Cylinder
    manual: `${safeId}[/"${safeLabel}"\\]`     // Trapezoid
  };

  return shapes[type] || shapes.process;
}
```
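As a quick, self-contained smoke test of the shape table above (the helper functions are repeated verbatim so the snippet runs on its own):

```javascript
// Helpers copied from the sections above so this demo is standalone
function sanitizeId(text) {
  if (!text) return '_empty';
  return text
    .replace(/[^a-zA-Z0-9_\u4e00-\u9fa5]/g, '_')
    .replace(/^[0-9]/, '_$&')
    .replace(/_+/g, '_')
    .substring(0, 50);
}

function escapeLabel(text) {
  if (!text) return '';
  return text
    .replace(/"/g, "'")
    .replace(/\(/g, '#40;').replace(/\)/g, '#41;')
    .replace(/\{/g, '#123;').replace(/\}/g, '#125;')
    .replace(/\[/g, '#91;').replace(/\]/g, '#93;')
    .replace(/</g, '#60;').replace(/>/g, '#62;')
    .replace(/\|/g, '#124;')
    .substring(0, 80);
}

function generateFlowchartNode(id, label, type = 'process') {
  const safeId = sanitizeId(id);
  const safeLabel = escapeLabel(label);
  const shapes = {
    start: `${safeId}(["${safeLabel}"])`,
    end: `${safeId}(["${safeLabel}"])`,
    process: `${safeId}["${safeLabel}"]`,
    decision: `${safeId}{"${safeLabel}"}`,
    io: `${safeId}[/"${safeLabel}"/]`,
    subroutine: `${safeId}[["${safeLabel}"]]`,
    database: `${safeId}[("${safeLabel}")]`,
    manual: `${safeId}[/"${safeLabel}"\\]`
  };
  return shapes[type] || shapes.process;
}

// Hyphens in IDs become underscores; unknown types fall back to process
console.log(generateFlowchartNode('validate-input', 'Validate input'));
// → validate_input["Validate input"]
console.log(generateFlowchartNode('check_user', 'User exists?', 'decision'));
// → check_user{"User exists?"}
```

Note that the label is sanitized independently of the ID, so a human-readable label survives even when the ID is heavily rewritten.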

### generateFlowchartEdge

Generate a flowchart edge with an optional label.

```javascript
/**
 * Generate flowchart edge
 *
 * @param {string} from - Source node ID
 * @param {string} to - Target node ID
 * @param {string} label - Edge label (optional)
 * @param {string} style - Edge style: solid|dashed|thick
 * @returns {string} - Mermaid edge definition
 */
function generateFlowchartEdge(from, to, label = '', style = 'solid') {
  const safeFrom = sanitizeId(from);
  const safeTo = sanitizeId(to);
  const safeLabel = label ? `|"${escapeLabel(label)}"|` : '';

  const arrows = {
    solid: '-->',
    dashed: '-.->',
    thick: '==>'
  };

  const arrow = arrows[style] || arrows.solid;
  return `  ${safeFrom} ${arrow}${safeLabel} ${safeTo}`;
}
```
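A small isolated sketch of edge generation. Since `escapeLabel` is only invoked for labeled edges, unlabeled edges need only `sanitizeId`, so that is the one helper repeated here:

```javascript
function sanitizeId(text) {
  if (!text) return '_empty';
  return text
    .replace(/[^a-zA-Z0-9_\u4e00-\u9fa5]/g, '_')
    .replace(/^[0-9]/, '_$&')
    .replace(/_+/g, '_')
    .substring(0, 50);
}

function generateFlowchartEdge(from, to, label = '', style = 'solid') {
  const safeFrom = sanitizeId(from);
  const safeTo = sanitizeId(to);
  // escapeLabel is only reached when a label is passed
  const safeLabel = label ? `|"${escapeLabel(label)}"|` : '';
  const arrows = { solid: '-->', dashed: '-.->', thick: '==>' };
  const arrow = arrows[style] || arrows.solid;
  return `  ${safeFrom} ${arrow}${safeLabel} ${safeTo}`;
}

console.log(generateFlowchartEdge('start', 'check-user'));
// →   start --> check_user
console.log(generateFlowchartEdge('a', 'b', '', 'dashed'));
// →   a -.-> b
```

An unknown `style` silently falls back to a solid arrow, which keeps generated diagrams renderable even when callers pass bad data.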

### generateAlgorithmFlowchart (Enhanced)

Generate an algorithm flowchart with branch/loop support.

```javascript
/**
 * Generate algorithm flowchart with decision support
 *
 * @param {Object} algorithm - Algorithm definition
 *   - name: Algorithm name
 *   - inputs: [{name, type}]
 *   - outputs: [{name, type}]
 *   - steps: [{id, description, type, next: [id], conditions: [text]}]
 * @returns {string} - Complete Mermaid flowchart
 */
function generateAlgorithmFlowchart(algorithm) {
  let mermaid = 'flowchart TD\n';

  // Start node
  mermaid += `  START(["开始: ${escapeLabel(algorithm.name)}"])\n`;

  // Input node (if the algorithm has inputs)
  if (algorithm.inputs?.length > 0) {
    const inputList = algorithm.inputs.map(i => `${i.name}: ${i.type}`).join(', ');
    mermaid += `  INPUT[/"输入: ${escapeLabel(inputList)}"/]\n`;
    mermaid += `  START --> INPUT\n`;
  }

  // Process nodes
  const steps = algorithm.steps || [];
  for (const step of steps) {
    const nodeId = sanitizeId(step.id || `STEP_${step.step_num}`);

    if (step.type === 'decision') {
      mermaid += `  ${nodeId}{"${escapeLabel(step.description)}"}\n`;
    } else if (step.type === 'io') {
      mermaid += `  ${nodeId}[/"${escapeLabel(step.description)}"/]\n`;
    } else if (step.type === 'loop_start') {
      mermaid += `  ${nodeId}[["循环: ${escapeLabel(step.description)}"]]\n`;
    } else {
      mermaid += `  ${nodeId}["${escapeLabel(step.description)}"]\n`;
    }
  }

  // Output node (END_ avoids the reserved keyword "end")
  const outputDesc = algorithm.outputs?.map(o => o.name).join(', ') || '结果';
  mermaid += `  OUTPUT[/"输出: ${escapeLabel(outputDesc)}"/]\n`;
  mermaid += `  END_(["结束"])\n`;

  // Connect first step to input/start
  if (steps.length > 0) {
    const firstStep = sanitizeId(steps[0].id || 'STEP_1');
    if (algorithm.inputs?.length > 0) {
      mermaid += `  INPUT --> ${firstStep}\n`;
    } else {
      mermaid += `  START --> ${firstStep}\n`;
    }
  }

  // Connect steps based on the next array
  for (const step of steps) {
    const nodeId = sanitizeId(step.id || `STEP_${step.step_num}`);

    if (step.next && step.next.length > 0) {
      step.next.forEach((nextId, index) => {
        const safeNextId = sanitizeId(nextId);
        const condition = step.conditions?.[index];

        if (condition) {
          mermaid += `  ${nodeId} -->|"${escapeLabel(condition)}"| ${safeNextId}\n`;
        } else {
          mermaid += `  ${nodeId} --> ${safeNextId}\n`;
        }
      });
    } else if (!step.type?.includes('end')) {
      // Default: connect to the next step, or to the output for the last step
      const stepIndex = steps.indexOf(step);
      if (stepIndex < steps.length - 1) {
        const nextStep = sanitizeId(steps[stepIndex + 1].id || `STEP_${stepIndex + 2}`);
        mermaid += `  ${nodeId} --> ${nextStep}\n`;
      } else {
        mermaid += `  ${nodeId} --> OUTPUT\n`;
      }
    }
  }

  // Connect output to end
  mermaid += `  OUTPUT --> END_\n`;

  return mermaid;
}
```

## Diagram Validation

### validateMermaidSyntax

Comprehensive Mermaid syntax validation.

```javascript
/**
 * Validate Mermaid diagram syntax
 *
 * @param {string} content - Mermaid diagram content
 * @returns {Object} - {valid: boolean, issues: string[]}
 */
function validateMermaidSyntax(content) {
  const issues = [];

  // Check 1: Diagram type declaration
  if (!content.match(/^(graph|flowchart|classDiagram|sequenceDiagram|stateDiagram|erDiagram|gantt|pie|mindmap)/m)) {
    issues.push('Missing diagram type declaration');
  }

  // Check 2: Undefined values leaking from generators
  if (content.includes('undefined') || content.includes('null')) {
    issues.push('Contains undefined/null values');
  }

  // Check 3: Invalid arrow syntax
  if (content.match(/-->\s*-->/)) {
    issues.push('Double arrow syntax error');
  }

  // Check 4: Unescaped special characters in labels
  const labelMatches = content.match(/\["[^"]*[(){}[\]<>][^"]*"\]/g);
  if (labelMatches?.some(m => !m.includes('#'))) {
    issues.push('Unescaped special characters in labels');
  }

  // Check 5: Node ID starts with a number
  if (content.match(/\n\s*[0-9][a-zA-Z0-9_]*[\[\({]/)) {
    issues.push('Node ID cannot start with number');
  }

  // Check 6: Nested subgraph balance
  if (content.match(/subgraph\s+\S+\s*\n[^e]*subgraph/)) {
    // Nesting itself is valid; only flag when subgraph/end counts differ
    const subgraphCount = (content.match(/subgraph/g) || []).length;
    const endCount = (content.match(/\bend\b/g) || []).length;
    if (subgraphCount > endCount) {
      issues.push('Unbalanced subgraph/end blocks');
    }
  }

  // Check 7: Invalid arrow type for the diagram type
  const diagramType = content.match(/^(graph|flowchart|classDiagram|sequenceDiagram)/m)?.[1];
  if (diagramType === 'classDiagram' && content.includes('-->|')) {
    issues.push('Invalid edge label syntax for classDiagram');
  }

  // Check 8: Empty node labels
  if (content.match(/\[""\]|\{\}|\(\)/)) {
    issues.push('Empty node labels detected');
  }

  // Check 9: Reserved keywords used as node IDs
  const reserved = ['end', 'graph', 'subgraph', 'direction', 'class', 'click'];
  for (const keyword of reserved) {
    const pattern = new RegExp(`\\n\\s*${keyword}\\s*[\\[\\(\\{]`, 'i');
    if (content.match(pattern)) {
      issues.push(`Reserved keyword "${keyword}" used as node ID`);
    }
  }

  // Check 10: Line length (Mermaid has issues with very long lines)
  const lines = content.split('\n');
  for (let i = 0; i < lines.length; i++) {
    if (lines[i].length > 500) {
      issues.push(`Line ${i + 1} exceeds 500 characters`);
    }
  }

  return {
    valid: issues.length === 0,
    issues
  };
}
```

### validateDiagramDirectory

Validate all diagrams in a directory.

```javascript
/**
 * Validate all Mermaid diagrams in a directory
 *
 * @param {string} diagramDir - Path to diagrams directory
 * @returns {Object[]} - Array of {file, valid, issues}
 */
function validateDiagramDirectory(diagramDir) {
  const files = Glob(`${diagramDir}/*.mmd`);
  const results = [];

  for (const file of files) {
    const content = Read(file);
    const validation = validateMermaidSyntax(content);

    results.push({
      file: file.split('/').pop(),
      path: file,
      valid: validation.valid,
      issues: validation.issues,
      lines: content.split('\n').length
    });
  }

  return results;
}
```

## Class Diagram Utilities

### generateClassDiagram

Generate a class diagram with relationships.

```javascript
/**
 * Generate class diagram from analysis data
 *
 * @param {Object} analysis - Data structure analysis
 *   - entities: [{name, type, properties, methods}]
 *   - relationships: [{from, to, type, label}]
 * @param {Object} options - Generation options
 *   - maxClasses: Max classes to include (default: 15)
 *   - maxProperties: Max properties per class (default: 8)
 *   - maxMethods: Max methods per class (default: 6)
 * @returns {string} - Mermaid classDiagram
 */
function generateClassDiagram(analysis, options = {}) {
  const maxClasses = options.maxClasses || 15;
  const maxProperties = options.maxProperties || 8;
  const maxMethods = options.maxMethods || 6;

  let mermaid = 'classDiagram\n';

  const entities = (analysis.entities || []).slice(0, maxClasses);

  // Generate classes
  for (const entity of entities) {
    const className = sanitizeId(entity.name);
    mermaid += `  class ${className} {\n`;

    // Properties
    for (const prop of (entity.properties || []).slice(0, maxProperties)) {
      const vis = {public: '+', private: '-', protected: '#'}[prop.visibility] || '+';
      const type = sanitizeType(prop.type);
      mermaid += `    ${vis}${type} ${prop.name}\n`;
    }

    // Methods
    for (const method of (entity.methods || []).slice(0, maxMethods)) {
      const vis = {public: '+', private: '-', protected: '#'}[method.visibility] || '+';
      const params = (method.params || []).map(p => p.name).join(', ');
      const returnType = sanitizeType(method.returnType || 'void');
      mermaid += `    ${vis}${method.name}(${params}) ${returnType}\n`;
    }

    mermaid += '  }\n';

    // Add stereotype if applicable
    if (entity.type === 'interface') {
      mermaid += `  <<interface>> ${className}\n`;
    } else if (entity.type === 'abstract') {
      mermaid += `  <<abstract>> ${className}\n`;
    }
  }

  // Generate relationships
  const arrows = {
    inheritance: '--|>',
    implementation: '..|>',
    composition: '*--',
    aggregation: 'o--',
    association: '-->',
    dependency: '..>'
  };

  for (const rel of (analysis.relationships || [])) {
    const from = sanitizeId(rel.from);
    const to = sanitizeId(rel.to);
    const arrow = arrows[rel.type] || '-->';
    const label = rel.label ? ` : ${escapeLabel(rel.label)}` : '';

    // Only include the edge if both entities exist
    if (entities.some(e => sanitizeId(e.name) === from) &&
        entities.some(e => sanitizeId(e.name) === to)) {
      mermaid += `  ${from} ${arrow} ${to}${label}\n`;
    }
  }

  return mermaid;
}
```

## Sequence Diagram Utilities

### generateSequenceDiagram

Generate a sequence diagram from a scenario.

```javascript
/**
 * Generate sequence diagram from scenario
 *
 * @param {Object} scenario - Sequence scenario
 *   - name: Scenario name
 *   - actors: [{id, name, type}]
 *   - messages: [{from, to, description, type}]
 *   - blocks: [{type, condition, messages}]
 * @returns {string} - Mermaid sequenceDiagram
 */
function generateSequenceDiagram(scenario) {
  let mermaid = 'sequenceDiagram\n';

  // Title
  if (scenario.name) {
    mermaid += `  title ${escapeLabel(scenario.name)}\n`;
  }

  // Participants
  for (const actor of scenario.actors || []) {
    const actorType = actor.type === 'external' ? 'actor' : 'participant';
    mermaid += `  ${actorType} ${sanitizeId(actor.id)} as ${escapeLabel(actor.name)}\n`;
  }

  mermaid += '\n';

  // Messages
  for (const msg of scenario.messages || []) {
    const from = sanitizeId(msg.from);
    const to = sanitizeId(msg.to);
    const desc = escapeLabel(msg.description);

    let arrow;
    switch (msg.type) {
      case 'async': arrow = '->>'; break;
      case 'response': arrow = '-->>'; break;
      case 'create': arrow = '->>+'; break;
      case 'destroy': arrow = '->>-'; break;
      case 'self': arrow = '->>'; break;
      default: arrow = '->>';
    }

    mermaid += `  ${from}${arrow}${to}: ${desc}\n`;

    // Activation
    if (msg.activate) {
      mermaid += `  activate ${to}\n`;
    }
    if (msg.deactivate) {
      mermaid += `  deactivate ${from}\n`;
    }

    // Notes
    if (msg.note) {
      mermaid += `  Note over ${to}: ${escapeLabel(msg.note)}\n`;
    }
  }

  // Blocks (loop, alt, opt)
  for (const block of scenario.blocks || []) {
    switch (block.type) {
      case 'loop':
        mermaid += `  loop ${escapeLabel(block.condition)}\n`;
        break;
      case 'alt':
        mermaid += `  alt ${escapeLabel(block.condition)}\n`;
        break;
      case 'opt':
        mermaid += `  opt ${escapeLabel(block.condition)}\n`;
        break;
    }

    for (const m of block.messages || []) {
      mermaid += `    ${sanitizeId(m.from)}->>${sanitizeId(m.to)}: ${escapeLabel(m.description)}\n`;
    }

    mermaid += '  end\n';
  }

  return mermaid;
}
```
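A minimal scenario object in the shape the JSDoc above expects might look like this (the actors, messages, and conditions are purely illustrative, not part of the library):

```javascript
// Hypothetical login scenario; field names follow the JSDoc above
const loginScenario = {
  name: 'User Login',
  actors: [
    { id: 'client', name: 'Web Client', type: 'external' },
    { id: 'api', name: 'Auth API', type: 'service' },
    { id: 'db', name: 'User DB', type: 'service' }
  ],
  messages: [
    { from: 'client', to: 'api', description: 'POST /login', type: 'async' },
    { from: 'api', to: 'db', description: 'find user', type: 'async' },
    { from: 'db', to: 'api', description: 'user record', type: 'response' },
    { from: 'api', to: 'client', description: 'JWT token', type: 'response',
      note: 'token expires in 1h' }
  ],
  blocks: [
    { type: 'alt', condition: 'invalid credentials',
      messages: [{ from: 'api', to: 'client', description: '401 Unauthorized' }] }
  ]
};

// generateSequenceDiagram(loginScenario) emits "sequenceDiagram", one
// participant/actor line per entry in actors, the messages, then the alt block.
```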

## Usage Examples

### Example 1: Algorithm with Branches

```javascript
const algorithm = {
  name: "用户认证流程",
  inputs: [{name: "credentials", type: "Object"}],
  outputs: [{name: "token", type: "JWT"}],
  steps: [
    {id: "validate", description: "验证输入格式", type: "process"},
    {id: "check_user", description: "用户是否存在?", type: "decision",
     next: ["verify_pwd", "error_user"], conditions: ["是", "否"]},
    {id: "verify_pwd", description: "验证密码", type: "process"},
    {id: "pwd_ok", description: "密码正确?", type: "decision",
     next: ["gen_token", "error_pwd"], conditions: ["是", "否"]},
    {id: "gen_token", description: "生成 JWT Token", type: "process"},
    {id: "error_user", description: "返回用户不存在", type: "io"},
    {id: "error_pwd", description: "返回密码错误", type: "io"}
  ]
};

const flowchart = generateAlgorithmFlowchart(algorithm);
```

### Example 2: Validate Before Output

```javascript
const diagram = generateClassDiagram(analysis);
const validation = validateMermaidSyntax(diagram);

if (!validation.valid) {
  console.log("Diagram has issues:", validation.issues);
  // Fix issues or regenerate
} else {
  Write(`${outputDir}/class-diagram.mmd`, diagram);
}
```

@@ -806,8 +806,6 @@ Use `analysis_results.complexity` or task count to determine structure:

**Examples**:
- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- BAD: `"Implement new commands"`
- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- BAD: `"All commands implemented successfully"`

### 3.2 Planning & Organization Standards

@@ -400,7 +400,7 @@ Task(subagent_type="{meta.agent}",
1. Read complete task JSON: {session.task_json_path}
2. Load context package: {session.context_package_path}

Follow complete execution guidelines in @.claude/agents/{meta.agent}.md

**Session Paths**:
- Workflow Dir: {session.workflow_dir}

@@ -15,7 +15,6 @@ allowed-tools: Task(*), Read(*), Glob(*)

Orchestrator command that invokes `context-search-agent` to gather comprehensive project context for implementation planning. Generates standardized `context-package.json` with codebase analysis, dependencies, and conflict detection.

**Agent**: `context-search-agent` (`.claude/agents/context-search-agent.md`)

## Core Philosophy

@@ -429,6 +428,6 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {

- **Detection-first**: Always check for existing package before invoking agent
- **Project.json integration**: Agent reads `.workflow/project.json` as primary source for project context, avoiding redundant analysis
- **Agent autonomy**: Agent handles all discovery logic per `.claude/agents/context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call

@@ -238,14 +238,7 @@ If conflict_risk was medium/high, modifications have been applied to:

**Agent Configuration Reference**: All TDD task generation rules, quantification requirements, Red-Green-Refactor cycle structure, quality standards, and execution details are defined in action-planning-agent.

Refer to: @.claude/agents/action-planning-agent.md for:
- TDD Task Decomposition Standards
- Red-Green-Refactor Cycle Requirements
- Quantification Requirements (MANDATORY)
- 5-Field Task JSON Schema
- IMPL_PLAN.md Structure (TDD variant)
- TODO_LIST.md Format
- TDD Execution Flow & Quality Validation

### TDD-Specific Requirements Summary

@@ -14,8 +14,6 @@ allowed-tools: Task(*), Read(*), Glob(*)

Orchestrator command that invokes `test-context-search-agent` to gather comprehensive test coverage context for test generation workflows. Generates standardized `test-context-package.json` with coverage analysis, framework detection, and source implementation context.

**Agent**: `test-context-search-agent` (`.claude/agents/test-context-search-agent.md`)

## Core Philosophy

- **Agent Delegation**: Delegate all test coverage analysis to `test-context-search-agent` for autonomous execution
@@ -88,7 +86,6 @@ Task(
  subagent_type="test-context-search-agent",
  description="Gather test coverage context",
  prompt=`
You are executing as test-context-search-agent (.claude/agents/test-context-search-agent.md).

## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
@@ -228,7 +225,7 @@ Refer to `test-context-search-agent.md` Phase 3.2 for complete `test-context-pac
## Notes

- **Detection-first**: Always check for existing test-context-package before invoking agent
- **Agent autonomy**: Agent handles all coverage analysis logic per `.claude/agents/test-context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator, all logic in agent
- **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, Go testing, etc.
- **Coverage focus**: Primary goal is identifying implementation files without tests

@@ -106,8 +106,6 @@ CRITICAL:
- Follow the progressive loading strategy defined in your agent specification (load context incrementally from memory-first approach)

## AGENT CONFIGURATION REFERENCE
All test task generation rules, schemas, and quality standards are defined in your agent specification:
@.claude/agents/action-planning-agent.md

Refer to your specification for:
- Test Task JSON Schema (6-field structure with test-specific metadata)

.claude/skills/copyright-docs/SKILL.md (new file, 132 lines)
@@ -0,0 +1,132 @@
---
name: copyright-docs
description: Generate software copyright design specification documents compliant with China Copyright Protection Center (CPCC) standards. Creates complete design documents with Mermaid diagrams based on source code analysis. Use for software copyright registration, generating design specification, creating CPCC-compliant documents, or documenting software for intellectual property protection. Triggers on "软件著作权", "设计说明书", "版权登记", "CPCC", "软著申请".
allowed-tools: Task, AskUserQuestion, Read, Bash, Glob, Grep, Write
---

# Software Copyright Documentation Skill

Generate CPCC-compliant software design specification documents (软件设计说明书) through multi-phase code analysis.

## Architecture Overview

```
┌────────────────────────────────────────────────────────────────┐
│ Context-Optimized Architecture                                 │
├────────────────────────────────────────────────────────────────┤
│                                                                │
│ Phase 1: Metadata          → project-metadata.json             │
│         ↓                                                      │
│ Phase 2: 6 Parallel Agents → sections/section-N.md             │
│   (writes section MD directly, returns brief JSON)             │
│         ↓                                                      │
│ Phase 2.5: Consolidation   → cross-module-summary.md           │
│   (returns issue list)                                         │
│         ↓                                                      │
│ Phase 4: Assembly          → merged MD + cross-module summary  │
│         ↓                                                      │
│ Phase 5: Refinement        → final document                    │
│                                                                │
└────────────────────────────────────────────────────────────────┘
```

## Key Design Principles

1. **Agents write MD directly**: avoids the context overhead of a JSON → MD conversion step
2. **Brief returns**: agents return only paths and summaries, never full content
3. **Consolidation agent**: a dedicated agent detects cross-module issues
4. **Merge by reference**: Phase 4 merges by reading files, not by passing content through context

## Execution Flow

```
┌─────────────────────────────────────────────────────────────────┐
│ Phase 1: Metadata Collection                                    │
│   → Read: phases/01-metadata-collection.md                      │
│   → Collect: software name, version, category, scope            │
│   → Output: project-metadata.json                               │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2: Deep Code Analysis (6 Parallel Agents)                 │
│   → Read: phases/02-deep-analysis.md                            │
│   → Reference: specs/cpcc-requirements.md                       │
│   → Each agent: analyze code → write sections/section-N.md      │
│   → Return: {"status", "output_file", "summary", "cross_notes"} │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2.5: Consolidation (New!)                                 │
│   → Read: phases/02.5-consolidation.md                          │
│   → Input: brief agent returns + cross_module_notes             │
│   → Analyze: consistency / completeness / linkage / quality     │
│   → Output: cross-module-summary.md                             │
│   → Return: {"issues": {errors, warnings, info}, "stats"}       │
├─────────────────────────────────────────────────────────────────┤
│ Phase 4: Document Assembly                                      │
│   → Read: phases/04-document-assembly.md                        │
│   → Check: if errors exist, prompt the user to resolve them     │
│   → Merge: Section 1 + sections/*.md + cross-module appendix    │
│   → Output: {软件名称}-软件设计说明书.md                         │
├─────────────────────────────────────────────────────────────────┤
│ Phase 5: Compliance Review & Refinement                         │
│   → Read: phases/05-compliance-refinement.md                    │
│   → Reference: specs/cpcc-requirements.md                       │
│   → Loop: find issues → ask user → fix → re-check               │
└─────────────────────────────────────────────────────────────────┘
```

## Document Sections (7 Required)

| Section | Title | Diagram | Agent |
|---------|-------|---------|-------|
| 1 | 软件概述 (Software Overview) | - | Generated in Phase 4 |
| 2 | 系统架构图 (System Architecture) | graph TD | architecture |
| 3 | 功能模块设计 (Functional Module Design) | flowchart TD | functions |
| 4 | 核心算法与流程 (Core Algorithms & Flows) | flowchart TD | algorithms |
| 5 | 数据结构设计 (Data Structure Design) | classDiagram | data_structures |
| 6 | 接口设计 (Interface Design) | sequenceDiagram | interfaces |
| 7 | 异常处理设计 (Exception Handling Design) | flowchart TD | exceptions |

## Directory Setup

```javascript
// Generate a timestamped directory name
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const dir = `.workflow/.scratchpad/copyright-${timestamp}`;

// Windows (cmd)
Bash(`mkdir "${dir}\\sections"`);
Bash(`mkdir "${dir}\\iterations"`);

// Unix/macOS
// Bash(`mkdir -p "${dir}/sections" "${dir}/iterations"`);
```

## Output Structure

```
.workflow/.scratchpad/copyright-{timestamp}/
├── project-metadata.json            # Phase 1
├── sections/                        # Phase 2 (written directly by agents)
│   ├── section-2-architecture.md
│   ├── section-3-functions.md
│   ├── section-4-algorithms.md
│   ├── section-5-data-structures.md
│   ├── section-6-interfaces.md
│   └── section-7-exceptions.md
├── cross-module-summary.md          # Phase 2.5
├── iterations/                      # Phase 5
│   ├── v1.md
│   └── v2.md
└── {软件名称}-软件设计说明书.md       # Final output
```

## Reference Documents

| Document | Purpose |
|----------|---------|
| [phases/01-metadata-collection.md](phases/01-metadata-collection.md) | Software info collection |
| [phases/02-deep-analysis.md](phases/02-deep-analysis.md) | 6-agent parallel analysis |
| [phases/02.5-consolidation.md](phases/02.5-consolidation.md) | Cross-module consolidation |
| [phases/04-document-assembly.md](phases/04-document-assembly.md) | Document merge & assembly |
| [phases/05-compliance-refinement.md](phases/05-compliance-refinement.md) | Iterative refinement loop |
| [specs/cpcc-requirements.md](specs/cpcc-requirements.md) | CPCC compliance checklist |
| [templates/agent-base.md](templates/agent-base.md) | Agent prompt templates |
| [../_shared/mermaid-utils.md](../_shared/mermaid-utils.md) | Shared Mermaid utilities |

.claude/skills/copyright-docs/phases/01-metadata-collection.md (new file, 78 lines)
@@ -0,0 +1,78 @@
# Phase 1: Metadata Collection
|
||||
|
||||
Collect software metadata for document header and context.
|
||||
|
||||
## Execution
|
||||
|
||||
### Step 1: Software Name & Version
|
||||
|
||||
```javascript
AskUserQuestion({
  questions: [{
    question: "请输入软件名称(将显示在文档页眉):",
    header: "软件名称",
    multiSelect: false,
    options: [
      {label: "自动检测", description: "从 package.json 或项目配置读取"},
      {label: "手动输入", description: "输入自定义名称"}
    ]
  }]
})
```

### Step 2: Software Category

```javascript
AskUserQuestion({
  questions: [{
    question: "软件属于哪种类型?",
    header: "软件类型",
    multiSelect: false,
    options: [
      {label: "命令行工具 (CLI)", description: "重点描述命令、参数"},
      {label: "后端服务/API", description: "重点描述端点、协议"},
      {label: "SDK/库", description: "重点描述接口、集成"},
      {label: "数据处理系统", description: "重点描述数据流、转换"},
      {label: "自动化脚本", description: "重点描述工作流、触发器"}
    ]
  }]
})
```

### Step 3: Scope Definition

```javascript
AskUserQuestion({
  questions: [{
    question: "分析范围是什么?",
    header: "分析范围",
    multiSelect: false,
    options: [
      {label: "整个项目", description: "分析全部源代码"},
      {label: "指定目录", description: "仅分析 src/ 或其他目录"},
      {label: "自定义路径", description: "手动指定路径"}
    ]
  }]
})
```

## Output

Save metadata to `project-metadata.json`:

```json
{
  "software_name": "智能数据分析系统",
  "version": "V1.0.0",
  "category": "后端服务/API",
  "scope_path": "src/",
  "tech_stack": {
    "language": "TypeScript",
    "runtime": "Node.js 18+",
    "framework": "Express.js",
    "dependencies": ["mongoose", "redis", "bull"]
  },
  "entry_points": ["src/index.ts", "src/cli.ts"],
  "main_modules": ["auth", "data", "api", "worker"]
}
```

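The fields in the example above are what later phases read back. As an illustrative sanity check (not part of the skill itself), a loader could verify the core fields are present before continuing; the field names come from the example, while `missingMetadataFields` is a hypothetical helper:

```javascript
// Hypothetical helper: report which core metadata fields are absent or empty.
const REQUIRED_FIELDS = ['software_name', 'version', 'category', 'scope_path'];

function missingMetadataFields(metadata) {
  return REQUIRED_FIELDS.filter(
    field => metadata[field] === undefined || metadata[field] === ''
  );
}

const sample = {
  software_name: '智能数据分析系统',
  version: 'V1.0.0',
  category: '后端服务/API',
  scope_path: 'src/'
};

console.log(missingMetadataFields(sample));                // []
console.log(missingMetadataFields({ version: 'V1.0.0' })); // the three fields still to collect
```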
.claude/skills/copyright-docs/phases/01.5-project-exploration.md
@@ -0,0 +1,150 @@

# Phase 1.5: Project Exploration

Based on the collected metadata, launch parallel exploration agents to gather code information.

## Execution

### Step 1: Intelligent Angle Selection

```javascript
// Select exploration angles based on the software category
const ANGLE_PRESETS = {
  'CLI': ['architecture', 'commands', 'algorithms', 'exceptions'],
  'API': ['architecture', 'endpoints', 'data-structures', 'interfaces'],
  'SDK': ['architecture', 'interfaces', 'data-structures', 'algorithms'],
  'DataProcessing': ['architecture', 'algorithms', 'data-structures', 'dataflow'],
  'Automation': ['architecture', 'algorithms', 'exceptions', 'dataflow']
};

// Map metadata.category to a preset key
function getCategoryKey(category) {
  if (category.includes('CLI') || category.includes('命令行')) return 'CLI';
  if (category.includes('API') || category.includes('后端')) return 'API';
  if (category.includes('SDK') || category.includes('库')) return 'SDK';
  if (category.includes('数据处理')) return 'DataProcessing';
  if (category.includes('自动化')) return 'Automation';
  return 'API'; // default
}

const categoryKey = getCategoryKey(metadata.category);
const selectedAngles = ANGLE_PRESETS[categoryKey];

console.log(`
## Exploration Plan

Software: ${metadata.software_name}
Category: ${metadata.category} → ${categoryKey}
Selected Angles: ${selectedAngles.join(', ')}

Launching ${selectedAngles.length} parallel explorations...
`);
```

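The preset selection above can be exercised standalone. This sketch reproduces `ANGLE_PRESETS` and `getCategoryKey` from the snippet so the mapping, including the `'API'` fallback for unrecognized categories, can be verified:

```javascript
// Reproduced from the snippet above for standalone testing.
const ANGLE_PRESETS = {
  'CLI': ['architecture', 'commands', 'algorithms', 'exceptions'],
  'API': ['architecture', 'endpoints', 'data-structures', 'interfaces'],
  'SDK': ['architecture', 'interfaces', 'data-structures', 'algorithms'],
  'DataProcessing': ['architecture', 'algorithms', 'data-structures', 'dataflow'],
  'Automation': ['architecture', 'algorithms', 'exceptions', 'dataflow']
};

function getCategoryKey(category) {
  if (category.includes('CLI') || category.includes('命令行')) return 'CLI';
  if (category.includes('API') || category.includes('后端')) return 'API';
  if (category.includes('SDK') || category.includes('库')) return 'SDK';
  if (category.includes('数据处理')) return 'DataProcessing';
  if (category.includes('自动化')) return 'Automation';
  return 'API'; // default
}

console.log(getCategoryKey('命令行工具 (CLI)'));   // 'CLI'
console.log(getCategoryKey('数据处理系统'));        // 'DataProcessing'
console.log(getCategoryKey('未知类型'));            // 'API' (fallback)
```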
### Step 2: Launch Parallel Agents (Direct Output)

**⚠️ CRITICAL**: Agents write output files directly.

```javascript
const explorationTasks = selectedAngles.map((angle, index) =>
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false,
    description: `Explore: ${angle}`,
    prompt: `
## Exploration Objective
为 CPCC 软著申请文档执行 **${angle}** 探索。

## Assigned Context
- **Exploration Angle**: ${angle}
- **Software Name**: ${metadata.software_name}
- **Scope Path**: ${metadata.scope_path}
- **Category**: ${metadata.category}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json

## MANDATORY FIRST STEPS
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{relevant_keyword}" --type ts (locate relevant files)
3. Analyze from ${angle} perspective

## Exploration Strategy (${angle} focus)

**Step 1: Structural Scan**
- 识别与 ${angle} 相关的模块和文件
- 分析导入/导出关系

**Step 2: Pattern Recognition**
- ${angle} 相关的设计模式
- 代码组织方式

**Step 3: Write Output**
- 输出 JSON 到指定路径

## Expected Output Schema

**File**: ${sessionFolder}/exploration-${angle}.json

\`\`\`json
{
  "angle": "${angle}",
  "findings": {
    "structure": [
      { "component": "...", "type": "module|layer|service", "path": "...", "description": "..." }
    ],
    "patterns": [
      { "name": "...", "usage": "...", "files": ["path1", "path2"] }
    ],
    "key_files": [
      { "path": "src/file.ts", "relevance": 0.85, "rationale": "Core ${angle} logic" }
    ]
  },
  "insights": [
    { "observation": "...", "cpcc_section": "2|3|4|5|6|7", "recommendation": "..." }
  ],
  "_metadata": {
    "exploration_angle": "${angle}",
    "exploration_index": ${index + 1},
    "software_name": "${metadata.software_name}",
    "timestamp": "ISO8601"
  }
}
\`\`\`

## Success Criteria
- [ ] get_modules_by_depth 执行完成
- [ ] 至少识别 3 个相关文件
- [ ] patterns 包含具体代码示例
- [ ] insights 关联到 CPCC 章节 (2-7)
- [ ] JSON 输出到指定路径
- [ ] Return: 2-3 句话总结 ${angle} 发现
`
  })
);

// Execute all exploration tasks in parallel
```

## Output

Session folder structure after exploration:

```
${sessionFolder}/
├── exploration-architecture.json
├── exploration-{angle2}.json
├── exploration-{angle3}.json
└── exploration-{angle4}.json
```

## Downstream Usage (Phase 2 Analysis Input)

Phase 2 agents read exploration files as context:

```javascript
// Discover exploration files by known angle pattern
const explorationData = {};
selectedAngles.forEach(angle => {
  const filePath = `${sessionFolder}/exploration-${angle}.json`;
  explorationData[angle] = JSON.parse(Read(filePath));
});
```

.claude/skills/copyright-docs/phases/02-deep-analysis.md
@@ -0,0 +1,664 @@

# Phase 2: Deep Code Analysis

Six parallel agents, each writing its Markdown section file directly.

> **Template reference**: [../templates/agent-base.md](../templates/agent-base.md)
> **Spec reference**: [../specs/cpcc-requirements.md](../specs/cpcc-requirements.md)

## Exploration → Agent Auto-Assignment

The exploration filenames produced in Phase 1.5 determine which analysis agent handles each file.

### Mapping Rules

```javascript
// Exploration angle → agent mapping (matched by filename; file content is not read)
const EXPLORATION_TO_AGENT = {
  'architecture': 'architecture',
  'commands': 'functions',          // CLI commands → function modules
  'endpoints': 'interfaces',        // API endpoints → interface design
  'algorithms': 'algorithms',
  'data-structures': 'data_structures',
  'dataflow': 'data_structures',    // data flow → data structures
  'interfaces': 'interfaces',
  'exceptions': 'exceptions'
};

// Extract the angle from a filename
function extractAngle(filename) {
  // exploration-architecture.json → architecture
  const match = filename.match(/exploration-(.+)\.json$/);
  return match ? match[1] : null;
}

// Assign an agent
function assignAgent(explorationFile) {
  const angle = extractAngle(path.basename(explorationFile));
  return EXPLORATION_TO_AGENT[angle] || null;
}

// Agent configs (consumed by buildAgentPrompt)
const AGENT_CONFIGS = {
  architecture: {
    role: '系统架构师,专注于分层设计和模块依赖',
    section: '2',
    output: 'section-2-architecture.md',
    focus: '分层结构、模块依赖、数据流向'
  },
  functions: {
    role: '功能分析师,专注于功能点识别和交互',
    section: '3',
    output: 'section-3-functions.md',
    focus: '功能点枚举、模块分组、入口文件、功能交互'
  },
  algorithms: {
    role: '算法工程师,专注于核心逻辑和复杂度分析',
    section: '4',
    output: 'section-4-algorithms.md',
    focus: '核心算法、流程步骤、复杂度、输入输出'
  },
  data_structures: {
    role: '数据建模师,专注于实体关系和类型定义',
    section: '5',
    output: 'section-5-data-structures.md',
    focus: '实体定义、属性类型、关系映射、枚举'
  },
  interfaces: {
    role: 'API设计师,专注于接口契约和协议',
    section: '6',
    output: 'section-6-interfaces.md',
    focus: 'API端点、参数校验、响应格式、时序'
  },
  exceptions: {
    role: '可靠性工程师,专注于异常处理和恢复策略',
    section: '7',
    output: 'section-7-exceptions.md',
    focus: '异常类型、错误码、处理模式、恢复策略'
  }
};
```

### Auto-Discovery and Assignment Flow

```javascript
// 1. Discover all exploration files (filenames only)
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n')
  .filter(f => f.trim());

// 2. Auto-assign agents by filename
const agentAssignments = explorationFiles.map(file => {
  const angle = extractAngle(path.basename(file));
  const agentName = EXPLORATION_TO_AGENT[angle];
  return {
    exploration_file: file,
    angle: angle,
    agent: agentName,
    output_file: AGENT_CONFIGS[agentName]?.output
  };
}).filter(a => a.agent);

// 3. Backfill required agents not covered by any exploration (assign a related one)
const coveredAgents = new Set(agentAssignments.map(a => a.agent));
const requiredAgents = ['architecture', 'functions', 'algorithms', 'data_structures', 'interfaces', 'exceptions'];
const missingAgents = requiredAgents.filter(a => !coveredAgents.has(a));

// Relevance map: give each missing agent its most relevant exploration
const RELATED_EXPLORATIONS = {
  architecture: ['architecture', 'dataflow', 'interfaces'],
  functions: ['commands', 'endpoints', 'architecture'],
  algorithms: ['algorithms', 'dataflow', 'architecture'],
  data_structures: ['data-structures', 'dataflow', 'architecture'],
  interfaces: ['interfaces', 'endpoints', 'architecture'],
  exceptions: ['exceptions', 'algorithms', 'architecture']
};

function findRelatedExploration(agent, availableFiles) {
  const preferences = RELATED_EXPLORATIONS[agent] || ['architecture'];
  for (const pref of preferences) {
    const match = availableFiles.find(f => f.includes(`exploration-${pref}.json`));
    if (match) return { file: match, angle: pref, isRelated: true };
  }
  // Last resort: any exploration beats none
  return availableFiles.length > 0
    ? { file: availableFiles[0], angle: extractAngle(path.basename(availableFiles[0])), isRelated: true }
    : { file: null, angle: null, isRelated: false };
}

missingAgents.forEach(agent => {
  const related = findRelatedExploration(agent, explorationFiles);
  agentAssignments.push({
    exploration_file: related.file,
    angle: related.angle,
    agent: agent,
    output_file: AGENT_CONFIGS[agent].output,
    is_related: related.isRelated // related match rather than a direct one
  });
});

console.log(`
## Agent Auto-Assignment

Found ${explorationFiles.length} exploration files:
${agentAssignments.map(a => {
  if (!a.exploration_file) return `- ${a.agent} agent (no exploration)`;
  if (a.is_related) return `- ${a.agent} agent ← ${a.angle} (related)`;
  return `- ${a.agent} agent ← ${a.angle} (direct)`;
}).join('\n')}
`);
```

---

## Agent Execution Preconditions

**Each agent receives the path to its exploration file and reads the content itself**:

```javascript
// The agent prompt contains the file path.
// After startup, the agent proceeds in this order:
// 1. Read the exploration file (if one was assigned)
// 2. Read the CPCC spec file
// 3. Run the analysis task
```

Spec file path (relative to the skill root):
- `specs/cpcc-requirements.md` - CPCC software copyright application requirements

---

## Agent Configuration

| Agent | Output File | Section |
|-------|-------------|---------|
| architecture | section-2-architecture.md | 系统架构图 |
| functions | section-3-functions.md | 功能模块设计 |
| algorithms | section-4-algorithms.md | 核心算法与流程 |
| data_structures | section-5-data-structures.md | 数据结构设计 |
| interfaces | section-6-interfaces.md | 接口设计 |
| exceptions | section-7-exceptions.md | 异常处理设计 |

## CPCC Spec Essentials (shared by all agents)

```
[CPCC_SPEC]
1. 内容基于代码分析,无臆测或未来计划
2. 图表编号格式: 图N-M (如图2-1, 图3-1)
3. 每个子章节内容不少于100字
4. Mermaid 语法必须正确可渲染
5. 包含具体文件路径引用
6. 中文输出,技术术语可用英文
```

## Execution Flow

```javascript
// 1. Discover exploration files and auto-assign agents
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n')
  .filter(f => f.trim());

const agentAssignments = explorationFiles.map(file => {
  const angle = extractAngle(path.basename(file));
  const agentName = EXPLORATION_TO_AGENT[angle];
  return { exploration_file: file, angle, agent: agentName };
}).filter(a => a.agent);

// Backfill required agents
const coveredAgents = new Set(agentAssignments.map(a => a.agent));
const requiredAgents = ['architecture', 'functions', 'algorithms', 'data_structures', 'interfaces', 'exceptions'];
requiredAgents.filter(a => !coveredAgents.has(a)).forEach(agent => {
  agentAssignments.push({ exploration_file: null, angle: null, agent });
});

// 2. Prepare directories
Bash(`mkdir -p ${outputDir}/sections`);

// 3. Launch all agents in parallel (passing exploration file paths)
const results = await Promise.all(
  agentAssignments.map(assignment =>
    Task({
      subagent_type: "cli-explore-agent",
      run_in_background: false,
      description: `Analyze: ${assignment.agent}`,
      prompt: buildAgentPrompt(assignment, metadata, outputDir)
    })
  )
);

// 4. Collect the returned summaries
const summaries = results.map(r => JSON.parse(r));

// 5. Hand off to Phase 2.5
return { summaries, cross_notes: summaries.flatMap(s => s.cross_module_notes) };
```

### Building the Agent Prompt

```javascript
function buildAgentPrompt(assignment, metadata, outputDir) {
  const config = AGENT_CONFIGS[assignment.agent];
  let contextSection = '';

  if (assignment.exploration_file) {
    const matchType = assignment.is_related ? '相关' : '直接匹配';
    contextSection = `[CONTEXT]
**Exploration 文件**: ${assignment.exploration_file}
**匹配类型**: ${matchType}
首先读取此文件获取 ${assignment.angle} 探索结果作为分析上下文。
${assignment.is_related ? `注意:这是相关探索结果(非直接匹配),请提取与 ${config.focus} 相关的信息。` : ''}
`;
  }

  return `
${contextSection}
[SPEC]
读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md

[ROLE] ${config.role}

[TASK]
分析 ${metadata.scope_path},生成 Section ${config.section}。
输出: ${outputDir}/sections/${config.output}

[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图${config.section}-1, 图${config.section}-2...
- 每个子章节 ≥100字
- 包含文件路径引用

[FOCUS]
${config.focus}

[RETURN JSON]
{"status":"completed","output_file":"${config.output}","summary":"<50字>","cross_module_notes":[],"stats":{}}
`;
}
```

---

## Agent Prompts

### Architecture

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。

[ROLE] 系统架构师,专注于分层设计和模块依赖。

[TASK]
分析 ${meta.scope_path},生成 Section 2: 系统架构图。
输出: ${outDir}/sections/section-2-architecture.md

[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图2-1, 图2-2...
- 每个子章节 ≥100字
- 包含文件路径引用

[TEMPLATE]
## 2. 系统架构图

本章节展示${meta.software_name}的系统架构设计。

\`\`\`mermaid
graph TD
    subgraph Layer1["层名"]
        Comp1[组件1]
    end
    Comp1 --> Comp2
\`\`\`

**图2-1 系统架构图**

### 2.1 分层说明
| 层级 | 组件 | 职责 |
|------|------|------|

### 2.2 模块依赖
| 模块 | 依赖 | 说明 |
|------|------|------|

[FOCUS]
1. 分层: 识别代码层次 (Controller/Service/Repository 或其他)
2. 模块: 核心模块及职责边界
3. 依赖: 模块间依赖方向
4. 数据流: 请求/数据的流动路径

[RETURN JSON]
{"status":"completed","output_file":"section-2-architecture.md","summary":"<50字摘要>","cross_module_notes":["跨模块发现"],"stats":{"diagrams":1,"subsections":2}}
`
})
```

### Functions

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。

[ROLE] 功能分析师,专注于功能点识别和交互。

[TASK]
分析 ${meta.scope_path},生成 Section 3: 功能模块设计。
输出: ${outDir}/sections/section-3-functions.md

[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图3-1, 图3-2...
- 每个子章节 ≥100字
- 包含文件路径引用

[TEMPLATE]
## 3. 功能模块设计

本章节展示${meta.software_name}的功能模块结构。

\`\`\`mermaid
flowchart TD
    ROOT["${meta.software_name}"]
    subgraph Group1["模块组1"]
        F1["功能1"]
    end
    ROOT --> Group1
\`\`\`

**图3-1 功能模块结构图**

### 3.1 功能清单
| ID | 功能名称 | 模块 | 入口文件 | 说明 |
|----|----------|------|----------|------|

### 3.2 功能交互
| 调用方 | 被调用方 | 触发条件 |
|--------|----------|----------|

[FOCUS]
1. 功能点: 枚举所有用户可见功能
2. 模块分组: 按业务域分组
3. 入口: 每个功能的代码入口 \`src/path/file.ts\`
4. 交互: 功能间的调用关系

[RETURN JSON]
{"status":"completed","output_file":"section-3-functions.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
`
})
```

### Algorithms

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。

[ROLE] 算法工程师,专注于核心逻辑和复杂度分析。

[TASK]
分析 ${meta.scope_path},生成 Section 4: 核心算法与流程。
输出: ${outDir}/sections/section-4-algorithms.md

[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图4-1, 图4-2... (每个算法一个流程图)
- 每个算法说明 ≥100字
- 包含文件路径和行号引用

[TEMPLATE]
## 4. 核心算法与流程

本章节展示${meta.software_name}的核心算法设计。

### 4.1 {算法名称}

**说明**: {描述,≥100字}
**位置**: \`src/path/file.ts:line\`

**输入**: param1 (type) - 说明
**输出**: result (type) - 说明

\`\`\`mermaid
flowchart TD
    Start([开始]) --> Input[/输入/]
    Input --> Check{判断}
    Check -->|是| P1[步骤1]
    Check -->|否| P2[步骤2]
    P1 --> End([结束])
    P2 --> End
\`\`\`

**图4-1 {算法名称}流程图**

### 4.N 复杂度分析
| 算法 | 时间 | 空间 | 文件 |
|------|------|------|------|

[FOCUS]
1. 核心算法: 业务逻辑的关键算法 (>10行或含分支循环)
2. 流程步骤: 分支/循环/条件逻辑
3. 复杂度: 时间/空间复杂度估算
4. 输入输出: 参数类型和返回值

[RETURN JSON]
{"status":"completed","output_file":"section-4-algorithms.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
`
})
```

### Data Structures

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。

[ROLE] 数据建模师,专注于实体关系和类型定义。

[TASK]
分析 ${meta.scope_path},生成 Section 5: 数据结构设计。
输出: ${outDir}/sections/section-5-data-structures.md

[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图5-1 (数据结构类图)
- 每个子章节 ≥100字
- 包含文件路径引用

[TEMPLATE]
## 5. 数据结构设计

本章节展示${meta.software_name}的核心数据结构。

\`\`\`mermaid
classDiagram
    class Entity1 {
        +type field1
        +method1()
    }
    Entity1 "1" --> "*" Entity2 : 关系
\`\`\`

**图5-1 数据结构类图**

### 5.1 实体说明
| 实体 | 类型 | 文件 | 说明 |
|------|------|------|------|

### 5.2 关系说明
| 源 | 目标 | 类型 | 基数 |
|----|------|------|------|

[FOCUS]
1. 实体: class/interface/type 定义
2. 属性: 字段类型和可见性 (+public/-private/#protected)
3. 关系: 继承(--|>)/组合(*--)/关联(-->)
4. 枚举: enum 类型及其值

[RETURN JSON]
{"status":"completed","output_file":"section-5-data-structures.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
`
})
```

### Interfaces

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。

[ROLE] API设计师,专注于接口契约和协议。

[TASK]
分析 ${meta.scope_path},生成 Section 6: 接口设计。
输出: ${outDir}/sections/section-6-interfaces.md

[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图6-1, 图6-2... (每个核心接口一个时序图)
- 每个接口详情 ≥100字
- 包含文件路径引用

[TEMPLATE]
## 6. 接口设计

本章节展示${meta.software_name}的接口设计。

\`\`\`mermaid
sequenceDiagram
    participant C as Client
    participant A as API
    participant S as Service
    C->>A: POST /api/xxx
    A->>S: method()
    S-->>A: result
    A-->>C: 200 OK
\`\`\`

**图6-1 {接口名}时序图**

### 6.1 接口清单
| 接口 | 方法 | 路径 | 说明 |
|------|------|------|------|

### 6.2 接口详情

#### METHOD /path
**请求**:
| 参数 | 类型 | 必填 | 说明 |
|------|------|------|------|

**响应**:
| 字段 | 类型 | 说明 |
|------|------|------|

[FOCUS]
1. API端点: 路径/方法/说明
2. 参数: 请求参数类型和校验规则
3. 响应: 响应格式、状态码、错误码
4. 时序: 典型调用流程 (选2-3个核心接口)

[RETURN JSON]
{"status":"completed","output_file":"section-6-interfaces.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
`
})
```

### Exceptions

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。

[ROLE] 可靠性工程师,专注于异常处理和恢复策略。

[TASK]
分析 ${meta.scope_path},生成 Section 7: 异常处理设计。
输出: ${outDir}/sections/section-7-exceptions.md

[CPCC_SPEC]
- 内容基于代码分析,无臆测
- 图表编号: 图7-1 (异常处理流程图)
- 每个子章节 ≥100字
- 包含文件路径引用

[TEMPLATE]
## 7. 异常处理设计

本章节展示${meta.software_name}的异常处理机制。

\`\`\`mermaid
flowchart TD
    Req[请求] --> Try{Try-Catch}
    Try -->|正常| Process[处理]
    Try -->|异常| ErrType{类型}
    ErrType -->|E1| H1[处理1]
    ErrType -->|E2| H2[处理2]
    H1 --> Log[日志]
    H2 --> Log
    Process --> Resp[响应]
\`\`\`

**图7-1 异常处理流程图**

### 7.1 异常类型
| 异常类 | 错误码 | HTTP状态 | 说明 |
|--------|--------|----------|------|

### 7.2 恢复策略
| 场景 | 策略 | 说明 |
|------|------|------|

[FOCUS]
1. 异常类型: 自定义异常类及继承关系
2. 错误码: 错误码定义和分类
3. 处理模式: try-catch/中间件/装饰器
4. 恢复策略: 重试/降级/熔断/告警

[RETURN JSON]
{"status":"completed","output_file":"section-7-exceptions.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
`
})
```

---

## Output

Each agent writes `sections/section-N-xxx.md` and returns a brief JSON summary that Phase 2.5 consolidates.

.claude/skills/copyright-docs/phases/02.5-consolidation.md
@@ -0,0 +1,192 @@

# Phase 2.5: Consolidation Agent

Consolidates the output of all analysis agents into a design synthesis and section summaries that feed the Phase 4 index document.

> **Spec reference**: [../specs/cpcc-requirements.md](../specs/cpcc-requirements.md)

## Core Responsibilities

1. **Design synthesis**: produce `synthesis` (the software's overall design narrative)
2. **Section summaries**: produce `section_summaries` (content for the navigation table)
3. **Cross-module analysis**: identify issues and relationships
4. **Quality check**: verify CPCC compliance

## Input

```typescript
interface ConsolidationInput {
  output_dir: string;
  agent_summaries: AgentReturn[];
  cross_module_notes: string[];
  metadata: ProjectMetadata;
}
```

## Execution

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
## 规范前置
首先读取规范文件:
- Read: ${skillRoot}/specs/cpcc-requirements.md
严格遵循 CPCC 软著申请规范要求。

## 任务
作为汇总 Agent,读取所有章节文件,生成设计综述和跨模块分析报告。

## 输入
- 章节文件: ${outputDir}/sections/section-*.md
- Agent 摘要: ${JSON.stringify(agent_summaries)}
- 跨模块备注: ${JSON.stringify(cross_module_notes)}
- 软件信息: ${JSON.stringify(metadata)}

## 核心产出

### 1. 设计综述 (synthesis)
用 2-3 段落描述软件整体设计思路:
- 第一段:软件定位与核心设计理念
- 第二段:模块划分与协作机制
- 第三段:技术选型与设计特点

### 2. 章节摘要 (section_summaries)
为每个章节提取一句话说明,用于导航表格:

| 章节 | 文件 | 一句话说明 |
|------|------|------------|
| 2. 系统架构设计 | section-2-architecture.md | ... |
| 3. 功能模块设计 | section-3-functions.md | ... |
| 4. 核心算法与流程 | section-4-algorithms.md | ... |
| 5. 数据结构设计 | section-5-data-structures.md | ... |
| 6. 接口设计 | section-6-interfaces.md | ... |
| 7. 异常处理设计 | section-7-exceptions.md | ... |

### 3. 跨模块分析
- 一致性:术语、命名规范
- 完整性:功能-接口对应、异常覆盖
- 关联性:模块依赖、数据流向

## 输出文件

写入: ${outputDir}/cross-module-summary.md

### 文件格式

\`\`\`markdown
# 跨模块分析报告

## 设计综述

[2-3 段落的软件设计思路描述]

## 章节摘要

| 章节 | 文件 | 说明 |
|------|------|------|
| 2. 系统架构设计 | section-2-architecture.md | 一句话说明 |
| ... | ... | ... |

## 文档统计

| 章节 | 图表数 | 字数 |
|------|--------|------|
| ... | ... | ... |

## 发现的问题

### 严重问题 (必须修复)

| ID | 类型 | 位置 | 描述 | 建议 |
|----|------|------|------|------|
| E001 | ... | ... | ... | ... |

### 警告 (建议修复)

| ID | 类型 | 位置 | 描述 | 建议 |
|----|------|------|------|------|
| W001 | ... | ... | ... | ... |

### 提示 (可选修复)

| ID | 类型 | 位置 | 描述 |
|----|------|------|------|
| I001 | ... | ... | ... |

## 跨模块关联图

\`\`\`mermaid
graph LR
    S2[架构] --> S3[功能]
    S3 --> S4[算法]
    S3 --> S6[接口]
    S5[数据结构] --> S6
    S6 --> S7[异常]
\`\`\`

## 修复建议优先级

[按优先级排序的建议,段落式描述]
\`\`\`

## 返回格式 (JSON)

{
  "status": "completed",
  "output_file": "cross-module-summary.md",

  // Phase 4 索引文档所需
  "synthesis": "2-3 段落的设计综述文本",
  "section_summaries": [
    {"file": "section-2-architecture.md", "title": "2. 系统架构设计", "summary": "一句话说明"},
    {"file": "section-3-functions.md", "title": "3. 功能模块设计", "summary": "一句话说明"},
    {"file": "section-4-algorithms.md", "title": "4. 核心算法与流程", "summary": "一句话说明"},
    {"file": "section-5-data-structures.md", "title": "5. 数据结构设计", "summary": "一句话说明"},
    {"file": "section-6-interfaces.md", "title": "6. 接口设计", "summary": "一句话说明"},
    {"file": "section-7-exceptions.md", "title": "7. 异常处理设计", "summary": "一句话说明"}
  ],

  // 质量信息
  "stats": {
    "total_sections": 6,
    "total_diagrams": 8,
    "total_words": 3500
  },
  "issues": {
    "errors": [...],
    "warnings": [...],
    "info": [...]
  },
  "cross_refs": {
    "found": 12,
    "missing": 3
  }
}
`
})
```

## Issue Severity Levels

| Severity | Prefix | Meaning | Handling |
|----------|--------|---------|----------|
| Error | E | Blocks the compliance check | Must fix |
| Warning | W | Degrades document quality | Should fix |
| Info | I | Improvement opportunity | Optional |

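Because issue IDs carry their severity prefix (E/W/I), a consumer can bucket them mechanically when deciding whether assembly may proceed. The issue shape below is illustrative, not part of the agent contract:

```javascript
// Illustrative sketch: split issues into severity buckets by ID prefix.
function bucketIssues(issues) {
  const buckets = { errors: [], warnings: [], info: [] };
  for (const issue of issues) {
    if (issue.id.startsWith('E')) buckets.errors.push(issue);
    else if (issue.id.startsWith('W')) buckets.warnings.push(issue);
    else buckets.info.push(issue);
  }
  return buckets;
}

const sample = [
  { id: 'E001', desc: 'Mermaid 语法错误' },
  { id: 'W001', desc: '术语不一致' },
  { id: 'I001', desc: '可补充示例' }
];

const { errors, warnings, info } = bucketIssues(sample);
console.log(errors.length, warnings.length, info.length); // 1 1 1
```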
## Issue Types

| Type | Description |
|------|-------------|
| missing | Missing content (function-interface mapping, exception coverage) |
| inconsistency | Inconsistencies (terminology, naming, numbering) |
| circular | Circular dependencies |
| orphan | Orphaned content (never referenced) |
| syntax | Mermaid syntax errors |
| enhancement | Enhancement suggestions |

## Output

- **File**: `cross-module-summary.md` (the full consolidation report)
- **Return**: JSON containing the `synthesis` and `section_summaries` Phase 4 needs

.claude/skills/copyright-docs/phases/04-document-assembly.md
@@ -0,0 +1,261 @@

# Phase 4: Document Assembly

Generates an index-style master document that references the section files through markdown links.

> **Spec reference**: [../specs/cpcc-requirements.md](../specs/cpcc-requirements.md)

## Design Principles

1. **Reference, don't embed**: the master document links to section files instead of copying their content
2. **Index + synthesis**: the master document provides navigation plus a software overview
3. **CPCC compliance**: section numbering follows software copyright application requirements
4. **Independently readable**: each section file can be read on its own

## Input

```typescript
interface AssemblyInput {
  output_dir: string;
  metadata: ProjectMetadata;
  consolidation: {
    synthesis: string; // cross-section synthesis
    section_summaries: Array<{
      file: string;
      title: string;
      summary: string;
    }>;
    issues: { errors: Issue[], warnings: Issue[], info: Issue[] };
    stats: { total_sections: number, total_diagrams: number };
  };
}
```

## Execution Flow

```javascript
// 1. Check for blocking issues
if (consolidation.issues.errors.length > 0) {
  const response = await AskUserQuestion({
    questions: [{
      question: `发现 ${consolidation.issues.errors.length} 个严重问题,如何处理?`,
      header: "阻塞问题",
      multiSelect: false,
      options: [
        {label: "查看并修复", description: "显示问题列表,手动修复后重试"},
        {label: "忽略继续", description: "跳过问题检查,继续装配"},
        {label: "终止", description: "停止文档生成"}
      ]
    }]
  });

  if (response === "查看并修复") {
    return { action: "fix_required", errors: consolidation.issues.errors };
  }
  if (response === "终止") {
    return { action: "abort" };
  }
}

// 2. Generate the index document (section contents are not read)
const doc = generateIndexDocument(metadata, consolidation);

// 3. Write the final file
Write(`${outputDir}/${metadata.software_name}-软件设计说明书.md`, doc);
```

## 文档模板
|
||||
|
||||
```markdown
|
||||
<!-- 页眉:{软件名称} - 版本号:{版本号} -->
|
||||
|
||||
# {软件名称} 软件设计说明书
|
||||
|
||||
## 文档信息
|
||||
|
||||
| 项目 | 内容 |
|
||||
|------|------|
|
||||
| 软件名称 | {software_name} |
|
||||
| 版本号 | {version} |
|
||||
| 生成日期 | {date} |
|
||||
|
||||
---
|
||||
|
||||
## 1. 软件概述
|
||||
|
||||
### 1.1 软件背景与用途
|
||||
|
||||
[从 metadata 生成的软件背景描述]
|
||||
|
||||
### 1.2 开发目标与特点
|
||||
|
||||
[从 metadata 生成的目标和特点]
|
||||
|
||||
### 1.3 运行环境与技术架构
|
||||
|
||||
[从 metadata.tech_stack 生成]
|
||||
|
||||
---
|
||||
|
||||
## 文档导航
|
||||
|
||||
{consolidation.synthesis - 软件整体设计思路综述}
|
||||
|
||||
| 章节 | 说明 | 详情 |
|
||||
|------|------|------|
|
||||
| 2. 系统架构设计 | {summary} | [查看](./sections/section-2-architecture.md) |
|
||||
| 3. 功能模块设计 | {summary} | [查看](./sections/section-3-functions.md) |
|
||||
| 4. 核心算法与流程 | {summary} | [查看](./sections/section-4-algorithms.md) |
|
||||
| 5. 数据结构设计 | {summary} | [查看](./sections/section-5-data-structures.md) |
|
||||
| 6. 接口设计 | {summary} | [查看](./sections/section-6-interfaces.md) |
|
||||
| 7. 异常处理设计 | {summary} | [查看](./sections/section-7-exceptions.md) |
|
||||
|
||||
---
|
||||
|
||||
## 附录
|
||||
|
||||
- [跨模块分析报告](./cross-module-summary.md)
|
||||
- [章节文件目录](./sections/)
|
||||
|
||||
---
|
||||
|
||||
<!-- 页脚:生成时间 {timestamp} -->
|
||||
```
## Generator Functions

```javascript
function generateIndexDocument(metadata, consolidation) {
  const date = new Date().toLocaleDateString('zh-CN');

  // Section navigation table rows
  const sectionTable = consolidation.section_summaries
    .map(s => `| ${s.title} | ${s.summary} | [查看](./sections/${s.file}) |`)
    .join('\n');

  return `<!-- 页眉:${metadata.software_name} - 版本号:${metadata.version} -->

# ${metadata.software_name} 软件设计说明书

## 文档信息

| 项目 | 内容 |
|------|------|
| 软件名称 | ${metadata.software_name} |
| 版本号 | ${metadata.version} |
| 生成日期 | ${date} |

---

## 1. 软件概述

### 1.1 软件背景与用途

${generateBackground(metadata)}

### 1.2 开发目标与特点

${generateObjectives(metadata)}

### 1.3 运行环境与技术架构

${generateTechStack(metadata)}

---

## 设计综述

${consolidation.synthesis}

---

## 文档导航

| 章节 | 说明 | 详情 |
|------|------|------|
${sectionTable}

---

## 附录

- [跨模块分析报告](./cross-module-summary.md)
- [章节文件目录](./sections/)

---

<!-- 页脚:生成时间 ${new Date().toISOString()} -->
`;
}

function generateBackground(metadata) {
  const categoryDescriptions = {
    "命令行工具 (CLI)": "提供命令行界面,用户通过终端命令与系统交互",
    "后端服务/API": "提供 RESTful/GraphQL API 接口,支持前端或其他服务调用",
    "SDK/库": "提供可复用的代码库,供其他项目集成使用",
    "数据处理系统": "处理数据导入、转换、分析和导出",
    "自动化脚本": "自动执行重复性任务,提高工作效率"
  };

  return `${metadata.software_name}是一款${metadata.category}软件。${categoryDescriptions[metadata.category] || ''}

本软件基于${metadata.tech_stack.language}语言开发,运行于${metadata.tech_stack.runtime}环境,采用${metadata.tech_stack.framework || '原生'}框架实现核心功能。`;
}

function generateObjectives(metadata) {
  return `本软件旨在${metadata.purpose || '解决特定领域的技术问题'}。

主要技术特点包括${metadata.tech_stack.framework ? `采用 ${metadata.tech_stack.framework} 框架` : '模块化设计'},具备良好的可扩展性和可维护性。`;
}

function generateTechStack(metadata) {
  return `**运行环境**

- 操作系统:${metadata.os || 'Windows/Linux/macOS'}
- 运行时:${metadata.tech_stack.runtime}
- 依赖环境:${metadata.tech_stack.dependencies?.join(', ') || '无特殊依赖'}

**技术架构**

- 架构模式:${metadata.architecture_pattern || '分层架构'}
- 核心框架:${metadata.tech_stack.framework || '原生实现'}
- 主要模块:详见第2章系统架构设计`;
}
```
## Output Structure

```
.workflow/.scratchpad/copyright-{timestamp}/
├── sections/                          # standalone sections (Phase 2 output)
│   ├── section-2-architecture.md
│   ├── section-3-functions.md
│   └── ...
├── cross-module-summary.md            # cross-module report (Phase 2.5 output)
└── {软件名称}-软件设计说明书.md        # index document (this phase's output)
```
## Collaboration with Phase 2.5

The Phase 2.5 consolidation agent must provide:

```typescript
interface ConsolidationOutput {
  synthesis: string;            // design synthesis (2-3 paragraphs)
  section_summaries: Array<{
    file: string;               // file name
    title: string;              // section title (e.g. "2. 系统架构设计")
    summary: string;            // one-sentence description
  }>;
  issues: {...};
  stats: {...};
}
```
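A minimal example of a conforming consolidation return may help make the contract concrete. The values below are illustrative only, not taken from a real run:

```javascript
// Hypothetical ConsolidationOutput instance, for illustration only.
const example = {
  synthesis: "系统采用分层架构……(2-3 段综述)",
  section_summaries: [
    { file: "section-2-architecture.md", title: "2. 系统架构设计", summary: "三层架构与模块依赖" }
  ],
  issues: { errors: [], warnings: [], info: [] },
  stats: { total_sections: 6, total_diagrams: 8 }
};

// Phase 4 only needs paths and summaries to build navigation rows:
const row = example.section_summaries
  .map(s => `| ${s.title} | ${s.summary} | [查看](./sections/${s.file}) |`)
  .join("\n");
```

Note that the assembler never opens `section-2-architecture.md` itself; the file name is enough.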
## Key Changes

| Old design | New design |
|--------|--------|
| Read and concatenate section content | Reference via links; content is not read |
| Embed full sections | Navigation index only |
| Regenerate statistics | Reference cross-module-summary.md |
| One large file | Lean index document |
.claude/skills/copyright-docs/phases/05-compliance-refinement.md
Normal file
@@ -0,0 +1,192 @@
# Phase 5: Compliance Review & Iterative Refinement

Discovery-driven refinement loop until CPCC compliance is met.

## Execution

### Step 1: Extract Compliance Issues

```javascript
function extractComplianceIssues(validationResult, deepAnalysis) {
  return {
    // Missing or incomplete sections
    missingSections: validationResult.details
      .filter(d => !d.pass)
      .map(d => ({
        section: d.name,
        severity: 'critical',
        suggestion: `需要补充 ${d.name} 相关内容`
      })),

    // Features with weak descriptions (< 50 chars)
    weakDescriptions: (deepAnalysis.functions?.feature_list || [])
      .filter(f => !f.description || f.description.length < 50)
      .map(f => ({
        feature: f.name,
        current: f.description || '(无描述)',
        severity: 'warning'
      })),

    // Complex algorithms without detailed flowcharts
    complexAlgorithms: (deepAnalysis.algorithms?.algorithms || [])
      .filter(a => (a.complexity || 0) > 10 && (a.steps?.length || 0) < 5)
      .map(a => ({
        algorithm: a.name,
        complexity: a.complexity,
        file: a.file,
        severity: 'warning'
      })),

    // Data relationships without descriptions
    incompleteRelationships: (deepAnalysis.data_structures?.relationships || [])
      .filter(r => !r.description)
      .map(r => ({from: r.from, to: r.to, severity: 'info'})),

    // Diagram validation issues
    diagramIssues: (deepAnalysis.diagrams?.validation || [])
      .filter(d => !d.valid)
      .map(d => ({file: d.file, issues: d.issues, severity: 'critical'}))
  };
}
```
### Step 2: Build Dynamic Questions

```javascript
function buildComplianceQuestions(issues) {
  const questions = [];

  if (issues.missingSections.length > 0) {
    questions.push({
      question: `发现 ${issues.missingSections.length} 个章节内容不完整,需要补充哪些?`,
      header: "章节补充",
      multiSelect: true,
      options: issues.missingSections.slice(0, 4).map(s => ({
        label: s.section,
        description: s.suggestion
      }))
    });
  }

  if (issues.weakDescriptions.length > 0) {
    questions.push({
      question: `以下 ${issues.weakDescriptions.length} 个功能描述过于简短,请选择需要详细说明的:`,
      header: "功能描述",
      multiSelect: true,
      options: issues.weakDescriptions.slice(0, 4).map(f => ({
        label: f.feature,
        description: `当前:${f.current.substring(0, 30)}...`
      }))
    });
  }

  if (issues.complexAlgorithms.length > 0) {
    questions.push({
      question: `发现 ${issues.complexAlgorithms.length} 个复杂算法缺少详细流程图,是否生成?`,
      header: "算法详解",
      multiSelect: false,
      options: [
        {label: "全部生成 (推荐)", description: "为所有复杂算法生成含分支/循环的流程图"},
        {label: "仅最复杂的", description: `仅为 ${issues.complexAlgorithms[0]?.algorithm} 生成`},
        {label: "跳过", description: "保持当前简单流程图"}
      ]
    });
  }

  questions.push({
    question: "如何处理当前文档?",
    header: "操作",
    multiSelect: false,
    options: [
      {label: "应用修改并继续", description: "应用上述选择,继续检查"},
      {label: "完成文档", description: "当前文档满足要求,生成最终版本"},
      {label: "重新分析", description: "使用不同配置重新分析代码"}
    ]
  });

  return questions.slice(0, 4);
}
```
### Step 3: Apply Updates

```javascript
async function applyComplianceUpdates(responses, issues, analyses, outputDir) {
  const updates = [];

  if (responses['章节补充']) {
    for (const section of responses['章节补充']) {
      const sectionAnalysis = await Task({
        subagent_type: "cli-explore-agent",
        prompt: `深入分析 ${section.section} 所需内容...`
      });
      updates.push({type: 'section_supplement', section: section.section, data: sectionAnalysis});
    }
  }

  if (responses['算法详解'] === '全部生成 (推荐)') {
    for (const algo of issues.complexAlgorithms) {
      const detailedSteps = await analyzeAlgorithmInDepth(algo, analyses);
      const flowchart = generateAlgorithmFlowchart({
        name: algo.algorithm,
        inputs: detailedSteps.inputs,
        outputs: detailedSteps.outputs,
        steps: detailedSteps.steps
      });
      Write(`${outputDir}/diagrams/algorithm-${sanitizeId(algo.algorithm)}-detailed.mmd`, flowchart);
      updates.push({type: 'algorithm_flowchart', algorithm: algo.algorithm});
    }
  }

  return updates;
}
```
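`sanitizeId` is called above but not defined in this phase file. A minimal sketch consistent with its use in a file name (assumed behavior: produce a filesystem-safe slug; not the actual implementation) might be:

```javascript
// Assumed helper: turn an algorithm name into a filesystem-safe id.
// Unicode-aware so Chinese algorithm names survive; a sketch, not the real code.
function sanitizeId(name) {
  return String(name)
    .trim()
    .replace(/[^\p{L}\p{N}]+/gu, '-')  // runs of non-alphanumerics -> single hyphen
    .replace(/^-+|-+$/g, '')           // trim leading/trailing hyphens
    .toLowerCase();
}
```

For example, `sanitizeId('Quick Sort (v2)')` yields a name safe for `algorithm-*-detailed.mmd`.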
### Step 4: Iteration Loop

```javascript
async function runComplianceLoop(documentPath, analyses, metadata, outputDir) {
  let iteration = 0;
  const maxIterations = 5;

  while (iteration < maxIterations) {
    iteration++;

    // Validate the current document
    const document = Read(documentPath);
    const validation = validateCPCCCompliance(document, analyses);

    // Extract issues
    const issues = extractComplianceIssues(validation, analyses);
    const totalIssues = Object.values(issues).flat().length;

    if (totalIssues === 0) {
      console.log("✅ 所有检查通过,文档符合 CPCC 要求");
      break;
    }

    // Ask the user
    const questions = buildComplianceQuestions(issues);
    const responses = await AskUserQuestion({questions});

    if (responses['操作'] === '完成文档') break;
    if (responses['操作'] === '重新分析') return {action: 'restart'};

    // Apply updates
    const updates = await applyComplianceUpdates(responses, issues, analyses, outputDir);

    // Regenerate the document
    const updatedDocument = regenerateDocument(document, updates, analyses);
    Write(documentPath, updatedDocument);

    // Archive this iteration
    Write(`${outputDir}/iterations/v${iteration}.md`, document);
  }

  return {action: 'finalized', iterations: iteration};
}
```
## Output

Final compliant document + iteration history in `iterations/`.
.claude/skills/copyright-docs/specs/cpcc-requirements.md
Normal file
@@ -0,0 +1,121 @@
# CPCC Compliance Requirements

China Copyright Protection Center (CPCC) requirements for software design specifications.

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| Phase 4 | Check document structure before assembly | Document Requirements, Mandatory Sections |
| Phase 4 | Apply correct figure numbering | Figure Numbering Convention |
| Phase 5 | Validate before each iteration | Validation Function |
| Phase 5 | Handle failures during refinement | Error Handling |

---

## Document Requirements

### Format
- [ ] Header carries the software name and version number
- [ ] Page numbers placed at the top-right corner
- [ ] At least 30 lines of text per page (diagram pages excepted)
- [ ] A4 portrait layout, text running left to right

### Mandatory Sections (7 sections)
- [ ] 1. 软件概述
- [ ] 2. 系统架构图
- [ ] 3. 功能模块设计
- [ ] 4. 核心算法与流程
- [ ] 5. 数据结构设计
- [ ] 6. 接口设计
- [ ] 7. 异常处理设计

### Content Requirements
- [ ] All content grounded in code analysis
- [ ] No speculation or future plans
- [ ] No leftover instruction text
- [ ] Valid Mermaid syntax
- [ ] Complete figure numbering and captions
## Validation Function

```javascript
function validateCPCCCompliance(document, analyses) {
  const checks = [
    {name: "软件概述完整性", pass: document.includes("## 1. 软件概述")},
    {name: "系统架构图存在", pass: document.includes("图2-1 系统架构图")},
    {name: "功能模块设计完整", pass: document.includes("## 3. 功能模块设计")},
    {name: "核心算法描述", pass: document.includes("## 4. 核心算法与流程")},
    {name: "数据结构设计", pass: document.includes("## 5. 数据结构设计")},
    {name: "接口设计说明", pass: document.includes("## 6. 接口设计")},
    {name: "异常处理设计", pass: document.includes("## 7. 异常处理设计")},
    {name: "Mermaid图表语法", pass: !document.includes("mermaid error")},
    {name: "页眉信息", pass: document.includes("页眉")},
    {name: "页码说明", pass: document.includes("页码")}
  ];

  return {
    passed: checks.filter(c => c.pass).length,
    total: checks.length,
    details: checks
  };
}
```
## Software Categories

| Category | Document Focus |
|----------|----------------|
| 命令行工具 (CLI) | Commands, parameters, usage flow |
| 后端服务/API | Endpoints, protocols, data flow |
| SDK/库 | Interfaces, integration, usage examples |
| 数据处理系统 | Data flow, transformations, ETL |
| 自动化脚本 | Workflows, triggers, scheduling |

## Figure Numbering Convention

| Section | Figure | Title |
|---------|--------|-------|
| 2 | 图2-1 | 系统架构图 |
| 3 | 图3-1 | 功能模块结构图 |
| 4 | 图4-N | {算法名称}流程图 |
| 5 | 图5-1 | 数据结构类图 |
| 6 | 图6-N | {接口名称}时序图 |
| 7 | 图7-1 | 异常处理流程图 |
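Phase 4 applies this table through a `getFigureNumbers(sectionIndex)` helper whose implementation is not shown in this spec. A minimal sketch consistent with the convention (the "N" rows get a running counter) might be:

```javascript
// Sketch only: map a section index to its figure numbers per the
// convention table above. fixed=true means exactly one figure.
const FIGURE_CONVENTION = {
  2: { prefix: '图2-', fixed: true },   // 系统架构图
  3: { prefix: '图3-', fixed: true },   // 功能模块结构图
  4: { prefix: '图4-', fixed: false },  // one flowchart per algorithm
  5: { prefix: '图5-', fixed: true },   // 数据结构类图
  6: { prefix: '图6-', fixed: false },  // one sequence diagram per interface
  7: { prefix: '图7-', fixed: true }    // 异常处理流程图
};

function getFigureNumbers(sectionIndex, count = 1) {
  const conv = FIGURE_CONVENTION[sectionIndex];
  if (!conv) return [];
  const n = conv.fixed ? 1 : count;
  return Array.from({length: n}, (_, i) => `${conv.prefix}${i + 1}`);
}
```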
## Error Handling

| Error | Recovery |
|-------|----------|
| Analysis timeout | Reduce scope, retry |
| Missing section data | Re-run targeted agent |
| Diagram validation fails | Regenerate with fixes |
| User abandons iteration | Save progress, allow resume |

---

## Integration with Phases

**Phase 4 - Document Assembly**:
```javascript
// Before assembling the document
const docChecks = [
  {check: "页眉格式", value: `<!-- 页眉:${metadata.software_name} - 版本号:${metadata.version} -->`},
  {check: "页码说明", value: `<!-- 注:最终文档页码位于每页右上角 -->`}
];

// Apply figure numbering from the convention table
const figureNumbers = getFigureNumbers(sectionIndex);
```

**Phase 5 - Compliance Refinement**:
```javascript
// In 05-compliance-refinement.md
const validation = validateCPCCCompliance(document, analyses);

if (validation.passed < validation.total) {
  // Failed checks become discovery questions
  const failedChecks = validation.details.filter(d => !d.pass);
  discoveries.complianceIssues = failedChecks;
}
```
.claude/skills/copyright-docs/templates/agent-base.md
Normal file
@@ -0,0 +1,200 @@
# Agent Base Template

Base template for all analysis agents, ensuring consistency and efficient execution.

## Common Prompt Structure

```
[ROLE] 你是{角色},专注于{职责}。

[TASK]
分析代码库,生成 CPCC 合规的章节文档。
- 输出: {output_dir}/sections/(unknown)
- 格式: Markdown + Mermaid
- 范围: {scope_path}

[CONSTRAINTS]
- 只描述已实现的代码,不臆测
- 中文输出,技术术语可用英文
- Mermaid 图表必须可渲染
- 文件/类/函数需包含路径引用

[OUTPUT_FORMAT]
1. 直接写入 MD 文件
2. 返回 JSON 简要信息

[QUALITY_CHECKLIST]
- [ ] 包含至少1个 Mermaid 图表
- [ ] 每个子章节有实质内容 (>100字)
- [ ] 代码引用格式: `src/path/file.ts:line`
- [ ] 图表编号正确 (图N-M)
```
## Variables

| Variable | Source | Example |
|------|------|------|
| {output_dir} | Created in Phase 1 | .workflow/.scratchpad/copyright-xxx |
| {software_name} | metadata.software_name | 智能数据分析系统 |
| {scope_path} | metadata.scope_path | src/ |
| {tech_stack} | metadata.tech_stack | TypeScript/Node.js |
## Agent Prompt Template

### Condensed Version (recommended)

```javascript
const agentPrompt = (agent, meta, outDir) => `
[ROLE] ${AGENT_ROLES[agent]}

[TASK]
分析 ${meta.scope_path},生成 ${AGENT_SECTIONS[agent]}。
输出: ${outDir}/sections/${AGENT_FILES[agent]}

[TEMPLATE]
${AGENT_TEMPLATES[agent]}

[FOCUS]
${AGENT_FOCUS[agent].join('\n')}

[RETURN]
{"status":"completed","output_file":"${AGENT_FILES[agent]}","summary":"<50字>","cross_module_notes":[],"stats":{}}
`;
```
### Configuration Maps

```javascript
const AGENT_ROLES = {
  architecture: "系统架构师,专注于分层设计和模块依赖",
  functions: "功能分析师,专注于功能点识别和交互",
  algorithms: "算法工程师,专注于核心逻辑和复杂度",
  data_structures: "数据建模师,专注于实体关系和类型",
  interfaces: "API设计师,专注于接口契约和协议",
  exceptions: "可靠性工程师,专注于异常处理和恢复"
};

const AGENT_SECTIONS = {
  architecture: "Section 2: 系统架构图",
  functions: "Section 3: 功能模块设计",
  algorithms: "Section 4: 核心算法与流程",
  data_structures: "Section 5: 数据结构设计",
  interfaces: "Section 6: 接口设计",
  exceptions: "Section 7: 异常处理设计"
};

const AGENT_FILES = {
  architecture: "section-2-architecture.md",
  functions: "section-3-functions.md",
  algorithms: "section-4-algorithms.md",
  data_structures: "section-5-data-structures.md",
  interfaces: "section-6-interfaces.md",
  exceptions: "section-7-exceptions.md"
};

const AGENT_FOCUS = {
  architecture: [
    "1. 分层: 识别代码层次 (Controller/Service/Repository)",
    "2. 模块: 核心模块及职责边界",
    "3. 依赖: 模块间依赖方向",
    "4. 数据流: 请求/数据的流动路径"
  ],
  functions: [
    "1. 功能点: 枚举所有用户可见功能",
    "2. 模块分组: 按业务域分组",
    "3. 入口: 每个功能的代码入口",
    "4. 交互: 功能间的调用关系"
  ],
  algorithms: [
    "1. 核心算法: 业务逻辑的关键算法",
    "2. 流程步骤: 分支/循环/条件",
    "3. 复杂度: 时间/空间复杂度",
    "4. 输入输出: 参数和返回值"
  ],
  data_structures: [
    "1. 实体: class/interface/type 定义",
    "2. 属性: 字段类型和可见性",
    "3. 关系: 继承/组合/关联",
    "4. 枚举: 枚举类型及其值"
  ],
  interfaces: [
    "1. API端点: 路径/方法/说明",
    "2. 参数: 请求参数类型和校验",
    "3. 响应: 响应格式和状态码",
    "4. 时序: 典型调用流程"
  ],
  exceptions: [
    "1. 异常类型: 自定义异常类",
    "2. 错误码: 错误码定义和含义",
    "3. 处理模式: try-catch/中间件",
    "4. 恢复策略: 重试/降级/告警"
  ]
};
```
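`agentPrompt` also interpolates `AGENT_TEMPLATES[agent]`, which is not defined in this file. Its assumed shape, sketched here only for the architecture entry and mirroring the Section 2 skeleton shown under 模板驱动 below, would be one markdown skeleton string per agent:

```javascript
// Assumed shape of AGENT_TEMPLATES - a plausible sketch, not the real map.
// The fence characters are built dynamically to keep this sample embeddable.
const fence = '`'.repeat(3);

const AGENT_TEMPLATES = {
  architecture: [
    '## 2. 系统架构图',
    '{intro}',
    `${fence}mermaid`,
    '{diagram}',
    fence,
    '**图2-1 系统架构图**',
    '### 2.1 {subsection}',
    '{content}'
  ].join('\n')
};
```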
## Efficiency Optimizations

### 1. Cut Redundancy

**Before (verbose)**:
```
你是一个专业的系统架构师,具有丰富的软件设计经验。
你需要分析代码库,识别系统的分层结构...
```

**After (condensed)**:
```
[ROLE] 系统架构师,专注于分层设计和模块依赖。
[TASK] 分析 src/,生成系统架构图章节。
```

### 2. Template-Driven

**Before (descriptive)**:
```
请按照以下格式输出:
首先写一个二级标题...
然后添加一个Mermaid图...
```

**After (templated)**:
```
[TEMPLATE]
## 2. 系统架构图
{intro}
\`\`\`mermaid
{diagram}
\`\`\`
**图2-1 系统架构图**
### 2.1 {subsection}
{content}
```

### 3. Explicit Focus

**Before (vague)**:
```
分析项目的各个方面,包括架构、模块、依赖等
```

**After (specific)**:
```
[FOCUS]
1. 分层: Controller/Service/Repository
2. 模块: 职责边界
3. 依赖: 方向性
4. 数据流: 路径
```

### 4. Concise Returns

**Before (verbose)**:
```
请返回详细的分析结果,包括所有发现的问题...
```

**After (structured)**:
```
[RETURN]
{"status":"completed","output_file":"xxx.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
```
.claude/skills/issue-manage/SKILL.md
Normal file
@@ -0,0 +1,244 @@
---
name: issue-manage
description: Interactive issue management with menu-driven CRUD operations. Use when managing issues, viewing issue status, editing issue fields, or performing bulk operations on issues. Triggers on "manage issue", "list issues", "edit issue", "delete issue", "bulk update", "issue dashboard".
allowed-tools: Bash, Read, Write, AskUserQuestion, Task, Glob
---

# Issue Management Skill

Interactive menu-driven interface for issue CRUD operations via the `ccw issue` CLI.

## Quick Start

Ask me:
- "Show all issues" → List with filters
- "View issue GH-123" → Detailed inspection
- "Edit issue priority" → Modify fields
- "Delete old issues" → Remove with confirmation
- "Bulk update status" → Batch operations

## CLI Endpoints

```bash
# Core operations
ccw issue list                      # List all issues
ccw issue list <id> --json          # Get issue details
ccw issue status <id>               # Detailed status
ccw issue init <id> --title "..."   # Create issue
ccw issue task <id> --title "..."   # Add task
ccw issue bind <id> <solution-id>   # Bind solution

# Queue management
ccw issue queue                     # List current queue
ccw issue queue add <id>            # Add to queue
ccw issue queue list                # Queue history
ccw issue queue switch <queue-id>   # Switch queue
ccw issue queue archive             # Archive queue
ccw issue queue delete <queue-id>   # Delete queue
ccw issue next                      # Get next task
ccw issue done <queue-id>           # Mark completed
```
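The skill consumes these endpoints as JSON. A sketch of the parsing step, where the literal sample stands in for real `ccw issue list --json` output (the exact field names are an assumption):

```javascript
// Parse CLI output defensively: fall back to an empty list on no output.
// "raw" stands in for Bash('ccw issue list --json').
const raw = '[{"id":"GH-123","status":"planned"},{"id":"GH-124","status":"executing"}]';
const issues = JSON.parse(raw || '[]');

// e.g. collect the IDs that "Queue All Planned" would enqueue
const planned = issues.filter(i => i.status === 'planned').map(i => i.id);
```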
## Operations

### 1. LIST 📋

Filter and browse issues:

```
┌─ Filter by Status ─────────────────┐
│ □ All          □ Registered        │
│ □ Planned      □ Queued            │
│ □ Executing    □ Completed         │
└────────────────────────────────────┘
```

**Flow**:
1. Ask filter preferences → `ccw issue list --json`
2. Display table: ID | Status | Priority | Title
3. Select issue for detail view

### 2. VIEW 🔍

Detailed issue inspection:

```
┌─ Issue: GH-123 ─────────────────────┐
│ Title: Fix authentication bug       │
│ Status: planned | Priority: P2      │
│ Solutions: 2 (1 bound)              │
│ Tasks: 5 pending                    │
└─────────────────────────────────────┘
```

**Flow**:
1. Fetch `ccw issue status <id> --json`
2. Display issue + solutions + tasks
3. Offer actions: Edit | Plan | Queue | Delete

### 3. EDIT ✏️

Modify issue fields:

| Field | Options |
|-------|---------|
| Title | Free text |
| Priority | P1-P5 |
| Status | registered → completed |
| Context | Problem description |
| Labels | Comma-separated |

**Flow**:
1. Select field to edit
2. Show current value
3. Collect new value via AskUserQuestion
4. Update `.workflow/issues/issues.jsonl`

### 4. DELETE 🗑️

Remove with confirmation:

```
⚠️ Delete issue GH-123?
This will also remove:
- Associated solutions
- Queued tasks

[Delete] [Cancel]
```

**Flow**:
1. Confirm deletion via AskUserQuestion
2. Remove from `issues.jsonl`
3. Clean up `solutions/<id>.jsonl`
4. Remove from `queue.json`

### 5. BULK 📦

Batch operations:

| Operation | Description |
|-----------|-------------|
| Update Status | Change multiple issues |
| Update Priority | Batch priority change |
| Add Labels | Tag multiple issues |
| Delete Multiple | Bulk removal |
| Queue All Planned | Add all planned to queue |
| Retry All Failed | Reset failed tasks |
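A minimal sketch of how a bulk operation might patch the selected issues in memory before the file is written back (the record fields follow the Edit table above and are otherwise assumptions):

```javascript
// Sketch: apply one bulk patch to a selected set of issues.
// "issues" stands in for the parsed contents of issues.jsonl.
function bulkUpdate(issues, selectedIds, patch) {
  return issues.map(issue =>
    selectedIds.includes(issue.id) ? { ...issue, ...patch } : issue
  );
}

const issues = [
  { id: 'GH-1', status: 'registered', priority: 'P3' },
  { id: 'GH-2', status: 'planned', priority: 'P2' }
];
const result = bulkUpdate(issues, ['GH-1', 'GH-2'], { status: 'queued' });
```

Untouched fields (e.g. `priority`) are preserved; only the patched keys change.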
## Workflow

```
┌──────────────────────────────────────┐
│ Main Menu                            │
│ ┌────┐ ┌────┐ ┌────┐ ┌────┐          │
│ │List│ │View│ │Edit│ │Bulk│          │
│ └──┬─┘ └──┬─┘ └──┬─┘ └──┬─┘          │
└────┼──────┼──────┼──────┼────────────┘
     │      │      │      │
     ▼      ▼      ▼      ▼
  Filter  Detail Fields  Multi
  Select  Actions Update Select
     │      │      │      │
     └──────┴──────┴──────┘
                │
                ▼
          Back to Menu
```

## Implementation Guide

### Entry Point

```javascript
// Parse input for an issue ID
const issueId = input.match(/^([A-Z]+-\d+|ISS-\d+)/i)?.[1];

// Show main menu
await showMainMenu(issueId);
```

### Main Menu Pattern

```javascript
// 1. Fetch dashboard data
const issues = JSON.parse(Bash('ccw issue list --json') || '[]');
const queue = JSON.parse(Bash('ccw issue queue --json 2>/dev/null') || '{}');

// 2. Display summary
console.log(`Issues: ${issues.length} | Queue: ${queue.pending_count || 0} pending`);

// 3. Ask action via AskUserQuestion
const action = AskUserQuestion({
  questions: [{
    question: 'What would you like to do?',
    header: 'Action',
    options: [
      { label: 'List Issues', description: 'Browse with filters' },
      { label: 'View Issue', description: 'Detail view' },
      { label: 'Edit Issue', description: 'Modify fields' },
      { label: 'Bulk Operations', description: 'Batch actions' }
    ]
  }]
});

// 4. Route to handler
```
### Filter Pattern

```javascript
const filter = AskUserQuestion({
  questions: [{
    question: 'Filter by status?',
    header: 'Filter',
    multiSelect: true,
    options: [
      { label: 'All', description: 'Show all' },
      { label: 'Registered', description: 'Unplanned' },
      { label: 'Planned', description: 'Has solution' },
      { label: 'Executing', description: 'In progress' }
    ]
  }]
});
```

### Edit Pattern

```javascript
// Select field
const field = AskUserQuestion({...});

// Get new value based on field type
// For Priority: show P1-P5 options
// For Status: show status options
// For Title: accept free text via "Other"

// Update file
const issuesPath = '.workflow/issues/issues.jsonl';
// Read → Parse → Update → Write
```
## Data Files

| File | Purpose |
|------|---------|
| `.workflow/issues/issues.jsonl` | Issue records |
| `.workflow/issues/solutions/<id>.jsonl` | Solutions per issue |
| `.workflow/issues/queue.json` | Execution queue |

## Error Handling

| Error | Resolution |
|-------|------------|
| No issues found | Suggest `/issue:new` to create |
| Issue not found | Show available issues, re-prompt |
| Write failure | Check file permissions |
| Queue error | Display ccw error message |

## Related Commands

- `/issue:new` - Create structured issue
- `/issue:plan` - Generate solution
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute tasks
.claude/skills/project-analyze/SKILL.md
Normal file
@@ -0,0 +1,162 @@
---
name: project-analyze
description: Multi-phase iterative project analysis with Mermaid diagrams. Generates architecture reports, design reports, and method analysis reports. Use when analyzing codebases, understanding project structure, reviewing architecture, exploring design patterns, or documenting system components. Triggers on "analyze project", "architecture report", "design analysis", "code structure", "system overview".
allowed-tools: Task, AskUserQuestion, Read, Bash, Glob, Grep, Write
---

# Project Analysis Skill

Generate comprehensive project analysis reports through a multi-phase iterative workflow.

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────┐
│                 Context-Optimized Architecture                  │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Phase 1: Requirements → analysis-config.json                   │
│      ↓                                                          │
│  Phase 2: Exploration → initial exploration, fix the scope      │
│      ↓                                                          │
│  Phase 3: Parallel Agents → sections/section-*.md (write MD)    │
│      ↓ return brief JSON                                        │
│  Phase 3.5: Consolidation → consolidation-summary.md            │
│      Agent ↓ return quality score + issue list                  │
│      ↓                                                          │
│  Phase 4: Assembly → merge MD + quality appendix                │
│      ↓                                                          │
│  Phase 5: Refinement → final report                             │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

## Key Design Principles

1. **Agents write MD directly**: avoids the context overhead of JSON → MD conversion
2. **Brief returns**: agents return only paths and summaries, never full content
3. **Consolidation agent**: a dedicated agent handles cross-section issue detection and quality scoring
4. **Merge by reference**: Phase 4 merges by reading files instead of passing content through context
5. **Paragraph-style prose**: no bullet dumps; progressive, objective, academic expression
## Execution Flow

```
┌─────────────────────────────────────────────────────────────────┐
│ Phase 1: Requirements Discovery                                 │
│   → Read: phases/01-requirements-discovery.md                   │
│   → Collect: report type, depth level, scope, focus areas       │
│   → Output: analysis-config.json                                │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2: Project Exploration                                    │
│   → Read: phases/02-project-exploration.md                      │
│   → Launch: parallel exploration agents                         │
│   → Output: exploration context for Phase 3                     │
├─────────────────────────────────────────────────────────────────┤
│ Phase 3: Deep Analysis (Parallel Agents)                        │
│   → Read: phases/03-deep-analysis.md                            │
│   → Reference: specs/quality-standards.md                       │
│   → Each Agent: analyze code → write sections/section-*.md      │
│   → Return: {"status", "output_file", "summary", "cross_notes"} │
├─────────────────────────────────────────────────────────────────┤
│ Phase 3.5: Consolidation (New!)                                 │
│   → Read: phases/03.5-consolidation.md                          │
│   → Input: agents' brief returns + cross_module_notes           │
│   → Analyze: consistency/completeness/cross-links/quality       │
│   → Output: consolidation-summary.md                            │
│   → Return: {"quality_score", "issues", "stats"}                │
├─────────────────────────────────────────────────────────────────┤
│ Phase 4: Report Generation                                      │
│   → Read: phases/04-report-generation.md                        │
│   → Check: prompt the user if errors exist                      │
│   → Merge: Executive Summary + sections/*.md + quality appendix │
│   → Output: {TYPE}-REPORT.md                                    │
├─────────────────────────────────────────────────────────────────┤
│ Phase 5: Iterative Refinement                                   │
│   → Read: phases/05-iterative-refinement.md                     │
│   → Reference: specs/quality-standards.md                       │
│   → Loop: find issues → ask → fix → recheck                     │
└─────────────────────────────────────────────────────────────────┘
```
|
||||
## Report Types

| Type | Output | Agents | Focus |
|------|--------|--------|-------|
| `architecture` | ARCHITECTURE-REPORT.md | 5 | System structure, modules, dependencies |
| `design` | DESIGN-REPORT.md | 4 | Patterns, classes, interfaces |
| `methods` | METHODS-REPORT.md | 4 | Algorithms, critical paths, APIs |
| `comprehensive` | COMPREHENSIVE-REPORT.md | All | All of the above combined |

## Agent Configuration by Report Type

### Architecture Report

| Agent | Output File | Section |
|-------|-------------|---------|
| overview | section-overview.md | System Overview |
| layers | section-layers.md | Layer Analysis |
| dependencies | section-dependencies.md | Module Dependencies |
| dataflow | section-dataflow.md | Data Flow |
| entrypoints | section-entrypoints.md | Entry Points |

### Design Report

| Agent | Output File | Section |
|-------|-------------|---------|
| patterns | section-patterns.md | Design Patterns |
| classes | section-classes.md | Class Relationships |
| interfaces | section-interfaces.md | Interface Contracts |
| state | section-state.md | State Management |

### Methods Report

| Agent | Output File | Section |
|-------|-------------|---------|
| algorithms | section-algorithms.md | Core Algorithms |
| paths | section-paths.md | Critical Code Paths |
| apis | section-apis.md | Public API Reference |
| logic | section-logic.md | Complex Logic |

## Directory Setup

```javascript
// Generate a timestamped directory name
const timestamp = new Date().toISOString().slice(0,19).replace(/[-:T]/g, '');
const dir = `.workflow/.scratchpad/analyze-${timestamp}`;

// Windows (cmd)
Bash(`mkdir "${dir}\\sections"`);
Bash(`mkdir "${dir}\\iterations"`);

// Unix/macOS
// Bash(`mkdir -p "${dir}/sections" "${dir}/iterations"`);
```

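The timestamp expression above can be sanity-checked against a fixed ISO string (the date below is an arbitrary example; the real code uses `new Date()`):

```javascript
// slice(0, 19) keeps "YYYY-MM-DDTHH:MM:SS"; the replace strips "-", ":" and "T",
// yielding a 14-digit, filesystem-safe directory suffix.
const iso = '2025-01-02T03:04:05.678Z'; // example input
const timestamp = iso.slice(0, 19).replace(/[-:T]/g, '');
const dir = `.workflow/.scratchpad/analyze-${timestamp}`;
console.log(dir); // .workflow/.scratchpad/analyze-20250102030405
```
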
## Output Structure

```
.workflow/.scratchpad/analyze-{timestamp}/
├── analysis-config.json          # Phase 1
├── sections/                     # Phase 3 (written directly by agents)
│   ├── section-overview.md
│   ├── section-layers.md
│   ├── section-dependencies.md
│   └── ...
├── consolidation-summary.md      # Phase 3.5
├── {TYPE}-REPORT.md              # Final Output
└── iterations/                   # Phase 5
    ├── v1.md
    └── v2.md
```

## Reference Documents

| Document | Purpose |
|----------|---------|
| [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md) | User interaction, config collection |
| [phases/02-project-exploration.md](phases/02-project-exploration.md) | Initial exploration |
| [phases/03-deep-analysis.md](phases/03-deep-analysis.md) | Parallel agent analysis |
| [phases/03.5-consolidation.md](phases/03.5-consolidation.md) | Cross-section consolidation |
| [phases/04-report-generation.md](phases/04-report-generation.md) | Report assembly |
| [phases/05-iterative-refinement.md](phases/05-iterative-refinement.md) | Quality refinement |
| [specs/quality-standards.md](specs/quality-standards.md) | Quality gates, standards |
| [specs/writing-style.md](specs/writing-style.md) | Paragraph-style academic writing guidelines |
| [../_shared/mermaid-utils.md](../_shared/mermaid-utils.md) | Shared Mermaid utilities |

@@ -0,0 +1,79 @@
# Phase 1: Requirements Discovery

Collect user requirements before analysis begins.

## Execution

### Step 1: Report Type Selection

```javascript
AskUserQuestion({
  questions: [{
    question: "What type of project analysis report would you like?",
    header: "Report Type",
    multiSelect: false,
    options: [
      {label: "Architecture (Recommended)", description: "System structure, module relationships, layer analysis, dependency graph"},
      {label: "Design", description: "Design patterns, class relationships, component interactions, abstraction analysis"},
      {label: "Methods", description: "Key algorithms, critical code paths, core function explanations with examples"},
      {label: "Comprehensive", description: "All above combined into a complete project analysis"}
    ]
  }]
})
```

### Step 2: Depth Level Selection

```javascript
AskUserQuestion({
  questions: [{
    question: "What depth level do you need?",
    header: "Depth",
    multiSelect: false,
    options: [
      {label: "Overview", description: "High-level understanding, suitable for onboarding"},
      {label: "Detailed", description: "In-depth analysis with code examples"},
      {label: "Deep-Dive", description: "Exhaustive analysis with implementation details"}
    ]
  }]
})
```

### Step 3: Scope Definition

```javascript
AskUserQuestion({
  questions: [{
    question: "What scope should the analysis cover?",
    header: "Scope",
    multiSelect: false,
    options: [
      {label: "Full Project", description: "Analyze entire codebase"},
      {label: "Specific Module", description: "Focus on a specific module or directory"},
      {label: "Custom Path", description: "Specify custom path pattern"}
    ]
  }]
})
```

## Focus Areas Mapping

| Report Type | Focus Areas |
|-------------|-------------|
| Architecture | Layer Structure, Module Dependencies, Entry Points, Data Flow |
| Design | Design Patterns, Class Relationships, Interface Contracts, State Management |
| Methods | Core Algorithms, Critical Paths, Public APIs, Complex Logic |
| Comprehensive | All of the above combined |

## Output

Save configuration to `analysis-config.json`:

```json
{
  "type": "architecture|design|methods|comprehensive",
  "depth": "overview|detailed|deep-dive",
  "scope": "**/*|src/**/*|custom",
  "focus_areas": ["..."]
}
```

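A minimal sketch of validating this configuration shape before Phase 2 consumes it (`validateConfig` and the `ALLOWED` table are illustrative helpers, not part of the skill):

```javascript
// Allowed enum values mirror the analysis-config.json schema above.
const ALLOWED = {
  type: ['architecture', 'design', 'methods', 'comprehensive'],
  depth: ['overview', 'detailed', 'deep-dive']
};

function validateConfig(config) {
  return ALLOWED.type.includes(config.type) &&
    ALLOWED.depth.includes(config.depth) &&
    typeof config.scope === 'string' &&
    Array.isArray(config.focus_areas);
}

console.log(validateConfig({ type: 'architecture', depth: 'detailed', scope: '**/*', focus_areas: [] })); // true
console.log(validateConfig({ type: 'full', depth: 'detailed', scope: '**/*', focus_areas: [] }));         // false
```
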
176
.claude/skills/project-analyze/phases/02-project-exploration.md
Normal file
@@ -0,0 +1,176 @@

# Phase 2: Project Exploration

Launch parallel exploration agents based on report type and task context.

## Execution

### Step 1: Intelligent Angle Selection

```javascript
// Angle presets based on report type (adapted from lite-plan.md)
const ANGLE_PRESETS = {
  architecture: ['layer-structure', 'module-dependencies', 'entry-points', 'data-flow'],
  design: ['design-patterns', 'class-relationships', 'interface-contracts', 'state-management'],
  methods: ['core-algorithms', 'critical-paths', 'public-apis', 'complex-logic'],
  comprehensive: ['architecture', 'patterns', 'dependencies', 'integration-points']
};

// Depth-based angle count (keys match the depth values collected in Phase 1)
const angleCount = {
  overview: 2,
  detailed: 3,
  'deep-dive': 4
};

function selectAngles(reportType, depth) {
  const preset = ANGLE_PRESETS[reportType] || ANGLE_PRESETS.comprehensive;
  const count = angleCount[depth] || 3;
  return preset.slice(0, count);
}

const selectedAngles = selectAngles(config.type, config.depth);

console.log(`
## Exploration Plan

Report Type: ${config.type}
Depth: ${config.depth}
Selected Angles: ${selectedAngles.join(', ')}

Launching ${selectedAngles.length} parallel explorations...
`);
```

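A runnable sketch of the selection logic above, with the presets trimmed for brevity:

```javascript
// Trimmed presets: only two report types are shown here.
const ANGLE_PRESETS = {
  architecture: ['layer-structure', 'module-dependencies', 'entry-points', 'data-flow'],
  comprehensive: ['architecture', 'patterns', 'dependencies', 'integration-points']
};
const angleCount = { overview: 2, detailed: 3, 'deep-dive': 4 };

function selectAngles(reportType, depth) {
  const preset = ANGLE_PRESETS[reportType] || ANGLE_PRESETS.comprehensive;
  return preset.slice(0, angleCount[depth] || 3);
}

console.log(selectAngles('architecture', 'overview'));  // ['layer-structure', 'module-dependencies']
console.log(selectAngles('unknown-type', 'deep-dive')); // falls back to the comprehensive preset, 4 angles
```

Unknown report types fall back to the `comprehensive` preset, and unknown depths to 3 angles.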
### Step 2: Launch Parallel Agents (Direct Output)

**⚠️ CRITICAL**: Agents write output files directly. No aggregation needed.

```javascript
// Launch agents with pre-assigned angles
const explorationTasks = selectedAngles.map((angle, index) =>
  Task({
    subagent_type: "cli-explore-agent",
    run_in_background: false, // ⚠️ MANDATORY: Must wait for results
    description: `Explore: ${angle}`,
    prompt: `
## Exploration Objective
Execute **${angle}** exploration for ${config.type} project analysis report.

## Assigned Context
- **Exploration Angle**: ${angle}
- **Report Type**: ${config.type}
- **Depth**: ${config.depth}
- **Scope**: ${config.scope}
- **Exploration Index**: ${index + 1} of ${selectedAngles.length}
- **Output File**: ${sessionFolder}/exploration-${angle}.json

## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{relevant_keyword}" --type ts (locate relevant files)
3. Analyze project from ${angle} perspective

## Exploration Strategy (${angle} focus)

**Step 1: Structural Scan** (Bash)
- get_modules_by_depth.sh → identify modules related to ${angle}
- find/rg → locate files relevant to ${angle} aspect
- Analyze imports/dependencies from ${angle} perspective

**Step 2: Semantic Analysis** (Gemini/Qwen CLI)
- How does existing code handle ${angle} concerns?
- What patterns are used for ${angle}?
- Identify key architectural decisions related to ${angle}

**Step 3: Write Output Directly**
- Consolidate ${angle} findings into JSON
- Write to output file path specified above

## Expected Output Schema

**File**: ${sessionFolder}/exploration-${angle}.json

\`\`\`json
{
  "angle": "${angle}",
  "findings": {
    "structure": [
      { "component": "...", "type": "module|layer|service", "description": "..." }
    ],
    "patterns": [
      { "name": "...", "usage": "...", "files": ["path1", "path2"] }
    ],
    "relationships": [
      { "from": "...", "to": "...", "type": "depends|imports|calls", "strength": "high|medium|low" }
    ],
    "key_files": [
      { "path": "src/file.ts", "relevance": 0.85, "rationale": "Core ${angle} logic" }
    ]
  },
  "insights": [
    { "observation": "...", "impact": "high|medium|low", "recommendation": "..." }
  ],
  "_metadata": {
    "exploration_angle": "${angle}",
    "exploration_index": ${index + 1},
    "report_type": "${config.type}",
    "timestamp": "ISO8601"
  }
}
\`\`\`

## Success Criteria
- [ ] get_modules_by_depth.sh executed
- [ ] At least 3 relevant files identified with ${angle} rationale
- [ ] Patterns are actionable (code examples, not generic advice)
- [ ] Relationships include concrete file references
- [ ] JSON output written to ${sessionFolder}/exploration-${angle}.json
- [ ] Return: 2-3 sentence summary of ${angle} findings
`
  })
);

// Execute all exploration tasks in parallel
```

## Output

Session folder structure after exploration:

```
${sessionFolder}/
├── exploration-{angle1}.json # Agent 1 direct output
├── exploration-{angle2}.json # Agent 2 direct output
├── exploration-{angle3}.json # Agent 3 direct output (if applicable)
└── exploration-{angle4}.json # Agent 4 direct output (if applicable)
```

## Downstream Usage (Phase 3 Analysis Input)

Subsequent analysis phases MUST read exploration outputs as input:

```javascript
// Discover exploration files by known angle pattern
const explorationData = {};
selectedAngles.forEach(angle => {
  const filePath = `${sessionFolder}/exploration-${angle}.json`;
  explorationData[angle] = JSON.parse(Read(filePath));
});

// Pass to analysis agent
Task({
  subagent_type: "analysis-agent",
  prompt: `
## Analysis Input

### Exploration Data by Angle
${Object.entries(explorationData).map(([angle, data]) => `
#### ${angle}
${JSON.stringify(data, null, 2)}
`).join('\n')}

## Analysis Task
Synthesize findings from all exploration angles...
`
});
```

854
.claude/skills/project-analyze/phases/03-deep-analysis.md
Normal file
@@ -0,0 +1,854 @@

# Phase 3: Deep Analysis

Parallel agents write the design-report sections and return brief summaries.

> **Standards**: [../specs/quality-standards.md](../specs/quality-standards.md)
> **Writing style**: [../specs/writing-style.md](../specs/writing-style.md)

## Exploration → Agent Auto-Assignment

Each analysis agent is assigned automatically from the names of the exploration files produced in Phase 2.

### Mapping Rules

```javascript
// Exploration angle → agent mapping (matched by file name; content is not read)
const EXPLORATION_TO_AGENT = {
  // Architecture report angles
  'layer-structure': 'layers',
  'module-dependencies': 'dependencies',
  'entry-points': 'entrypoints',
  'data-flow': 'dataflow',

  // Design report angles
  'design-patterns': 'patterns',
  'class-relationships': 'classes',
  'interface-contracts': 'interfaces',
  'state-management': 'state',

  // Methods report angles
  'core-algorithms': 'algorithms',
  'critical-paths': 'paths',
  'public-apis': 'apis',
  'complex-logic': 'logic',

  // Comprehensive angles
  'architecture': 'overview',
  'patterns': 'patterns',
  'dependencies': 'dependencies',
  'integration-points': 'entrypoints'
};

// Extract the angle from a file name
function extractAngle(filename) {
  // exploration-layer-structure.json → layer-structure
  const match = filename.match(/exploration-(.+)\.json$/);
  return match ? match[1] : null;
}

// Assign an agent
function assignAgent(explorationFile) {
  const angle = extractAngle(path.basename(explorationFile));
  return EXPLORATION_TO_AGENT[angle] || null;
}

// Agent configs (used by buildAgentPrompt)
const AGENT_CONFIGS = {
  overview: {
    role: 'Chief System Architect',
    task: 'From the full codebase, write the "Overall Architecture" section, surfacing the core value proposition and top-level technical decisions',
    focus: 'Domain boundaries and positioning, architectural paradigm, core technical decisions, top-level module partitioning',
    constraint: 'Avoid listing directory structures; emphasize design intent and include at least 1 Mermaid architecture diagram'
  },
  layers: {
    role: 'Senior Software Designer',
    task: 'Analyze the system\'s logical layering and write the "Logical Viewpoint and Layered Architecture" section',
    focus: 'Responsibility allocation, data flow and constraints, boundary isolation strategy, exception handling flow',
    constraint: 'Do not enumerate file names; focus on inter-layer contracts and the craft of isolation'
  },
  dependencies: {
    role: 'Integration Architecture Expert',
    task: 'Examine external connections and internal coupling; write the "Dependency Management and Ecosystem Integration" section',
    focus: 'External integration topology, core dependency analysis, dependency injection and inversion of control, supply-chain security',
    constraint: 'Do not simply list dependency configs; analyze the integration strategy and risk-control model'
  },
  dataflow: {
    role: 'Data Architect',
    task: 'Trace data movement through the system; write the "Data Flow and State Management" section',
    focus: 'Data entry and exit points, transformation pipelines, persistence strategy, consistency guarantees',
    constraint: 'Focus on data lifecycle and shape evolution; do not list database table schemas'
  },
  entrypoints: {
    role: 'System Boundary Analyst',
    task: 'Identify entry-point design and key paths; write the "System Entry Points and Call Chains" section',
    focus: 'Entry types and responsibilities, request-processing pipeline, key business paths, exception and boundary handling',
    constraint: 'Focus on entry-point design philosophy; do not enumerate every endpoint'
  },
  patterns: {
    role: 'Core Development Standards Author',
    task: 'Surface reuse mechanisms and standardized practices in the code; write the "Design Patterns and Engineering Conventions" section',
    focus: 'Architecture-level patterns, communication and concurrency patterns, cross-cutting concerns, abstraction and reuse strategy',
    constraint: 'Avoid textbook explanations; describe each pattern in this project\'s context'
  },
  classes: {
    role: 'Domain Model Designer',
    task: 'Analyze the type system and domain model; write the "Type System and Domain Modeling" section',
    focus: 'Domain model design, inheritance vs. composition strategy, responsibility allocation principles, type safety and constraints',
    constraint: 'Focus on modeling ideas; use UML class diagrams for core relationships'
  },
  interfaces: {
    role: 'Contract Design Expert',
    task: 'Analyze interface design and abstraction levels; write the "Interface Contracts and Abstraction Design" section',
    focus: 'Abstraction-level design, contract/implementation separation, extension-point design, version evolution strategy',
    constraint: 'Focus on interface design philosophy; do not enumerate method signatures'
  },
  state: {
    role: 'State Management Architect',
    task: 'Analyze state-management mechanisms; write the "State Management and Lifecycle" section',
    focus: 'State model design, state lifecycle, concurrency and consistency, state recovery and fault tolerance',
    constraint: 'Focus on state-management design decisions; do not list variable names'
  },
  algorithms: {
    role: 'Algorithm Architect',
    task: 'Analyze core algorithm design; write the "Core Algorithms and Computation Model" section',
    focus: 'Algorithm selection and trade-offs, computation model design, performance and scalability, correctness guarantees',
    constraint: 'Focus on algorithmic ideas; use flowcharts for complex logic'
  },
  paths: {
    role: 'Performance Architect',
    task: 'Analyze key execution paths; write the "Critical Paths and Performance Design" section',
    focus: 'Key business paths, performance-sensitive areas, bottleneck identification and mitigation, degradation and circuit breaking',
    constraint: 'Focus on strategic path design; do not list every execution step'
  },
  apis: {
    role: 'API Design Standards Expert',
    task: 'Analyze external API design conventions; write the "API Design and Conventions" section',
    focus: 'API design style, naming and structural conventions, versioning strategy, error-handling conventions',
    constraint: 'Focus on conventions and consistency; do not enumerate every API endpoint'
  },
  logic: {
    role: 'Business Logic Architect',
    task: 'Analyze business-logic modeling; write the "Business Logic and Rule Engine" section',
    focus: 'Business rule modeling, decision-point design, boundary-condition handling, business process orchestration',
    constraint: 'Focus on how business logic is organized; do not explain code line by line'
  }
};
```

### Auto-Discovery and Assignment Flow

```javascript
// 1. Discover all exploration files (file names only)
const explorationFiles = bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n')
  .filter(f => f.trim());

// 2. Auto-assign an agent per file name
const agentAssignments = explorationFiles.map(file => {
  const angle = extractAngle(path.basename(file));
  const agentName = EXPLORATION_TO_AGENT[angle];
  return {
    exploration_file: file,
    angle: angle,
    agent: agentName,
    output_file: `section-${agentName}.md`
  };
}).filter(a => a.agent); // drop unmapped angles

console.log(`
## Agent Auto-Assignment

Found ${explorationFiles.length} exploration files:
${agentAssignments.map(a => `- ${a.angle} → ${a.agent} agent`).join('\n')}
`);
```

---

## Agent Execution Preconditions

**Each agent receives its exploration file path and reads the content itself**:

```javascript
// The agent prompt includes the exploration file path.
// After starting, the agent proceeds in this order:
// 1. Read the exploration file (context input)
// 2. Read the spec files
// 3. Execute the analysis task
```

Spec file paths (relative to the skill root):
- `specs/quality-standards.md` - quality standards and checklists
- `specs/writing-style.md` - paragraph-style writing guidelines

---

## Common Writing Guidelines (shared by all agents)

```
[STYLE]
- **Language**: Use rigorous, professional Chinese for technical writing. Only technical terms (e.g. Singleton, Middleware, ORM) keep their English form.
- **Narrative voice**: Use a fully objective third-person ("god's-eye") perspective. Never use "we", "the developer", "the user", "you", or "I". Subjects should be "the system", "the module", "the design", "the architecture", or "this layer".
- **Paragraph structure**:
  - Do not use bullet lists as the main narrative device; weave points into coherent paragraphs.
  - Follow a claim-evidence-conclusion structure.
  - Use logical connectives ("therefore", "however", "given that", "consequently") to expose the chain of design reasoning.
- **Content depth**:
  - Abstraction: describe "what" and "why", not "how the code is written".
  - Methodology: emphasize the application of design patterns and architectural principles (e.g. SOLID, high cohesion / low coupling).
  - Minimal code: do not quote code directly unless defining a key interface. File references appear only as parenthetical source notes (ref: path/to/file).
```

## Agent Configuration

### Architecture Report Agents

| Agent | Output File | Focus |
|-------|-------------|-------|
| overview | section-overview.md | Top-level architecture, technical decisions, design philosophy |
| layers | section-layers.md | Logical layering, responsibility boundaries, isolation strategy |
| dependencies | section-dependencies.md | Dependency governance, integration topology, risk control |
| dataflow | section-dataflow.md | Data flow, transformation mechanisms, consistency guarantees |
| entrypoints | section-entrypoints.md | Entry-point design, call chains, exception propagation |

### Design Report Agents

| Agent | Output File | Focus |
|-------|-------------|-------|
| patterns | section-patterns.md | Architectural patterns, communication mechanisms, cross-cutting concerns |
| classes | section-classes.md | Type system, inheritance strategy, responsibility partitioning |
| interfaces | section-interfaces.md | Contract design, abstraction levels, extension mechanisms |
| state | section-state.md | State model, lifecycle, concurrency control |

### Methods Report Agents

| Agent | Output File | Focus |
|-------|-------------|-------|
| algorithms | section-algorithms.md | Core algorithmic ideas, complexity trade-offs, optimization strategy |
| paths | section-paths.md | Critical-path design, performance-sensitive points, bottleneck analysis |
| apis | section-apis.md | API design conventions, versioning strategy, compatibility |
| logic | section-logic.md | Business-logic modeling, decision mechanisms, boundary handling |

---

## Agent Return Format

```typescript
interface AgentReturn {
  status: "completed" | "partial" | "failed";
  output_file: string;
  summary: string;              // ≤ 50 characters
  cross_module_notes: string[]; // cross-module findings
  stats: { diagrams: number; };
}
```

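A minimal runtime guard matching the `AgentReturn` shape above can be useful before merging sections in Phase 4 (`isAgentReturn` is an illustrative helper, not part of the skill):

```javascript
// Checks only the fields the merge step actually relies on.
function isAgentReturn(v) {
  return !!v &&
    ["completed", "partial", "failed"].includes(v.status) &&
    typeof v.output_file === "string" &&
    typeof v.summary === "string" &&
    Array.isArray(v.cross_module_notes);
}

console.log(isAgentReturn({
  status: "completed",
  output_file: "section-overview.md",
  summary: "Layered architecture with clear module boundaries.",
  cross_module_notes: [],
  stats: { diagrams: 1 }
})); // true
console.log(isAgentReturn({ status: "running" })); // false
```
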
---
## Agent Prompts

### Overview Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
First read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow their quality standards and paragraph-style writing requirements.

[ROLE] Chief System Architect

[TASK]
From the full codebase, write the "Overall Architecture" chapter of the System Architecture Design Report. Look past the surface of the code to surface the system's core value proposition and top-level technical decisions.
Output: ${outDir}/sections/section-overview.md

[STYLE]
- Rigorous, professional technical writing in Chinese; technical terms stay in English
- Fully objective third-person voice; never "we" or "the developer"
- Paragraph-style narration with a claim-evidence-conclusion structure
- Use logical connectives to expose the chain of design reasoning
- Describe "what" and "why", not "how the code is written"
- Do not quote code directly; files appear only as source notes

[FOCUS]
- Domain boundaries and positioning: What core business problem does the system solve? Where does it sit in the wider technical ecosystem?
- Architectural paradigm: Which architectural style is used (layered, hexagonal, microservices, event-driven, ...)? What is the fundamental reason for the choice?
- Core technical decisions: The rationale behind key technology choices, and how they support the system's non-functional requirements (performance, scalability, maintainability)
- Top-level module partitioning: Into which logical units is the system divided at the highest level? How do they collaborate?

[CONSTRAINT]
- Avoid listing directory structures
- Emphasize "design intent" rather than "existing features"
- Include at least 1 Mermaid architecture diagram

[RETURN JSON]
{"status":"completed","output_file":"section-overview.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{"diagrams":1}}
`
})
```

### Layers Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
First read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow their quality standards and paragraph-style writing requirements.

[ROLE] Senior Software Designer

[TASK]
Analyze the system's logical layering and write the "Logical Viewpoint and Layered Architecture" chapter of the System Architecture Design Report. Emphasize how the system uses layering to separate concerns.
Output: ${outDir}/sections/section-layers.md

[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice; subjects are "the system", "this layer", "the design"
- Paragraph-style narration; bullet lists must not carry the main body
- Emphasize methodology and the application of architectural principles

[FOCUS]
- Responsibility allocation: Into which logical layers is the system divided? What are each layer's core responsibilities, inputs, and outputs?
- Data flow and constraints: How does data move between layers? Is there a strict one-way dependency rule?
- Boundary isolation strategy: How are layers decoupled (interface abstraction, DTO conversion, dependency injection)? How is lower-layer implementation detail kept from leaking upward?
- Exception handling flow: How is exception information propagated and transformed across the layered structure?

[CONSTRAINT]
- Do not enumerate concrete file names
- Focus on "inter-layer contracts" and "the craft of isolation"

[RETURN JSON]
{"status":"completed","output_file":"section-layers.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```

### Dependencies Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
First read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow their quality standards and paragraph-style writing requirements.

[ROLE] Integration Architecture Expert

[TASK]
Examine the system's external connections and internal coupling; write the "Dependency Management and Ecosystem Integration" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-dependencies.md

[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration with coherent logic

[FOCUS]
- External integration topology: How does the system interact with the outside world (third-party APIs, databases, middleware)? What adapter or anti-corruption-layer designs isolate external change?
- Core dependency analysis: Distinguish "core business dependencies" from "infrastructure dependencies". How strongly does the system depend on key frameworks? Is there lock-in risk?
- Dependency injection and inversion of control: How are internal modules wired together? Is the dependency inversion principle applied to support testability?
- Supply-chain security and governance: For a complex dependency tree, what strategy manages versions and compatibility?

[CONSTRAINT]
- Do not simply list the contents of dependency configuration files
- Analyze the underlying "integration strategy" and "risk-control model"

[RETURN JSON]
{"status":"completed","output_file":"section-dependencies.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```

### Patterns Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
First read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow their quality standards and paragraph-style writing requirements.

[ROLE] Core Development Standards Author

[TASK]
Surface the reuse mechanisms and standardized practices in the code; write the "Design Patterns and Engineering Conventions" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-patterns.md

[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration grounded in the project context

[FOCUS]
- Architecture-level patterns: Identify patterns used widely in the system (CQRS, Event Sourcing, Repository Pattern, Unit of Work). Explain which specific problem each pattern was introduced to solve
- Communication and concurrency patterns: Analyze inter-component communication (sync/async, observer, pub-sub) and the concurrency-control strategy
- Cross-cutting concerns: How does the system uniformly handle logging, authorization, caching, and transaction management (AOP, middleware pipelines, decorators)?
- Abstraction and reuse strategy: Analyze the design intent of base classes, generics, and utilities; how does abstraction reduce duplication and improve consistency?

[CONSTRAINT]
- Avoid textbook definitions of design patterns; explain each pattern's use in this project's context
- Focus on "general mechanisms for solving classes of problems"

[RETURN JSON]
{"status":"completed","output_file":"section-patterns.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```

### DataFlow Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
First read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow their quality standards and paragraph-style writing requirements.

[ROLE] Data Architect

[TASK]
Trace the system's data movement; write the "Data Flow and State Management" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-dataflow.md

[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration

[FOCUS]
- Data entry and exit: Where does data enter the system, and where does it end up? What validation and transformation happens at the boundaries?
- Transformation pipeline: How does data change shape across layers/modules? How are the responsibilities of DTOs, Entities, and VOs divided?
- Persistence strategy: How is data storage designed? Which ORM strategy or data-access pattern is used?
- Consistency guarantees: How are transaction boundaries handled? How is consistency maintained in distributed scenarios?

[CONSTRAINT]
- Focus on data "lifecycle" and "shape evolution"
- Do not list database table schemas

[RETURN JSON]
{"status":"completed","output_file":"section-dataflow.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```

### EntryPoints Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
First read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow their quality standards and paragraph-style writing requirements.

[ROLE] System Boundary Analyst

[TASK]
Identify the system's entry-point design and key paths; write the "System Entry Points and Call Chains" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-entrypoints.md

[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration

[FOCUS]
- Entry types and responsibilities: Which entry types does the system expose (REST API, CLI, message-queue consumers, scheduled jobs)? What is each entry's design purpose and use case?
- Request-processing pipeline: From entry to core logic, which pipeline does a request pass through? How are middleware/interceptors orchestrated?
- Key business paths: What are the call chains of the most important business flows? What design considerations shape the key nodes?
- Exception and boundary handling: How are exceptions handled uniformly? How are they propagated and transformed?

[CONSTRAINT]
- Focus on "entry-point design philosophy", not an API inventory
- Do not enumerate every endpoint

[RETURN JSON]
{"status":"completed","output_file":"section-entrypoints.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```

### Classes Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
First read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow their quality standards and paragraph-style writing requirements.

[ROLE] Domain Model Designer

[TASK]
Analyze the system's type system and domain model; write the "Type System and Domain Modeling" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-classes.md

[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration

[FOCUS]
- Domain model design: What are the core domain concepts, and how are their relationships modeled (aggregates, entities, value objects)?
- Inheritance vs. composition: Does the system favor inheritance or composition? What is the design intent of base classes/interfaces?
- Responsibility allocation: What principles govern class responsibilities? Is the single-responsibility principle reflected?
- Type safety and constraints: How is the type system used to express business constraints and invariants?

[CONSTRAINT]
- Focus on "modeling ideas", not attribute lists
- Use UML class diagrams to illustrate core relationships

[RETURN JSON]
{"status":"completed","output_file":"section-classes.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```

### Interfaces Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
First read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow their quality standards and paragraph-style writing requirements.

[ROLE] Contract Design Expert

[TASK]
Analyze the system's interface design and abstraction levels; write the "Interface Contracts and Abstraction Design" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-interfaces.md

[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration

[FOCUS]
- Abstraction-level design: Which core interfaces/abstract classes are defined? What are their design intent and responsibility boundaries?
- Contract/implementation separation: How do interfaces isolate contract from implementation? How is polymorphism used?
- Extension points: Which extension points are reserved? How is functionality extended without modifying core code?
- Version evolution: How do interfaces support versioned evolution? How is backward compatibility guaranteed?

[CONSTRAINT]
- Focus on "interface design philosophy"
- Do not enumerate interface method signatures

[RETURN JSON]
{"status":"completed","output_file":"section-interfaces.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```

### State Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
First read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow their quality standards and paragraph-style writing requirements.

[ROLE] State Management Architect

[TASK]
Analyze the system's state-management mechanisms; write the "State Management and Lifecycle" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-state.md

[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration

[FOCUS]
- State model design: Which kinds of state are managed (session, application, domain)? Where is each stored, and what is its scope?
- State lifecycle: How is state created, updated, and destroyed? What mechanism manages the lifecycle?
- Concurrency and consistency: How is state kept consistent across threads/instances? Which concurrency-control strategy is used?
- Recovery and fault tolerance: How is lost or corrupted state handled? Is there a recovery mechanism?

[CONSTRAINT]
- Focus on "state-management design decisions"
- Do not list concrete variable names

[RETURN JSON]
{"status":"completed","output_file":"section-state.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```

### Algorithms Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
First read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow their quality standards and paragraph-style writing requirements.

[ROLE] Algorithm Architect

[TASK]
Analyze the system's core algorithm design; write the "Core Algorithms and Computation Model" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-algorithms.md

[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration

[FOCUS]
- Algorithm selection and trade-offs: Which key algorithms implement the core business logic? What drove their selection (time complexity, space complexity, maintainability)?
- Computation model design: How are complex computations decomposed and organized? Are pipelines, Map-Reduce, or similar computation patterns used?
- Performance and scalability: How does the algorithm design account for performance and scale? Are there optimizations for large data volumes?
- Correctness guarantees: How is the correctness of key algorithms assured? Are boundary conditions treated specially?

[CONSTRAINT]
- Focus on "algorithmic ideas", not implementation code
- Use flowcharts to illustrate complex logic

[RETURN JSON]
{"status":"completed","output_file":"section-algorithms.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```

### Paths Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
First read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow their quality standards and paragraph-style writing requirements.

[ROLE] Performance Architect

[TASK]
Analyze the system's key execution paths; write the "Critical Paths and Performance Design" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-paths.md

[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration

[FOCUS]
- Key business paths: What are the most important execution paths, and what design goals and constraints shape them?
- Performance-sensitive areas: Which stages are performance-sensitive? What optimization strategies are applied (caching, async, batching)?
- Bottleneck identification and mitigation: Where are the potential bottlenecks? Does the design leave headroom for scaling?
- Degradation and circuit breaking: Under high load or failure, how are critical paths protected?

[CONSTRAINT]
- Focus on the "strategic considerations of path design"
- Do not list every code execution step

[RETURN JSON]
{"status":"completed","output_file":"section-paths.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```

### APIs Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
First read the spec files:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
Strictly follow their quality standards and paragraph-style writing requirements.

[ROLE] API Design Standards Expert

[TASK]
Analyze the system's external API design conventions; write the "API Design and Conventions" chapter of the System Architecture Design Report.
Output: ${outDir}/sections/section-apis.md

[STYLE]
- Rigorous, professional technical writing in Chinese
- Objective third-person voice
- Paragraph-style narration

[FOCUS]
- API design style: Which API style is used (RESTful, GraphQL, RPC), and why was it chosen?
- Naming and structure conventions: What conventions govern naming, path structure, and parameter design? Is consistency enforced?
- Versioning strategy: How do APIs support versioned evolution? What is the backward-compatibility strategy?
- Error-handling conventions: How are error responses designed? How is the error-code system organized?

[CONSTRAINT]
- Focus on "conventions and consistency"
- Do not enumerate every API endpoint

[RETURN JSON]
{"status":"completed","output_file":"section-apis.md","summary":"<≤50 chars>","cross_module_notes":[],"stats":{}}
`
})
```

### Logic Agent

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
[SPEC]
首先读取规范文件:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
严格遵循规范中的质量标准和段落式写作要求。

[ROLE] 业务逻辑架构师

[TASK]
分析系统的业务逻辑建模,撰写《系统架构设计报告》的"业务逻辑与规则引擎"章节。
输出: ${outDir}/sections/section-logic.md

[STYLE]
- 严谨专业的中文技术写作
- 客观第三人称视角
- 段落式叙述

[FOCUS]
- 业务规则建模:核心业务规则如何被表达和组织?是否采用了规则引擎或策略模式?
- 决策点设计:系统中的关键决策点有哪些?决策逻辑如何被封装和测试?
- 边界条件处理:系统如何处理边界条件和异常情况?是否有防御性编程措施?
- 业务流程编排:复杂业务流程如何被编排?是否采用了工作流引擎或状态机?

[CONSTRAINT]
- 关注"业务逻辑的组织方式"
- 不要逐行解释代码逻辑

[RETURN JSON]
{"status":"completed","output_file":"section-logic.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
`
})
```

---

## Execution Flow

```javascript
// 1. Discover exploration files and automatically assign agents
const explorationFiles = Bash(`find ${sessionFolder} -name "exploration-*.json" -type f`)
  .split('\n')
  .filter(f => f.trim());

const agentAssignments = explorationFiles.map(file => {
  const angle = extractAngle(path.basename(file));
  const agentName = EXPLORATION_TO_AGENT[angle];
  return { exploration_file: file, angle, agent: agentName };
}).filter(a => a.agent);

// 2. Prepare the output directory (Windows path syntax)
Bash(`mkdir "${outputDir}\\sections"`);

// 3. Launch all agents in parallel (passing exploration file paths)
const results = await Promise.all(
  agentAssignments.map(assignment =>
    Task({
      subagent_type: "cli-explore-agent",
      run_in_background: false,
      description: `Analyze: ${assignment.agent}`,
      prompt: buildAgentPrompt(assignment, config, outputDir)
    })
  )
);

// 4. Collect the brief return payloads
const summaries = results.map(r => JSON.parse(r));

// 5. Hand off to the Phase 3.5 consolidation agent
return { summaries, cross_notes: summaries.flatMap(s => s.cross_module_notes) };
```

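The flow above calls `extractAngle` and `EXPLORATION_TO_AGENT`, neither of which is defined in this file. A minimal sketch under the assumption that exploration files follow the `exploration-<angle>.json` naming; the mapping entries are illustrative:

```javascript
// Hypothetical helpers assumed by the execution flow above.
// Assumption: exploration files are named "exploration-<angle>.json".
const EXPLORATION_TO_AGENT = {
  algorithms: "algorithms",
  paths: "paths",
  apis: "apis",
  logic: "logic"
};

function extractAngle(basename) {
  // "exploration-apis.json" -> "apis"; returns null for non-matching names
  const match = basename.match(/^exploration-(.+)\.json$/);
  return match ? match[1] : null;
}
```

Non-matching files yield `null` and are dropped by the `.filter(a => a.agent)` step.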
### Agent Prompt Construction

```javascript
function buildAgentPrompt(assignment, config, outputDir) {
  const agentConfig = AGENT_CONFIGS[assignment.agent];
  return `
[CONTEXT]
**Exploration 文件**: ${assignment.exploration_file}
首先读取此文件获取 ${assignment.angle} 探索结果作为分析上下文。

[SPEC]
读取规范文件:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md

[ROLE] ${agentConfig.role}

[TASK]
${agentConfig.task}
输出: ${outputDir}/sections/section-${assignment.agent}.md

[STYLE]
- 严谨专业的中文技术写作,专业术语保留英文
- 完全客观的第三人称视角,严禁"我们"、"开发者"
- 段落式叙述,采用"论点-论据-结论"结构
- 善用逻辑连接词体现设计推演过程

[FOCUS]
${agentConfig.focus}

[CONSTRAINT]
${agentConfig.constraint}

[RETURN JSON]
{"status":"completed","output_file":"section-${assignment.agent}.md","summary":"<50字>","cross_module_notes":[],"stats":{}}
`;
}
```

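`buildAgentPrompt` reads from an `AGENT_CONFIGS` table that is not shown in this file. A minimal sketch of its assumed shape, with one entry drawn from the Algorithms Agent prompt above (the strings are illustrative, not the actual table):

```javascript
// Assumed shape of AGENT_CONFIGS; this single entry is illustrative only.
const AGENT_CONFIGS = {
  algorithms: {
    role: "算法架构师",
    task: "分析系统的核心算法设计,撰写\"核心算法与计算模型\"章节。",
    focus: "- 算法选型与权衡\n- 计算模型设计",
    constraint: "- 关注\"算法思想\"而非具体实现代码"
  }
};
```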
## Output

Each agent writes `sections/section-xxx.md` and returns a brief JSON payload for the Phase 3.5 consolidation.

.claude/skills/project-analyze/phases/03.5-consolidation.md (new file, 233 lines)
@@ -0,0 +1,233 @@

# Phase 3.5: Consolidation Agent

Consolidates the output of all analysis agents and produces a cross-section synthesis that feeds the Phase 4 index report.

> **Writing spec**: [../specs/writing-style.md](../specs/writing-style.md)

## Execution Requirements

**Mandatory**: after all Phase 3 Analysis Agents finish, the main orchestrator **must** invoke this Consolidation Agent.

**Trigger conditions**:
- All Phase 3 agents have returned results (status: completed/partial/failed)
- The `sections/section-*.md` files have been generated

**Input sources**:
- `agent_summaries`: the JSON returned by each Phase 3 agent (status, output_file, summary, cross_module_notes)
- `cross_module_notes`: the array of cross-module notes extracted from those returns

**Invocation timing**:
```javascript
// After Phase 3 completes, the main orchestrator runs:
const phase3Results = await runPhase3Agents(); // run all analysis agents in parallel
const agentSummaries = phase3Results.map(r => JSON.parse(r));
const crossNotes = agentSummaries.flatMap(s => s.cross_module_notes || []);

// The Phase 3.5 Consolidation Agent must then be invoked
await runPhase35Consolidation(agentSummaries, crossNotes);
```

## Core Responsibilities

1. **Cross-section synthesis**: produce `synthesis` (the report overview)
2. **Section summary extraction**: produce `section_summaries` (index table content)
3. **Quality checks**: identify issues and score the report
4. **Recommendation roll-up**: produce `recommendations` (priority-ordered)

## Input

```typescript
interface ConsolidationInput {
  output_dir: string;
  config: AnalysisConfig;
  agent_summaries: AgentReturn[];
  cross_module_notes: string[];
}
```

## Agent Invocation Code

The main orchestrator invokes the Consolidation Agent with the following code:

```javascript
Task({
  subagent_type: "cli-explore-agent",
  run_in_background: false,
  prompt: `
## 规范前置
首先读取规范文件:
- Read: ${skillRoot}/specs/quality-standards.md
- Read: ${skillRoot}/specs/writing-style.md
严格遵循规范中的质量标准和段落式写作要求。

## 任务
作为汇总 Agent,读取所有章节文件,执行跨章节分析,生成汇总报告和索引内容。

## 输入
- 章节文件: ${outputDir}/sections/section-*.md
- Agent 摘要: ${JSON.stringify(agent_summaries)}
- 跨模块备注: ${JSON.stringify(cross_module_notes)}
- 报告类型: ${config.type}

## 核心产出

### 1. 综合分析 (synthesis)
阅读所有章节,用 2-3 段落描述项目全貌:
- 第一段:项目定位与核心架构特征
- 第二段:关键设计决策与技术选型
- 第三段:整体质量评价与显著特点

### 2. 章节摘要 (section_summaries)
为每个章节提取一句话核心发现,用于索引表格。

### 3. 架构洞察 (cross_analysis)
描述章节间的关联性,如:
- 模块间的依赖关系如何体现在各章节
- 设计决策如何贯穿多个层面
- 潜在的一致性或冲突

### 4. 建议汇总 (recommendations)
按优先级整理各章节的建议,段落式描述。

## 质量检查维度

### 一致性检查
- 术语一致性:同一概念是否使用相同名称
- 代码引用:file:line 格式是否正确

### 完整性检查
- 章节覆盖:是否涵盖所有必需章节
- 内容深度:每章节是否达到 ${config.depth} 级别

### 质量检查
- Mermaid 语法:图表是否可渲染
- 段落式写作:是否符合写作规范(禁止清单罗列)

## 输出文件

写入: ${outputDir}/consolidation-summary.md

### 文件格式

\`\`\`markdown
# 分析汇总报告

## 综合分析

[2-3 段落的项目全貌描述,段落式写作]

## 章节摘要

| 章节 | 文件 | 核心发现 |
|------|------|----------|
| 系统概述 | section-overview.md | 一句话描述 |
| 层次分析 | section-layers.md | 一句话描述 |
| ... | ... | ... |

## 架构洞察

[跨章节关联分析,段落式描述]

## 建议汇总

[优先级排序的建议,段落式描述]

---

## 质量评估

### 评分

| 维度 | 得分 | 说明 |
|------|------|------|
| 完整性 | 85% | ... |
| 一致性 | 90% | ... |
| 深度 | 95% | ... |
| 可读性 | 88% | ... |
| 综合 | 89% | ... |

### 发现的问题

#### 严重问题
| ID | 类型 | 位置 | 描述 |
|----|------|------|------|
| E001 | ... | ... | ... |

#### 警告
| ID | 类型 | 位置 | 描述 |
|----|------|------|------|
| W001 | ... | ... | ... |

#### 提示
| ID | 类型 | 位置 | 描述 |
|----|------|------|------|
| I001 | ... | ... | ... |

### 统计

- 章节数: X
- 图表数: X
- 总字数: X
\`\`\`

## 返回格式 (JSON)

{
  "status": "completed",
  "output_file": "consolidation-summary.md",

  // Phase 4 索引报告所需
  "synthesis": "2-3 段落的综合分析文本",
  "cross_analysis": "跨章节关联分析文本",
  "recommendations": "优先级排序的建议文本",
  "section_summaries": [
    {"file": "section-overview.md", "title": "系统概述", "summary": "一句话核心发现"},
    {"file": "section-layers.md", "title": "层次分析", "summary": "一句话核心发现"}
  ],

  // 质量信息
  "quality_score": {
    "completeness": 85,
    "consistency": 90,
    "depth": 95,
    "readability": 88,
    "overall": 89
  },
  "issues": {
    "errors": [...],
    "warnings": [...],
    "info": [...]
  },
  "stats": {
    "total_sections": 5,
    "total_diagrams": 8,
    "total_words": 3500
  }
}
`
})
```

## Issue Classification

| Severity | Prefix | Meaning | Handling |
|----------|--------|---------|----------|
| Error | E | Blocks report generation | Must fix |
| Warning | W | Degrades report quality | Fix recommended |
| Info | I | Possible improvement | Optional fix |

## Issue Types

| Type | Description |
|------|-------------|
| missing | Missing section |
| inconsistency | Inconsistent terminology/description |
| invalid_ref | Invalid code reference |
| syntax | Mermaid syntax error |
| shallow | Content too shallow |
| list_style | Violates the paragraph-style writing spec |

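As an illustration of how the severity prefixes and type values above combine in the agent's `issues` return field, a sketch with hypothetical entries:

```javascript
// Hypothetical issue entries combining the severity prefixes (E/W/I)
// and the type values from the tables above.
const issues = {
  errors: [
    { id: "E001", type: "missing", location: "sections/", description: "section-apis.md was not generated" }
  ],
  warnings: [
    { id: "W001", type: "list_style", location: "section-layers.md", description: "bullet list used as main body text" }
  ],
  info: [
    { id: "I001", type: "shallow", location: "section-logic.md", description: "decision-point analysis could go deeper" }
  ]
};
```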
## Output

- **File**: `consolidation-summary.md` (the full consolidation report)
- **Return**: JSON containing every field Phase 4 needs

.claude/skills/project-analyze/phases/04-report-generation.md (new file, 217 lines)
@@ -0,0 +1,217 @@

# Phase 4: Report Generation

Generates the index-style report, referencing section files through markdown links.

> **Spec reference**: [../specs/quality-standards.md](../specs/quality-standards.md)

## Design Principles

1. **Reference, don't embed**: the main report links to sections instead of copying their content
2. **Index + synthesis**: the main report provides navigation plus high-level analysis
3. **No duplication**: the synthesis comes from consolidation and is not regenerated
4. **Independently readable**: each section file can be read on its own

## Input

```typescript
interface ReportInput {
  output_dir: string;
  config: AnalysisConfig;
  consolidation: {
    quality_score: QualityScore;
    issues: { errors: Issue[], warnings: Issue[], info: Issue[] };
    stats: Stats;
    synthesis: string; // the consolidation agent's synthesis
    section_summaries: Array<{file: string, summary: string}>;
  };
}
```

## Execution Flow

```javascript
// 1. Quality gate check
if (consolidation.issues.errors.length > 0) {
  const response = await AskUserQuestion({
    questions: [{
      question: `发现 ${consolidation.issues.errors.length} 个严重问题,如何处理?`,
      header: "质量检查",
      multiSelect: false,
      options: [
        {label: "查看并修复", description: "显示问题列表,手动修复后重试"},
        {label: "忽略继续", description: "跳过问题检查,继续装配"},
        {label: "终止", description: "停止报告生成"}
      ]
    }]
  });

  if (response === "查看并修复") {
    return { action: "fix_required", errors: consolidation.issues.errors };
  }
  if (response === "终止") {
    return { action: "abort" };
  }
}

// 2. Generate the index-style report (without reading section content)
const report = generateIndexReport(config, consolidation);

// 3. Write the final file
const fileName = `${config.type.toUpperCase()}-REPORT.md`;
Write(`${outputDir}/${fileName}`, report);
```

## Report Template

### General Structure

```markdown
# {报告标题}

> 生成日期:{date}
> 分析范围:{scope}
> 分析深度:{depth}
> 质量评分:{overall}%

---

## 报告综述

{consolidation.synthesis - 来自汇总 Agent 的跨章节综合分析}

---

## 章节索引

| 章节 | 核心发现 | 详情 |
|------|----------|------|
{section_summaries 生成的表格行}

---

## 架构洞察

{从 consolidation 提取的跨模块关联分析}

---

## 建议与展望

{consolidation.recommendations - 优先级排序的综合建议}

---

**附录**

- [质量报告](./consolidation-summary.md)
- [章节文件目录](./sections/)
```

### Report Title Mapping

| Type | Title |
|------|-------|
| architecture | 项目架构设计报告 |
| design | 项目设计模式报告 |
| methods | 项目核心方法报告 |
| comprehensive | 项目综合分析报告 |

## Generation Function

```javascript
function generateIndexReport(config, consolidation) {
  const titles = {
    architecture: "项目架构设计报告",
    design: "项目设计模式报告",
    methods: "项目核心方法报告",
    comprehensive: "项目综合分析报告"
  };

  const date = new Date().toLocaleDateString('zh-CN');

  // Section index table rows
  const sectionTable = consolidation.section_summaries
    .map(s => `| ${s.title} | ${s.summary} | [查看详情](./sections/${s.file}) |`)
    .join('\n');

  return `# ${titles[config.type]}

> 生成日期:${date}
> 分析范围:${config.scope}
> 分析深度:${config.depth}
> 质量评分:${consolidation.quality_score.overall}%

---

## 报告综述

${consolidation.synthesis}

---

## 章节索引

| 章节 | 核心发现 | 详情 |
|------|----------|------|
${sectionTable}

---

## 架构洞察

${consolidation.cross_analysis || '详见各章节分析。'}

---

## 建议与展望

${consolidation.recommendations || '详见质量报告中的改进建议。'}

---

**附录**

- [质量报告](./consolidation-summary.md)
- [章节文件目录](./sections/)
`;
}
```

## Output Structure

```
.workflow/.scratchpad/analyze-{timestamp}/
├── sections/                  # standalone sections (Phase 3 output)
│   ├── section-overview.md
│   ├── section-layers.md
│   └── ...
├── consolidation-summary.md   # quality report (Phase 3.5 output)
└── {TYPE}-REPORT.md           # index report (this phase's output)
```

## Collaboration with Phase 3.5

The Phase 3.5 consolidation agent must provide:

```typescript
interface ConsolidationOutput {
  // ... existing fields
  synthesis: string;        // cross-section synthesis (2-3 paragraphs)
  cross_analysis: string;   // architecture-level cross-cutting insights
  recommendations: string;  // priority-ordered recommendations
  section_summaries: Array<{
    file: string;    // file name
    title: string;   // section title
    summary: string; // one-sentence key finding
  }>;
}
```

## Key Changes

| Old design | New design |
|------------|------------|
| Read and concatenate section content | Link references; content is never read |
| Regenerate the Executive Summary | Use consolidation.synthesis directly |
| Embed the quality score table | Link to consolidation-summary.md |
| Main report contains everything | Main report is index + synthesis only |

.claude/skills/project-analyze/phases/05-iterative-refinement.md (new file, 124 lines)
@@ -0,0 +1,124 @@

# Phase 5: Iterative Refinement

Discovery-driven refinement based on analysis findings.

## Execution

### Step 1: Extract Discoveries

```javascript
function extractDiscoveries(deepAnalysis) {
  return {
    ambiguities: deepAnalysis.findings.filter(f => f.confidence < 0.7),
    complexityHotspots: deepAnalysis.findings.filter(f => f.complexity === 'high'),
    patternDeviations: deepAnalysis.patterns.filter(p => p.consistency < 0.8),
    unclearDependencies: deepAnalysis.dependencies.filter(d => d.type === 'implicit'),
    potentialIssues: deepAnalysis.recommendations.filter(r => r.priority === 'investigate'),
    depthOpportunities: deepAnalysis.sections.filter(s => s.has_more_detail)
  };
}

const discoveries = extractDiscoveries(deepAnalysis);
```

### Step 2: Build Dynamic Questions

Questions emerge from discoveries, NOT predetermined:

```javascript
function buildDynamicQuestions(discoveries, config) {
  const questions = [];

  if (discoveries.ambiguities.length > 0) {
    questions.push({
      question: `Analysis found ambiguity in "${discoveries.ambiguities[0].area}". Which interpretation is correct?`,
      header: "Clarify",
      options: discoveries.ambiguities[0].interpretations
    });
  }

  if (discoveries.complexityHotspots.length > 0) {
    questions.push({
      question: `These areas have high complexity. Which would you like explained?`,
      header: "Deep-Dive",
      multiSelect: true,
      options: discoveries.complexityHotspots.slice(0, 4).map(h => ({
        label: h.name,
        description: h.summary
      }))
    });
  }

  if (discoveries.patternDeviations.length > 0) {
    questions.push({
      question: `Found pattern deviations. Should these be highlighted in the report?`,
      header: "Patterns",
      options: [
        {label: "Yes, include analysis", description: "Add section explaining deviations"},
        {label: "No, skip", description: "Omit from report"}
      ]
    });
  }

  // Always include action question
  questions.push({
    question: "How would you like to proceed?",
    header: "Action",
    options: [
      {label: "Continue refining", description: "Address more discoveries"},
      {label: "Finalize report", description: "Generate final output"},
      {label: "Change scope", description: "Modify analysis scope"}
    ]
  });

  return questions.slice(0, 4); // Max 4 questions
}
```

### Step 3: Apply Refinements

```javascript
if (userAction === "Continue refining") {
  // Apply selected refinements
  for (const selection of userSelections) {
    applyRefinement(selection, deepAnalysis, report);
  }

  // Save iteration state (serialize first — Write takes a string)
  Write(`${outputDir}/iterations/iteration-${iterationCount}.json`, JSON.stringify({
    timestamp: new Date().toISOString(),
    discoveries: discoveries,
    selections: userSelections,
    changes: appliedChanges
  }, null, 2));

  // Loop back to Step 1 (extract discoveries again)
  iterationCount++;
}

if (userAction === "Finalize report") {
  // Proceed to Step 4 (final output)
}
```

### Step 4: Finalize Report

```javascript
// Add iteration history to the report metadata
const finalReport = {
  ...report,
  metadata: {
    iterations: iterationCount,
    refinements_applied: allRefinements,
    final_discoveries: discoveries
  }
};

// finalReport is an object; it must be rendered to markdown before writing
Write(`${outputDir}/${config.type.toUpperCase()}-REPORT.md`, finalReport);
```

## Output

Updated report with refinements; iterations saved to the `iterations/` folder.

.claude/skills/project-analyze/specs/quality-standards.md (new file, 115 lines)
@@ -0,0 +1,115 @@

# Quality Standards

Quality gates and requirements for project analysis reports.

## When to Use

| Phase | Usage | Section |
|-------|-------|---------|
| Phase 4 | Check report structure before assembly | Report Requirements |
| Phase 5 | Validate before each iteration | Quality Gates |
| Phase 5 | Handle failures during refinement | Error Handling |

---

## Report Requirements

**Use in Phase 4**: Ensure report includes all required elements.

| Requirement | Check | How to Fix |
|-------------|-------|------------|
| Executive Summary | 3-5 key takeaways | Extract from analysis findings |
| Visual diagrams | Valid Mermaid syntax | Use `../_shared/mermaid-utils.md` |
| Code references | `file:line` format | Link to actual source locations |
| Recommendations | Actionable, specific | Derive from analysis insights |
| Consistent depth | Match user's depth level | Adjust detail per config.depth |

---

## Quality Gates

**Use in Phase 5**: Run these checks before asking user questions.

```javascript
function runQualityGates(report, config, diagrams) {
  const gates = [
    {
      name: "focus_areas_covered",
      check: () => config.focus_areas.every(area =>
        report.toLowerCase().includes(area.toLowerCase())
      ),
      fix: "Re-analyze missing focus areas"
    },
    {
      name: "diagrams_valid",
      check: () => diagrams.every(d => d.valid),
      fix: "Regenerate failed diagrams with mermaid-utils"
    },
    {
      name: "code_refs_accurate",
      check: () => extractCodeRefs(report).every(ref => fileExists(ref)),
      fix: "Update invalid file references"
    },
    {
      name: "no_placeholders",
      check: () => !report.includes('[TODO]') && !report.includes('[PLACEHOLDER]'),
      fix: "Fill in all placeholder content"
    },
    {
      name: "recommendations_specific",
      check: () => !report.includes('consider') || report.includes('specifically'),
      fix: "Make recommendations project-specific"
    }
  ];

  const results = gates.map(g => ({...g, passed: g.check()}));
  const allPassed = results.every(r => r.passed);

  return { allPassed, results };
}
```

**Integration with Phase 5**:
```javascript
// In 05-iterative-refinement.md
const { allPassed, results } = runQualityGates(report, config, diagrams);

if (allPassed) {
  // All gates passed → ask user to confirm or finalize
} else {
  // Gates failed → include failed gates in discovery questions
  const failedGates = results.filter(r => !r.passed);
  discoveries.qualityIssues = failedGates;
}
```

---

## Error Handling

**Use when**: Encountering errors during any phase.

| Error | Detection | Recovery |
|-------|-----------|----------|
| CLI timeout | Bash exits with timeout | Reduce scope via `config.scope`, retry |
| Exploration failure | Agent returns error | Fall back to `Read` + `Grep` directly |
| User abandons | User selects "cancel" | Save to `iterations/`, allow resume |
| Invalid scope path | Path doesn't exist | `AskUserQuestion` to correct path |
| Diagram validation fails | `validateMermaidSyntax` returns issues | Regenerate with stricter escaping |

**Recovery Flow**:
```javascript
try {
  await executePhase(phase);
} catch (error) {
  const recovery = ERROR_HANDLERS[error.type];
  if (recovery) {
    await recovery.action(error, config);
    // Retry phase or continue
  } else {
    // Save progress and ask user (serialize — Write takes a string)
    Write(`${outputDir}/error-state.json`, JSON.stringify({ phase, error, config }));
    AskUserQuestion({ question: "遇到错误,如何处理?", ... });
  }
}
```

.claude/skills/project-analyze/specs/writing-style.md (new file, 152 lines)
@@ -0,0 +1,152 @@

# Writing Style Specification

## Core Principle

**Paragraph-style description with progressive layering; list-style enumeration is forbidden.**

## Forbidden Writing Pattern

```markdown
<!-- 禁止:清单罗列 -->
### 模块列表
- 用户模块:处理用户相关功能
- 订单模块:处理订单相关功能
- 支付模块:处理支付相关功能

### 依赖关系
| 模块 | 依赖 | 说明 |
|------|------|------|
| A | B | xxx |
```

## Recommended Writing Pattern

```markdown
<!-- 推荐:段落式描述 -->
### 模块架构设计

系统采用分层模块化架构,核心业务逻辑围绕用户、订单、支付三大领域展开。
用户模块作为系统的入口层,承担身份认证与权限管理职责,为下游模块提供
统一的用户上下文。订单模块位于业务核心层,依赖用户模块获取会话信息,
并协调支付模块完成交易闭环。

值得注意的是,支付模块采用策略模式实现多渠道支付,通过接口抽象与
具体支付网关解耦。这一设计使得新增支付渠道时,仅需实现相应策略类,
无需修改核心订单逻辑,体现了开闭原则的应用。

从依赖方向分析,系统呈现清晰的单向依赖:表现层依赖业务层,业务层
依赖数据层,未发现循环依赖。这一架构特征确保了模块的独立可测试性,
同时为后续微服务拆分奠定了基础。
```

## Writing Strategies

### Strategy 1: Subject Shifting

Shift the subject from the developer's perspective to the system/code itself:

| Forbidden | Recommended |
|-----------|-------------|
| 我们设计了... | 系统采用... |
| 开发者实现了... | 该模块通过... |
| 代码中使用了... | 架构设计体现了... |

### Strategy 2: Logical Connectives

Use connectives to keep paragraphs progressing:

- **Continuation**: 此外、进一步、在此基础上
- **Contrast**: 然而、值得注意的是、不同于
- **Causation**: 因此、这一设计使得、由此可见
- **Summary**: 综上所述、从整体来看、概言之

### Strategy 3: In-Depth Explanation

Each technical point must cover:
1. **What**: an objective description of the implementation
2. **Why**: the design intent and considerations
3. **Impact**: the effect on and value to the system

```markdown
<!-- 示例 -->
系统采用依赖注入模式管理组件生命周期(是什么)。这一选择源于
对可测试性和松耦合的追求(为什么)。通过将依赖关系外置于
配置层,各模块可独立进行单元测试,同时为运行时替换实现
提供了可能(影响)。
```

## Section Templates

### Architecture Overview (paragraph style)

```markdown
## 系统架构概述

{项目名称}采用{架构模式}架构,整体设计围绕{核心理念}展开。
从宏观视角审视,系统可划分为{N}个主要层次,各层职责明确,
边界清晰。

{表现层/入口层}作为系统与外部交互的唯一入口,承担请求解析、
参数校验、响应封装等职责。该层通过{框架/技术}实现,遵循
{设计原则},确保接口的一致性与可维护性。

{业务层}是系统的核心所在,封装了全部业务逻辑。该层采用
{模式/策略}组织代码,将复杂业务拆解为{N}个领域模块。
值得注意的是,{关键设计决策}体现了对{质量属性}的重视。

{数据层}负责持久化与数据访问,通过{技术/框架}实现。
该层与业务层通过{接口/抽象}解耦,使得数据源的替换
不影响上层逻辑,体现了依赖倒置原则的应用。
```

### Design Pattern Analysis (paragraph style)

```markdown
## 设计模式应用

代码库中可识别出{模式1}、{模式2}等设计模式的应用,
这些模式的选择与系统的{核心需求}密切相关。

{模式1}主要应用于{场景/模块}。具体实现位于
`{文件路径}`,通过{实现方式}达成{目标}。
这一模式的引入有效解决了{问题},使得{效果}。

在{另一场景}中,系统采用{模式2}应对{挑战}。
不同于{模式1}的{特点},{模式2}更侧重于{关注点}。
从`{文件路径}`的实现可以看出,设计者通过
{具体实现}实现了{目标}。

综合来看,模式的选择体现了对{原则}的遵循,
为系统的{质量属性}提供了有力支撑。
```

### Algorithm Flow Analysis (paragraph style)

```markdown
## 核心算法设计

{算法名称}是系统处理{业务场景}的核心逻辑,
其实现位于`{文件路径}`。

从算法流程来看,整体可分为{N}个阶段。首先,
{第一阶段描述},这一步骤的目的在于{目的}。
随后,算法进入{第二阶段},通过{方法}实现{目标}。
最终,{结果处理}完成整个处理流程。

在复杂度方面,该算法的时间复杂度为{O(x)},
空间复杂度为{O(y)}。这一复杂度特征源于
{原因},在{数据规模}场景下表现良好。

值得关注的是,{算法名称}采用了{优化策略},
相较于朴素实现,{具体优化点}。这一设计决策
使得{性能提升/效果}。
```

## Quality Checklist

- [ ] No list-style enumeration (`-` bullets or `|` tables must not form the main body)
- [ ] Complete paragraphs (3-5 sentences each, logically self-contained)
- [ ] Logical progression (connectives link the argument)
- [ ] Objective voice (no subjective subjects such as "我们" or "开发者")
- [ ] In-depth explanation (covers what/why/impact)
- [ ] Code references (key points cite file paths)

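Parts of this checklist can be screened mechanically. A rough sketch of the list-enumeration check; the 30% threshold is an assumption for illustration, not part of the spec:

```javascript
// Flags drafts whose main body is dominated by bullet/table lines.
// The 0.3 threshold is an illustrative assumption.
function checkParagraphStyle(markdown) {
  const lines = markdown.split('\n').filter(l => l.trim().length > 0);
  const listLines = lines.filter(l => /^\s*([-*]\s|\|)/.test(l)).length;
  const ratio = lines.length ? listLines / lines.length : 0;
  return { ratio, pass: ratio < 0.3 };
}
```

A draft made entirely of bullets fails; ordinary paragraph text passes.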
.claude/skills/software-manual/SKILL.md (new file, 184 lines)
@@ -0,0 +1,184 @@

---
name: software-manual
description: Generate interactive TiddlyWiki-style HTML software manuals with screenshots, API docs, and multi-level code examples. Use when creating user guides, software documentation, or API references. Triggers on "software manual", "user guide", "generate manual", "create docs".
allowed-tools: Task, AskUserQuestion, Read, Bash, Glob, Grep, Write, mcp__chrome__*
---

# Software Manual Skill

Generate comprehensive, interactive software manuals in TiddlyWiki-style single-file HTML format.

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────┐
│                 Context-Optimized Architecture                  │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Phase 1: Requirements     → manual-config.json                 │
│      ↓                                                          │
│  Phase 2: Exploration      → exploration-*.json                 │
│      ↓                                                          │
│  Phase 3: Parallel Agents  → sections/section-*.md              │
│      ↓      (6 Agents)                                          │
│  Phase 3.5: Consolidation  → consolidation-summary.md           │
│      ↓                                                          │
│  Phase 4: Screenshot       → screenshots/*.png                  │
│      Capture (via Chrome MCP)                                   │
│      ↓                                                          │
│  Phase 5: HTML Assembly    → {name}-使用手册.html               │
│      ↓                                                          │
│  Phase 6: Refinement       → iterations/                        │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

## Key Design Principles

1. **Main agent orchestrates, subagents execute**: all heavy computation is delegated to the `universal-executor` subagent
2. **Brief Returns**: agents return path + summary, not full content (avoids context overflow)
3. **System Agents**: uses `cli-explore-agent` (exploration) and `universal-executor` (execution)
4. **Mature libraries embedded**: marked.js (MD parsing) + highlight.js (syntax highlighting), no CDN dependency
5. **Single-File HTML**: TiddlyWiki-style interactive document with embedded resources
6. **Dynamic tags**: navigation tags are generated automatically from the actual sections

## Execution Flow

```
┌─────────────────────────────────────────────────────────────────┐
│ Phase 1: Requirements Discovery (主 Agent)                      │
│   → AskUserQuestion: 收集软件类型、目标用户、文档范围           │
│   → Output: manual-config.json                                  │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2: Project Exploration (cli-explore-agent × N)            │
│   → 并行探索: architecture, ui-routes, api-endpoints, config    │
│   → Output: exploration-*.json                                  │
├─────────────────────────────────────────────────────────────────┤
│ Phase 2.5: API Extraction (extract_apis.py)                     │
│   → 自动提取: FastAPI/TypeDoc/pdoc                              │
│   → Output: api-docs/{backend,frontend,modules}/*.md            │
├─────────────────────────────────────────────────────────────────┤
│ Phase 3: Parallel Analysis (universal-executor × 6)             │
│   → 6 个子 Agent 并行: overview, ui-guide, api-docs, config,    │
│     troubleshooting, code-examples                              │
│   → Output: sections/section-*.md                               │
├─────────────────────────────────────────────────────────────────┤
│ Phase 3.5: Consolidation (universal-executor)                   │
│   → 质量检查: 一致性、交叉引用、截图标记                        │
│   → Output: consolidation-summary.md, screenshots-list.json     │
├─────────────────────────────────────────────────────────────────┤
│ Phase 4: Screenshot Capture (universal-executor + Chrome MCP)   │
│   → 批量截图: 调用 mcp__chrome__screenshot                      │
│   → Output: screenshots/*.png + manifest.json                   │
├─────────────────────────────────────────────────────────────────┤
│ Phase 5: HTML Assembly (universal-executor)                     │
│   → 组装 HTML: MD→tiddlers, 嵌入 CSS/JS/图片                    │
│   → Output: {name}-使用手册.html                                │
├─────────────────────────────────────────────────────────────────┤
│ Phase 6: Iterative Refinement (主 Agent)                        │
│   → 预览 + 用户反馈 + 迭代修复                                  │
│   → Output: iterations/v*.html                                  │
└─────────────────────────────────────────────────────────────────┘
```

## Agent Configuration
|
||||
|
||||
| Agent | Role | Output File | Focus Areas |
|
||||
|-------|------|-------------|-------------|
|
||||
| overview | Product Manager | section-overview.md | Product intro, features, quick start |
|
||||
| ui-guide | UX Expert | section-ui-guide.md | UI operations, step-by-step guides |
|
||||
| api-docs | API Architect | section-api-reference.md | REST API, Frontend API |
|
||||
| config | DevOps Engineer | section-configuration.md | Env vars, deployment, settings |
|
||||
| troubleshooting | Support Engineer | section-troubleshooting.md | FAQs, error codes, solutions |
|
||||
| code-examples | Developer Advocate | section-examples.md | Beginner/Intermediate/Advanced examples |

## Agent Return Format

```typescript
interface ManualAgentReturn {
  status: "completed" | "partial" | "failed";
  output_file: string;
  summary: string;               // Max 50 chars
  screenshots_needed: Array<{
    id: string;                  // e.g., "ss-login-form"
    url: string;                 // Relative or absolute URL
    description: string;         // e.g., "Login form interface"
    selector?: string;           // CSS selector for a partial screenshot
    wait_for?: string;           // Element to wait for before capturing
  }>;
  cross_references: string[];    // Other sections referenced
  difficulty_level: "beginner" | "intermediate" | "advanced";
}
```
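For reference, a conforming return from the ui-guide agent might look like this (all values are hypothetical examples, not taken from a real run):

```javascript
// Hypothetical return payload conforming to ManualAgentReturn
const exampleReturn = {
  status: "completed",
  output_file: "sections/section-ui-guide.md",
  summary: "UI guide covering 12 routes",
  screenshots_needed: [
    {
      id: "ss-login-form",
      url: "/login",
      description: "Login form interface",
      selector: "#login-form",
      wait_for: "#login-form button[type=submit]"
    }
  ],
  cross_references: ["section-configuration.md"],
  difficulty_level: "beginner"
};

// Sanity checks the orchestrator could run before accepting the result
const summaryOk = exampleReturn.summary.length <= 50;
const statusOk = ["completed", "partial", "failed"].includes(exampleReturn.status);
```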

## HTML Features (TiddlyWiki-style)

1. **Search**: Full-text search with result highlighting
2. **Collapse/Expand**: Per-section collapsible content
3. **Tag Navigation**: Filter by category tags
4. **Theme Toggle**: Light/Dark mode with localStorage persistence
5. **Single File**: All CSS/JS/images embedded as Base64
6. **Offline**: Works without an internet connection
7. **Print-friendly**: Optimized print stylesheet
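The theme toggle (feature 4) can be sketched roughly as follows; the `wiki-theme` storage key and the `data-theme` attribute are assumptions for illustration, not names fixed by the template:

```javascript
// Minimal theme-toggle sketch (assumed localStorage key "wiki-theme"
// and an assumed data-theme attribute on <html>)
const THEMES = ["light", "dark"];

function nextTheme(current) {
  // Cycle light → dark → light; unknown values fall back to "light"
  const idx = THEMES.indexOf(current);
  return THEMES[(idx + 1) % THEMES.length];
}

function toggleTheme(doc, storage) {
  const current = storage.getItem("wiki-theme") || "light";
  const next = nextTheme(current);
  doc.documentElement.setAttribute("data-theme", next);
  storage.setItem("wiki-theme", next); // persists across reloads
  return next;
}
```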

## Directory Setup

```javascript
// Generate a timestamp directory name, e.g. 2025-01-01T12:00:00 → 20250101120000
const timestamp = new Date().toISOString().slice(0, 19).replace(/[-:T]/g, '');
const dir = `.workflow/.scratchpad/manual-${timestamp}`;

// Windows (cmd.exe)
Bash(`mkdir "${dir}\\sections" && mkdir "${dir}\\screenshots" && mkdir "${dir}\\api-docs" && mkdir "${dir}\\iterations"`);
```

## Output Structure

```
.workflow/.scratchpad/manual-{timestamp}/
├── manual-config.json               # Phase 1
├── exploration/                     # Phase 2
│   ├── exploration-architecture.json
│   ├── exploration-ui-routes.json
│   └── exploration-api-endpoints.json
├── sections/                        # Phase 3
│   ├── section-overview.md
│   ├── section-ui-guide.md
│   ├── section-api-reference.md
│   ├── section-configuration.md
│   ├── section-troubleshooting.md
│   └── section-examples.md
├── consolidation-summary.md         # Phase 3.5
├── api-docs/                        # API documentation
│   ├── frontend/                    # TypeDoc output
│   └── backend/                     # Swagger/OpenAPI output
├── screenshots/                     # Phase 4
│   ├── ss-*.png
│   └── screenshots-manifest.json
├── iterations/                      # Phase 6
│   ├── v1.html
│   └── v2.html
└── {软件名}-使用手册.html            # Final output
```

## Reference Documents

| Document | Purpose |
|----------|---------|
| [phases/01-requirements-discovery.md](phases/01-requirements-discovery.md) | User configuration collection |
| [phases/02-project-exploration.md](phases/02-project-exploration.md) | Project type detection |
| [phases/02.5-api-extraction.md](phases/02.5-api-extraction.md) | Automated API extraction |
| [phases/03-parallel-analysis.md](phases/03-parallel-analysis.md) | 6-agent parallel analysis |
| [phases/03.5-consolidation.md](phases/03.5-consolidation.md) | Consolidation and quality checks |
| [phases/04-screenshot-capture.md](phases/04-screenshot-capture.md) | Chrome MCP screenshots |
| [phases/05-html-assembly.md](phases/05-html-assembly.md) | HTML assembly |
| [phases/06-iterative-refinement.md](phases/06-iterative-refinement.md) | Iterative refinement |
| [specs/quality-standards.md](specs/quality-standards.md) | Quality standards |
| [specs/writing-style.md](specs/writing-style.md) | Writing style |
| [templates/tiddlywiki-shell.html](templates/tiddlywiki-shell.html) | HTML template |
| [templates/css/wiki-base.css](templates/css/wiki-base.css) | Base styles |
| [templates/css/wiki-dark.css](templates/css/wiki-dark.css) | Dark theme |
| [scripts/bundle-libraries.md](scripts/bundle-libraries.md) | Library bundling |
| [scripts/api-extractor.md](scripts/api-extractor.md) | API extraction notes |
| [scripts/extract_apis.py](scripts/extract_apis.py) | API extraction script |
| [scripts/screenshot-helper.md](scripts/screenshot-helper.md) | Screenshot helpers |

@@ -0,0 +1,162 @@

# Phase 1: Requirements Discovery

Collect user requirements and generate the configuration for the manual generation process.

## Objective

Gather essential information about the software project to customize the manual generation:
- Software type and characteristics
- Target user audience
- Documentation scope and depth
- Special requirements

## Execution Steps

### Step 1: Software Information Collection

Use `AskUserQuestion` to collect:

```javascript
AskUserQuestion({
  questions: [
    {
      question: "What type of software is this project?",
      header: "Software Type",
      options: [
        { label: "Web Application", description: "Frontend + Backend web app with UI" },
        { label: "CLI Tool", description: "Command-line interface tool" },
        { label: "SDK/Library", description: "Developer library or SDK" },
        { label: "Desktop App", description: "Desktop application (Electron, etc.)" }
      ],
      multiSelect: false
    },
    {
      question: "Who is the target audience for this manual?",
      header: "Target Users",
      options: [
        { label: "End Users", description: "Non-technical users who use the product" },
        { label: "Developers", description: "Developers integrating or extending the product" },
        { label: "Administrators", description: "System admins deploying and maintaining" },
        { label: "All Audiences", description: "Mixed audience with different sections" }
      ],
      multiSelect: false
    },
    {
      question: "What documentation scope do you need?",
      header: "Doc Scope",
      options: [
        { label: "Quick Start", description: "Essential getting started guide only" },
        { label: "User Guide", description: "Complete user-facing documentation" },
        { label: "API Reference", description: "Focus on API documentation" },
        { label: "Comprehensive", description: "Full documentation including all sections" }
      ],
      multiSelect: false
    },
    {
      question: "What difficulty levels should code examples cover?",
      header: "Example Levels",
      options: [
        { label: "Beginner Only", description: "Simple, basic examples" },
        { label: "Beginner + Intermediate", description: "Basic to moderate complexity" },
        { label: "All Levels", description: "Beginner, Intermediate, and Advanced" }
      ],
      multiSelect: false
    }
  ]
});
```

### Step 2: Auto-Detection (Supplement)

Automatically detect project characteristics:

```javascript
// Detect from package.json (Read returns text, so parse it first)
const packageJson = JSON.parse(Read('package.json'));
const softwareName = packageJson.name;
const version = packageJson.version;
const description = packageJson.description;

// Detect tech stack
const hasReact = !!packageJson.dependencies?.react;
const hasVue = !!packageJson.dependencies?.vue;
const hasExpress = !!packageJson.dependencies?.express;
const hasNestJS = !!packageJson.dependencies?.['@nestjs/core'];

// Detect CLI
const hasBin = !!packageJson.bin;

// Detect UI
const hasPages = Glob('src/pages/**/*').length > 0 || Glob('pages/**/*').length > 0;
const hasRoutes = Glob('**/routes.*').length > 0;
```
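When the user skips the questions, the detection flags can feed a simple fallback classifier (a sketch; the precedence order and the `hasElectron` flag are assumptions, not part of the detection above):

```javascript
// Map auto-detection flags to the config's software type.
// Precedence (desktop > web > cli > sdk) is an assumed heuristic.
function deriveSoftwareType({ hasElectron, hasPages, hasRoutes, hasBin }) {
  if (hasElectron) return 'desktop';
  if (hasPages || hasRoutes) return 'web';
  if (hasBin) return 'cli';
  return 'sdk'; // no UI and no CLI entry point: treat as a library
}
```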

### Step 3: Generate Configuration

Create `manual-config.json`:

```json
{
  "software": {
    "name": "{{detected or user input}}",
    "version": "{{from package.json}}",
    "description": "{{from package.json}}",
    "type": "{{web|cli|sdk|desktop}}"
  },
  "target_audience": "{{end_users|developers|admins|all}}",
  "doc_scope": "{{quick_start|user_guide|api_reference|comprehensive}}",
  "example_levels": ["beginner", "intermediate", "advanced"],
  "tech_stack": {
    "frontend": "{{react|vue|angular|vanilla}}",
    "backend": "{{express|nestjs|fastify|none}}",
    "language": "{{typescript|javascript}}",
    "ui_framework": "{{tailwind|mui|antd|none}}"
  },
  "features": {
    "has_ui": true,
    "has_api": true,
    "has_cli": false,
    "has_config": true
  },
  "agents_to_run": [
    "overview",
    "ui-guide",
    "api-docs",
    "config",
    "troubleshooting",
    "code-examples"
  ],
  "screenshot_config": {
    "enabled": true,
    "dev_command": "npm run dev",
    "dev_url": "http://localhost:3000",
    "wait_timeout": 5000
  },
  "output": {
    "filename": "{{name}}-使用手册.html",
    "theme": "light",
    "language": "zh-CN"
  },
  "timestamp": "{{ISO8601}}"
}
```

## Agent Selection Logic

Based on `doc_scope`, select the agents to run:

| Scope | Agents |
|-------|--------|
| quick_start | overview |
| user_guide | overview, ui-guide, config, troubleshooting |
| api_reference | overview, api-docs, code-examples |
| comprehensive | ALL 6 agents |
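The table above translates directly into a lookup (a sketch; the fallback to the full set for unknown scopes is an assumed safe default):

```javascript
// Scope → agents mapping, mirroring the selection table above
const SCOPE_AGENTS = {
  quick_start: ['overview'],
  user_guide: ['overview', 'ui-guide', 'config', 'troubleshooting'],
  api_reference: ['overview', 'api-docs', 'code-examples'],
  comprehensive: ['overview', 'ui-guide', 'api-docs', 'config', 'troubleshooting', 'code-examples']
};

function selectAgents(docScope) {
  // Unknown scopes fall back to the comprehensive set
  return SCOPE_AGENTS[docScope] || SCOPE_AGENTS.comprehensive;
}
```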

## Output

- **File**: `manual-config.json`
- **Location**: `.workflow/.scratchpad/manual-{timestamp}/`

## Next Phase

Proceed to [Phase 2: Project Exploration](02-project-exploration.md) with the generated configuration.

101
.claude/skills/software-manual/phases/02-project-exploration.md
Normal file
@@ -0,0 +1,101 @@

# Phase 2: Project Exploration

Use `cli-explore-agent` to explore the project structure and produce the structured data the documentation needs.

## Exploration Angles

```javascript
const EXPLORATION_ANGLES = {
  web: ['architecture', 'ui-routes', 'api-endpoints', 'config'],
  cli: ['architecture', 'commands', 'config'],
  sdk: ['architecture', 'public-api', 'types', 'config'],
  desktop: ['architecture', 'ui-screens', 'config']
};
```

## Execution Flow

```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
const angles = EXPLORATION_ANGLES[config.software.type];

// Explore all angles in parallel
const tasks = angles.map(angle => Task({
  subagent_type: 'cli-explore-agent',
  run_in_background: false,
  prompt: buildExplorationPrompt(angle, config, workDir)
}));

const results = await Promise.all(tasks);
```

## Exploration Configs

```javascript
const EXPLORATION_CONFIGS = {
  architecture: {
    task: 'Analyze module structure, entry points, and dependencies',
    patterns: ['src/*/', 'package.json', 'tsconfig.json'],
    output: 'exploration-architecture.json'
  },
  'ui-routes': {
    task: 'Extract UI routes, page components, and navigation structure',
    patterns: ['src/pages/**', 'src/views/**', 'app/**/page.*', 'src/router/**'],
    output: 'exploration-ui-routes.json'
  },
  'api-endpoints': {
    task: 'Extract REST API endpoints and request/response types',
    patterns: ['src/**/*.controller.*', 'src/routes/**', 'openapi.*', 'swagger.*'],
    output: 'exploration-api-endpoints.json'
  },
  config: {
    task: 'Extract environment variables and configuration-file options',
    patterns: ['.env.example', 'config/**', 'docker-compose.yml'],
    output: 'exploration-config.json'
  },
  commands: {
    task: 'Extract CLI commands, options, and examples',
    patterns: ['src/cli*', 'bin/*', 'src/commands/**'],
    output: 'exploration-commands.json'
  }
};
```

## Prompt Construction

```javascript
function buildExplorationPrompt(angle, config, workDir) {
  const cfg = EXPLORATION_CONFIGS[angle];
  return `
[TASK]
${cfg.task}

[SCOPE]
Project type: ${config.software.type}
Scan mode: deep-scan
File patterns: ${cfg.patterns.join(', ')}

[OUTPUT]
File: ${workDir}/exploration/${cfg.output}
Format: JSON (schema-compliant)

[RETURN]
Briefly report how many items were found and the key findings
`;
}
```

## Output Structure

```
exploration/
├── exploration-architecture.json    # Module structure
├── exploration-ui-routes.json       # UI routes
├── exploration-api-endpoints.json   # API endpoints
├── exploration-config.json          # Configuration options
└── exploration-commands.json        # CLI commands (if CLI)
```

## Next Phase

→ [Phase 3: Parallel Analysis](03-parallel-analysis.md)

161
.claude/skills/software-manual/phases/02.5-api-extraction.md
Normal file
@@ -0,0 +1,161 @@

# Phase 2.5: API Extraction

After project exploration and before parallel analysis, automatically extract API documentation.

## Core Principle

**Extract with mature tooling and keep the output format compatible with the wiki templates.**

## Execution Flow

```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));

// Check configured project paths
const apiSources = config.api_sources || detectApiSources(config.project_path);
// api_sources may be an object keyed by source name; normalize to a list of paths
const sourcePaths = Array.isArray(apiSources)
  ? apiSources
  : Object.values(apiSources).map(s => s.path);

// Run the API extraction script
Bash({
  command: `python .claude/skills/software-manual/scripts/extract_apis.py -o "${workDir}" -p ${sourcePaths.join(' ')}`
});

// Verify the output
const apiDocsDir = `${workDir}/api-docs`;
const extractedFiles = Glob(`${apiDocsDir}/**/*.{json,md}`);
console.log(`Extracted ${extractedFiles.length} API documentation files`);
```

## Supported Project Types

| Type | Detection | Extraction Tool | Output Format |
|------|-----------|-----------------|---------------|
| FastAPI | `app/main.py` + FastAPI import | OpenAPI JSON | `openapi.json` + `API_SUMMARY.md` |
| Next.js | `package.json` + next | TypeDoc | `*.md` (Markdown) |
| Python Module | `__init__.py` + setup.py/pyproject.toml | pdoc | `*.md` (Markdown) |
| Express | `package.json` + express | swagger-jsdoc | `openapi.json` |
| NestJS | `package.json` + @nestjs | @nestjs/swagger | `openapi.json` |

## Output Format Specification

### Markdown Compatibility Requirements

Ensure the generated Markdown is compatible with the wiki CSS styles:

```markdown
# API Reference                  → <h1> (wiki-base.css)

## Endpoints                     → <h2>

| Method | Path | Summary |      → <table> with blue header
|--------|------|---------|
| `GET` | `/api/...` | ... |      → <code> red highlight

### GET /api/users               → <h3>

\`\`\`json                       → <pre><code> dark background
{
  "id": 1,
  "name": "example"
}
\`\`\`

- Parameter: `id` (required)     → <ul><li> + <code>
```

### Format Validation Checks

```javascript
function validateApiDocsFormat(apiDocsDir) {
  const issues = [];
  const mdFiles = Glob(`${apiDocsDir}/**/*.md`);

  for (const file of mdFiles) {
    const content = Read(file);

    // Check table formatting
    if (content.includes('|') && !content.match(/\|.*\|.*\|/)) {
      issues.push(`${file}: incomplete table formatting`);
    }

    // Check code-block language annotations
    const codeBlocks = content.match(/```(\w*)\n/g) || [];
    const unlabeled = codeBlocks.filter(b => b === '```\n');
    if (unlabeled.length > 0) {
      issues.push(`${file}: ${unlabeled.length} code block(s) missing a language annotation`);
    }

    // Check heading hierarchy
    if (!content.match(/^# /m)) {
      issues.push(`${file}: missing a top-level heading`);
    }
  }

  return issues;
}
```

## Project Configuration Example

Configure API sources in `manual-config.json`:

```json
{
  "software": {
    "name": "Hydro Generator Workbench",
    "type": "web"
  },
  "api_sources": {
    "backend": {
      "path": "D:/dongdiankaifa9/backend",
      "type": "fastapi",
      "entry": "app.main:app"
    },
    "frontend": {
      "path": "D:/dongdiankaifa9/frontend",
      "type": "typescript",
      "entries": ["lib", "hooks", "components"]
    },
    "hydro_generator_module": {
      "path": "D:/dongdiankaifa9/hydro_generator_module",
      "type": "python"
    },
    "multiphysics_network": {
      "path": "D:/dongdiankaifa9/multiphysics_network",
      "type": "python"
    }
  }
}
```

## Output Structure

```
{workDir}/api-docs/
├── backend/
│   ├── openapi.json          # OpenAPI 3.0 spec
│   └── API_SUMMARY.md        # Markdown summary (wiki-compatible)
├── frontend/
│   ├── modules.md            # TypeDoc module docs
│   ├── classes/              # Class docs
│   └── functions/            # Function docs
├── hydro_generator/
│   ├── assembler.md          # pdoc module docs
│   ├── blueprint.md
│   └── builders/
└── multiphysics/
    ├── analysis_domain.md
    ├── builders.md
    └── compilers.md
```

## Quality Gates

- [ ] All configured API sources extracted
- [ ] Markdown format compatible with the wiki CSS
- [ ] Tables render correctly (blue headers)
- [ ] Code blocks carry language annotations
- [ ] No empty or broken files

## Next Phase

→ [Phase 3: Parallel Analysis](03-parallel-analysis.md)

183
.claude/skills/software-manual/phases/03-parallel-analysis.md
Normal file
@@ -0,0 +1,183 @@

# Phase 3: Parallel Analysis

Use `universal-executor` agents to generate the six documentation sections in parallel.

## Agent Configuration

```javascript
const AGENT_CONFIGS = {
  overview: {
    role: 'Product Manager',
    output: 'section-overview.md',
    task: 'Write the product overview, core features, and quick-start guide',
    focus: 'Product positioning, target users, 5-step quick start, system requirements',
    input: ['exploration-architecture.json', 'README.md', 'package.json'],
    tag: 'getting-started'
  },
  'interface-guide': {
    role: 'Product Designer',
    output: 'section-interface.md',
    task: 'Write the interface/interaction guide (web screenshots, CLI interactions, desktop app operations)',
    focus: 'Visual layout, interaction flows, command-line options, input/output examples',
    input: ['exploration-ui-routes.json', 'src/**', 'pages/**', 'views/**', 'components/**', 'src/commands/**'],
    tag: 'interface',
    screenshot_rules: `
Annotate interaction points according to the project type:

[Web] <!-- SCREENSHOT: id="ss-{feature}" url="{route}" selector="{CSS selector}" description="{description}" -->
[CLI] Show command interactions in code blocks:
\`\`\`bash
$ command --flag value
Expected output here
\`\`\`
[Desktop] <!-- SCREENSHOT: id="ss-{feature}" description="{description}" -->
`
  },
  'api-reference': {
    role: 'Technical Architect',
    output: 'section-reference.md',
    task: 'Write the interface reference (REST API / library functions / CLI commands)',
    focus: 'Function signatures, endpoint definitions, parameters, return values, error codes',
    pre_extract: 'python .claude/skills/software-manual/scripts/extract_apis.py -o ${workDir}',
    input: [
      '${workDir}/api-docs/backend/openapi.json',       // FastAPI OpenAPI
      '${workDir}/api-docs/backend/API_SUMMARY.md',     // Backend summary
      '${workDir}/api-docs/frontend/**/*.md',           // TypeDoc output
      '${workDir}/api-docs/hydro_generator/**/*.md',    // Python module
      '${workDir}/api-docs/multiphysics/**/*.md'        // Python module
    ],
    tag: 'api'
  },
  config: {
    role: 'DevOps Engineer',
    output: 'section-configuration.md',
    task: 'Write the configuration guide covering environment variables, config files, and deployment settings',
    focus: 'Environment-variable tables, config-file formats, deployment options, security settings',
    input: ['exploration-config.json', '.env.example', 'config/**', '*.config.*'],
    tag: 'config'
  },
  troubleshooting: {
    role: 'Support Engineer',
    output: 'section-troubleshooting.md',
    task: 'Write the troubleshooting guide covering common problems, error codes, and FAQs',
    focus: 'Common problems and solutions, error-code reference, FAQ, getting help',
    input: ['docs/troubleshooting.md', 'src/**/errors.*', 'src/**/exceptions.*', 'TROUBLESHOOTING.md'],
    tag: 'troubleshooting'
  },
  'code-examples': {
    role: 'Developer Advocate',
    output: 'section-examples.md',
    task: 'Write code examples at multiple difficulty levels (beginner 40% / intermediate 40% / advanced 20%)',
    focus: 'Complete runnable code, step-by-step explanations, expected output, best practices',
    input: ['examples/**', 'tests/**', 'demo/**', 'samples/**'],
    tag: 'examples'
  }
};
```

## Execution Flow

```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));

// 1. Pre-extract API docs (for agents with a pre_extract command)
for (const [name, cfg] of Object.entries(AGENT_CONFIGS)) {
  if (cfg.pre_extract) {
    const cmd = cfg.pre_extract.replace(/\$\{workDir\}/g, workDir);
    console.log(`[Pre-extract] ${name}: ${cmd}`);
    Bash({ command: cmd });
  }
}

// 2. Launch 6 universal-executor agents in parallel
const tasks = Object.entries(AGENT_CONFIGS).map(([name, cfg]) =>
  Task({
    subagent_type: 'universal-executor',
    run_in_background: false,
    prompt: buildAgentPrompt(name, cfg, config, workDir)
  })
);

const results = await Promise.all(tasks);
```

## Prompt Construction

```javascript
function buildAgentPrompt(name, cfg, config, workDir) {
  const screenshotSection = cfg.screenshot_rules
    ? `\n[SCREENSHOT RULES]\n${cfg.screenshot_rules}`
    : '';

  return `
[ROLE] ${cfg.role}

[PROJECT CONTEXT]
Project type: ${config.software.type} (web/cli/sdk/desktop)
Language: ${config.software.language || 'auto-detect'}
Name: ${config.software.name}

[TASK]
${cfg.task}
Output: ${workDir}/sections/${cfg.output}

[INPUT]
- Config: ${workDir}/manual-config.json
- Exploration results: ${workDir}/exploration/
- Scan paths: ${cfg.input.join(', ')}

[CONTENT REQUIREMENTS]
- Heading levels: # ## ### (3 levels max)
- Code blocks: \`\`\`language ... \`\`\` (language annotation required)
- Tables: | col1 | col2 | format
- Lists: ordered 1. 2. 3. / unordered - - -
- Inline code: \`code\`
- Links: [text](url)
${screenshotSection}

[FOCUS]
${cfg.focus}

[OUTPUT FORMAT]
A Markdown file containing:
- A clear section structure
- Concrete code examples
- Parameter/configuration tables
- Common use-case walkthroughs

[RETURN JSON]
{
  "status": "completed",
  "output_file": "sections/${cfg.output}",
  "summary": "<50 chars max>",
  "tag": "${cfg.tag}",
  "screenshots_needed": []
}
`;
}
```

## Result Collection

```javascript
const agentResults = results.map(r => JSON.parse(r));
const allScreenshots = agentResults.flatMap(r => r.screenshots_needed);

Write(`${workDir}/agent-results.json`, JSON.stringify({
  results: agentResults,
  screenshots_needed: allScreenshots,
  timestamp: new Date().toISOString()
}, null, 2));
```

## Quality Checks

- [ ] Markdown syntax is valid
- [ ] No placeholder text
- [ ] Code blocks carry language annotations
- [ ] Screenshot markers are well-formed
- [ ] Cross-references are valid

## Next Phase

→ [Phase 3.5: Consolidation](03.5-consolidation.md)

82
.claude/skills/software-manual/phases/03.5-consolidation.md
Normal file
@@ -0,0 +1,82 @@

# Phase 3.5: Consolidation

Use a `universal-executor` sub-agent to run the quality checks, avoiding main-agent memory overflow.

## Core Principle

**The main agent orchestrates; sub-agents do the heavy computation.**

## Execution Flow

```javascript
const agentResults = JSON.parse(Read(`${workDir}/agent-results.json`));

// Delegate the consolidation checks to a universal-executor
const result = Task({
  subagent_type: 'universal-executor',
  run_in_background: false,
  prompt: buildConsolidationPrompt(workDir)
});

const consolidationResult = JSON.parse(result);
```

## Prompt Construction

```javascript
function buildConsolidationPrompt(workDir) {
  return `
[ROLE] Quality Analyst

[TASK]
Check all sections for consistency and completeness

[INPUT]
- Section files: ${workDir}/sections/section-*.md
- Agent results: ${workDir}/agent-results.json

[CHECKS]
1. Markdown syntax validity
2. Screenshot marker format (<!-- SCREENSHOT: id="..." -->)
3. Cross-reference validity
4. Terminology consistency
5. Code-block language annotations

[OUTPUT]
1. Write ${workDir}/consolidation-summary.md
2. Write ${workDir}/screenshots-list.json (screenshot inventory)

[RETURN JSON]
{
  "status": "completed",
  "sections_checked": <n>,
  "screenshots_found": <n>,
  "issues": { "errors": <n>, "warnings": <n> },
  "quality_score": <0-100>
}
`;
}
```

## Agent Responsibilities

1. **Read sections** → check each section-*.md
2. **Extract screenshots** → collect all screenshot markers
3. **Verify references** → validate cross-references
4. **Assess quality** → compute an overall score
5. **Report** → consolidation-summary.md

## Output

- `consolidation-summary.md` - quality report
- `screenshots-list.json` - screenshot inventory (consumed by Phase 4)
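A minimal `screenshots-list.json` might look like this (hypothetical entries; the per-screenshot fields mirror `screenshots_needed` in the agent return format, and `source_section`/`priority` are assumed bookkeeping fields):

```json
{
  "screenshots": [
    {
      "id": "ss-login-form",
      "url": "/login",
      "selector": "#login-form",
      "description": "Login form interface",
      "source_section": "section-ui-guide.md",
      "priority": "high"
    }
  ],
  "total": 1
}
```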

## Quality Gates

- [ ] No errors
- [ ] Overall score >= 60%
- [ ] Cross-references valid

## Next Phase

→ [Phase 4: Screenshot Capture](04-screenshot-capture.md)

@@ -0,0 +1,89 @@

# Phase 4: Screenshot Capture

Use a `universal-executor` sub-agent to drive Chrome MCP for screenshots.

## Core Principle

**The main agent orchestrates; the sub-agent captures the screenshots.**

## Execution Flow

```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
const screenshotsList = JSON.parse(Read(`${workDir}/screenshots-list.json`));

// Delegate screenshot capture to a universal-executor
const result = Task({
  subagent_type: 'universal-executor',
  run_in_background: false,
  prompt: buildScreenshotPrompt(config, screenshotsList, workDir)
});

const captureResult = JSON.parse(result);
```

## Prompt Construction

```javascript
function buildScreenshotPrompt(config, screenshotsList, workDir) {
  return `
[ROLE] Screenshot Capturer

[TASK]
Batch-capture screenshots via Chrome MCP

[INPUT]
- Config: ${workDir}/manual-config.json
- Screenshot list: ${workDir}/screenshots-list.json

[STEPS]
1. Check Chrome MCP availability (mcp__chrome__*)
2. Start the dev server: ${config.screenshot_config?.dev_command || 'npm run dev'}
3. Wait until the server is ready: ${config.screenshot_config?.dev_url || 'http://localhost:3000'}
4. Walk the screenshot list, calling mcp__chrome__screenshot for each entry
5. Save screenshots to ${workDir}/screenshots/
6. Generate the manifest: ${workDir}/screenshots/screenshots-manifest.json
7. Stop the dev server

[MCP CALLS]
- mcp__chrome__screenshot({ url, selector?, viewport })
- Save as PNG files

[FALLBACK]
If Chrome MCP is unavailable, generate a manual capture guide: MANUAL_CAPTURE.md

[RETURN JSON]
{
  "status": "completed|skipped",
  "captured": <n>,
  "failed": <n>,
  "manifest_file": "screenshots-manifest.json"
}
`;
}
```

## Agent Responsibilities

1. **Check MCP** → Chrome MCP availability
2. **Start server** → dev server
3. **Batch capture** → call mcp__chrome__screenshot
4. **Save files** → screenshots/*.png
5. **Generate manifest** → screenshots-manifest.json

## Output

- `screenshots/*.png` - screenshot files
- `screenshots/screenshots-manifest.json` - manifest
- `screenshots/MANUAL_CAPTURE.md` - manual guide (fallback)

## Quality Gates

- [ ] High-priority screenshots captured
- [ ] Consistent dimensions (1280×800)
- [ ] No blank screenshots
- [ ] Manifest complete

## Next Phase

→ [Phase 5: HTML Assembly](05-html-assembly.md)

132
.claude/skills/software-manual/phases/05-html-assembly.md
Normal file
@@ -0,0 +1,132 @@

# Phase 5: HTML Assembly

Use a `universal-executor` sub-agent to generate the final HTML, avoiding main-agent memory overflow.

## Core Principle

**The main agent orchestrates; sub-agents do the heavy computation.**

## Execution Flow

```javascript
const config = JSON.parse(Read(`${workDir}/manual-config.json`));

// Delegate HTML assembly to a universal-executor
const result = Task({
  subagent_type: 'universal-executor',
  run_in_background: false,
  prompt: buildAssemblyPrompt(config, workDir)
});

const buildResult = JSON.parse(result);
```

## Prompt Construction

```javascript
function buildAssemblyPrompt(config, workDir) {
  return `
[ROLE] HTML Assembler

[TASK]
Generate a TiddlyWiki-style interactive HTML manual (mature libraries, no external CDN dependencies)

[INPUT]
- Template: .claude/skills/software-manual/templates/tiddlywiki-shell.html
- CSS: .claude/skills/software-manual/templates/css/wiki-base.css, wiki-dark.css
- Config: ${workDir}/manual-config.json
- Sections: ${workDir}/sections/section-*.md
- Agent results: ${workDir}/agent-results.json (contains tag info)
- Screenshots: ${workDir}/screenshots/

[LIBRARIES TO EMBED]
1. marked.js (v14+) - Markdown to HTML
   - Fetch https://unpkg.com/marked/marked.min.js and inline its contents
2. highlight.js (v11+) - code syntax highlighting
   - Core + common language packs (js, ts, python, bash, json, yaml, html, css)
   - Use the github-dark theme

[STEPS]
1. Read the HTML template and CSS
2. Inline the marked.js and highlight.js code
3. Read agent-results.json and extract each section's tag
4. Dynamically generate {{TAG_BUTTONS_HTML}} (from the tags actually used)
5. Read each section-*.md and convert it to HTML with marked
6. Add data-language attributes and syntax highlighting to code blocks
7. Process <!-- SCREENSHOT: id="..." --> markers, embedding Base64 images
8. Generate the table of contents and search index
9. Assemble the final HTML and write ${workDir}/${config.software.name}-使用手册.html

[CONTENT FORMATTING]
- Code blocks: dark background + language label + syntax highlighting
- Tables: blue header + borders + hover effect
- Inline code: red highlight
- Lists: enhanced ordered/unordered styles
- Left navigation: fixed sidebar + TOC

[RETURN JSON]
{
  "status": "completed",
  "output_file": "${config.software.name}-使用手册.html",
  "file_size": "<size>",
  "sections_count": <n>,
  "tags_generated": [],
  "screenshots_embedded": <n>
}
`;
}
```

## Agent Responsibilities

1. **Read templates** → HTML + CSS
2. **Convert sections** → Markdown → HTML tiddlers
3. **Embed screenshots** → Base64 encoding
4. **Build index** → search data
5. **Assemble output** → single-file HTML

## Markdown Conversion Rules

Implemented inside the agent:

````
# H1        → <h1>
## H2       → <h2>
### H3      → <h3>
```code```  → <pre><code>
**bold**    → <strong>
*italic*    → <em>
[text](url) → <a href>
- item      → <li>
<!-- SCREENSHOT: id="xxx" --> → <figure><img src="data:..."></figure>
````

## Tiddler Structure

```html
<article class="tiddler" id="tiddler-{name}" data-tags="..." data-difficulty="...">
  <header class="tiddler-header">
    <h2><button class="collapse-toggle">▼</button> {title}</h2>
    <div class="tiddler-meta">{badges}</div>
  </header>
  <div class="tiddler-content">{html}</div>
</article>
```
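A sketch of how the assembler might emit this structure (the helper names are hypothetical; the class names and data attributes follow the template above, and `escapeHtml` is a minimal illustration, not a complete sanitizer):

```javascript
// Minimal HTML escaping for titles; illustration only, not a full sanitizer
function escapeHtml(s) {
  return s.replace(/[&<>"]/g, c => ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;' }[c]));
}

// Build one tiddler <article> matching the template structure
function renderTiddler({ name, title, tags, difficulty, badges, html }) {
  return [
    `<article class="tiddler" id="tiddler-${name}" data-tags="${tags.join(' ')}" data-difficulty="${difficulty}">`,
    '  <header class="tiddler-header">',
    `    <h2><button class="collapse-toggle">▼</button> ${escapeHtml(title)}</h2>`,
    `    <div class="tiddler-meta">${badges || ''}</div>`,
    '  </header>',
    `  <div class="tiddler-content">${html}</div>`,
    '</article>'
  ].join('\n');
}
```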

## Output

- `{软件名}-使用手册.html` - final HTML
- `build-report.json` - build report

## Quality Gates

- [ ] HTML renders correctly
- [ ] Search works
- [ ] Collapse/expand works
- [ ] Theme toggle persists
- [ ] Screenshots display correctly
- [ ] File size < 10MB

## Next Phase

→ [Phase 6: Iterative Refinement](06-iterative-refinement.md)

259
.claude/skills/software-manual/phases/06-iterative-refinement.md
Normal file
@@ -0,0 +1,259 @@

# Phase 6: Iterative Refinement

Preview, collect feedback, and iterate until quality meets standards.

## Objective

- Preview the generated HTML in a browser
- Collect user feedback
- Address issues iteratively
- Finalize the documentation

## Execution Steps

### Step 1: Preview HTML

```javascript
const buildReport = JSON.parse(Read(`${workDir}/build-report.json`));
const outputFile = `${workDir}/${buildReport.output}`;

// Open in the default browser for preview.
// Note the empty "" window title: without it, cmd's `start` treats a quoted path as the title.
Bash({ command: `start "" "${outputFile}"` }); // Windows
// Bash({ command: `open "${outputFile}"` }); // macOS

// Report to the user
console.log(`
📖 Manual Preview

File: ${buildReport.output}
Size: ${buildReport.size_human}
Sections: ${buildReport.sections}
Screenshots: ${buildReport.screenshots}

Please review the manual in your browser.
`);
```
|
||||
|
||||
### Step 2: Collect Feedback

```javascript
const feedback = await AskUserQuestion({
  questions: [
    {
      question: "How does the manual look overall?",
      header: "Overall",
      options: [
        { label: "Looks great!", description: "Ready to finalize" },
        { label: "Minor issues", description: "Small tweaks needed" },
        { label: "Major issues", description: "Significant changes required" },
        { label: "Missing content", description: "Need to add more sections" }
      ],
      multiSelect: false
    },
    {
      question: "Which aspects need improvement? (Select all that apply)",
      header: "Improvements",
      options: [
        { label: "Content accuracy", description: "Fix incorrect information" },
        { label: "More examples", description: "Add more code examples" },
        { label: "Better screenshots", description: "Retake or add screenshots" },
        { label: "Styling/Layout", description: "Improve visual appearance" }
      ],
      multiSelect: true
    }
  ]
});
```
### Step 3: Address Feedback

Based on the feedback, take the appropriate action:

#### Minor Issues

```javascript
if (feedback.overall === "Minor issues") {
  // Prompt for specific changes
  const details = await AskUserQuestion({
    questions: [{
      question: "What specific changes are needed?",
      header: "Details",
      options: [
        { label: "Typo fixes", description: "Fix spelling/grammar" },
        { label: "Reorder sections", description: "Change section order" },
        { label: "Update content", description: "Modify existing text" },
        { label: "Custom changes", description: "I'll describe the changes" }
      ],
      multiSelect: true
    }]
  });

  // Apply changes based on user input
  applyMinorChanges(details);
}
```
#### Major Issues

```javascript
if (feedback.overall === "Major issues") {
  // Return to the relevant phase
  console.log(`
Major issues require returning to an earlier phase:

- Content issues → Phase 3 (Parallel Analysis)
- Screenshot issues → Phase 4 (Screenshot Capture)
- Structure issues → Phase 2 (Project Exploration)

Which phase should we return to?
  `);

  const phase = await selectPhase();
  return { action: 'restart', from_phase: phase };
}
```
#### Missing Content

```javascript
if (feedback.overall === "Missing content") {
  // Identify missing sections
  const missing = await AskUserQuestion({
    questions: [{
      question: "What content is missing?",
      header: "Missing",
      options: [
        { label: "API endpoints", description: "More API documentation" },
        { label: "UI features", description: "Additional UI guides" },
        { label: "Examples", description: "More code examples" },
        { label: "Troubleshooting", description: "More FAQ items" }
      ],
      multiSelect: true
    }]
  });

  // Run additional agent(s) for the missing content
  await runSupplementaryAgents(missing);
}
```
### Step 4: Save Iteration

```javascript
// Save the current version before making changes
const iterationNum = getNextIterationNumber(workDir);
const iterationDir = `${workDir}/iterations`;

// Copy the current version
Bash({ command: `copy "${outputFile}" "${iterationDir}\\v${iterationNum}.html"` });

// Log the iteration
const iterationLog = {
  version: iterationNum,
  timestamp: new Date().toISOString(),
  feedback: feedback,
  changes: appliedChanges
};

Write(`${iterationDir}/iteration-${iterationNum}.json`, JSON.stringify(iterationLog, null, 2));
```
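`getNextIterationNumber()` used in Step 4 is assumed elsewhere; a minimal sketch that derives the next number from a listing of the `iterations/` directory (the directory-listing step itself is left out):

```javascript
// Given filenames from iterations/ (e.g. ['v1.html', 'iteration-1.json', 'v2.html']),
// return the next version number: max existing vN + 1, or 1 if none exist.
function nextIterationNumber(existingFiles) {
  const nums = existingFiles
    .map(f => /^v(\d+)\.html$/.exec(f))
    .filter(Boolean)
    .map(m => parseInt(m[1], 10));
  return nums.length ? Math.max(...nums) + 1 : 1;
}
```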
### Step 5: Regenerate if Needed

```javascript
if (changesApplied) {
  // Re-run HTML assembly with updated sections
  await runPhase('05-html-assembly');

  // Open the updated preview (Windows; "" is the window title)
  Bash({ command: `start "" "${outputFile}"` });
}
```
### Step 6: Finalize

When the user approves:

```javascript
if (feedback.overall === "Looks great!") {
  // Final quality check
  const finalReport = {
    ...buildReport,
    iterations: iterationNum,
    finalized_at: new Date().toISOString(),
    quality_score: calculateFinalQuality()
  };

  Write(`${workDir}/final-report.json`, JSON.stringify(finalReport, null, 2));

  // Suggest a final location
  console.log(`
✅ Manual Finalized!

Output: ${buildReport.output}
Size: ${buildReport.size_human}
Quality: ${finalReport.quality_score}%
Iterations: ${iterationNum}

Suggested actions:
1. Copy to project root: copy "${outputFile}" "docs/"
2. Add to version control
3. Publish to documentation site
  `);

  return { status: 'completed', output: outputFile };
}
```
## Iteration History

Each iteration is logged:

```
iterations/
├── v1.html            # First version
├── iteration-1.json   # Feedback and changes
├── v2.html            # After first iteration
├── iteration-2.json   # Feedback and changes
└── ...
```
## Quality Metrics

Track improvement across iterations:

```javascript
const qualityMetrics = {
  content_completeness: 0,  // All sections present
  screenshot_coverage: 0,   // Screenshots for all UI
  example_diversity: 0,     // Different difficulty levels
  search_accuracy: 0,       // Search returns relevant results
  user_satisfaction: 0      // Based on feedback
};
```
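One way `calculateFinalQuality()` (used when finalizing) could aggregate these metrics is an equal-weight average; this is purely illustrative, not the skill's actual scoring:

```javascript
// Average the 0-100 metric values into a single percentage score.
function calculateQualityScore(metrics) {
  const values = Object.values(metrics);
  if (values.length === 0) return 0;
  const avg = values.reduce((a, b) => a + b, 0) / values.length;
  return Math.round(avg);
}
```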
## Exit Conditions

The refinement phase ends when:
1. The user explicitly approves ("Looks great!")
2. The maximum number of iterations is reached (configurable, default: 5)
3. The quality score exceeds the threshold (default: 90%)
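The three conditions can be combined into a single check; the thresholds mirror the defaults above, and the function name is illustrative:

```javascript
// True when any exit condition holds: explicit approval, iteration cap, or quality threshold.
function shouldFinalize({ approved, iteration, qualityScore }, maxIterations = 5, qualityThreshold = 90) {
  return approved || iteration >= maxIterations || qualityScore > qualityThreshold;
}
```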
## Output

- **Final HTML**: `{软件名}-使用手册.html`
- **Final Report**: `final-report.json`
- **Iteration History**: `iterations/`
## Completion

When finalized, the skill is complete. Final output location:

```
.workflow/.scratchpad/manual-{timestamp}/
├── {软件名}-使用手册.html   ← Final deliverable
├── final-report.json
└── iterations/
```

Consider copying the manual to a permanent location such as `docs/` or the project root.
.claude/skills/software-manual/scripts/api-extractor.md (new file, 245 lines)
# API Documentation Extraction Script

Automatically extracts API documentation based on project type; supports FastAPI, Next.js, and Python modules.

## Supported Stacks

| Type | Stack | Tool | Output Format |
|------|-------|------|---------------|
| Backend | FastAPI | openapi-to-md | Markdown |
| Frontend | Next.js/TypeScript | TypeDoc | Markdown |
| Python Module | Python | pdoc | Markdown/HTML |

## Usage
### 1. FastAPI Backend (OpenAPI)

```bash
# Extract the OpenAPI JSON
cd D:/dongdiankaifa9/backend
python -c "
from app.main import app
import json
print(json.dumps(app.openapi(), indent=2))
" > api-docs/openapi.json

# Convert to Markdown (using widdershins)
npx widdershins api-docs/openapi.json -o api-docs/API_REFERENCE.md --language_tabs 'python:Python' 'javascript:JavaScript' 'bash:cURL'
```

**Alternative (no running server required)**:
```python
# scripts/extract_fastapi_openapi.py
import sys
sys.path.insert(0, 'D:/dongdiankaifa9/backend')

from app.main import app
import json

openapi_schema = app.openapi()
with open('api-docs/openapi.json', 'w', encoding='utf-8') as f:
    json.dump(openapi_schema, f, indent=2, ensure_ascii=False)

print(f"Extracted {len(openapi_schema.get('paths', {}))} endpoints")
```
### 2. Next.js Frontend (TypeDoc)

```bash
cd D:/dongdiankaifa9/frontend

# Install TypeDoc
npm install --save-dev typedoc typedoc-plugin-markdown

# Generate documentation
npx typedoc --plugin typedoc-plugin-markdown \
  --out api-docs \
  --entryPoints "./lib" "./hooks" "./components" \
  --entryPointStrategy expand \
  --exclude "**/node_modules/**" \
  --exclude "**/*.test.*" \
  --readme none
```

**typedoc.json configuration**:
```json
{
  "$schema": "https://typedoc.org/schema.json",
  "entryPoints": ["./lib", "./hooks", "./components"],
  "entryPointStrategy": "expand",
  "out": "api-docs",
  "plugin": ["typedoc-plugin-markdown"],
  "exclude": ["**/node_modules/**", "**/*.test.*", "**/*.spec.*"],
  "excludePrivate": true,
  "excludeInternal": true,
  "readme": "none",
  "name": "Frontend API Reference"
}
```
### 3. Python Module (pdoc)

```bash
# Install pdoc
pip install pdoc

# hydro_generator_module
cd D:/dongdiankaifa9
pdoc hydro_generator_module \
  --output-dir api-docs/hydro_generator \
  --format markdown \
  --no-show-source

# multiphysics_network
pdoc multiphysics_network \
  --output-dir api-docs/multiphysics \
  --format markdown \
  --no-show-source
```

**Alternative: Sphinx (more powerful)**:
```bash
# Install Sphinx
pip install sphinx sphinx-markdown-builder

# Generate API docs
sphinx-apidoc -o docs/source hydro_generator_module
cd docs && make markdown
```
## Integration Script

```python
#!/usr/bin/env python3
# scripts/extract_all_apis.py

import subprocess
import sys
from pathlib import Path

PROJECTS = {
    'backend': {
        'path': 'D:/dongdiankaifa9/backend',
        'type': 'fastapi',
        'output': 'api-docs/backend'
    },
    'frontend': {
        'path': 'D:/dongdiankaifa9/frontend',
        'type': 'typescript',
        'output': 'api-docs/frontend'
    },
    'hydro_generator_module': {
        'path': 'D:/dongdiankaifa9/hydro_generator_module',
        'type': 'python',
        'output': 'api-docs/hydro_generator'
    },
    'multiphysics_network': {
        'path': 'D:/dongdiankaifa9/multiphysics_network',
        'type': 'python',
        'output': 'api-docs/multiphysics'
    }
}


def extract_fastapi(config):
    """Extract FastAPI OpenAPI documentation."""
    path = Path(config['path'])
    sys.path.insert(0, str(path))

    try:
        from app.main import app
        import json

        output_dir = Path(config['output'])
        output_dir.mkdir(parents=True, exist_ok=True)

        # Export the OpenAPI JSON
        with open(output_dir / 'openapi.json', 'w', encoding='utf-8') as f:
            json.dump(app.openapi(), f, indent=2, ensure_ascii=False)

        print(f"✓ FastAPI: {len(app.openapi().get('paths', {}))} endpoints")
        return True
    except Exception as e:
        print(f"✗ FastAPI error: {e}")
        return False


def extract_typescript(config):
    """Extract TypeScript documentation."""
    try:
        subprocess.run([
            'npx', 'typedoc',
            '--plugin', 'typedoc-plugin-markdown',
            '--out', config['output'],
            '--entryPoints', './lib', './hooks',
            '--entryPointStrategy', 'expand'
        ], cwd=config['path'], check=True)
        print(f"✓ TypeDoc: {config['path']}")
        return True
    except Exception as e:
        print(f"✗ TypeDoc error: {e}")
        return False


def extract_python(config):
    """Extract Python module documentation."""
    try:
        module_name = Path(config['path']).name
        subprocess.run([
            'pdoc', module_name,
            '--output-dir', config['output'],
            '--format', 'markdown'
        ], cwd=Path(config['path']).parent, check=True)
        print(f"✓ pdoc: {module_name}")
        return True
    except Exception as e:
        print(f"✗ pdoc error: {e}")
        return False


EXTRACTORS = {
    'fastapi': extract_fastapi,
    'typescript': extract_typescript,
    'python': extract_python
}

if __name__ == '__main__':
    for name, config in PROJECTS.items():
        print(f"\n[{name}]")
        extractor = EXTRACTORS.get(config['type'])
        if extractor:
            extractor(config)
```
## Phase 3 Integration

Add to the `api-reference` agent prompt:

```
[PRE-EXTRACTION]
Run the API extraction script to obtain structured documentation:
- python scripts/extract_all_apis.py

[INPUT FILES]
- api-docs/backend/openapi.json (FastAPI endpoints)
- api-docs/frontend/*.md (TypeDoc output)
- api-docs/hydro_generator/*.md (pdoc output)
- api-docs/multiphysics/*.md (pdoc output)
```
## Output Structure

```
api-docs/
├── backend/
│   ├── openapi.json           # Raw OpenAPI spec
│   └── API_REFERENCE.md       # Converted Markdown
├── frontend/
│   ├── modules.md
│   ├── functions.md
│   └── classes/
├── hydro_generator/
│   ├── assembler.md
│   ├── blueprint.md
│   └── builders/
└── multiphysics/
    ├── analysis_domain.md
    ├── builders.md
    └── compilers.md
```
.claude/skills/software-manual/scripts/bundle-libraries.md (new file, 85 lines)
# Library Bundling Notes

## Dependencies

The HTML assembly phase embeds the following mature libraries (no CDN dependencies):

### 1. marked.js - Markdown parsing

```bash
# Fetch the latest version
curl -o templates/libs/marked.min.js https://unpkg.com/marked/marked.min.js
```

### 2. highlight.js - code syntax highlighting

```bash
# Fetch the core bundle with common language packs
curl -o templates/libs/highlight.min.js https://unpkg.com/@highlightjs/cdn-assets/highlight.min.js

# Fetch the github-dark theme
curl -o templates/libs/github-dark.min.css https://unpkg.com/@highlightjs/cdn-assets/styles/github-dark.min.css
```
## Embedding

The Phase 5 agent should:

1. Read `templates/libs/*.js` and `*.css`
2. Embed their contents into the HTML's `<script>` and `<style>` tags
3. Initialize after `DOMContentLoaded`:

```javascript
// Initialize marked
marked.setOptions({
  highlight: function(code, lang) {
    if (lang && hljs.getLanguage(lang)) {
      return hljs.highlight(code, { language: lang }).value;
    }
    return hljs.highlightAuto(code).value;
  },
  breaks: true,
  gfm: true
});

// Apply highlighting
document.querySelectorAll('pre code').forEach(block => {
  hljs.highlightElement(block);
});
```
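Step 2 of the list above (embedding library contents into the shell) can be sketched as string substitution over placeholder comments; the placeholder names are assumptions, not part of the actual template:

```javascript
// Replace hypothetical <!-- LIBS_JS --> / <!-- LIBS_CSS --> placeholders in the
// HTML shell with inlined library source, so the output has no external requests.
function embedLibraries(shellHtml, { markedJs, highlightJs, themeCss }) {
  return shellHtml
    .replace('<!-- LIBS_JS -->', `<script>\n${markedJs}\n${highlightJs}\n</script>`)
    .replace('<!-- LIBS_CSS -->', `<style>\n${themeCss}\n</style>`);
}
```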
## Fallback

If the external libraries cannot be fetched, use a built-in simplified Markdown converter:

````javascript
function simpleMarkdown(md) {
  return md
    .replace(/^### (.+)$/gm, '<h3>$1</h3>')
    .replace(/^## (.+)$/gm, '<h2>$1</h2>')
    .replace(/^# (.+)$/gm, '<h1>$1</h1>')
    .replace(/```(\w+)?\n([\s\S]*?)```/g, (m, lang, code) =>
      `<pre data-language="${lang || ''}"><code class="language-${lang || ''}">${escapeHtml(code)}</code></pre>`)
    .replace(/`([^`]+)`/g, '<code>$1</code>')
    .replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>')
    .replace(/\*(.+?)\*/g, '<em>$1</em>')
    .replace(/\[([^\]]+)\]\(([^)]+)\)/g, '<a href="$2">$1</a>')
    .replace(/^\|(.+)\|$/gm, processTableRow)
    .replace(/^- (.+)$/gm, '<li>$1</li>')
    .replace(/^\d+\. (.+)$/gm, '<li>$1</li>');
}
````
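`escapeHtml()` used by `simpleMarkdown()` is not defined in this document; a minimal version (`processTableRow` would still need its own definition):

```javascript
// Escape the characters that are significant in HTML text and attributes.
// '&' must be replaced first so earlier substitutions are not double-escaped.
function escapeHtml(s) {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}
```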
## File Structure

```
templates/
├── libs/
│   ├── marked.min.js          # Markdown parser
│   ├── highlight.min.js       # Syntax highlighting
│   └── github-dark.min.css    # Code theme
├── tiddlywiki-shell.html
└── css/
    ├── wiki-base.css
    └── wiki-dark.css
```
.claude/skills/software-manual/scripts/extract_apis.py (new file, 270 lines)
#!/usr/bin/env python3
"""
API documentation extraction script.
Supports FastAPI, TypeScript, and Python modules.
"""

import subprocess
import sys
import json
from pathlib import Path
from typing import Dict, Any, Optional

# Project configuration
PROJECTS = {
    'backend': {
        'path': Path('D:/dongdiankaifa9/backend'),
        'type': 'fastapi',
        'entry': 'app.main:app',
        'output': 'api-docs/backend'
    },
    'frontend': {
        'path': Path('D:/dongdiankaifa9/frontend'),
        'type': 'typescript',
        'entries': ['lib', 'hooks', 'components'],
        'output': 'api-docs/frontend'
    },
    'hydro_generator_module': {
        'path': Path('D:/dongdiankaifa9/hydro_generator_module'),
        'type': 'python',
        'output': 'api-docs/hydro_generator'
    },
    'multiphysics_network': {
        'path': Path('D:/dongdiankaifa9/multiphysics_network'),
        'type': 'python',
        'output': 'api-docs/multiphysics'
    }
}

def extract_fastapi(name: str, config: Dict[str, Any], output_base: Path) -> bool:
    """Extract FastAPI OpenAPI documentation."""
    path = config['path']
    output_dir = output_base / config['output']
    output_dir.mkdir(parents=True, exist_ok=True)

    # Add the project path to sys.path
    if str(path) not in sys.path:
        sys.path.insert(0, str(path))

    try:
        # Dynamically import the app
        from app.main import app

        # Get the OpenAPI schema
        openapi_schema = app.openapi()

        # Save JSON
        json_path = output_dir / 'openapi.json'
        with open(json_path, 'w', encoding='utf-8') as f:
            json.dump(openapi_schema, f, indent=2, ensure_ascii=False)

        # Generate a Markdown summary
        md_path = output_dir / 'API_SUMMARY.md'
        generate_api_markdown(openapi_schema, md_path)

        endpoints = len(openapi_schema.get('paths', {}))
        print(f"  ✓ Extracted {endpoints} endpoints → {output_dir}")
        return True

    except ImportError as e:
        print(f"  ✗ Import error: {e}")
        return False
    except Exception as e:
        print(f"  ✗ Error: {e}")
        return False

def generate_api_markdown(schema: Dict, output_path: Path):
    """Generate Markdown from an OpenAPI schema."""
    lines = [
        f"# {schema.get('info', {}).get('title', 'API Reference')}",
        "",
        f"Version: {schema.get('info', {}).get('version', '1.0.0')}",
        "",
        "## Endpoints",
        "",
        "| Method | Path | Summary |",
        "|--------|------|---------|"
    ]

    for path, methods in schema.get('paths', {}).items():
        for method, details in methods.items():
            if method in ('get', 'post', 'put', 'delete', 'patch'):
                summary = details.get('summary', details.get('operationId', '-'))
                lines.append(f"| `{method.upper()}` | `{path}` | {summary} |")

    lines.extend([
        "",
        "## Schemas",
        ""
    ])

    for name, schema_def in schema.get('components', {}).get('schemas', {}).items():
        lines.append(f"### {name}")
        lines.append("")
        if 'properties' in schema_def:
            lines.append("| Property | Type | Required |")
            lines.append("|----------|------|----------|")
            required = schema_def.get('required', [])
            for prop, prop_def in schema_def['properties'].items():
                prop_type = prop_def.get('type', prop_def.get('$ref', 'any'))
                is_required = '✓' if prop in required else ''
                lines.append(f"| `{prop}` | {prop_type} | {is_required} |")
        lines.append("")

    with open(output_path, 'w', encoding='utf-8') as f:
        f.write('\n'.join(lines))

def extract_typescript(name: str, config: Dict[str, Any], output_base: Path) -> bool:
    """Extract TypeScript documentation (TypeDoc)."""
    path = config['path']
    output_dir = output_base / config['output']

    # Check whether TypeDoc is installed
    try:
        result = subprocess.run(
            ['npx', 'typedoc', '--version'],
            cwd=path,
            capture_output=True,
            text=True
        )
        if result.returncode != 0:
            print("  ⚠ TypeDoc not installed, installing...")
            subprocess.run(
                ['npm', 'install', '--save-dev', 'typedoc', 'typedoc-plugin-markdown'],
                cwd=path,
                check=True
            )
    except FileNotFoundError:
        print("  ✗ npm/npx not found")
        return False

    # Run TypeDoc
    try:
        entries = config.get('entries', ['lib'])
        cmd = [
            'npx', 'typedoc',
            '--plugin', 'typedoc-plugin-markdown',
            '--out', str(output_dir),
            '--entryPointStrategy', 'expand',
            '--exclude', '**/node_modules/**',
            '--exclude', '**/*.test.*',
            '--readme', 'none'
        ]
        for entry in entries:
            entry_path = path / entry
            if entry_path.exists():
                cmd.extend(['--entryPoints', str(entry_path)])

        result = subprocess.run(cmd, cwd=path, capture_output=True, text=True)

        if result.returncode == 0:
            print(f"  ✓ TypeDoc generated → {output_dir}")
            return True
        else:
            print(f"  ✗ TypeDoc error: {result.stderr[:200]}")
            return False

    except Exception as e:
        print(f"  ✗ Error: {e}")
        return False

def extract_python_module(name: str, config: Dict[str, Any], output_base: Path) -> bool:
    """Extract Python module documentation (pdoc)."""
    path = config['path']
    output_dir = output_base / config['output']
    module_name = path.name

    # Check for pdoc
    try:
        subprocess.run(['pdoc', '--version'], capture_output=True, check=True)
    except (FileNotFoundError, subprocess.CalledProcessError):
        print("  ⚠ pdoc not installed, installing...")
        subprocess.run([sys.executable, '-m', 'pip', 'install', 'pdoc'], check=True)

    # Run pdoc
    try:
        result = subprocess.run(
            [
                'pdoc', module_name,
                '--output-dir', str(output_dir),
                '--format', 'markdown'
            ],
            cwd=path.parent,
            capture_output=True,
            text=True
        )

        if result.returncode == 0:
            # Count the generated files
            md_files = list(output_dir.glob('**/*.md'))
            print(f"  ✓ pdoc generated {len(md_files)} files → {output_dir}")
            return True
        else:
            print(f"  ✗ pdoc error: {result.stderr[:200]}")
            return False

    except Exception as e:
        print(f"  ✗ Error: {e}")
        return False

EXTRACTORS = {
    'fastapi': extract_fastapi,
    'typescript': extract_typescript,
    'python': extract_python_module
}


def main(output_base: Optional[str] = None, projects: Optional[list] = None):
    """Main entry point."""
    base = Path(output_base) if output_base else Path.cwd()

    print("=" * 50)
    print("API Documentation Extraction")
    print("=" * 50)

    results = {}

    for name, config in PROJECTS.items():
        if projects and name not in projects:
            continue

        print(f"\n[{name}] ({config['type']})")

        if not config['path'].exists():
            print(f"  ✗ Path not found: {config['path']}")
            results[name] = False
            continue

        extractor = EXTRACTORS.get(config['type'])
        if extractor:
            results[name] = extractor(name, config, base)
        else:
            print(f"  ✗ Unknown type: {config['type']}")
            results[name] = False

    # Summary
    print("\n" + "=" * 50)
    print("Summary")
    print("=" * 50)
    success = sum(1 for v in results.values() if v)
    print(f"Success: {success}/{len(results)}")

    return all(results.values())


if __name__ == '__main__':
    import argparse

    parser = argparse.ArgumentParser(description='Extract API documentation')
    parser.add_argument('--output', '-o', default='.', help='Output base directory')
    parser.add_argument('--projects', '-p', nargs='+', help='Specific projects to extract')

    args = parser.parse_args()

    success = main(args.output, args.projects)
    sys.exit(0 if success else 1)
.claude/skills/software-manual/scripts/screenshot-helper.md (new file, 447 lines)
# Screenshot Helper

Guide for capturing screenshots using Chrome MCP.

## Overview

This script helps capture screenshots of web interfaces for the software manual using Chrome MCP or fallback methods.

## Chrome MCP Prerequisites

### Check MCP Availability

```javascript
async function checkChromeMCPAvailability() {
  try {
    // Attempt to get Chrome version via MCP
    const version = await mcp__chrome__getVersion();
    return {
      available: true,
      browser: version.browser,
      version: version.version
    };
  } catch (error) {
    return {
      available: false,
      error: error.message
    };
  }
}
```
### MCP Configuration

Expected Claude configuration for Chrome MCP:

```json
{
  "mcpServers": {
    "chrome": {
      "command": "npx",
      "args": ["@anthropic-ai/mcp-chrome"],
      "env": {
        "CHROME_PATH": "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe"
      }
    }
  }
}
```
## Screenshot Workflow

### Step 1: Prepare Environment

```javascript
async function prepareScreenshotEnvironment(workDir, config) {
  const screenshotDir = `${workDir}/screenshots`;

  // Create directory
  Bash({ command: `mkdir -p "${screenshotDir}"` });

  // Check Chrome MCP
  const chromeMCP = await checkChromeMCPAvailability();

  if (!chromeMCP.available) {
    console.log('Chrome MCP not available. Will generate manual guide.');
    return { mode: 'manual' };
  }

  // Start development server if needed
  if (config.screenshot_config?.dev_command) {
    const server = await startDevServer(config);
    return { mode: 'auto', server, screenshotDir };
  }

  return { mode: 'auto', screenshotDir };
}
```
### Step 2: Start Development Server

```javascript
async function startDevServer(config) {
  const devCommand = config.screenshot_config.dev_command;
  const devUrl = config.screenshot_config.dev_url;

  // Start server in background
  const server = Bash({
    command: devCommand,
    run_in_background: true
  });

  console.log(`Starting dev server: ${devCommand}`);

  // Wait for server to be ready
  const ready = await waitForServer(devUrl, 30000);

  if (!ready) {
    throw new Error(`Server at ${devUrl} did not start in time`);
  }

  console.log(`Dev server ready at ${devUrl}`);

  return server;
}

// Small helper used throughout this guide
const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));

async function waitForServer(url, timeout = 30000) {
  const start = Date.now();

  while (Date.now() - start < timeout) {
    try {
      const response = await fetch(url, { method: 'HEAD' });
      if (response.ok) return true;
    } catch (e) {
      // Server not ready yet
    }
    await sleep(1000);
  }

  return false;
}
```
### Step 3: Capture Screenshots

```javascript
async function captureScreenshots(screenshots, config, workDir) {
  const results = {
    captured: [],
    failed: []
  };

  const devUrl = config.screenshot_config.dev_url;
  const screenshotDir = `${workDir}/screenshots`;

  for (const ss of screenshots) {
    try {
      // Build full URL
      const fullUrl = new URL(ss.url, devUrl).href;

      console.log(`Capturing: ${ss.id} (${fullUrl})`);

      // Configure capture options
      const options = {
        url: fullUrl,
        viewport: { width: 1280, height: 800 },
        fullPage: ss.fullPage || false
      };

      // Wait for a specific element if specified
      if (ss.wait_for) {
        options.waitFor = ss.wait_for;
      }

      // Capture a specific element if a selector is provided
      if (ss.selector) {
        options.selector = ss.selector;
      }

      // Add a short delay for animations
      await sleep(500);

      // Capture via Chrome MCP
      const result = await mcp__chrome__screenshot(options);

      // Save as PNG
      const filename = `${ss.id}.png`;
      Write(`${screenshotDir}/${filename}`, result.data, { encoding: 'base64' });

      results.captured.push({
        id: ss.id,
        file: filename,
        url: ss.url,
        description: ss.description,
        size: result.data.length
      });

    } catch (error) {
      console.error(`Failed to capture ${ss.id}:`, error.message);
      results.failed.push({
        id: ss.id,
        url: ss.url,
        error: error.message
      });
    }
  }

  return results;
}
```
### Step 4: Generate Manifest

```javascript
function generateScreenshotManifest(results, workDir) {
  const manifest = {
    generated: new Date().toISOString(),
    total: results.captured.length + results.failed.length,
    captured: results.captured.length,
    failed: results.failed.length,
    screenshots: results.captured,
    failures: results.failed
  };

  Write(`${workDir}/screenshots/screenshots-manifest.json`,
    JSON.stringify(manifest, null, 2));

  return manifest;
}
```
### Step 5: Cleanup

```javascript
async function cleanupScreenshotEnvironment(env) {
  if (env.server) {
    console.log('Stopping dev server...');
    KillShell({ shell_id: env.server.task_id });
  }
}
```
## Main Runner
|
||||
|
||||
```javascript
|
||||
async function runScreenshotCapture(workDir, screenshots) {
|
||||
const config = JSON.parse(Read(`${workDir}/manual-config.json`));
|
||||
|
||||
// Prepare environment
|
||||
const env = await prepareScreenshotEnvironment(workDir, config);
|
||||
|
||||
if (env.mode === 'manual') {
|
||||
// Generate manual capture guide
|
||||
generateManualCaptureGuide(screenshots, workDir);
|
||||
return { success: false, mode: 'manual' };
|
||||
}
|
||||
|
||||
try {
|
||||
// Capture screenshots
|
||||
const results = await captureScreenshots(screenshots, config, workDir);
|
||||
|
||||
// Generate manifest
|
||||
const manifest = generateScreenshotManifest(results, workDir);
|
||||
|
||||
// Generate manual guide for failed captures
|
||||
if (results.failed.length > 0) {
|
||||
generateManualCaptureGuide(results.failed, workDir);
|
||||
}
|
||||
|
||||
return {
|
||||
success: true,
|
||||
captured: results.captured.length,
|
||||
failed: results.failed.length,
|
||||
manifest
|
||||
};
|
||||
|
||||
} finally {
|
||||
// Cleanup
|
||||
await cleanupScreenshotEnvironment(env);
|
||||
}
|
||||
}
|
||||
```

## Manual Capture Fallback

When Chrome MCP is unavailable:

```javascript
function generateManualCaptureGuide(screenshots, workDir) {
  const guide = `
# Manual Screenshot Capture Guide

Chrome MCP is not available. Please capture screenshots manually.

## Prerequisites

1. Start your development server
2. Open a browser
3. Use a screenshot tool (Snipping Tool, Screenshot, etc.)

## Screenshots Required

${screenshots.map((ss, i) => `
### ${i + 1}. ${ss.id}

- **URL**: ${ss.url}
- **Description**: ${ss.description}
- **Save as**: \`screenshots/${ss.id}.png\`
${ss.selector ? `- **Capture area**: \`${ss.selector}\` element only` : '- **Type**: Full page or viewport'}
${ss.wait_for ? `- **Wait for**: \`${ss.wait_for}\` to be visible` : ''}

**Steps:**
1. Navigate to ${ss.url}
${ss.wait_for ? `2. Wait for ${ss.wait_for} to appear` : ''}
${ss.selector ? `3. Capture only the ${ss.selector} area` : '3. Capture the full viewport'}
4. Save as \`${ss.id}.png\`
`).join('\n')}

## After Capturing

1. Place all PNG files in the \`screenshots/\` directory
2. Ensure filenames match exactly (case-sensitive)
3. Run Phase 5 (HTML Assembly) to continue

## Screenshot Specifications

- **Format**: PNG
- **Width**: 1280px recommended
- **Quality**: High
- **Annotations**: None (add in post-processing if needed)
`;

  Write(`${workDir}/screenshots/MANUAL_CAPTURE.md`, guide);
}
```

## Advanced Options

### Viewport Sizes

```javascript
const viewportPresets = {
  desktop: { width: 1280, height: 800 },
  tablet: { width: 768, height: 1024 },
  mobile: { width: 375, height: 667 },
  wide: { width: 1920, height: 1080 }
};

async function captureResponsive(ss, config, workDir) {
  const results = [];

  for (const [name, viewport] of Object.entries(viewportPresets)) {
    const result = await mcp__chrome__screenshot({
      url: ss.url,
      viewport
    });

    const filename = `${ss.id}-${name}.png`;
    Write(`${workDir}/screenshots/${filename}`, result.data, { encoding: 'base64' });

    results.push({ viewport: name, file: filename });
  }

  return results;
}
```

### Before/After Comparisons

```javascript
async function captureInteraction(ss, config, workDir) {
  const baseUrl = config.screenshot_config.dev_url;
  const fullUrl = new URL(ss.url, baseUrl).href;

  // Capture before state
  const before = await mcp__chrome__screenshot({
    url: fullUrl,
    viewport: { width: 1280, height: 800 }
  });
  Write(`${workDir}/screenshots/${ss.id}-before.png`, before.data, { encoding: 'base64' });

  // Perform interaction (click, type, etc.)
  if (ss.interaction) {
    await mcp__chrome__click({ selector: ss.interaction.click });
    await sleep(500);
  }

  // Capture after state
  const after = await mcp__chrome__screenshot({
    url: fullUrl,
    viewport: { width: 1280, height: 800 }
  });
  Write(`${workDir}/screenshots/${ss.id}-after.png`, after.data, { encoding: 'base64' });

  return {
    before: `${ss.id}-before.png`,
    after: `${ss.id}-after.png`
  };
}
```
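The `sleep` helper used above is never defined in this guide; a minimal promise-based version (an assumption, not part of the Chrome MCP API) would be:

```javascript
// Hypothetical helper: resolves after `ms` milliseconds, letting the page
// settle after an interaction before the next screenshot.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
```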
### Screenshot Annotation

```javascript
function generateAnnotationGuide(screenshots, workDir) {
  const guide = `
# Screenshot Annotation Guide

For screenshots requiring callouts or highlights:

## Tools

- macOS: Preview, Skitch
- Windows: Snipping Tool, ShareX
- Cross-platform: Greenshot, Lightshot

## Annotation Guidelines

1. **Callouts**: Use numbered circles (1, 2, 3)
2. **Highlights**: Use semi-transparent rectangles
3. **Arrows**: Point from text to element
4. **Text**: Use sans-serif font, 12-14pt

## Color Scheme

- Primary: #0d6efd (blue)
- Secondary: #6c757d (gray)
- Success: #198754 (green)
- Warning: #ffc107 (yellow)
- Danger: #dc3545 (red)

## Screenshots Needing Annotation

${screenshots.filter(s => s.annotate).map(ss => `
- **${ss.id}**: ${ss.description}
  - Highlight: ${ss.annotate.highlight || 'N/A'}
  - Callouts: ${ss.annotate.callouts?.join(', ') || 'N/A'}
`).join('\n')}
`;

  Write(`${workDir}/screenshots/ANNOTATION_GUIDE.md`, guide);
}
```

## Troubleshooting

### Chrome MCP Not Found

1. Check the Claude MCP configuration
2. Verify Chrome is installed
3. Check the CHROME_PATH environment variable

### Screenshots Are Blank

1. Increase the wait time before capture
2. Check whether the page requires authentication
3. Verify the URL is correct

### Elements Not Visible

1. Scroll the element into view
2. Expand collapsed sections
3. Wait for animations to complete

### Server Not Starting

1. Check whether the port is already in use
2. Verify the dev command is correct
3. Check the logs for startup errors
---

**File:** `.claude/skills/software-manual/scripts/swagger-runner.md` (new file, 419 lines)

# Swagger/OpenAPI Runner

Guide for generating backend API documentation from OpenAPI/Swagger specifications.

## Overview

This script extracts OpenAPI/Swagger specifications and converts them to Markdown for inclusion in the software manual.

## Detection Strategy

### Check for Existing Specification

```javascript
async function detectOpenAPISpec() {
  // Check for existing spec files
  const specPatterns = [
    'openapi.json',
    'openapi.yaml',
    'openapi.yml',
    'swagger.json',
    'swagger.yaml',
    'swagger.yml',
    '**/openapi*.json',
    '**/swagger*.json'
  ];

  for (const pattern of specPatterns) {
    const files = Glob(pattern);
    if (files.length > 0) {
      return { found: true, type: 'file', path: files[0] };
    }
  }

  // Check for swagger-jsdoc in dependencies
  const packageJson = JSON.parse(Read('package.json'));
  if (packageJson.dependencies?.['swagger-jsdoc'] ||
      packageJson.devDependencies?.['swagger-jsdoc']) {
    return { found: true, type: 'jsdoc' };
  }

  // Check for NestJS Swagger
  if (packageJson.dependencies?.['@nestjs/swagger']) {
    return { found: true, type: 'nestjs' };
  }

  // Fall back to probing a runtime endpoint
  return { found: false, suggestion: 'runtime' };
}
```

## Extraction Methods

### Method A: From Existing Spec File

```javascript
async function extractFromFile(specPath, workDir) {
  const outputDir = `${workDir}/api-docs/backend`;
  Bash({ command: `mkdir -p "${outputDir}"` });

  // Copy spec to output
  Bash({ command: `cp "${specPath}" "${outputDir}/openapi.json"` });

  // Convert to Markdown using widdershins
  const result = Bash({
    command: `npx widdershins "${specPath}" -o "${outputDir}/api-reference.md" --language_tabs 'javascript:JavaScript' 'python:Python' 'bash:cURL'`,
    timeout: 60000
  });

  return { success: result.exitCode === 0, outputDir };
}
```

### Method B: From swagger-jsdoc

```javascript
async function extractFromJsDoc(workDir) {
  const outputDir = `${workDir}/api-docs/backend`;

  // Look for a swagger definition file
  const defFiles = Glob('**/swagger*.js').concat(Glob('**/openapi*.js'));
  if (defFiles.length === 0) {
    return { success: false, error: 'No swagger definition found' };
  }

  // Generate spec
  const result = Bash({
    command: `npx swagger-jsdoc -d "${defFiles[0]}" -o "${outputDir}/openapi.json"`,
    timeout: 60000
  });

  if (result.exitCode !== 0) {
    return { success: false, error: result.stderr };
  }

  // Convert to Markdown
  Bash({
    command: `npx widdershins "${outputDir}/openapi.json" -o "${outputDir}/api-reference.md" --language_tabs 'javascript:JavaScript' 'bash:cURL'`
  });

  return { success: true, outputDir };
}
```

### Method C: From NestJS Swagger

```javascript
async function extractFromNestJS(workDir) {
  const outputDir = `${workDir}/api-docs/backend`;

  // NestJS typically exposes /api-docs-json at runtime,
  // so we need to start the server temporarily

  // Start server in background
  const server = Bash({
    command: 'npm run start:dev',
    run_in_background: true,
    timeout: 30000
  });

  // Wait for server to be ready
  await waitForServer('http://localhost:3000', 30000);

  // Fetch OpenAPI spec
  const spec = await fetch('http://localhost:3000/api-docs-json');
  const specJson = await spec.json();

  // Save spec
  Write(`${outputDir}/openapi.json`, JSON.stringify(specJson, null, 2));

  // Stop server
  KillShell({ shell_id: server.task_id });

  // Convert to Markdown
  Bash({
    command: `npx widdershins "${outputDir}/openapi.json" -o "${outputDir}/api-reference.md" --language_tabs 'javascript:JavaScript' 'bash:cURL'`
  });

  return { success: true, outputDir };
}
```
### Method D: From Runtime Endpoint

```javascript
async function extractFromRuntime(workDir, serverUrl = 'http://localhost:3000') {
  const outputDir = `${workDir}/api-docs/backend`;

  // Common OpenAPI endpoint paths
  const endpointPaths = [
    '/api-docs-json',
    '/swagger.json',
    '/openapi.json',
    '/docs/json',
    '/api/v1/docs.json'
  ];

  let specJson = null;

  for (const path of endpointPaths) {
    try {
      const response = await fetch(`${serverUrl}${path}`);
      if (response.ok) {
        specJson = await response.json();
        break;
      }
    } catch (e) {
      continue;
    }
  }

  if (!specJson) {
    return { success: false, error: 'Could not fetch OpenAPI spec from server' };
  }

  // Save and convert
  Write(`${outputDir}/openapi.json`, JSON.stringify(specJson, null, 2));

  Bash({
    command: `npx widdershins "${outputDir}/openapi.json" -o "${outputDir}/api-reference.md"`
  });

  return { success: true, outputDir };
}
```

## Installation

### Required Tools

```bash
# For OpenAPI to Markdown conversion
npm install -g widdershins

# Or as a dev dependency
npm install --save-dev widdershins

# For generating from JSDoc comments
npm install --save-dev swagger-jsdoc
```

## Configuration

### widdershins Options

```bash
npx widdershins openapi.json \
  -o api-reference.md \
  --language_tabs 'javascript:JavaScript' 'python:Python' 'bash:cURL' \
  --summary \
  --omitHeader \
  --resolve \
  --expandBody
```

| Option | Description |
|--------|-------------|
| `--language_tabs` | Code example languages |
| `--summary` | Use summary as operation heading |
| `--omitHeader` | Don't include title header |
| `--resolve` | Resolve $ref references |
| `--expandBody` | Show full request body |

### swagger-jsdoc Definition

Example `swagger-def.js`:

```javascript
module.exports = {
  definition: {
    openapi: '3.0.0',
    info: {
      title: 'MyApp API',
      version: '1.0.0',
      description: 'API documentation for MyApp'
    },
    servers: [
      { url: 'http://localhost:3000/api/v1' }
    ]
  },
  apis: ['./src/routes/*.js', './src/controllers/*.js']
};
```

## Output Format

### Generated Markdown Structure

````markdown
# MyApp API

## Overview

Base URL: `http://localhost:3000/api/v1`

## Authentication

This API uses Bearer token authentication.

---

## Projects

### List Projects

`GET /projects`

Returns a list of all projects.

**Parameters**

| Name | In | Type | Required | Description |
|------|-----|------|----------|-------------|
| status | query | string | false | Filter by status |
| page | query | integer | false | Page number |

**Responses**

| Status | Description |
|--------|-------------|
| 200 | Successful response |
| 401 | Unauthorized |

**Example Request**

```javascript
fetch('/api/v1/projects?status=active')
  .then(res => res.json())
  .then(data => console.log(data));
```

**Example Response**

```json
{
  "data": [
    { "id": "1", "name": "Project 1" }
  ],
  "pagination": {
    "page": 1,
    "total": 10
  }
}
```
````

## Integration

### Main Runner

```javascript
async function runSwaggerExtraction(workDir) {
  const detection = await detectOpenAPISpec();

  if (!detection.found) {
    console.log('No OpenAPI spec detected. Skipping backend API docs.');
    return { success: false, skipped: true };
  }

  let result;

  switch (detection.type) {
    case 'file':
      result = await extractFromFile(detection.path, workDir);
      break;
    case 'jsdoc':
      result = await extractFromJsDoc(workDir);
      break;
    case 'nestjs':
      result = await extractFromNestJS(workDir);
      break;
    default:
      result = await extractFromRuntime(workDir);
  }

  if (result.success) {
    // Post-process the Markdown
    await postProcessApiDocs(result.outputDir);
  }

  return result;
}

async function postProcessApiDocs(outputDir) {
  const mdFile = `${outputDir}/api-reference.md`;
  let content = Read(mdFile);

  // Remove the widdershins front-matter header
  content = content.replace(/^---[\s\S]*?---\n/, '');

  // Ensure a blank line after headings
  content = content.replace(/^(#{1,3} .+)$/gm, '$1\n');

  Write(mdFile, content);
}
```
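The front-matter strip in `postProcessApiDocs` can be sanity-checked in isolation; widdershins emits a YAML header delimited by `---` lines, and the non-greedy regex removes only that first block:

```javascript
// Simulated widdershins output: YAML front matter followed by the document.
const sample = '---\ntitle: MyApp API\nlanguage_tabs: [javascript]\n---\n# MyApp API\n';

// `[\s\S]*?` is lazy, so the match stops at the first closing `---` line.
const stripped = sample.replace(/^---[\s\S]*?---\n/, '');
console.log(stripped); // → "# MyApp API\n"
```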
## Troubleshooting

### Common Issues

#### "widdershins: command not found"

```bash
npm install -g widdershins
# Or use npx
npx widdershins openapi.json -o api.md
```

#### "Error parsing OpenAPI spec"

```bash
# Validate the spec first
npx @redocly/cli lint openapi.json

# Bundle and fix common issues
npx @redocly/cli bundle openapi.json -o fixed.json
```

#### "Server not responding"

Ensure the development server is running and accessible:

```bash
# Check if the server is running
curl http://localhost:3000/health

# Check the OpenAPI endpoint
curl http://localhost:3000/api-docs-json
```

### Manual Fallback

If automatic extraction fails, document APIs manually:

1. List all route files: `Glob('**/routes/*.js')`
2. Extract route definitions using a regex
3. Build the documentation structure manually

```javascript
async function manualApiExtraction(workDir) {
  const routeFiles = Glob('src/routes/*.js').concat(Glob('src/routes/*.ts'));
  const endpoints = [];

  for (const file of routeFiles) {
    const content = Read(file);
    const routes = content.matchAll(/router\.(get|post|put|delete|patch)\(['"]([^'"]+)['"]/g);

    for (const match of routes) {
      endpoints.push({
        method: match[1].toUpperCase(),
        path: match[2],
        file: file
      });
    }
  }

  return endpoints;
}
```
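The route-extraction regex above can be exercised against a small sample to see what it does and does not catch (the route names here are made up for illustration):

```javascript
// Sample Express-style route file contents.
const src = [
  "router.get('/projects', listProjects);",
  "router.post('/projects', createProject);",
  "router.delete('/projects/:id', removeProject);"
].join('\n');

// Same pattern as in manualApiExtraction above.
const routes = [...src.matchAll(/router\.(get|post|put|delete|patch)\(['"]([^'"]+)['"]/g)]
  .map((m) => ({ method: m[1].toUpperCase(), path: m[2] }));
```

Note this only matches literal string paths on a `router` object; routes registered via `app.get(...)`, template-literal paths, or route chaining would need additional patterns.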
---

**File:** `.claude/skills/software-manual/scripts/typedoc-runner.md` (new file, 357 lines)

# TypeDoc Runner

Guide for generating frontend API documentation using TypeDoc.

## Overview

TypeDoc generates API documentation from TypeScript source code by analyzing type annotations and JSDoc comments.

## Prerequisites

### Check TypeScript Project

```javascript
// Verify TypeScript is used
const packageJson = JSON.parse(Read('package.json'));
const hasTypeScript = packageJson.devDependencies?.typescript ||
                      packageJson.dependencies?.typescript;

if (!hasTypeScript) {
  console.log('Not a TypeScript project. Skipping TypeDoc.');
  return;
}

// Check for tsconfig.json
const hasTsConfig = Glob('tsconfig.json').length > 0;
```

## Installation

### Install TypeDoc

```bash
npm install --save-dev typedoc typedoc-plugin-markdown
```

### Optional Plugins

```bash
# For cleaner names on default exports
npm install --save-dev typedoc-plugin-rename-defaults
```

## Configuration

### typedoc.json

Create `typedoc.json` in the project root:

```json
{
  "entryPoints": ["./src/index.ts"],
  "entryPointStrategy": "expand",
  "out": ".workflow/.scratchpad/manual-{timestamp}/api-docs/frontend",
  "plugin": ["typedoc-plugin-markdown"],
  "exclude": [
    "**/node_modules/**",
    "**/*.test.ts",
    "**/*.spec.ts",
    "**/tests/**"
  ],
  "excludePrivate": true,
  "excludeProtected": true,
  "excludeInternal": true,
  "hideGenerator": true,
  "readme": "none",
  "categorizeByGroup": true,
  "navigation": {
    "includeCategories": true,
    "includeGroups": true
  }
}
```

### Alternative: CLI Options

```bash
npx typedoc \
  --entryPoints src/index.ts \
  --entryPointStrategy expand \
  --out api-docs/frontend \
  --plugin typedoc-plugin-markdown \
  --exclude "**/node_modules/**" \
  --exclude "**/*.test.ts" \
  --excludePrivate \
  --excludeProtected \
  --readme none
```

## Execution

### Basic Run

```javascript
async function runTypeDoc(workDir) {
  const outputDir = `${workDir}/api-docs/frontend`;

  // Create output directory
  Bash({ command: `mkdir -p "${outputDir}"` });

  // Run TypeDoc
  const result = Bash({
    command: `npx typedoc --out "${outputDir}" --plugin typedoc-plugin-markdown src/`,
    timeout: 120000 // 2 minutes
  });

  if (result.exitCode !== 0) {
    console.error('TypeDoc failed:', result.stderr);
    return { success: false, error: result.stderr };
  }

  // List generated files
  const files = Glob(`${outputDir}/**/*.md`);
  console.log(`Generated ${files.length} documentation files`);

  return { success: true, files };
}
```

### With Custom Entry Points

```javascript
async function runTypeDocCustom(workDir, entryPoints) {
  const outputDir = `${workDir}/api-docs/frontend`;

  // Build the entry-points argument string
  const entries = entryPoints.map(e => `--entryPoints "${e}"`).join(' ');

  const result = Bash({
    command: `npx typedoc ${entries} --out "${outputDir}" --plugin typedoc-plugin-markdown`,
    timeout: 120000
  });

  return { success: result.exitCode === 0 };
}

// Example: document specific files
await runTypeDocCustom(workDir, [
  'src/api/index.ts',
  'src/hooks/index.ts',
  'src/utils/index.ts'
]);
```

## Output Structure

```
api-docs/frontend/
├── README.md            # Index
├── modules.md           # Module list
├── modules/
│   ├── api.md           # API module
│   ├── hooks.md         # Hooks module
│   └── utils.md         # Utils module
├── classes/
│   ├── ApiClient.md     # Class documentation
│   └── ...
├── interfaces/
│   ├── Config.md        # Interface documentation
│   └── ...
└── functions/
    ├── formatDate.md    # Function documentation
    └── ...
```

## Integration with Manual

### Reading TypeDoc Output

```javascript
async function integrateTypeDocOutput(workDir) {
  const apiDocsDir = `${workDir}/api-docs/frontend`;
  const files = Glob(`${apiDocsDir}/**/*.md`);

  // Build API reference content
  let content = '## Frontend API Reference\n\n';

  // Add modules
  const modules = Glob(`${apiDocsDir}/modules/*.md`);
  for (const mod of modules) {
    const modContent = Read(mod);
    content += `### ${extractTitle(modContent)}\n\n`;
    content += summarizeModule(modContent);
  }

  // Add functions
  const functions = Glob(`${apiDocsDir}/functions/*.md`);
  content += '\n### Functions\n\n';
  for (const fn of functions) {
    const fnContent = Read(fn);
    content += formatFunctionDoc(fnContent);
  }

  // Add hooks
  const hooks = Glob(`${apiDocsDir}/functions/*Hook*.md`);
  if (hooks.length > 0) {
    content += '\n### Hooks\n\n';
    for (const hook of hooks) {
      const hookContent = Read(hook);
      content += formatHookDoc(hookContent);
    }
  }

  return content;
}
```
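The helpers `extractTitle`, `summarizeModule`, `formatFunctionDoc`, and `formatHookDoc` are left undefined above. As one example of what they might look like, a hypothetical `extractTitle` could pull the first Markdown heading from a generated page:

```javascript
// Hypothetical helper (not part of TypeDoc): take a generated Markdown page
// and return its first H1-H3 heading as the section title.
function extractTitle(markdown) {
  const match = markdown.match(/^#{1,3}\s+(.+)$/m);
  return match ? match[1].trim() : 'Untitled';
}
```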
### Example Output Format

````markdown
## Frontend API Reference

### API Module

Functions for interacting with the backend API.

#### fetchProjects

```typescript
function fetchProjects(options?: FetchOptions): Promise<Project[]>
```

Fetches all projects for the current user.

**Parameters:**

| Name | Type | Description |
|------|------|-------------|
| options | FetchOptions | Optional fetch configuration |

**Returns:** Promise<Project[]>

### Hooks

#### useProjects

```typescript
function useProjects(options?: UseProjectsOptions): UseProjectsResult
```

React hook for managing project data.

**Parameters:**

| Name | Type | Description |
|------|------|-------------|
| options.status | string | Filter by project status |
| options.limit | number | Max projects to fetch |

**Returns:**

| Property | Type | Description |
|----------|------|-------------|
| projects | Project[] | Array of projects |
| loading | boolean | Loading state |
| error | Error \| null | Error if failed |
| refetch | () => void | Refresh data |
````

## Troubleshooting

### Common Issues

#### "Cannot find module 'typescript'"

```bash
npm install --save-dev typescript
```

#### "No entry points found"

Ensure the entry points exist:

```bash
# Check entry points
ls src/index.ts

# Or use a glob pattern
npx typedoc --entryPoints "src/**/*.ts"
```

#### "Unsupported TypeScript version"

```bash
# Check TypeDoc's supported range
npm info typedoc peerDependencies

# Install a compatible version
npm install --save-dev typedoc@0.25.x
```

### Debugging

```bash
# Verbose output
npx typedoc --logLevel Verbose src/

# Treat warnings as errors to surface them
npx typedoc --treatWarningsAsErrors src/
```

## Best Practices

### Document Exports Only

```typescript
// Good: public API documented
/**
 * Fetches projects from the API.
 * @param options - Fetch options
 * @returns Promise resolving to projects
 */
export function fetchProjects(options?: FetchOptions): Promise<Project[]> {
  // ...
}

// Internal: not documented
function internalHelper() {
  // ...
}
```

### Use JSDoc Comments

````typescript
/**
 * User hook for managing authentication state.
 *
 * @example
 * ```tsx
 * const { user, login, logout } = useAuth();
 * ```
 *
 * @returns Authentication state and methods
 */
export function useAuth(): AuthResult {
  // ...
}
````

### Define Types Properly

```typescript
/**
 * Configuration for the API client.
 */
export interface ApiConfig {
  /** API base URL */
  baseUrl: string;
  /** Request timeout in milliseconds */
  timeout?: number;
  /** Custom headers to include */
  headers?: Record<string, string>;
}
```
---

**File:** `.claude/skills/software-manual/specs/html-template.md` (new file, 325 lines)

# HTML Template Specification

Technical specification for the TiddlyWiki-style HTML output.

## Overview

The output is a single, self-contained HTML file with:

- All CSS embedded inline
- All JavaScript embedded inline
- All images embedded as Base64
- Full offline functionality

## File Structure

```html
<!DOCTYPE html>
<html lang="zh-CN">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>{{SOFTWARE_NAME}} - User Manual</title>
  <style>{{EMBEDDED_CSS}}</style>
</head>
<body class="wiki-container" data-theme="light">
  <aside class="wiki-sidebar">...</aside>
  <main class="wiki-content">...</main>
  <button class="theme-toggle">...</button>
  <script id="search-index" type="application/json">{{SEARCH_INDEX_JSON}}</script>
  <script>{{EMBEDDED_JS}}</script>
</body>
</html>
```

## Placeholders

| Placeholder | Description | Source |
|-------------|-------------|--------|
| `{{SOFTWARE_NAME}}` | Software name | manual-config.json |
| `{{VERSION}}` | Version number | manual-config.json |
| `{{EMBEDDED_CSS}}` | All CSS content | wiki-base.css + wiki-dark.css |
| `{{TOC_HTML}}` | Table of contents | Generated from sections |
| `{{TIDDLERS_HTML}}` | All content blocks | Converted from Markdown |
| `{{SEARCH_INDEX_JSON}}` | Search data | Generated from content |
| `{{EMBEDDED_JS}}` | JavaScript code | Inline in template |
| `{{TIMESTAMP}}` | Generation timestamp | ISO 8601 format |
| `{{LOGO_BASE64}}` | Logo image | Project logo or generated |
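The assembly step is not specified here; assuming plain token substitution of the `{{...}}` placeholders listed above, it could be sketched as:

```javascript
// Minimal sketch of placeholder substitution (an assumption about the
// assembler, not a specified API): replaces known {{TOKEN}} occurrences
// and leaves unknown ones intact for later passes.
function fillTemplate(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? values[key] : match);
}
```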
## Component Specifications

### Sidebar (`.wiki-sidebar`)

```
Width: 280px (fixed)
Position: Fixed left
Height: 100vh
Components:
- Logo area (.wiki-logo)
- Search box (.wiki-search)
- Tag navigation (.wiki-tags)
- Table of contents (.wiki-toc)
```

### Main Content (`.wiki-content`)

```
Margin-left: 280px (sidebar width)
Max-width: 900px (content)
Components:
- Header bar (.content-header)
- Tiddler container (.tiddler-container)
- Footer (.wiki-footer)
```

### Tiddler (Content Block)

```html
<article class="tiddler"
         id="tiddler-{{ID}}"
         data-tags="{{TAGS}}"
         data-difficulty="{{DIFFICULTY}}">
  <header class="tiddler-header">
    <h2 class="tiddler-title">
      <button class="collapse-toggle">▼</button>
      {{TITLE}}
    </h2>
    <div class="tiddler-meta">
      <span class="difficulty-badge {{DIFFICULTY}}">{{DIFFICULTY_LABEL}}</span>
      {{TAG_BADGES}}
    </div>
  </header>
  <div class="tiddler-content">
    {{CONTENT_HTML}}
  </div>
</article>
```

### Search Index Format

```json
{
  "tiddler-overview": {
    "title": "Product Overview",
    "body": "Plain text content for searching...",
    "tags": ["getting-started", "overview"]
  },
  "tiddler-ui-guide": {
    "title": "UI Guide",
    "body": "Plain text content...",
    "tags": ["ui-guide"]
  }
}
```

## Interactive Features

### 1. Search

- Full-text search with result highlighting
- Searches title, body, and tags
- Shows up to 10 results
- Keyboard accessible (Enter to search, Esc to close)
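Given the search index format above, the matching logic can be sketched as a small pure function (a sketch of the spec's behavior, not the template's actual embedded script):

```javascript
// Scan title, body, and tags case-insensitively; cap at 10 results,
// matching the "shows up to 10 results" requirement above.
function searchTiddlers(index, query) {
  const q = query.toLowerCase();
  return Object.entries(index)
    .filter(([, t]) =>
      t.title.toLowerCase().includes(q) ||
      t.body.toLowerCase().includes(q) ||
      t.tags.some((tag) => tag.toLowerCase().includes(q)))
    .slice(0, 10)
    .map(([id]) => id);
}
```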
### 2. Collapse/Expand
|
||||
|
||||
- Per-section toggle via button
|
||||
- Expand All / Collapse All buttons
|
||||
- State indicated by ▼ (expanded) or ▶ (collapsed)
|
||||
- Smooth transition animation
|
||||
|
||||
### 3. Tag Filtering
|
||||
|
||||
- Tags: all, getting-started, ui-guide, api, config, troubleshooting, examples
|
||||
- Single selection (radio behavior)
|
||||
- "all" shows everything
|
||||
- Hidden tiddlers via `display: none`
|
||||
|
||||
### 4. Theme Toggle
|
||||
|
||||
- Light/Dark mode switch
|
||||
- Persists to localStorage (`wiki-theme`)
|
||||
- Applies to entire document via `[data-theme="dark"]`
|
||||
- Toggle button shows sun/moon icon
|
||||
|
||||
### 5. Responsive Design

```
Breakpoints:
- Desktop (> 1024px): Sidebar visible
- Tablet (768-1024px): Sidebar collapsible
- Mobile (< 768px): Sidebar hidden, hamburger menu
```

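For script that needs to react to the same breakpoints (for example a matchMedia-driven sidebar), the table above can be expressed as a classifier; a sketch:

```javascript
// Classify a viewport width against the breakpoints above.
function layoutFor(width) {
  if (width > 1024) return 'desktop'; // sidebar visible
  if (width >= 768) return 'tablet';  // sidebar collapsible
  return 'mobile';                    // sidebar hidden, hamburger menu
}
```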
### 6. Print Support

- Hides sidebar, toggles, interactive elements
- Expands all collapsed sections
- Adjusts colors for print
- Page breaks between sections

## Accessibility

### Keyboard Navigation

- Tab through interactive elements
- Enter to activate buttons
- Escape to close search results
- Arrow keys in search results

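A sketch of mapping key presses to search-UI actions (the action names are illustrative, not part of the spec):

```javascript
// Translate a KeyboardEvent.key value into a search-UI action.
function searchKeyAction(key) {
  switch (key) {
    case 'Enter':     return 'search';
    case 'Escape':    return 'close-results';
    case 'ArrowDown': return 'next-result';
    case 'ArrowUp':   return 'previous-result';
    default:          return null; // let the browser handle everything else
  }
}
```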
### ARIA Attributes

```html
<input aria-label="Search">
<nav aria-label="Table of Contents">
<button aria-label="Toggle theme">
<div aria-live="polite"> <!-- For search results -->
```

### Color Contrast

- Text/background ratio ≥ 4.5:1
- Interactive elements clearly visible
- Focus indicators visible

## Performance

### Target Metrics

| Metric | Target |
|--------|--------|
| Total file size | < 10MB |
| Time to interactive | < 2s |
| Search latency | < 100ms |

### Optimization Strategies

1. **Lazy loading for images**: `loading="lazy"`
2. **Efficient search**: In-memory index, no external requests
3. **CSS containment**: Scope styles to components
4. **Minimal JavaScript**: Vanilla JS, no libraries

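One common way to keep search latency low is to debounce the input handler, so a lookup runs only after typing pauses; a generic sketch:

```javascript
// Generic debounce: delays `fn` until `delay` ms after the last call,
// so a search doesn't run on every keystroke.
function debounce(fn, delay) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}
```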
## CSS Variables

### Light Theme

```css
:root {
  --bg-primary: #ffffff;
  --bg-secondary: #f8f9fa;
  --text-primary: #212529;
  --text-secondary: #495057;
  --accent-color: #0d6efd;
  --border-color: #dee2e6;
}
```

### Dark Theme

```css
[data-theme="dark"] {
  --bg-primary: #1a1a2e;
  --bg-secondary: #16213e;
  --text-primary: #eaeaea;
  --text-secondary: #b8b8b8;
  --accent-color: #4dabf7;
  --border-color: #2d3748;
}
```

## Markdown to HTML Mapping

| Markdown | HTML |
|----------|------|
| `# Heading` | `<h1>` |
| `## Heading` | `<h2>` |
| `**bold**` | `<strong>` |
| `*italic*` | `<em>` |
| `` `code` `` | `<code>` |
| `[link](url)` | `<a href="url">` |
| `- item` | `<ul><li>` |
| `1. item` | `<ol><li>` |
| ```` ```js ```` | `<pre><code class="language-js">` |
| `> quote` | `<blockquote>` |
| `---` | `<hr>` |

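A few of the inline rows above can be sketched as regex replacements (a deliberately minimal illustration, not a real Markdown parser; nesting and escaping are ignored):

```javascript
// Minimal inline-Markdown conversion covering four rows of the table.
function inlineMdToHtml(text) {
  return text
    .replace(/`([^`]+)`/g, '<code>$1</code>')
    .replace(/\*\*([^*]+)\*\*/g, '<strong>$1</strong>')
    .replace(/\*([^*]+)\*/g, '<em>$1</em>')
    .replace(/\[([^\]]+)\]\(([^)]+)\)/g, '<a href="$2">$1</a>');
}
```

Block-level constructs (headings, lists, fences, quotes) need line-oriented handling and are out of scope for this sketch.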
## Screenshot Embedding

### Marker Format

```markdown
<!-- SCREENSHOT: id="ss-login" url="/login" description="Login form" -->
```

### Embedded Format

```html
<figure class="screenshot">
  <img src="data:image/png;base64,{{BASE64_DATA}}"
       alt="Login form"
       loading="lazy">
  <figcaption>Login form</figcaption>
</figure>
```

### Placeholder (if missing)

```html
<div class="screenshot-placeholder">
  [Screenshot: ss-login - Login form]
</div>
```

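A hypothetical parser for the marker format above (the regex and return shape are assumptions; `url` is treated as optional, since some markers in this spec omit it):

```javascript
// Extract the attributes of a SCREENSHOT marker line.
const MARKER = /<!-- SCREENSHOT: id="([^"]+)" (?:url="([^"]+)" )?description="([^"]+)" -->/;

function parseMarker(line) {
  const m = line.match(MARKER);
  return m ? { id: m[1], url: m[2] ?? null, description: m[3] } : null;
}
```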
## File Size Optimization

### CSS

- Minify before embedding
- Remove unused styles
- Combine duplicate rules

### JavaScript

- Minify before embedding
- Remove console.log statements
- Use IIFE for scoping

### Images

- Compress before Base64 encoding
- Use appropriate dimensions (max 1280px width)
- Consider WebP format if browser support is acceptable

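Base64 encoding inflates binary data by roughly a third, which matters for the < 10MB target; a small helper to estimate the embedded size of an image:

```javascript
// Base64 encodes every 3 input bytes as 4 output characters (with
// padding), so an embedded image costs about 4/3 of its binary size.
function base64Length(byteCount) {
  return Math.ceil(byteCount / 3) * 4;
}
```

So a 300 KB PNG occupies roughly 400 KB once embedded as a data URI.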
## Validation

### HTML Validation

- W3C HTML5 compliance
- Proper nesting
- Required attributes present

### CSS Validation

- Valid property values
- No deprecated properties
- Vendor prefixes where needed

### JavaScript

- No syntax errors
- All functions defined
- Error handling for edge cases

## Testing Checklist

- [ ] Opens in Chrome/Firefox/Safari/Edge
- [ ] Search works correctly
- [ ] Collapse/expand works
- [ ] Tag filtering works
- [ ] Theme toggle works
- [ ] Print preview correct
- [ ] Keyboard navigation works
- [ ] Mobile responsive
- [ ] Offline functionality
- [ ] All links valid
- [ ] All images display
- [ ] No console errors

253
.claude/skills/software-manual/specs/quality-standards.md
Normal file
@@ -0,0 +1,253 @@

# Quality Standards

Quality gates and standards for software manual generation.

## Quality Dimensions

### 1. Completeness (25%)

All required sections present and adequately covered.

| Requirement | Weight | Criteria |
|-------------|--------|----------|
| Overview section | 5 | Product intro, features, quick start |
| UI Guide | 5 | All major screens documented |
| API Reference | 5 | All public APIs documented |
| Configuration | 4 | All config options explained |
| Troubleshooting | 3 | Common issues addressed |
| Examples | 3 | Multi-level examples provided |

**Scoring**:
- 100%: All sections present with adequate depth
- 80%: All sections present, some lacking depth
- 60%: Missing 1-2 sections
- 40%: Missing 3+ sections
- 0%: Critical sections missing (overview, UI guide)

### 2. Consistency (25%)

Terminology, style, and structure uniform across sections.

| Aspect | Check |
|--------|-------|
| Terminology | Same term for same concept throughout |
| Formatting | Consistent heading levels, code block styles |
| Tone | Consistent formality level |
| Cross-references | All internal links valid |
| Screenshot naming | Follow `ss-{feature}-{action}` pattern |

**Scoring**:
- 100%: Zero inconsistencies
- 80%: 1-3 minor inconsistencies
- 60%: 4-6 inconsistencies
- 40%: 7-10 inconsistencies
- 0%: Pervasive inconsistencies

### 3. Depth (25%)

Content provides sufficient detail for the target audience.

| Level | Criteria |
|-------|----------|
| Shallow | Basic descriptions only |
| Standard | Descriptions + usage examples |
| Deep | Descriptions + examples + edge cases + best practices |

**Per-Section Depth Check**:
- [ ] Explains "what" (definition)
- [ ] Explains "why" (rationale)
- [ ] Explains "how" (procedure)
- [ ] Provides examples
- [ ] Covers edge cases
- [ ] Includes tips/best practices

**Scoring**:
- 100%: Deep coverage on all critical sections
- 80%: Standard coverage on all sections
- 60%: Shallow coverage on some sections
- 40%: Missing depth in critical areas
- 0%: Superficial throughout

### 4. Readability (25%)

Clear, user-friendly writing that's easy to follow.

| Metric | Target |
|--------|--------|
| Sentence length | Average < 20 words |
| Paragraph length | Average < 5 sentences |
| Heading hierarchy | Proper H1 > H2 > H3 nesting |
| Code blocks | Language specified |
| Lists | Used for 3+ items |
| Screenshots | Placed near relevant text |

**Structural Elements**:
- [ ] Clear section headers
- [ ] Numbered steps for procedures
- [ ] Bullet lists for options/features
- [ ] Tables for comparisons
- [ ] Code blocks with syntax highlighting
- [ ] Screenshots with captions

**Scoring**:
- 100%: All readability criteria met
- 80%: Minor structural issues
- 60%: Some sections hard to follow
- 40%: Significant readability problems
- 0%: Unclear, poorly structured

## Overall Quality Score

```
Overall = (Completeness × 0.25) + (Consistency × 0.25) +
          (Depth × 0.25) + (Readability × 0.25)
```

**Quality Gates**:

| Gate | Threshold | Action |
|------|-----------|--------|
| Pass | ≥ 80% | Proceed to HTML generation |
| Review | 60-79% | Address warnings, proceed with caution |
| Fail | < 60% | Must address errors before continuing |

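The gate table above maps directly to a small classifier; a sketch (the function name is illustrative):

```javascript
// Classify an overall quality score against the gate thresholds.
function qualityGate(overall) {
  if (overall >= 80) return 'pass';   // proceed to HTML generation
  if (overall >= 60) return 'review'; // address warnings first
  return 'fail';                      // must fix errors before continuing
}
```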
## Issue Classification

### Errors (Must Fix)

- Missing required sections
- Invalid cross-references
- Broken screenshot markers
- Code blocks without language
- Incomplete procedures (missing steps)

### Warnings (Should Fix)

- Terminology inconsistencies
- Sections lacking depth
- Missing examples
- Long paragraphs (> 7 sentences)
- Screenshots missing captions

### Info (Nice to Have)

- Optimization suggestions
- Additional example opportunities
- Alternative explanations
- Enhancement ideas

## Quality Checklist

### Pre-Generation

- [ ] All agents completed successfully
- [ ] No errors in consolidation report
- [ ] Overall score ≥ 60%

### Post-Generation

- [ ] HTML renders correctly
- [ ] Search returns relevant results
- [ ] All screenshots display
- [ ] Theme toggle works
- [ ] Print preview looks good

### Final Review

- [ ] User previewed and approved
- [ ] File size reasonable (< 10MB)
- [ ] No console errors in browser
- [ ] Accessible (keyboard navigation works)

## Automated Checks

```javascript
// Note: Glob and Read are the Claude Code file tools available to this
// skill, not Node.js built-ins.
function runQualityChecks(workDir) {
  const results = {
    completeness: checkCompleteness(workDir),
    consistency: checkConsistency(workDir),
    depth: checkDepth(workDir),
    readability: checkReadability(workDir)
  };

  results.overall = (
    results.completeness * 0.25 +
    results.consistency * 0.25 +
    results.depth * 0.25 +
    results.readability * 0.25
  );

  return results;
}

function checkCompleteness(workDir) {
  const requiredSections = [
    'section-overview.md',
    'section-ui-guide.md',
    'section-api-reference.md',
    'section-configuration.md',
    'section-troubleshooting.md',
    'section-examples.md'
  ];

  const existing = Glob(`${workDir}/sections/section-*.md`);
  const found = requiredSections.filter(s =>
    existing.some(e => e.endsWith(s))
  );

  return (found.length / requiredSections.length) * 100;
}

function checkConsistency(workDir) {
  // Check terminology, cross-references, naming conventions
  const issues = [];

  // ... implementation ...

  return Math.max(0, 100 - issues.length * 10);
}

function checkDepth(workDir) {
  // Check content length, examples, edge cases
  const sections = Glob(`${workDir}/sections/section-*.md`);
  if (sections.length === 0) return 0; // avoid dividing by zero
  let totalScore = 0;

  for (const section of sections) {
    const content = Read(section);
    let sectionScore = 0;

    if (content.length > 500) sectionScore += 20;
    if (content.includes('```')) sectionScore += 20;
    if (content.includes('Example')) sectionScore += 20;
    if (content.match(/\d+\./g)?.length > 3) sectionScore += 20;
    if (content.includes('Note:') || content.includes('Tip:')) sectionScore += 20;

    totalScore += sectionScore;
  }

  return totalScore / sections.length;
}

function checkReadability(workDir) {
  // Check structure, formatting, organization
  const sections = Glob(`${workDir}/sections/section-*.md`);
  let issues = 0;

  for (const section of sections) {
    const content = Read(section);

    // Check heading hierarchy
    if (!content.startsWith('# ')) issues++;

    // Check code block languages: fences alternate open/close, and only
    // opening fences (at even indices) should carry a language tag
    const fences = content.match(/```\w*/g) || [];
    if (fences.some((f, i) => i % 2 === 0 && f === '```')) issues++;

    // Check paragraph length
    const paragraphs = content.split('\n\n');
    if (paragraphs.some(p => p.split('. ').length > 7)) issues++;
  }

  return Math.max(0, 100 - issues * 10);
}
```

298
.claude/skills/software-manual/specs/writing-style.md
Normal file
@@ -0,0 +1,298 @@

# Writing Style Guide

User-friendly writing standards for software manuals.

## Core Principles

### 1. User-Centered

Write for the user, not the developer.

**Do**:
- "Click the **Save** button to save your changes"
- "Enter your email address in the login form"

**Don't**:
- "The onClick handler triggers the save mutation"
- "POST to /api/auth/login with email in body"

### 2. Action-Oriented

Focus on what users can **do**, not what the system does.

**Do**:
- "You can export your data as CSV"
- "To create a new project, click **New Project**"

**Don't**:
- "The system exports data in CSV format"
- "A new project is created when the button is clicked"

### 3. Clear and Direct

Use simple, straightforward language.

**Do**:
- "Select a file to upload"
- "The maximum file size is 10MB"

**Don't**:
- "Utilize the file selection interface to designate a file for uploading"
- "File size constraints limit uploads to 10 megabytes"

## Tone

### Friendly but Professional

- Conversational but not casual
- Helpful but not condescending
- Confident but not arrogant

**Examples**:

| Too Casual | Just Right | Too Formal |
|------------|------------|------------|
| "Yo, here's how..." | "Here's how to..." | "The following procedure describes..." |
| "Easy peasy!" | "That's all you need to do." | "The procedure has been completed." |
| "Don't worry about it" | "You don't need to change this" | "This parameter does not require modification" |

### Second Person

Address the user directly as "you".

**Do**: "You can customize your dashboard..."
**Don't**: "Users can customize their dashboards..."

## Structure

### Headings

Use clear, descriptive headings that tell users what they'll learn.

**Good Headings**:
- "Getting Started"
- "Creating Your First Project"
- "Configuring Email Notifications"
- "Troubleshooting Login Issues"

**Weak Headings**:
- "Overview"
- "Step 1"
- "Settings"
- "FAQ"

### Procedures

Number steps for sequential tasks.

```markdown
## Creating a New User

1. Navigate to **Settings** > **Users**.
2. Click the **Add User** button.
3. Enter the user's email address.
4. Select a role from the dropdown.
5. Click **Save**.

The new user will receive an invitation email.
```

### Features/Options

Use bullet lists for non-sequential items.

```markdown
## Export Options

You can export your data in several formats:

- **CSV**: Compatible with spreadsheets
- **JSON**: Best for developers
- **PDF**: Ideal for sharing reports
```

### Comparisons

Use tables for comparing options.

```markdown
## Plan Comparison

| Feature | Free | Pro | Enterprise |
|---------|------|-----|------------|
| Projects | 3 | Unlimited | Unlimited |
| Storage | 1GB | 10GB | 100GB |
| Support | Community | Email | Dedicated |
```

## Content Types

### Conceptual (What Is)

Explain what something is and why it matters.

```markdown
## What is a Workspace?

A workspace is a container for your projects and team members. Each workspace
has its own settings, billing, and permissions. You might create separate
workspaces for different clients or departments.
```

### Procedural (How To)

Step-by-step instructions for completing a task.

```markdown
## How to Create a Workspace

1. Click your profile icon in the top-right corner.
2. Select **Create Workspace**.
3. Enter a name for your workspace.
4. Choose a plan (you can upgrade later).
5. Click **Create**.

Your new workspace is ready to use.
```

### Reference (API/Config)

Detailed specifications and parameters.

```markdown
## Configuration Options

### `DATABASE_URL`

- **Type**: String (required)
- **Format**: `postgresql://user:password@host:port/database`
- **Example**: `postgresql://admin:secret@localhost:5432/myapp`

Database connection string for PostgreSQL.
```

## Formatting

### Bold

Use for:
- UI elements: Click **Save**
- First use of key terms: **Workspaces** contain projects
- Emphasis: **Never** share your API key

### Italic

Use for:
- Introducing new terms: A *workspace* is...
- Placeholders: Replace *your-api-key* with...
- Emphasis (sparingly): This is *really* important

### Code

Use for:
- Commands: Run `npm install`
- File paths: Edit `config/settings.json`
- Environment variables: Set `DATABASE_URL`
- API endpoints: POST `/api/users`
- Code references: The `handleSubmit` function

### Code Blocks

Always specify the language.

```javascript
// Example: Fetching user data
const response = await fetch('/api/user');
const user = await response.json();
```

### Notes and Warnings

Use for important callouts.

```markdown
> **Note**: This feature requires a Pro plan.

> **Warning**: Deleting a workspace cannot be undone.

> **Tip**: Use keyboard shortcuts to work faster.
```

## Screenshots

### When to Include

- First time showing a UI element
- Complex interfaces
- Before/after comparisons
- Error states

### Guidelines

- Capture just the relevant area
- Use consistent dimensions
- Highlight important elements
- Add descriptive captions

```markdown
<!-- SCREENSHOT: id="ss-dashboard" description="Main dashboard showing project list" -->

*The dashboard displays all your projects with their status.*
```

## Examples

### Good Section Example

```markdown
## Inviting Team Members

You can invite colleagues to collaborate on your projects.

### To invite a team member:

1. Open **Settings** > **Team**.
2. Click **Invite Member**.
3. Enter their email address.
4. Select their role:
   - **Admin**: Full access to all settings
   - **Editor**: Can edit projects
   - **Viewer**: Read-only access
5. Click **Send Invite**.

The person will receive an email with a link to join your workspace.

> **Note**: You can have up to 5 team members on the Free plan.

<!-- SCREENSHOT: id="ss-invite-team" description="Team invitation dialog" -->
```

## Language Guidelines

### Avoid Jargon

| Technical | User-Friendly |
|-----------|---------------|
| Execute | Run |
| Terminate | Stop, End |
| Instantiate | Create |
| Invoke | Call, Use |
| Parameterize | Set, Configure |
| Persist | Save |

### Be Specific

| Vague | Specific |
|-------|----------|
| "Click the button" | "Click **Save**" |
| "Enter information" | "Enter your email address" |
| "An error occurred" | "Your password must be at least 8 characters" |
| "It takes a moment" | "This typically takes 2-3 seconds" |

### Use Active Voice

| Passive | Active |
|---------|--------|
| "The file is uploaded" | "Upload the file" |
| "Settings are saved" | "Click **Save** to keep your changes" |
| "Errors are displayed" | "The form shows any errors" |

788
.claude/skills/software-manual/templates/css/wiki-base.css
Normal file
@@ -0,0 +1,788 @@

/* ========================================
|
||||
TiddlyWiki-Style Base CSS
|
||||
Software Manual Skill
|
||||
======================================== */
|
||||
|
||||
/* ========== CSS Variables ========== */
|
||||
:root {
|
||||
/* Light Theme */
|
||||
--bg-primary: #ffffff;
|
||||
--bg-secondary: #f8f9fa;
|
||||
--bg-tertiary: #e9ecef;
|
||||
--text-primary: #212529;
|
||||
--text-secondary: #495057;
|
||||
--text-muted: #6c757d;
|
||||
--border-color: #dee2e6;
|
||||
--accent-color: #0d6efd;
|
||||
--accent-hover: #0b5ed7;
|
||||
--success-color: #198754;
|
||||
--warning-color: #ffc107;
|
||||
--danger-color: #dc3545;
|
||||
--info-color: #0dcaf0;
|
||||
|
||||
/* Layout */
|
||||
--sidebar-width: 280px;
|
||||
--header-height: 60px;
|
||||
--content-max-width: 900px;
|
||||
--spacing-xs: 0.25rem;
|
||||
--spacing-sm: 0.5rem;
|
||||
--spacing-md: 1rem;
|
||||
--spacing-lg: 1.5rem;
|
||||
--spacing-xl: 2rem;
|
||||
|
||||
/* Typography */
|
||||
--font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
|
||||
--font-family-mono: 'SF Mono', Monaco, Consolas, 'Liberation Mono', 'Courier New', monospace;
|
||||
--font-size-sm: 0.875rem;
|
||||
--font-size-base: 1rem;
|
||||
--font-size-lg: 1.125rem;
|
||||
--font-size-xl: 1.25rem;
|
||||
--line-height: 1.6;
|
||||
|
||||
/* Shadows */
|
||||
--shadow-sm: 0 1px 2px rgba(0, 0, 0, 0.05);
|
||||
--shadow-md: 0 4px 6px rgba(0, 0, 0, 0.1);
|
||||
--shadow-lg: 0 10px 15px rgba(0, 0, 0, 0.1);
|
||||
|
||||
/* Transitions */
|
||||
--transition-fast: 150ms ease;
|
||||
--transition-base: 300ms ease;
|
||||
}
|
||||
|
||||
/* ========== Reset & Base ========== */
|
||||
*, *::before, *::after {
|
||||
box-sizing: border-box;
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
}
|
||||
|
||||
html {
|
||||
scroll-behavior: smooth;
|
||||
}
|
||||
|
||||
body {
|
||||
font-family: var(--font-family);
|
||||
font-size: var(--font-size-base);
|
||||
line-height: var(--line-height);
|
||||
color: var(--text-primary);
|
||||
background-color: var(--bg-secondary);
|
||||
}
|
||||
|
||||
/* ========== Layout ========== */
|
||||
.wiki-container {
|
||||
display: flex;
|
||||
min-height: 100vh;
|
||||
}
|
||||
|
||||
/* ========== Sidebar ========== */
|
||||
.wiki-sidebar {
|
||||
position: fixed;
|
||||
top: 0;
|
||||
left: 0;
|
||||
width: var(--sidebar-width);
|
||||
height: 100vh;
|
||||
background-color: var(--bg-primary);
|
||||
border-right: 1px solid var(--border-color);
|
||||
overflow-y: auto;
|
||||
z-index: 100;
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
transition: transform var(--transition-base);
|
||||
}
|
||||
|
||||
/* Logo Area */
|
||||
.wiki-logo {
|
||||
padding: var(--spacing-lg);
|
||||
text-align: center;
|
||||
border-bottom: 1px solid var(--border-color);
|
||||
}
|
||||
|
||||
.wiki-logo .logo-placeholder {
|
||||
width: 60px;
|
||||
height: 60px;
|
||||
margin: 0 auto var(--spacing-sm);
|
||||
background: linear-gradient(135deg, var(--accent-color), var(--info-color));
|
||||
border-radius: 12px;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
color: white;
|
||||
font-weight: bold;
|
||||
font-size: var(--font-size-xl);
|
||||
}
|
||||
|
||||
.wiki-logo h1 {
|
||||
font-size: var(--font-size-lg);
|
||||
font-weight: 600;
|
||||
margin-bottom: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.wiki-logo .version {
|
||||
font-size: var(--font-size-sm);
|
||||
color: var(--text-muted);
|
||||
}
|
||||
|
||||
/* Search */
|
||||
.wiki-search {
|
||||
padding: var(--spacing-md);
|
||||
position: relative;
|
||||
}
|
||||
|
||||
.wiki-search input {
|
||||
width: 100%;
|
||||
padding: var(--spacing-sm) var(--spacing-md);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 6px;
|
||||
font-size: var(--font-size-sm);
|
||||
background-color: var(--bg-secondary);
|
||||
transition: border-color var(--transition-fast), box-shadow var(--transition-fast);
|
||||
}
|
||||
|
||||
.wiki-search input:focus {
|
||||
outline: none;
|
||||
border-color: var(--accent-color);
|
||||
box-shadow: 0 0 0 3px rgba(13, 110, 253, 0.15);
|
||||
}
|
||||
|
||||
.search-results {
|
||||
position: absolute;
|
||||
top: 100%;
|
||||
left: var(--spacing-md);
|
||||
right: var(--spacing-md);
|
||||
background: var(--bg-primary);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 6px;
|
||||
box-shadow: var(--shadow-lg);
|
||||
max-height: 400px;
|
||||
overflow-y: auto;
|
||||
z-index: 200;
|
||||
}
|
||||
|
||||
.search-result-item {
|
||||
display: block;
|
||||
padding: var(--spacing-sm) var(--spacing-md);
|
||||
text-decoration: none;
|
||||
color: var(--text-primary);
|
||||
border-bottom: 1px solid var(--border-color);
|
||||
transition: background-color var(--transition-fast);
|
||||
}
|
||||
|
||||
.search-result-item:last-child {
|
||||
border-bottom: none;
|
||||
}
|
||||
|
||||
.search-result-item:hover {
|
||||
background-color: var(--bg-secondary);
|
||||
}
|
||||
|
||||
.result-title {
|
||||
font-weight: 600;
|
||||
margin-bottom: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.result-excerpt {
|
||||
font-size: var(--font-size-sm);
|
||||
color: var(--text-secondary);
|
||||
}
|
||||
|
||||
.result-excerpt mark {
|
||||
background-color: var(--warning-color);
|
||||
padding: 0 2px;
|
||||
border-radius: 2px;
|
||||
}
|
||||
|
||||
.no-results {
|
||||
padding: var(--spacing-md);
|
||||
text-align: center;
|
||||
color: var(--text-muted);
|
||||
}
|
||||
|
||||
/* Tags */
|
||||
.wiki-tags {
|
||||
padding: var(--spacing-md);
|
||||
display: flex;
|
||||
flex-wrap: wrap;
|
||||
gap: var(--spacing-xs);
|
||||
border-bottom: 1px solid var(--border-color);
|
||||
}
|
||||
|
||||
.wiki-tags .tag {
|
||||
padding: var(--spacing-xs) var(--spacing-sm);
|
||||
font-size: var(--font-size-sm);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 20px;
|
||||
background: var(--bg-secondary);
|
||||
color: var(--text-secondary);
|
||||
cursor: pointer;
|
||||
transition: all var(--transition-fast);
|
||||
}
|
||||
|
||||
.wiki-tags .tag:hover {
|
||||
border-color: var(--accent-color);
|
||||
color: var(--accent-color);
|
||||
}
|
||||
|
||||
.wiki-tags .tag.active {
|
||||
background-color: var(--accent-color);
|
||||
border-color: var(--accent-color);
|
||||
color: white;
|
||||
}
|
||||
|
||||
/* Table of Contents */
|
||||
.wiki-toc {
|
||||
flex: 1;
|
||||
padding: var(--spacing-md);
|
||||
overflow-y: auto;
|
||||
}
|
||||
|
||||
.wiki-toc h3 {
|
||||
font-size: var(--font-size-sm);
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 0.05em;
|
||||
color: var(--text-muted);
|
||||
margin-bottom: var(--spacing-md);
|
||||
}
|
||||
|
||||
.wiki-toc ul {
|
||||
list-style: none;
|
||||
}
|
||||
|
||||
.wiki-toc li {
|
||||
margin-bottom: var(--spacing-xs);
|
||||
}
|
||||
|
||||
.wiki-toc a {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
padding: var(--spacing-sm);
|
||||
color: var(--text-secondary);
|
||||
text-decoration: none;
|
||||
border-radius: 6px;
|
||||
font-size: var(--font-size-sm);
|
||||
transition: all var(--transition-fast);
|
||||
}
|
||||
|
||||
.wiki-toc a:hover {
|
||||
background-color: var(--bg-secondary);
|
||||
color: var(--accent-color);
|
||||
}
|
||||
|
||||
/* ========== Main Content ========== */
|
||||
.wiki-content {
|
||||
flex: 1;
|
||||
margin-left: var(--sidebar-width);
|
||||
min-height: 100vh;
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
}
|
||||
|
||||
/* Header */
|
||||
.content-header {
|
||||
position: sticky;
|
||||
top: 0;
|
||||
background-color: var(--bg-primary);
|
||||
border-bottom: 1px solid var(--border-color);
|
||||
padding: var(--spacing-sm) var(--spacing-lg);
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: space-between;
|
||||
z-index: 50;
|
||||
}
|
||||
|
||||
.sidebar-toggle {
|
||||
display: none;
|
||||
flex-direction: column;
|
||||
gap: 4px;
|
||||
padding: var(--spacing-sm);
|
||||
background: none;
|
||||
border: none;
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
.sidebar-toggle span {
|
||||
display: block;
|
||||
width: 20px;
|
||||
height: 2px;
|
||||
background-color: var(--text-primary);
|
||||
transition: transform var(--transition-fast);
|
||||
}
|
||||
|
||||
.header-actions {
|
||||
display: flex;
|
||||
gap: var(--spacing-sm);
|
||||
}
|
||||
|
||||
.header-actions button {
|
||||
padding: var(--spacing-xs) var(--spacing-sm);
|
||||
font-size: var(--font-size-sm);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 4px;
|
||||
background: var(--bg-primary);
|
||||
color: var(--text-secondary);
|
||||
cursor: pointer;
|
||||
transition: all var(--transition-fast);
|
||||
}
|
||||
|
||||
.header-actions button:hover {
|
||||
border-color: var(--accent-color);
|
||||
color: var(--accent-color);
|
||||
}
|
||||
|
||||
/* Tiddler Container */
|
||||
.tiddler-container {
|
||||
flex: 1;
|
||||
max-width: var(--content-max-width);
|
||||
margin: 0 auto;
|
||||
padding: var(--spacing-lg);
|
||||
width: 100%;
|
||||
}

/* ========== Tiddler (Content Block) ========== */
.tiddler {
  background-color: var(--bg-primary);
  border: 1px solid var(--border-color);
  border-radius: 8px;
  margin-bottom: var(--spacing-lg);
  box-shadow: var(--shadow-sm);
  transition: box-shadow var(--transition-fast);
}

.tiddler:hover {
  box-shadow: var(--shadow-md);
}

.tiddler-header {
  padding: var(--spacing-md) var(--spacing-lg);
  border-bottom: 1px solid var(--border-color);
  display: flex;
  align-items: center;
  justify-content: space-between;
  flex-wrap: wrap;
  gap: var(--spacing-sm);
}

.tiddler-title {
  display: flex;
  align-items: center;
  gap: var(--spacing-sm);
  font-size: var(--font-size-xl);
  font-weight: 600;
  margin: 0;
}

.collapse-toggle {
  background: none;
  border: none;
  font-size: var(--font-size-sm);
  color: var(--text-muted);
  cursor: pointer;
  padding: var(--spacing-xs);
  transition: transform var(--transition-fast);
}

.tiddler.collapsed .collapse-toggle {
  transform: rotate(-90deg);
}

.tiddler-meta {
  display: flex;
  gap: var(--spacing-sm);
  flex-wrap: wrap;
}

.difficulty-badge {
  padding: var(--spacing-xs) var(--spacing-sm);
  font-size: 0.75rem;
  font-weight: 500;
  border-radius: 4px;
  text-transform: uppercase;
}

.difficulty-badge.beginner {
  background-color: #d1fae5;
  color: #065f46;
}

.difficulty-badge.intermediate {
  background-color: #fef3c7;
  color: #92400e;
}

.difficulty-badge.advanced {
  background-color: #fee2e2;
  color: #991b1b;
}

.tag-badge {
  padding: var(--spacing-xs) var(--spacing-sm);
  font-size: 0.75rem;
  background-color: var(--bg-tertiary);
  color: var(--text-secondary);
  border-radius: 4px;
}

.tiddler-content {
  padding: var(--spacing-lg);
}

.tiddler.collapsed .tiddler-content {
  display: none;
}

/* ========== Content Typography ========== */
.tiddler-content h1,
.tiddler-content h2,
.tiddler-content h3,
.tiddler-content h4 {
  margin-top: var(--spacing-lg);
  margin-bottom: var(--spacing-md);
  font-weight: 600;
}

.tiddler-content h1 { font-size: 1.75rem; }
.tiddler-content h2 { font-size: 1.5rem; }
.tiddler-content h3 { font-size: 1.25rem; }
.tiddler-content h4 { font-size: 1.125rem; }

.tiddler-content p {
  margin-bottom: var(--spacing-md);
}

/* Lists - Enhanced Styling */
.tiddler-content ul,
.tiddler-content ol {
  margin: var(--spacing-md) 0;
  padding-left: var(--spacing-xl);
}

.tiddler-content ul {
  list-style: none;
}

.tiddler-content ul > li {
  position: relative;
  margin-bottom: var(--spacing-sm);
  padding-left: 8px;
}

.tiddler-content ul > li::before {
  content: "•";
  position: absolute;
  left: -16px;
  color: var(--accent-color);
  font-weight: bold;
}

.tiddler-content ol {
  list-style: none;
  counter-reset: item;
}

.tiddler-content ol > li {
  position: relative;
  margin-bottom: var(--spacing-sm);
  padding-left: 8px;
  counter-increment: item;
}

.tiddler-content ol > li::before {
  content: counter(item) ".";
  position: absolute;
  left: -24px;
  color: var(--accent-color);
  font-weight: 600;
}

/* Nested lists */
.tiddler-content ul ul,
.tiddler-content ol ol,
.tiddler-content ul ol,
.tiddler-content ol ul {
  margin: var(--spacing-xs) 0;
}

.tiddler-content ul ul > li::before {
  content: "◦";
}

.tiddler-content ul ul ul > li::before {
  content: "▪";
}

.tiddler-content a {
  color: var(--accent-color);
  text-decoration: none;
}

.tiddler-content a:hover {
  text-decoration: underline;
}

/* Inline Code - Red Highlight */
.tiddler-content code {
  font-family: var(--font-family-mono);
  font-size: 0.875em;
  padding: 2px 6px;
  background-color: #fff5f5;
  color: #c92a2a;
  border-radius: 4px;
  border: 1px solid #ffc9c9;
}

/* Code Blocks - Dark Background */
.tiddler-content pre {
  position: relative;
  margin: var(--spacing-md) 0;
  padding: 0;
  background-color: #1e2128;
  border-radius: 8px;
  overflow: hidden;
  border: 1px solid #3d4450;
}

.tiddler-content pre::before {
  content: attr(data-language);
  display: block;
  padding: 8px 16px;
  background-color: #2d333b;
  color: #8b949e;
  font-size: 0.75rem;
  font-family: var(--font-family);
  text-transform: uppercase;
  letter-spacing: 0.05em;
  border-bottom: 1px solid #3d4450;
}

.tiddler-content pre code {
  display: block;
  padding: var(--spacing-md);
  background: none;
  color: #e6edf3;
  font-size: var(--font-size-sm);
  line-height: 1.6;
  overflow-x: auto;
  border: none;
}

.copy-code-btn {
  position: absolute;
  top: 6px;
  right: 12px;
  padding: 4px 10px;
  font-size: 0.7rem;
  background-color: #3d4450;
  color: #8b949e;
  border: 1px solid #4d5566;
  border-radius: 4px;
  cursor: pointer;
  opacity: 0;
  transition: all var(--transition-fast);
}

.copy-code-btn:hover {
  background-color: #4d5566;
  color: #e6edf3;
}

.tiddler-content pre:hover .copy-code-btn {
  opacity: 1;
}

/* Tables - Blue Header Style */
.tiddler-content table {
  width: 100%;
  margin: var(--spacing-md) 0;
  border-collapse: collapse;
  border: 1px solid #dee2e6;
  border-radius: 8px;
  overflow: hidden;
}

.tiddler-content th {
  padding: 12px 16px;
  background: linear-gradient(135deg, #1971c2, #228be6);
  color: white;
  font-weight: 600;
  text-align: left;
  border: none;
  border-bottom: 2px solid #1864ab;
}

.tiddler-content td {
  padding: 10px 16px;
  border: 1px solid #e9ecef;
  text-align: left;
}

.tiddler-content tbody tr:nth-child(odd) {
  background-color: #f8f9fa;
}

.tiddler-content tbody tr:nth-child(even) {
  background-color: #ffffff;
}

.tiddler-content tbody tr:hover {
  background-color: #e7f5ff;
}

/* Screenshots */
.screenshot {
  margin: var(--spacing-lg) 0;
  text-align: center;
}

.screenshot img {
  max-width: 100%;
  border: 1px solid var(--border-color);
  border-radius: 8px;
  box-shadow: var(--shadow-md);
}

.screenshot figcaption {
  margin-top: var(--spacing-sm);
  font-size: var(--font-size-sm);
  color: var(--text-muted);
  font-style: italic;
}

.screenshot-placeholder {
  padding: var(--spacing-xl);
  background-color: var(--bg-tertiary);
  border: 2px dashed var(--border-color);
  border-radius: 8px;
  color: var(--text-muted);
  text-align: center;
}

/* ========== Footer ========== */
.wiki-footer {
  padding: var(--spacing-lg);
  text-align: center;
  color: var(--text-muted);
  font-size: var(--font-size-sm);
  border-top: 1px solid var(--border-color);
  background-color: var(--bg-primary);
}

/* ========== Theme Toggle ========== */
.theme-toggle {
  position: fixed;
  bottom: var(--spacing-lg);
  right: var(--spacing-lg);
  width: 48px;
  height: 48px;
  border-radius: 50%;
  border: none;
  background-color: var(--bg-primary);
  box-shadow: var(--shadow-lg);
  cursor: pointer;
  font-size: 1.5rem;
  z-index: 100;
  transition: transform var(--transition-fast);
}

.theme-toggle:hover {
  transform: scale(1.1);
}

[data-theme="light"] .moon-icon { display: inline; }
[data-theme="light"] .sun-icon { display: none; }
[data-theme="dark"] .moon-icon { display: none; }
[data-theme="dark"] .sun-icon { display: inline; }

/* ========== Back to Top ========== */
.back-to-top {
  position: fixed;
  bottom: calc(var(--spacing-lg) + 60px);
  right: var(--spacing-lg);
  width: 40px;
  height: 40px;
  border-radius: 50%;
  border: none;
  background-color: var(--accent-color);
  color: white;
  font-size: 1.25rem;
  cursor: pointer;
  opacity: 0;
  visibility: hidden;
  transition: all var(--transition-fast);
  z-index: 100;
}

.back-to-top.visible {
  opacity: 1;
  visibility: visible;
}

.back-to-top:hover {
  background-color: var(--accent-hover);
}

/* ========== Responsive ========== */
@media (max-width: 1024px) {
  .wiki-sidebar {
    transform: translateX(-100%);
  }

  .wiki-sidebar.open {
    transform: translateX(0);
  }

  .wiki-content {
    margin-left: 0;
  }

  .sidebar-toggle {
    display: flex;
  }
}

@media (max-width: 640px) {
  .tiddler-header {
    flex-direction: column;
    align-items: flex-start;
  }

  .header-actions {
    display: none;
  }

  .wiki-tags {
    overflow-x: auto;
    flex-wrap: nowrap;
    padding-bottom: var(--spacing-md);
  }
}

/* ========== Print Styles ========== */
@media print {
  .wiki-sidebar,
  .theme-toggle,
  .back-to-top,
  .content-header,
  .collapse-toggle,
  .copy-code-btn {
    display: none !important;
  }

  .wiki-content {
    margin-left: 0;
  }

  .tiddler {
    break-inside: avoid;
    box-shadow: none;
    border: 1px solid #ccc;
  }

  .tiddler.collapsed .tiddler-content {
    display: block;
  }

  .tiddler-content pre {
    background-color: #f5f5f5 !important;
    color: #333 !important;
  }
}
278
.claude/skills/software-manual/templates/css/wiki-dark.css
Normal file
@@ -0,0 +1,278 @@

/* ========================================
   TiddlyWiki-Style Dark Theme
   Software Manual Skill
   ======================================== */

[data-theme="dark"] {
  /* Dark Theme Colors */
  --bg-primary: #1a1a2e;
  --bg-secondary: #16213e;
  --bg-tertiary: #0f3460;
  --text-primary: #eaeaea;
  --text-secondary: #b8b8b8;
  --text-muted: #888888;
  --border-color: #2d3748;
  --accent-color: #4dabf7;
  --accent-hover: #339af0;
  --success-color: #51cf66;
  --warning-color: #ffd43b;
  --danger-color: #ff6b6b;
  --info-color: #22b8cf;

  /* Shadows */
  --shadow-sm: 0 1px 2px rgba(0, 0, 0, 0.3);
  --shadow-md: 0 4px 6px rgba(0, 0, 0, 0.4);
  --shadow-lg: 0 10px 15px rgba(0, 0, 0, 0.5);
}

/* Dark theme specific overrides */
[data-theme="dark"] .wiki-logo .logo-placeholder {
  background: linear-gradient(135deg, var(--accent-color), #6741d9);
}

[data-theme="dark"] .wiki-search input {
  background-color: var(--bg-tertiary);
  border-color: var(--border-color);
  color: var(--text-primary);
}

[data-theme="dark"] .wiki-search input::placeholder {
  color: var(--text-muted);
}

[data-theme="dark"] .search-results {
  background-color: var(--bg-secondary);
  border-color: var(--border-color);
}

[data-theme="dark"] .search-result-item {
  border-color: var(--border-color);
}

[data-theme="dark"] .search-result-item:hover {
  background-color: var(--bg-tertiary);
}

[data-theme="dark"] .result-excerpt mark {
  background-color: rgba(255, 212, 59, 0.3);
  color: var(--warning-color);
}

[data-theme="dark"] .wiki-tags .tag {
  background-color: var(--bg-tertiary);
  border-color: var(--border-color);
  color: var(--text-secondary);
}

[data-theme="dark"] .wiki-tags .tag:hover {
  border-color: var(--accent-color);
  color: var(--accent-color);
}

[data-theme="dark"] .wiki-tags .tag.active {
  background-color: var(--accent-color);
  border-color: var(--accent-color);
  color: #1a1a2e;
}

[data-theme="dark"] .wiki-toc a:hover {
  background-color: var(--bg-tertiary);
}

[data-theme="dark"] .content-header {
  background-color: var(--bg-primary);
  border-color: var(--border-color);
}

[data-theme="dark"] .sidebar-toggle span {
  background-color: var(--text-primary);
}

[data-theme="dark"] .header-actions button {
  background-color: var(--bg-secondary);
  border-color: var(--border-color);
  color: var(--text-secondary);
}

[data-theme="dark"] .header-actions button:hover {
  border-color: var(--accent-color);
  color: var(--accent-color);
}

[data-theme="dark"] .tiddler {
  background-color: var(--bg-primary);
  border-color: var(--border-color);
}

[data-theme="dark"] .tiddler-header {
  border-color: var(--border-color);
}

[data-theme="dark"] .difficulty-badge.beginner {
  background-color: rgba(81, 207, 102, 0.2);
  color: var(--success-color);
}

[data-theme="dark"] .difficulty-badge.intermediate {
  background-color: rgba(255, 212, 59, 0.2);
  color: var(--warning-color);
}

[data-theme="dark"] .difficulty-badge.advanced {
  background-color: rgba(255, 107, 107, 0.2);
  color: var(--danger-color);
}

[data-theme="dark"] .tag-badge {
  background-color: var(--bg-tertiary);
  color: var(--text-secondary);
}

[data-theme="dark"] .tiddler-content code {
  background-color: var(--bg-tertiary);
  color: var(--accent-color);
}

[data-theme="dark"] .tiddler-content pre {
  background-color: #0d1117;
  border: 1px solid var(--border-color);
}

[data-theme="dark"] .tiddler-content pre code {
  color: #e6e6e6;
}

[data-theme="dark"] .copy-code-btn {
  background-color: var(--bg-tertiary);
  color: var(--text-secondary);
}

[data-theme="dark"] .tiddler-content th {
  background-color: var(--bg-tertiary);
}

[data-theme="dark"] .tiddler-content tr:nth-child(even) {
  background-color: var(--bg-secondary);
}

[data-theme="dark"] .tiddler-content th,
[data-theme="dark"] .tiddler-content td {
  border-color: var(--border-color);
}

[data-theme="dark"] .screenshot img {
  border-color: var(--border-color);
}

[data-theme="dark"] .screenshot-placeholder {
  background-color: var(--bg-tertiary);
  border-color: var(--border-color);
}

[data-theme="dark"] .wiki-footer {
  background-color: var(--bg-primary);
  border-color: var(--border-color);
}

[data-theme="dark"] .theme-toggle {
  background-color: var(--bg-secondary);
  color: var(--warning-color);
}

[data-theme="dark"] .back-to-top {
  background-color: var(--accent-color);
}

[data-theme="dark"] .back-to-top:hover {
  background-color: var(--accent-hover);
}

/* Scrollbar styling for dark theme */
[data-theme="dark"] ::-webkit-scrollbar {
  width: 8px;
  height: 8px;
}

[data-theme="dark"] ::-webkit-scrollbar-track {
  background: var(--bg-secondary);
}

[data-theme="dark"] ::-webkit-scrollbar-thumb {
  background: var(--bg-tertiary);
  border-radius: 4px;
}

[data-theme="dark"] ::-webkit-scrollbar-thumb:hover {
  background: var(--border-color);
}

/* Selection color */
[data-theme="dark"] ::selection {
  background-color: rgba(77, 171, 247, 0.3);
  color: var(--text-primary);
}

/* Focus styles for accessibility */
[data-theme="dark"] :focus {
  outline-color: var(--accent-color);
}

[data-theme="dark"] .wiki-search input:focus {
  border-color: var(--accent-color);
  box-shadow: 0 0 0 3px rgba(77, 171, 247, 0.2);
}

/* Link colors */
[data-theme="dark"] .tiddler-content a {
  color: var(--accent-color);
}

[data-theme="dark"] .tiddler-content a:hover {
  color: var(--accent-hover);
}

/* Blockquote styling */
[data-theme="dark"] .tiddler-content blockquote {
  border-left: 4px solid var(--accent-color);
  background-color: var(--bg-tertiary);
  padding: var(--spacing-md);
  margin: var(--spacing-md) 0;
  color: var(--text-secondary);
}

/* Horizontal rule */
[data-theme="dark"] .tiddler-content hr {
  border: none;
  border-top: 1px solid var(--border-color);
  margin: var(--spacing-lg) 0;
}

/* Alert/Note boxes */
[data-theme="dark"] .note,
[data-theme="dark"] .warning,
[data-theme="dark"] .tip,
[data-theme="dark"] .danger {
  padding: var(--spacing-md);
  border-radius: 6px;
  margin: var(--spacing-md) 0;
}

[data-theme="dark"] .note {
  background-color: rgba(34, 184, 207, 0.1);
  border-left: 4px solid var(--info-color);
}

[data-theme="dark"] .warning {
  background-color: rgba(255, 212, 59, 0.1);
  border-left: 4px solid var(--warning-color);
}

[data-theme="dark"] .tip {
  background-color: rgba(81, 207, 102, 0.1);
  border-left: 4px solid var(--success-color);
}

[data-theme="dark"] .danger {
  background-color: rgba(255, 107, 107, 0.1);
  border-left: 4px solid var(--danger-color);
}
327
.claude/skills/software-manual/templates/tiddlywiki-shell.html
Normal file
@@ -0,0 +1,327 @@
<!DOCTYPE html>
<html lang="zh-CN">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta name="description" content="{{SOFTWARE_NAME}} - Interactive Software Manual">
  <meta name="generator" content="software-manual-skill">
  <title>{{SOFTWARE_NAME}} v{{VERSION}} - User Manual</title>
  <style>
{{EMBEDDED_CSS}}
  </style>
</head>
<body class="wiki-container" data-theme="light">
  <!-- Sidebar Navigation -->
  <aside class="wiki-sidebar">
    <!-- Logo and Title -->
    <div class="wiki-logo">
      <div class="logo-placeholder">{{SOFTWARE_NAME}}</div>
      <h1>{{SOFTWARE_NAME}}</h1>
      <span class="version">v{{VERSION}}</span>
    </div>

    <!-- Search Box -->
    <div class="wiki-search">
      <input type="text" id="searchInput" placeholder="Search documentation..." aria-label="Search">
      <div id="searchResults" class="search-results" aria-live="polite"></div>
    </div>

    <!-- Tag Navigation (Dynamic) -->
    <nav class="wiki-tags" aria-label="Filter by category">
      <button class="tag active" data-tag="all">All</button>
      {{TAG_BUTTONS_HTML}}
    </nav>

    <!-- Table of Contents -->
    {{TOC_HTML}}
  </aside>

  <!-- Main Content Area -->
  <main class="wiki-content">
    <!-- Header Bar -->
    <header class="content-header">
      <button class="sidebar-toggle" id="sidebarToggle" aria-label="Toggle sidebar">
        <span></span>
        <span></span>
        <span></span>
      </button>
      <div class="header-actions">
        <button class="expand-all" id="expandAll">Expand All</button>
        <button class="collapse-all" id="collapseAll">Collapse All</button>
        <button class="print-btn" id="printBtn">Print</button>
      </div>
    </header>

    <!-- Tiddler Container -->
    <div class="tiddler-container">
      {{TIDDLERS_HTML}}
    </div>

    <!-- Footer -->
    <footer class="wiki-footer">
      <p>Generated by <strong>software-manual-skill</strong></p>
      <p>Last updated: <time datetime="{{TIMESTAMP}}">{{TIMESTAMP}}</time></p>
    </footer>
  </main>

  <!-- Theme Toggle Button -->
  <button class="theme-toggle" id="themeToggle" aria-label="Toggle theme">
    <span class="sun-icon">☀</span>
    <span class="moon-icon">☾</span>
  </button>

  <!-- Back to Top Button -->
  <button class="back-to-top" id="backToTop" aria-label="Back to top">↑</button>

  <!-- Search Index Data -->
  <script id="search-index" type="application/json">
{{SEARCH_INDEX_JSON}}
  </script>

  <!-- Embedded JavaScript -->
  <script>
  (function() {
    'use strict';

    // ========== Search Functionality ==========
    class WikiSearch {
      constructor(indexData) {
        this.index = indexData;
      }

      search(query) {
        if (!query || query.length < 2) return [];

        const results = [];
        const lowerQuery = query.toLowerCase();
        const queryWords = lowerQuery.split(/\s+/);

        for (const [id, content] of Object.entries(this.index)) {
          let score = 0;

          // Title match (higher weight)
          const titleLower = content.title.toLowerCase();
          if (titleLower.includes(lowerQuery)) {
            score += 10;
          }
          queryWords.forEach(word => {
            if (titleLower.includes(word)) score += 3;
          });

          // Body match
          const bodyLower = content.body.toLowerCase();
          if (bodyLower.includes(lowerQuery)) {
            score += 5;
          }
          queryWords.forEach(word => {
            if (bodyLower.includes(word)) score += 1;
          });

          // Tag match
          if (content.tags) {
            content.tags.forEach(tag => {
              if (tag.toLowerCase().includes(lowerQuery)) score += 4;
            });
          }

          if (score > 0) {
            results.push({
              id,
              title: content.title,
              excerpt: this.highlight(content.body, query),
              score
            });
          }
        }

        return results
          .sort((a, b) => b.score - a.score)
          .slice(0, 10);
      }

      highlight(text, query) {
        const maxLength = 150;
        const lowerText = text.toLowerCase();
        const lowerQuery = query.toLowerCase();
        const index = lowerText.indexOf(lowerQuery);

        if (index === -1) {
          return text.substring(0, maxLength) + (text.length > maxLength ? '...' : '');
        }

        const start = Math.max(0, index - 40);
        const end = Math.min(text.length, index + query.length + 80);
        let excerpt = text.substring(start, end);

        if (start > 0) excerpt = '...' + excerpt;
        if (end < text.length) excerpt += '...';

        // Highlight matches
        const regex = new RegExp('(' + query.replace(/[.*+?^${}()|[\]\\]/g, '\\$&') + ')', 'gi');
        return excerpt.replace(regex, '<mark>$1</mark>');
      }
    }

    // Initialize search
    const indexData = JSON.parse(document.getElementById('search-index').textContent);
    const search = new WikiSearch(indexData);

    const searchInput = document.getElementById('searchInput');
    const searchResults = document.getElementById('searchResults');

    searchInput.addEventListener('input', function() {
      const query = this.value.trim();
      const results = search.search(query);

      if (results.length === 0) {
        searchResults.innerHTML = query.length >= 2
          ? '<div class="no-results">No results found</div>'
          : '';
        return;
      }

      searchResults.innerHTML = results.map(r => `
        <a href="#${r.id}" class="search-result-item" data-tiddler="${r.id}">
          <div class="result-title">${r.title}</div>
          <div class="result-excerpt">${r.excerpt}</div>
        </a>
      `).join('');
    });

    // Clear search on result click
    searchResults.addEventListener('click', function(e) {
      const item = e.target.closest('.search-result-item');
      if (item) {
        searchInput.value = '';
        searchResults.innerHTML = '';

        // Expand target tiddler
        const tiddlerId = item.dataset.tiddler;
        const tiddler = document.getElementById(tiddlerId);
        if (tiddler) {
          tiddler.classList.remove('collapsed');
          const toggle = tiddler.querySelector('.collapse-toggle');
          if (toggle) toggle.textContent = '▼';
        }
      }
    });

    // ========== Collapse/Expand ==========
    document.querySelectorAll('.collapse-toggle').forEach(btn => {
      btn.addEventListener('click', function() {
        const tiddler = this.closest('.tiddler');
        tiddler.classList.toggle('collapsed');
        this.textContent = tiddler.classList.contains('collapsed') ? '▶' : '▼';
      });
    });

    // Expand/Collapse All
    document.getElementById('expandAll').addEventListener('click', function() {
      document.querySelectorAll('.tiddler').forEach(t => {
        t.classList.remove('collapsed');
        const toggle = t.querySelector('.collapse-toggle');
        if (toggle) toggle.textContent = '▼';
      });
    });

    document.getElementById('collapseAll').addEventListener('click', function() {
      document.querySelectorAll('.tiddler').forEach(t => {
        t.classList.add('collapsed');
        const toggle = t.querySelector('.collapse-toggle');
        if (toggle) toggle.textContent = '▶';
      });
    });

    // ========== Tag Filtering ==========
    document.querySelectorAll('.wiki-tags .tag').forEach(tag => {
      tag.addEventListener('click', function() {
        const filter = this.dataset.tag;

        // Update active state
        document.querySelectorAll('.wiki-tags .tag').forEach(t => t.classList.remove('active'));
        this.classList.add('active');

        // Filter tiddlers
        document.querySelectorAll('.tiddler').forEach(tiddler => {
          if (filter === 'all') {
            tiddler.style.display = '';
          } else {
            const tags = tiddler.dataset.tags || '';
            tiddler.style.display = tags.includes(filter) ? '' : 'none';
          }
        });
      });
    });

    // ========== Theme Toggle ==========
    const themeToggle = document.getElementById('themeToggle');
    const savedTheme = localStorage.getItem('wiki-theme');

    if (savedTheme) {
      document.body.dataset.theme = savedTheme;
    }

    themeToggle.addEventListener('click', function() {
      const isDark = document.body.dataset.theme === 'dark';
      document.body.dataset.theme = isDark ? 'light' : 'dark';
      localStorage.setItem('wiki-theme', document.body.dataset.theme);
    });

    // ========== Sidebar Toggle (Mobile) ==========
    document.getElementById('sidebarToggle').addEventListener('click', function() {
      document.querySelector('.wiki-sidebar').classList.toggle('open');
    });

    // ========== Back to Top ==========
    const backToTop = document.getElementById('backToTop');

    window.addEventListener('scroll', function() {
      backToTop.classList.toggle('visible', window.scrollY > 300);
    });

    backToTop.addEventListener('click', function() {
      window.scrollTo({ top: 0, behavior: 'smooth' });
    });

    // ========== Print ==========
    document.getElementById('printBtn').addEventListener('click', function() {
      window.print();
    });

    // ========== TOC Navigation ==========
    document.querySelectorAll('.wiki-toc a').forEach(link => {
      link.addEventListener('click', function(e) {
        const tiddlerId = this.getAttribute('href').substring(1);
        const tiddler = document.getElementById(tiddlerId);

        if (tiddler) {
          // Expand if collapsed
          tiddler.classList.remove('collapsed');
          const toggle = tiddler.querySelector('.collapse-toggle');
          if (toggle) toggle.textContent = '▼';

          // Close sidebar on mobile
          document.querySelector('.wiki-sidebar').classList.remove('open');
        }
      });
    });

    // ========== Code Block Copy ==========
    document.querySelectorAll('pre').forEach(pre => {
      const copyBtn = document.createElement('button');
      copyBtn.className = 'copy-code-btn';
      copyBtn.textContent = 'Copy';
      copyBtn.addEventListener('click', function() {
        const code = pre.querySelector('code');
        navigator.clipboard.writeText(code.textContent).then(() => {
          copyBtn.textContent = 'Copied!';
          setTimeout(() => copyBtn.textContent = 'Copy', 2000);
        });
      });
      pre.appendChild(copyBtn);
    });

  })();
  </script>
</body>
</html>

@@ -1,10 +1,17 @@
# Analysis Mode Protocol

## Mode Definition

**Mode**: `analysis` (READ-ONLY)
**Tools**: Gemini, Qwen (default mode)

## Prompt Structure

```
PURPOSE: [development goal]
TASK: [specific implementation task]
MODE: [auto|write]
CONTEXT: [file patterns]
EXPECTED: [deliverables]
RULES: [templates | additional constraints]
```
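
For illustration, a filled-in analysis prompt following this structure might look like the sketch below; the goal, paths, and deliverables are hypothetical, not part of the protocol itself:

```
PURPOSE: Understand how session state is persisted
TASK: Analyze the session store and flag potential race conditions
MODE: analysis
CONTEXT: src/store/*.ts
EXPECTED: List of identified issues with file references
RULES: Read-only; no file modifications
```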

## Operation Boundaries

### ALLOWED Operations
@@ -27,8 +34,8 @@
2. **Read** and analyze CONTEXT files thoroughly
3. **Identify** patterns, issues, and dependencies
4. **Generate** insights and recommendations
5. **Validate** EXPECTED deliverables met
6. **Output** structured analysis (text response only)

## Core Requirements

@@ -1,10 +1,14 @@
# Write Mode Protocol
## Prompt Structure

## Mode Definition

**Mode**: `write` (FILE OPERATIONS) / `auto` (FULL OPERATIONS)
**Tools**: Codex (auto), Gemini/Qwen (write)

```
PURPOSE: [development goal]
TASK: [specific implementation task]
MODE: [auto|write]
CONTEXT: [file patterns]
EXPECTED: [deliverables]
RULES: [templates | additional constraints]
```

## Operation Boundaries

### MODE: write

@@ -15,12 +19,6 @@

**Restrictions**: Follow project conventions, cannot break existing functionality

### MODE: auto (Codex only)
- All `write` mode operations
- Run tests and builds
- Commit code incrementally
- Full autonomous development

**Constraint**: Must test every change

## Execution Flow

@@ -33,16 +31,6 @@
5. **Validate** changes
6. **Report** file changes

### MODE: auto
1. **Parse** all 6 fields
2. **Analyze** CONTEXT files - find 3+ similar patterns
3. **Plan** implementation following RULES
4. **Generate** code with tests
5. **Run** tests continuously
6. **Commit** working code incrementally
7. **Validate** EXPECTED deliverables
8. **Report** results

## Core Requirements

**ALWAYS**:

@@ -61,17 +49,6 @@
- Break backward compatibility
- Exceed 3 failed attempts without stopping

## Multi-Task Execution (Resume)

**First subtask**: Standard execution flow
**Subsequent subtasks** (via `resume`):
- Recall context from previous subtasks
- Build on previous work
- Maintain consistency
- Test integration
- Report context for next subtask

## Error Handling

**Three-Attempt Rule**: On 3rd failure, stop and report what was attempted, what failed, and the root cause

@@ -92,7 +69,7 @@

**If template has no format** → Use default format below

### Single Task Implementation
### Task Implementation

```markdown
# Implementation: [TASK Title]

@@ -124,48 +101,6 @@
[Recommendations if any]
```

### Multi-Task (First Subtask)

```markdown
# Subtask 1/N: [TASK Title]

## Changes
[List of file changes]

## Implementation
[Details with code references]

## Testing
✅ Tests: X passing

## Context for Next Subtask
- Key decisions: [established patterns]
- Files created: [paths and purposes]
- Integration points: [where next subtask should connect]
```

### Multi-Task (Subsequent Subtasks)

```markdown
# Subtask N/M: [TASK Title]

## Changes
[List of file changes]

## Integration Notes
✅ Compatible with previous subtask
✅ Maintains established patterns

## Implementation
[Details with code references]

## Testing
✅ Tests: X passing

## Context for Next Subtask
[If not final, provide context]
```

### Partial Completion

```markdown
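The six-field prompt structure defined in the protocol above can be assembled mechanically. A minimal sketch, assuming Python; the `build_prompt` helper and its argument values are illustrative, not part of the protocol:

```python
# Sketch: assemble the six-field CLI prompt (PURPOSE/TASK/MODE/CONTEXT/EXPECTED/RULES)
# described by the Write Mode Protocol. Helper name and sample values are hypothetical.
def build_prompt(purpose: str, task: str, mode: str, context: str,
                 expected: str, rules: str) -> str:
    assert mode in ("auto", "write"), "MODE must be auto or write"
    fields = [
        ("PURPOSE", purpose), ("TASK", task), ("MODE", mode),
        ("CONTEXT", context), ("EXPECTED", expected), ("RULES", rules),
    ]
    return "\n".join(f"{name}: {value}" for name, value in fields)

prompt = build_prompt(
    purpose="add MFA support",
    task="implement TOTP verification in AuthManager",
    mode="write",
    context="src/auth/**",
    expected="passing unit tests for TOTP flow",
    rules="follow existing AuthManager conventions",
)
print(prompt.splitlines()[0])  # → PURPOSE: add MFA support
```

The field order matters only for readability; the protocol requires all six fields to be present so the executor can parse them.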
@@ -1,7 +1,7 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Conflict Resolution Schema",
  "description": "Simplified schema for conflict detection and resolution",
  "description": "Schema for conflict detection, strategy generation, and resolution output",

  "type": "object",
  "required": ["conflicts", "summary"],
@@ -10,7 +10,7 @@
      "type": "array",
      "items": {
        "type": "object",
        "required": ["id", "brief", "severity", "category", "strategies"],
        "required": ["id", "brief", "severity", "category", "strategies", "recommended"],
        "properties": {
          "id": {
            "type": "string",
@@ -38,10 +38,41 @@
            "type": "string",
            "description": "详细冲突描述"
          },
          "clarification_questions": {
            "type": "array",
            "items": { "type": "string" },
            "description": "需要用户澄清的问题(可选)"
          "impact": {
            "type": "object",
            "properties": {
              "scope": { "type": "string", "description": "影响的模块/组件" },
              "compatibility": { "enum": ["Yes", "No", "Partial"] },
              "migration_required": { "type": "boolean" },
              "estimated_effort": { "type": "string", "description": "人天估计" }
            }
          },
          "overlap_analysis": {
            "type": "object",
            "description": "仅当 category=ModuleOverlap 时需要",
            "properties": {
              "new_module": {
                "type": "object",
                "properties": {
                  "name": { "type": "string" },
                  "scenarios": { "type": "array", "items": { "type": "string" } },
                  "responsibilities": { "type": "string" }
                }
              },
              "existing_modules": {
                "type": "array",
                "items": {
                  "type": "object",
                  "properties": {
                    "file": { "type": "string" },
                    "name": { "type": "string" },
                    "scenarios": { "type": "array", "items": { "type": "string" } },
                    "overlap_scenarios": { "type": "array", "items": { "type": "string" } },
                    "responsibilities": { "type": "string" }
                  }
                }
              }
            }
          },
          "strategies": {
            "type": "array",
@@ -49,26 +80,34 @@
            "maxItems": 4,
            "items": {
              "type": "object",
              "required": ["name", "approach", "complexity", "risk"],
              "required": ["name", "approach", "complexity", "risk", "effort", "pros", "cons"],
              "properties": {
                "name": {
                  "type": "string",
                  "description": "策略名称(中文)"
                },
                "approach": {
                  "type": "string",
                  "description": "实现方法简述"
                },
                "complexity": {
                  "enum": ["Low", "Medium", "High"]
                },
                "risk": {
                  "enum": ["Low", "Medium", "High"]
                },
                "constraints": {
                "name": { "type": "string", "description": "策略名称(中文)" },
                "approach": { "type": "string", "description": "实现方法简述" },
                "complexity": { "enum": ["Low", "Medium", "High"] },
                "risk": { "enum": ["Low", "Medium", "High"] },
                "effort": { "type": "string", "description": "时间估计" },
                "pros": { "type": "array", "items": { "type": "string" }, "description": "优点" },
                "cons": { "type": "array", "items": { "type": "string" }, "description": "缺点" },
                "clarification_needed": {
                  "type": "array",
                  "items": { "type": "string" },
                  "description": "实施此策略的约束条件(传递给 task-generate)"
                  "description": "需要用户澄清的问题(尤其是 ModuleOverlap)"
                },
                "modifications": {
                  "type": "array",
                  "items": {
                    "type": "object",
                    "required": ["file", "section", "change_type", "old_content", "new_content", "rationale"],
                    "properties": {
                      "file": { "type": "string", "description": "相对项目根目录的完整路径" },
                      "section": { "type": "string", "description": "Markdown heading 用于定位" },
                      "change_type": { "enum": ["update", "add", "remove"] },
                      "old_content": { "type": "string", "description": "原始内容片段(20-100字符,用于唯一匹配)" },
                      "new_content": { "type": "string", "description": "修改后的内容" },
                      "rationale": { "type": "string", "description": "修改理由" }
                    }
                  }
                }
              }
            }
          }
@@ -77,13 +116,20 @@
            "type": "integer",
            "minimum": 0,
            "description": "推荐策略索引(0-based)"
          },
          "modification_suggestions": {
            "type": "array",
            "minItems": 2,
            "maxItems": 5,
            "items": { "type": "string" },
            "description": "自定义处理建议(2-5条,中文)"
          }
        }
      }
    },
    "summary": {
      "type": "object",
      "required": ["total"],
      "required": ["total", "critical", "high", "medium"],
      "properties": {
        "total": { "type": "integer" },
        "critical": { "type": "integer" },
@@ -93,45 +139,13 @@
      }
    },

    "examples": [
      {
        "conflicts": [
          {
            "id": "CON-001",
            "brief": "新认证模块与现有 AuthManager 功能重叠",
            "severity": "High",
            "category": "ModuleOverlap",
            "affected_files": ["src/auth/AuthManager.ts"],
            "description": "计划新增的 UserAuthService 与现有 AuthManager 在登录和 Token 验证场景存在重叠",
            "clarification_questions": [
              "新模块的核心职责边界是什么?",
              "哪些场景应该由新模块独立处理?"
            ],
            "strategies": [
              {
                "name": "扩展现有模块",
                "approach": "在 AuthManager 中添加新功能",
                "complexity": "Low",
                "risk": "Low",
                "constraints": ["保持 AuthManager 作为唯一认证入口", "新增 MFA 方法"]
              },
              {
                "name": "职责拆分",
                "approach": "AuthManager 负责基础认证,新模块负责高级认证",
                "complexity": "Medium",
                "risk": "Medium",
                "constraints": ["定义清晰的接口边界", "基础认证 = 密码+token", "高级认证 = MFA+OAuth"]
              }
            ],
            "recommended": 0
          }
        ],
        "summary": {
          "total": 1,
          "critical": 0,
          "high": 1,
          "medium": 0
        }
      }
    ]
    "_quality_standards": {
      "modifications": [
        "old_content: 20-100字符,确保 Edit 工具能唯一匹配",
        "new_content: 保持 markdown 格式",
        "change_type: update(替换), add(插入), remove(删除)"
      ],
      "user_facing_text": "brief, name, pros, cons, modification_suggestions 使用中文",
      "technical_fields": "severity, category, complexity, risk 使用英文"
    }
}

@@ -0,0 +1,219 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "discovery-finding-schema",
  "title": "Discovery Finding Schema",
  "description": "Schema for perspective-based issue discovery results",
  "type": "object",
  "required": ["perspective", "discovery_id", "analysis_timestamp", "cli_tool_used", "summary", "findings"],
  "properties": {
    "perspective": {
      "type": "string",
      "enum": ["bug", "ux", "test", "quality", "security", "performance", "maintainability", "best-practices"],
      "description": "Discovery perspective"
    },
    "discovery_id": {
      "type": "string",
      "pattern": "^DSC-\\d{8}-\\d{6}$",
      "description": "Parent discovery session ID"
    },
    "analysis_timestamp": {
      "type": "string",
      "format": "date-time",
      "description": "ISO 8601 timestamp of analysis"
    },
    "cli_tool_used": {
      "type": "string",
      "enum": ["gemini", "qwen", "codex"],
      "description": "CLI tool that performed the analysis"
    },
    "model": {
      "type": "string",
      "description": "Specific model version used",
      "examples": ["gemini-2.5-pro", "qwen-max"]
    },
    "analysis_duration_ms": {
      "type": "integer",
      "minimum": 0,
      "description": "Analysis duration in milliseconds"
    },
    "summary": {
      "type": "object",
      "required": ["total_findings"],
      "properties": {
        "total_findings": { "type": "integer", "minimum": 0 },
        "critical": { "type": "integer", "minimum": 0 },
        "high": { "type": "integer", "minimum": 0 },
        "medium": { "type": "integer", "minimum": 0 },
        "low": { "type": "integer", "minimum": 0 },
        "files_analyzed": { "type": "integer", "minimum": 0 }
      },
      "description": "Summary statistics (FLAT structure, NOT nested)"
    },
    "findings": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["id", "title", "perspective", "priority", "category", "description", "file", "line"],
        "properties": {
          "id": {
            "type": "string",
            "pattern": "^dsc-[a-z]+-\\d{3}-[a-f0-9]{8}$",
            "description": "Unique finding ID: dsc-{perspective}-{seq}-{uuid8}",
            "examples": ["dsc-bug-001-a1b2c3d4"]
          },
          "title": {
            "type": "string",
            "minLength": 10,
            "maxLength": 200,
            "description": "Concise finding title"
          },
          "perspective": {
            "type": "string",
            "enum": ["bug", "ux", "test", "quality", "security", "performance", "maintainability", "best-practices"]
          },
          "priority": {
            "type": "string",
            "enum": ["critical", "high", "medium", "low"],
            "description": "Priority level (lowercase only)"
          },
          "category": {
            "type": "string",
            "description": "Perspective-specific category",
            "examples": ["null-check", "edge-case", "missing-test", "complexity", "injection"]
          },
          "description": {
            "type": "string",
            "minLength": 20,
            "description": "Detailed description of the finding"
          },
          "file": {
            "type": "string",
            "description": "File path relative to project root"
          },
          "line": {
            "type": "integer",
            "minimum": 1,
            "description": "Line number of the finding"
          },
          "snippet": {
            "type": "string",
            "description": "Relevant code snippet"
          },
          "suggested_issue": {
            "type": "object",
            "required": ["title", "type", "priority"],
            "properties": {
              "title": {
                "type": "string",
                "description": "Suggested issue title for export"
              },
              "type": {
                "type": "string",
                "enum": ["bug", "feature", "enhancement", "refactor", "test", "docs"],
                "description": "Issue type"
              },
              "priority": {
                "type": "integer",
                "minimum": 1,
                "maximum": 5,
                "description": "Priority 1-5 (1=critical, 5=low)"
              },
              "labels": {
                "type": "array",
                "items": { "type": "string" },
                "description": "Suggested labels for the issue"
              }
            },
            "description": "Pre-filled issue suggestion for export"
          },
          "external_reference": {
            "type": ["object", "null"],
            "properties": {
              "source": { "type": "string" },
              "url": { "type": "string", "format": "uri" },
              "relevance": { "type": "string" }
            },
            "description": "External reference from Exa research (if applicable)"
          },
          "confidence": {
            "type": "number",
            "minimum": 0,
            "maximum": 1,
            "description": "Confidence score 0.0-1.0"
          },
          "impact": {
            "type": "string",
            "description": "Description of potential impact"
          },
          "recommendation": {
            "type": "string",
            "description": "Specific recommendation to address the finding"
          },
          "metadata": {
            "type": "object",
            "additionalProperties": true,
            "description": "Additional metadata (CWE ID, OWASP category, etc.)"
          }
        }
      },
      "description": "Array of discovered findings"
    },
    "cross_references": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "finding_id": { "type": "string" },
          "related_perspectives": {
            "type": "array",
            "items": { "type": "string" }
          },
          "reason": { "type": "string" }
        }
      },
      "description": "Cross-references to findings in other perspectives"
    }
  },
  "examples": [
    {
      "perspective": "bug",
      "discovery_id": "DSC-20250128-143022",
      "analysis_timestamp": "2025-01-28T14:35:00Z",
      "cli_tool_used": "gemini",
      "model": "gemini-2.5-pro",
      "analysis_duration_ms": 45000,
      "summary": {
        "total_findings": 8,
        "critical": 1,
        "high": 2,
        "medium": 3,
        "low": 2,
        "files_analyzed": 5
      },
      "findings": [
        {
          "id": "dsc-bug-001-a1b2c3d4",
          "title": "Missing null check in user validation",
          "perspective": "bug",
          "priority": "high",
          "category": "null-check",
          "description": "User object is accessed without null check after database query, which may fail if user doesn't exist",
          "file": "src/auth/validator.ts",
          "line": 45,
          "snippet": "const user = await db.findUser(id);\nreturn user.email; // user may be null",
          "suggested_issue": {
            "title": "Add null check in user validation",
            "type": "bug",
            "priority": 2,
            "labels": ["bug", "auth"]
          },
          "external_reference": null,
          "confidence": 0.85,
          "impact": "Runtime error when user not found",
          "recommendation": "Add null check: if (!user) throw new NotFoundError('User not found');"
        }
      ],
      "cross_references": []
    }
  ]
}
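A few of the finding constraints in the schema above (ID pattern, lowercase priority, minimum title length, line ≥ 1) can be spot-checked with the standard library alone. A minimal sketch, assuming Python; `check_finding` is an illustrative helper, not a full JSON Schema validator:

```python
import json
import re

# Patterns and enums copied from the discovery-finding schema above.
FINDING_ID = re.compile(r"^dsc-[a-z]+-\d{3}-[a-f0-9]{8}$")
PRIORITIES = {"critical", "high", "medium", "low"}

def check_finding(finding: dict) -> list:
    """Return a list of violation messages (empty if the finding looks valid)."""
    errors = []
    if not FINDING_ID.match(finding.get("id", "")):
        errors.append("id must match dsc-{perspective}-{seq}-{uuid8}")
    if finding.get("priority") not in PRIORITIES:
        errors.append("priority must be lowercase critical/high/medium/low")
    if len(finding.get("title", "")) < 10:
        errors.append("title must be at least 10 characters")
    if not isinstance(finding.get("line"), int) or finding["line"] < 1:
        errors.append("line must be an integer >= 1")
    return errors

finding = json.loads('''{
  "id": "dsc-bug-001-a1b2c3d4",
  "title": "Missing null check in user validation",
  "perspective": "bug",
  "priority": "high",
  "category": "null-check",
  "description": "User object is accessed without null check after database query",
  "file": "src/auth/validator.ts",
  "line": 45
}''')
print(check_finding(finding))  # → []
```

For full validation against the schema, a library such as `jsonschema` would be the usual choice; the sketch above only covers the constraints it names.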
@@ -0,0 +1,125 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "discovery-state-schema",
  "title": "Discovery State Schema (Merged)",
  "description": "Unified schema for issue discovery session (state + progress merged)",
  "type": "object",
  "required": ["discovery_id", "target_pattern", "phase", "created_at"],
  "properties": {
    "discovery_id": {
      "type": "string",
      "description": "Unique discovery session ID",
      "pattern": "^DSC-\\d{8}-\\d{6}$",
      "examples": ["DSC-20250128-143022"]
    },
    "target_pattern": {
      "type": "string",
      "description": "File/directory pattern being analyzed",
      "examples": ["src/auth/**", "codex-lens/**/*.py"]
    },
    "phase": {
      "type": "string",
      "enum": ["initialization", "parallel", "aggregation", "complete"],
      "description": "Current execution phase"
    },
    "created_at": {
      "type": "string",
      "format": "date-time"
    },
    "updated_at": {
      "type": "string",
      "format": "date-time"
    },
    "target": {
      "type": "object",
      "description": "Target module information",
      "properties": {
        "files_count": {
          "type": "object",
          "properties": {
            "source": { "type": "integer" },
            "tests": { "type": "integer" },
            "total": { "type": "integer" }
          }
        },
        "project": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "version": { "type": "string" }
          }
        }
      }
    },
    "perspectives": {
      "type": "array",
      "description": "Perspective analysis status (merged from progress)",
      "items": {
        "type": "object",
        "required": ["name", "status"],
        "properties": {
          "name": {
            "type": "string",
            "enum": ["bug", "ux", "test", "quality", "security", "performance", "maintainability", "best-practices"]
          },
          "status": {
            "type": "string",
            "enum": ["pending", "in_progress", "completed", "failed"]
          },
          "findings": {
            "type": "integer",
            "minimum": 0
          }
        }
      }
    },
    "external_research": {
      "type": "object",
      "properties": {
        "enabled": { "type": "boolean", "default": false },
        "completed": { "type": "boolean", "default": false }
      }
    },
    "results": {
      "type": "object",
      "description": "Aggregated results (final phase)",
      "properties": {
        "total_findings": { "type": "integer", "minimum": 0 },
        "issues_generated": { "type": "integer", "minimum": 0 },
        "priority_distribution": {
          "type": "object",
          "properties": {
            "critical": { "type": "integer" },
            "high": { "type": "integer" },
            "medium": { "type": "integer" },
            "low": { "type": "integer" }
          }
        }
      }
    }
  },
  "examples": [
    {
      "discovery_id": "DSC-20251228-182237",
      "target_pattern": "codex-lens/**/*.py",
      "phase": "complete",
      "created_at": "2025-12-28T18:22:37+08:00",
      "updated_at": "2025-12-28T18:35:00+08:00",
      "target": {
        "files_count": { "source": 48, "tests": 44, "total": 93 },
        "project": { "name": "codex-lens", "version": "0.1.0" }
      },
      "perspectives": [
        { "name": "bug", "status": "completed", "findings": 15 },
        { "name": "test", "status": "completed", "findings": 11 },
        { "name": "quality", "status": "completed", "findings": 12 }
      ],
      "external_research": { "enabled": false, "completed": false },
      "results": {
        "total_findings": 37,
        "issues_generated": 15,
        "priority_distribution": { "critical": 4, "high": 13, "medium": 16, "low": 6 }
      }
    }
  ]
}
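Since the state document above merges progress into the session record, aggregation-phase totals can be rolled up directly from the `perspectives` array. A minimal sketch, assuming Python; the `rollup` helper is illustrative, while the field names follow the schema:

```python
# Sketch: roll up per-perspective progress from a discovery-state document.
state = {
    "discovery_id": "DSC-20251228-182237",
    "phase": "aggregation",
    "perspectives": [
        {"name": "bug", "status": "completed", "findings": 15},
        {"name": "test", "status": "completed", "findings": 11},
        {"name": "quality", "status": "in_progress", "findings": 0},
    ],
}

def rollup(state: dict) -> dict:
    perspectives = state.get("perspectives", [])
    done = [p for p in perspectives if p["status"] == "completed"]
    return {
        # Only completed perspectives contribute to the aggregated total.
        "total_findings": sum(p.get("findings", 0) for p in done),
        "all_completed": len(done) == len(perspectives),
    }

print(rollup(state))  # → {'total_findings': 26, 'all_completed': False}
```

When `all_completed` is true, the session can move from `aggregation` to `complete` and the totals can be written into `results`.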
168 .claude/workflows/cli-templates/schemas/issues-jsonl-schema.json Normal file
@@ -0,0 +1,168 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issues JSONL Schema",
  "description": "Schema for each line in issues.jsonl (flat storage)",
  "type": "object",
  "required": ["id", "title", "status", "created_at"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Issue ID (GH-123, ISS-xxx, DSC-001)"
    },
    "title": {
      "type": "string"
    },
    "status": {
      "type": "string",
      "enum": ["registered", "planning", "planned", "queued", "executing", "completed", "failed", "paused"],
      "default": "registered"
    },
    "priority": {
      "type": "integer",
      "minimum": 1,
      "maximum": 5,
      "default": 3,
      "description": "1=critical, 2=high, 3=medium, 4=low, 5=trivial"
    },
    "context": {
      "type": "string",
      "description": "Issue context/description (markdown)"
    },
    "source": {
      "type": "string",
      "enum": ["github", "text", "discovery"],
      "description": "Source of the issue"
    },
    "source_url": {
      "type": "string",
      "description": "Original source URL (for GitHub issues)"
    },
    "labels": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Issue labels/tags"
    },
    "discovery_context": {
      "type": "object",
      "description": "Enriched context from issue:discover (only when source=discovery)",
      "properties": {
        "discovery_id": {
          "type": "string",
          "description": "Source discovery session ID"
        },
        "perspective": {
          "type": "string",
          "enum": ["bug", "ux", "test", "quality", "security", "performance", "maintainability", "best-practices"]
        },
        "category": {
          "type": "string",
          "description": "Finding category (e.g., edge-case, race-condition)"
        },
        "file": {
          "type": "string",
          "description": "Primary affected file"
        },
        "line": {
          "type": "integer",
          "description": "Line number in primary file"
        },
        "snippet": {
          "type": "string",
          "description": "Code snippet showing the issue"
        },
        "confidence": {
          "type": "number",
          "minimum": 0,
          "maximum": 1,
          "description": "Agent confidence score"
        },
        "suggested_fix": {
          "type": "string",
          "description": "Suggested remediation from discovery"
        }
      }
    },
    "affected_components": {
      "type": "array",
      "items": { "type": "string" },
      "description": "Files/modules affected"
    },
    "lifecycle_requirements": {
      "type": "object",
      "properties": {
        "test_strategy": {
          "type": "string",
          "enum": ["unit", "integration", "e2e", "manual", "auto"]
        },
        "regression_scope": {
          "type": "string",
          "enum": ["affected", "related", "full"]
        },
        "acceptance_type": {
          "type": "string",
          "enum": ["automated", "manual", "both"]
        },
        "commit_strategy": {
          "type": "string",
          "enum": ["per-task", "squash", "atomic"]
        }
      }
    },
    "bound_solution_id": {
      "type": "string",
      "description": "ID of the bound solution (null if none bound)"
    },
    "solution_count": {
      "type": "integer",
      "default": 0,
      "description": "Number of candidate solutions"
    },
    "created_at": {
      "type": "string",
      "format": "date-time"
    },
    "updated_at": {
      "type": "string",
      "format": "date-time"
    },
    "planned_at": {
      "type": "string",
      "format": "date-time"
    },
    "completed_at": {
      "type": "string",
      "format": "date-time"
    }
  },
  "examples": [
    {
      "id": "DSC-001",
      "title": "Fix: SQLite connection pool memory leak",
      "status": "registered",
      "priority": 1,
      "context": "Connection pool cleanup only happens when MAX_POOL_SIZE is reached...",
      "source": "discovery",
      "labels": ["bug", "resource-leak", "critical"],
      "discovery_context": {
        "discovery_id": "DSC-20251228-182237",
        "perspective": "bug",
        "category": "resource-leak",
        "file": "storage/sqlite_store.py",
        "line": 59,
        "snippet": "if len(self._pool) >= self.MAX_POOL_SIZE:\n self._cleanup_stale_connections()",
        "confidence": 0.85,
        "suggested_fix": "Implement periodic cleanup or weak references"
      },
      "affected_components": ["storage/sqlite_store.py"],
      "lifecycle_requirements": {
        "test_strategy": "unit",
        "regression_scope": "affected",
        "acceptance_type": "automated",
        "commit_strategy": "per-task"
      },
      "bound_solution_id": null,
      "solution_count": 0,
      "created_at": "2025-12-28T18:22:37Z"
    }
  ]
}
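Flat JSONL storage as described above means one issue object per line, so reading and filtering needs only the standard library. A minimal sketch, assuming Python; `load_issues` and `by_status` are illustrative helper names:

```python
import json
from pathlib import Path

# Sketch: read issues.jsonl (one JSON object per line) and filter by status.
def load_issues(path: Path) -> list:
    issues = []
    for line in path.read_text(encoding="utf-8").splitlines():
        if line.strip():  # tolerate blank lines
            issues.append(json.loads(line))
    return issues

def by_status(issues: list, status: str) -> list:
    return [i for i in issues if i.get("status") == status]

# Round-trip a single record the way a writer would append it.
record = {"id": "DSC-001", "title": "Fix: SQLite connection pool memory leak",
          "status": "registered", "priority": 1,
          "created_at": "2025-12-28T18:22:37Z"}
line = json.dumps(record, ensure_ascii=False)
assert json.loads(line) == record
```

Appending a record is the mirror image: `json.dumps(record)` plus a newline, which is what makes the format safe for incremental writes.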
253 .claude/workflows/cli-templates/schemas/queue-schema.json Normal file
@@ -0,0 +1,253 @@
|
||||
{
|
||||
"$schema": "http://json-schema.org/draft-07/schema#",
|
||||
"title": "Issue Execution Queue Schema",
|
||||
"description": "Execution queue supporting both task-level (T-N) and solution-level (S-N) granularity",
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"id": {
|
||||
"type": "string",
|
||||
"pattern": "^QUE-[0-9]{8}-[0-9]{6}$",
|
||||
"description": "Queue ID in format QUE-YYYYMMDD-HHMMSS"
|
||||
},
|
||||
"status": {
|
||||
"type": "string",
|
||||
"enum": ["active", "paused", "completed", "archived"],
|
||||
"default": "active"
|
||||
},
|
||||
"issue_ids": {
|
||||
"type": "array",
|
||||
"items": { "type": "string" },
|
||||
"description": "Issues included in this queue"
|
||||
},
|
||||
"solutions": {
|
||||
"type": "array",
|
||||
"description": "Solution-level queue items (preferred for new queues)",
|
||||
"items": {
|
||||
"$ref": "#/definitions/solutionItem"
|
||||
}
|
||||
},
|
||||
"tasks": {
|
||||
"type": "array",
|
||||
"description": "Task-level queue items (legacy format)",
|
||||
"items": {
|
||||
"$ref": "#/definitions/taskItem"
|
||||
}
|
||||
},
|
||||
"conflicts": {
|
||||
"type": "array",
|
||||
"description": "Detected conflicts between items",
|
||||
"items": {
|
||||
"$ref": "#/definitions/conflict"
|
||||
}
|
||||
},
|
||||
"execution_groups": {
|
||||
"type": "array",
|
||||
"description": "Parallel/Sequential execution groups",
|
||||
"items": {
|
||||
"$ref": "#/definitions/executionGroup"
|
||||
}
|
||||
},
|
||||
"_metadata": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"version": { "type": "string", "default": "2.0" },
|
||||
"queue_type": {
|
||||
"type": "string",
|
||||
"enum": ["solution", "task"],
|
||||
"description": "Queue granularity level"
|
||||
},
|
||||
"total_solutions": { "type": "integer" },
|
||||
"total_tasks": { "type": "integer" },
|
||||
"pending_count": { "type": "integer" },
|
||||
"ready_count": { "type": "integer" },
|
||||
"executing_count": { "type": "integer" },
|
||||
"completed_count": { "type": "integer" },
|
||||
"failed_count": { "type": "integer" },
|
||||
"last_queue_formation": { "type": "string", "format": "date-time" },
|
||||
"last_updated": { "type": "string", "format": "date-time" }
|
||||
}
|
||||
}
|
||||
},
|
||||
"definitions": {
|
||||
"solutionItem": {
|
||||
"type": "object",
|
||||
"required": ["item_id", "issue_id", "solution_id", "status", "task_count", "files_touched"],
|
||||
"properties": {
|
||||
"item_id": {
|
||||
"type": "string",
|
||||
"pattern": "^S-[0-9]+$",
|
||||
"description": "Solution-level queue item ID (S-1, S-2, ...)"
|
||||
},
|
||||
"issue_id": {
|
||||
"type": "string",
|
||||
"description": "Source issue ID"
|
||||
},
|
||||
"solution_id": {
|
||||
"type": "string",
|
||||
"description": "Bound solution ID"
|
||||
},
|
||||
"status": {
|
||||
"type": "string",
|
||||
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
|
||||
"default": "pending"
|
||||
},
|
||||
"task_count": {
|
||||
"type": "integer",
|
||||
"minimum": 1,
|
||||
"description": "Number of tasks in this solution"
|
||||
},
|
||||
"files_touched": {
|
||||
"type": "array",
|
||||
"items": { "type": "string" },
|
||||
"description": "All files modified by this solution"
|
||||
},
|
||||
"execution_order": {
|
||||
"type": "integer",
|
||||
"description": "Order in execution sequence"
|
||||
},
|
||||
"execution_group": {
|
||||
"type": "string",
|
||||
"description": "Parallel (P*) or Sequential (S*) group ID"
|
||||
},
|
||||
"depends_on": {
|
||||
"type": "array",
|
||||
"items": { "type": "string" },
|
||||
"description": "Solution IDs this item depends on"
|
||||
},
|
||||
"semantic_priority": {
|
||||
"type": "number",
|
||||
"minimum": 0,
|
||||
"maximum": 1,
|
||||
"description": "Semantic importance score (0.0-1.0)"
|
||||
},
|
||||
"assigned_executor": {
|
||||
"type": "string",
|
||||
"enum": ["codex", "gemini", "agent"]
|
||||
},
|
||||
"queued_at": { "type": "string", "format": "date-time" },
|
||||
"started_at": { "type": "string", "format": "date-time" },
|
||||
"completed_at": { "type": "string", "format": "date-time" },
|
||||
"result": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"summary": { "type": "string" },
|
||||
"files_modified": { "type": "array", "items": { "type": "string" } },
|
||||
"tasks_completed": { "type": "integer" },
|
||||
"commit_hashes": { "type": "array", "items": { "type": "string" } }
|
||||
}
|
||||
},
|
||||
"failure_reason": { "type": "string" }
|
||||
}
|
||||
},
|
||||
"taskItem": {
|
||||
"type": "object",
|
||||
"required": ["item_id", "issue_id", "solution_id", "task_id", "status"],
|
||||
"properties": {
|
||||
"item_id": {
|
||||
"type": "string",
|
||||
"pattern": "^T-[0-9]+$",
|
||||
"description": "Task-level queue item ID (T-1, T-2, ...)"
|
||||
},
|
||||
"issue_id": { "type": "string" },
|
||||
"solution_id": { "type": "string" },
|
||||
"task_id": { "type": "string" },
|
||||
"status": {
|
||||
"type": "string",
|
||||
"enum": ["pending", "ready", "executing", "completed", "failed", "blocked"],
|
||||
"default": "pending"
|
||||
},
|
||||
"execution_order": { "type": "integer" },
|
||||
"execution_group": { "type": "string" },
|
||||
"depends_on": { "type": "array", "items": { "type": "string" } },
|
||||
"semantic_priority": { "type": "number", "minimum": 0, "maximum": 1 },
|
||||
"assigned_executor": { "type": "string", "enum": ["codex", "gemini", "agent"] },
|
||||
"queued_at": { "type": "string", "format": "date-time" },
|
||||
"started_at": { "type": "string", "format": "date-time" },
|
||||
"completed_at": { "type": "string", "format": "date-time" },
|
||||
"result": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"files_modified": { "type": "array", "items": { "type": "string" } },
|
||||
"files_created": { "type": "array", "items": { "type": "string" } },
|
||||
"summary": { "type": "string" },
|
||||
"commit_hash": { "type": "string" }
|
||||
}
|
||||
},
|
||||
"failure_reason": { "type": "string" }
|
||||
}
|
||||
},
|
||||
"conflict": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"type": {
|
||||
"type": "string",
|
||||
"enum": ["file_conflict", "dependency_conflict", "resource_conflict"]
|
||||
},
|
||||
"file": {
|
||||
"type": "string",
|
||||
"description": "Conflicting file path"
|
||||
},
|
||||
"solutions": {
|
||||
"type": "array",
|
||||
"items": { "type": "string" },
|
||||
"description": "Solution IDs involved (for solution-level queues)"
|
||||
},
|
||||
"tasks": {
|
||||
"type": "array",
|
||||
"items": { "type": "string" },
|
||||
"description": "Task IDs involved (for task-level queues)"
|
||||
},
|
||||
"resolution": {
|
||||
"type": "string",
|
||||
"enum": ["sequential", "merge", "manual"]
|
||||
},
|
||||
"resolution_order": {
|
||||
"type": "array",
|
||||
"items": { "type": "string" },
|
||||
"description": "Execution order to resolve conflict"
|
||||
},
|
||||
"rationale": {
|
||||
"type": "string",
|
||||
"description": "Explanation of resolution decision"
|
||||
},
|
||||
"resolved": {
|
||||
"type": "boolean",
|
||||
"default": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"executionGroup": {
|
||||
"type": "object",
|
||||
"required": ["id", "type"],
|
||||
"properties": {
|
||||
"id": {
|
||||
"type": "string",
|
||||
"pattern": "^[PS][0-9]+$",
|
||||
"description": "Group ID (P1, P2 for parallel, S1, S2 for sequential)"
|
||||
},
|
||||
"type": {
|
||||
"type": "string",
|
||||
"enum": ["parallel", "sequential"]
|
||||
},
|
||||
"solutions": {
|
||||
"type": "array",
|
||||
"items": { "type": "string" },
|
||||
"description": "Solution IDs in this group"
|
||||
},
|
||||
"tasks": {
|
||||
"type": "array",
|
||||
"items": { "type": "string" },
|
||||
"description": "Task IDs in this group (legacy)"
|
||||
},
|
||||
"solution_count": {
|
||||
"type": "integer",
|
||||
"description": "Number of solutions in group"
|
||||
},
|
||||
"task_count": {
|
||||
"type": "integer",
|
||||
"description": "Number of tasks in group (legacy)"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
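A record validating against the `conflict` definition above might look like the following. This is a minimal sketch; the solution IDs, file path, and rationale are hypothetical, not taken from a real queue.

```javascript
// Hypothetical conflict record matching the "conflict" definition above.
// IDs and the file path are illustrative only.
const conflict = {
  type: "file_conflict",
  file: "src/auth/middleware.ts",
  solutions: ["SOL-1", "SOL-2"],
  resolution: "sequential",
  resolution_order: ["SOL-1", "SOL-2"],
  rationale: "Both solutions touch the same middleware; run SOL-1 first since SOL-2 depends on its exports.",
  resolved: false
};

// Sanity-check the enum fields the schema constrains
const types = ["file_conflict", "dependency_conflict", "resource_conflict"];
const resolutions = ["sequential", "merge", "manual"];
console.log(types.includes(conflict.type) && resolutions.includes(conflict.resolution)); // true
```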
94
.claude/workflows/cli-templates/schemas/registry-schema.json
Normal file
@@ -0,0 +1,94 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issue Registry Schema",
  "description": "Global registry of all issues and their solutions",
  "type": "object",
  "properties": {
    "issues": {
      "type": "array",
      "description": "List of registered issues",
      "items": {
        "type": "object",
        "required": ["id", "title", "status", "created_at"],
        "properties": {
          "id": {
            "type": "string",
            "description": "Issue ID (e.g., GH-123, TEXT-xxx)"
          },
          "title": {
            "type": "string"
          },
          "status": {
            "type": "string",
            "enum": ["registered", "planning", "planned", "queued", "executing", "completed", "failed", "paused"],
            "default": "registered"
          },
          "priority": {
            "type": "integer",
            "minimum": 1,
            "maximum": 5,
            "default": 3
          },
          "solution_count": {
            "type": "integer",
            "default": 0,
            "description": "Number of candidate solutions"
          },
          "bound_solution_id": {
            "type": "string",
            "description": "ID of the bound solution (null if none bound)"
          },
          "source": {
            "type": "string",
            "enum": ["github", "text", "file"],
            "description": "Source of the issue"
          },
          "source_url": {
            "type": "string",
            "description": "Original source URL (for GitHub issues)"
          },
          "created_at": {
            "type": "string",
            "format": "date-time"
          },
          "updated_at": {
            "type": "string",
            "format": "date-time"
          },
          "planned_at": {
            "type": "string",
            "format": "date-time"
          },
          "queued_at": {
            "type": "string",
            "format": "date-time"
          },
          "completed_at": {
            "type": "string",
            "format": "date-time"
          }
        }
      }
    },
    "_metadata": {
      "type": "object",
      "properties": {
        "version": { "type": "string", "default": "1.0" },
        "total_issues": { "type": "integer" },
        "by_status": {
          "type": "object",
          "properties": {
            "registered": { "type": "integer" },
            "planning": { "type": "integer" },
            "planned": { "type": "integer" },
            "queued": { "type": "integer" },
            "executing": { "type": "integer" },
            "completed": { "type": "integer" },
            "failed": { "type": "integer" }
          }
        },
        "last_updated": { "type": "string", "format": "date-time" }
      }
    }
  }
}
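A minimal registry entry conforming to the issue schema above can be sketched as follows. The issue ID and title are hypothetical; the required-field list comes from the schema's `required` array.

```javascript
// Hypothetical registry entry conforming to the issue schema above.
const issue = {
  id: "GH-123",
  title: "Fix JWT refresh loop",
  status: "registered",
  priority: 3,
  solution_count: 0,
  source: "github",
  created_at: new Date().toISOString()
};

// The schema's required fields for an issue item
const required = ["id", "title", "status", "created_at"];
const missing = required.filter(key => !(key in issue));
console.log(missing.length === 0); // true
```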
165
.claude/workflows/cli-templates/schemas/solution-schema.json
Normal file
@@ -0,0 +1,165 @@
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Issue Solution Schema",
  "description": "Schema for solution registered to an issue",
  "type": "object",
  "required": ["id", "tasks", "is_bound", "created_at"],
  "properties": {
    "id": {
      "type": "string",
      "description": "Unique solution identifier",
      "pattern": "^SOL-[0-9]+$"
    },
    "description": {
      "type": "string",
      "description": "High-level summary of the solution"
    },
    "approach": {
      "type": "string",
      "description": "Technical approach or strategy"
    },
    "tasks": {
      "type": "array",
      "description": "Task breakdown for this solution",
      "items": {
        "type": "object",
        "required": ["id", "title", "scope", "action", "implementation", "acceptance"],
        "properties": {
          "id": {
            "type": "string",
            "pattern": "^T[0-9]+$"
          },
          "title": {
            "type": "string",
            "description": "Action verb + target"
          },
          "scope": {
            "type": "string",
            "description": "Module path or feature area"
          },
          "action": {
            "type": "string",
            "enum": ["Create", "Update", "Implement", "Refactor", "Add", "Delete", "Configure", "Test", "Fix"]
          },
          "description": {
            "type": "string",
            "description": "1-2 sentences describing what to implement"
          },
          "modification_points": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "file": { "type": "string" },
                "target": { "type": "string" },
                "change": { "type": "string" }
              }
            }
          },
          "implementation": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Step-by-step implementation guide"
          },
          "test": {
            "type": "object",
            "description": "Test requirements",
            "properties": {
              "unit": { "type": "array", "items": { "type": "string" } },
              "integration": { "type": "array", "items": { "type": "string" } },
              "commands": { "type": "array", "items": { "type": "string" } },
              "coverage_target": { "type": "number" }
            }
          },
          "regression": {
            "type": "array",
            "items": { "type": "string" },
            "description": "Regression check points"
          },
          "acceptance": {
            "type": "object",
            "description": "Acceptance criteria & verification",
            "required": ["criteria", "verification"],
            "properties": {
              "criteria": { "type": "array", "items": { "type": "string" } },
              "verification": { "type": "array", "items": { "type": "string" } },
              "manual_checks": { "type": "array", "items": { "type": "string" } }
            }
          },
          "commit": {
            "type": "object",
            "description": "Commit specification",
            "properties": {
              "type": { "type": "string", "enum": ["feat", "fix", "refactor", "test", "docs", "chore"] },
              "scope": { "type": "string" },
              "message_template": { "type": "string" },
              "breaking": { "type": "boolean" }
            }
          },
          "depends_on": {
            "type": "array",
            "items": { "type": "string" },
            "default": [],
            "description": "Task IDs this task depends on"
          },
          "estimated_minutes": {
            "type": "integer",
            "description": "Estimated time to complete"
          },
          "status": {
            "type": "string",
            "description": "Task status (optional, for tracking)"
          },
          "priority": {
            "type": "integer",
            "minimum": 1,
            "maximum": 5,
            "default": 3
          }
        }
      }
    },
    "exploration_context": {
      "type": "object",
      "description": "ACE exploration results",
      "properties": {
        "project_structure": { "type": "string" },
        "relevant_files": {
          "type": "array",
          "items": { "type": "string" }
        },
        "patterns": { "type": "string" },
        "integration_points": { "type": "string" }
      }
    },
    "analysis": {
      "type": "object",
      "description": "Solution risk assessment",
      "properties": {
        "risk": { "type": "string", "enum": ["low", "medium", "high"] },
        "impact": { "type": "string", "enum": ["low", "medium", "high"] },
        "complexity": { "type": "string", "enum": ["low", "medium", "high"] }
      }
    },
    "score": {
      "type": "number",
      "minimum": 0,
      "maximum": 1,
      "description": "Solution quality score (0.0-1.0)"
    },
    "is_bound": {
      "type": "boolean",
      "default": false,
      "description": "Whether this solution is bound to the issue"
    },
    "created_at": {
      "type": "string",
      "format": "date-time"
    },
    "bound_at": {
      "type": "string",
      "format": "date-time",
      "description": "When this solution was bound to the issue"
    }
  }
}
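A minimal solution object satisfying the schema above, with one task, can be sketched as follows. The IDs, scope, and step text are hypothetical; the `^SOL-[0-9]+$` and `^T[0-9]+$` patterns are the ones the schema enforces.

```javascript
// Hypothetical minimal solution conforming to the schema above.
const solution = {
  id: "SOL-1",
  description: "Add refresh-token rotation",
  tasks: [
    {
      id: "T1",
      title: "Implement token rotation",
      scope: "src/auth",
      action: "Implement",
      implementation: ["Add rotation helper", "Wire into refresh endpoint"],
      acceptance: {
        criteria: ["Refresh issues a new token pair"],
        verification: ["npm test -- auth"]
      }
    }
  ],
  is_bound: false,
  created_at: new Date().toISOString()
};

// ID patterns the schema enforces
console.log(/^SOL-[0-9]+$/.test(solution.id));                   // true
console.log(solution.tasks.every(t => /^T[0-9]+$/.test(t.id)));  // true
```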
@@ -65,13 +65,13 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/[mode]-protocol.md) $(c
ccw cli -p "<PROMPT>" --tool <gemini|qwen|codex> --mode <analysis|write>
```

**⚠️ CRITICAL**: `--mode` parameter is **MANDATORY** for all CLI executions. No defaults are assumed.
**Note**: `--mode` defaults to `analysis` if not specified. Explicitly specify `--mode write` for file operations.

### Core Principles

- **Use tools early and often** - Tools are faster and more thorough
- **Unified CLI** - Always use `ccw cli -p` for consistent parameter handling
- **Mode is MANDATORY** - ALWAYS explicitly specify `--mode analysis|write` (no implicit defaults)
- **Default mode is analysis** - Omit `--mode` for read-only operations, explicitly use `--mode write` for file modifications
- **One template required** - ALWAYS reference exactly ONE template in RULES (use universal fallback if no specific match)
- **Write protection** - Require EXPLICIT `--mode write` for file operations
- **Use double quotes for shell expansion** - Always wrap prompts in double quotes `"..."` to enable `$(cat ...)` command substitution; NEVER use single quotes or escape characters (`\$`, `\"`, `\'`)
@@ -183,7 +183,6 @@ ASSISTANT RESPONSE: [Previous output]

**Tool Behavior**: Codex uses native `codex resume`; Gemini/Qwen assembles context as single prompt

---

## Prompt Template

@@ -362,10 +361,6 @@ ccw cli -p "RULES: \$(cat ~/.claude/workflows/cli-templates/protocols/analysis-p
  - Description: Additional directories (comma-separated)
  - Default: none

- **`--timeout <ms>`**
  - Description: Timeout in milliseconds
  - Default: 300000

- **`--resume [id]`**
  - Description: Resume previous session
  - Default: -
@@ -430,7 +425,7 @@ MODE: analysis
CONTEXT: @src/auth/**/* @src/middleware/auth.ts | Memory: Using bcrypt for passwords, JWT for sessions
EXPECTED: Security report with: severity matrix, file:line references, CVE mappings where applicable, remediation code snippets prioritized by risk
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/analysis/03-assess-security-risks.txt) | Focus on authentication | Ignore test files
" --tool gemini --cd src/auth --timeout 600000
" --tool gemini --mode analysis --cd src/auth
```

**Implementation Task** (New Feature):
@@ -442,7 +437,7 @@ MODE: write
CONTEXT: @src/middleware/**/* @src/config/**/* | Memory: Using Express.js, Redis already configured, existing middleware pattern in auth.ts
EXPECTED: Production-ready code with: TypeScript types, unit tests, integration test, configuration example, migration guide
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/development/02-implement-feature.txt) | Follow existing middleware patterns | No breaking changes
" --tool codex --mode write --timeout 1800000
" --tool codex --mode write
```

**Bug Fix Task**:
@@ -454,7 +449,7 @@ MODE: analysis
CONTEXT: @src/websocket/**/* @src/services/connection-manager.ts | Memory: Using ws library, ~5000 concurrent connections in production
EXPECTED: Root cause analysis with: memory profile, leak source (file:line), fix recommendation with code, verification steps
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/analysis-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/analysis/01-diagnose-bug-root-cause.txt) | Focus on resource cleanup
" --tool gemini --cd src --timeout 900000
" --tool gemini --mode analysis --cd src
```

**Refactoring Task**:
@@ -466,30 +461,25 @@ MODE: write
CONTEXT: @src/payments/**/* @src/types/payment.ts | Memory: Currently only Stripe, adding PayPal next sprint, must support future gateways
EXPECTED: Refactored code with: strategy interface, concrete implementations, factory class, updated tests, migration checklist
RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(cat ~/.claude/workflows/cli-templates/prompts/development/02-refactor-codebase.txt) | Preserve all existing behavior | Tests must pass
" --tool gemini --mode write --timeout 1200000
" --tool gemini --mode write
```
---

## Configuration
## ⚙️ Execution Configuration

### Timeout Allocation
### Dynamic Timeout Allocation

**Minimum**: 5 minutes (300000ms)
**Minimum timeout: 5 minutes (300000ms)** - Never set below this threshold.

- **Simple**: 5-10min (300000-600000ms)
  - Examples: Analysis, search
**Timeout Ranges**:
- **Simple** (analysis, search): 5-10min (300000-600000ms)
- **Medium** (refactoring, documentation): 10-20min (600000-1200000ms)
- **Complex** (implementation, migration): 20-60min (1200000-3600000ms)
- **Heavy** (large codebase, multi-file): 60-120min (3600000-7200000ms)

- **Medium**: 10-20min (600000-1200000ms)
  - Examples: Refactoring, documentation

- **Complex**: 20-60min (1200000-3600000ms)
  - Examples: Implementation, migration

- **Heavy**: 60-120min (3600000-7200000ms)
  - Examples: Large codebase, multi-file

**Codex Multiplier**: 3x allocated time (minimum 15min / 900000ms)
**Codex Multiplier**: 3x of allocated time (minimum 15min / 900000ms)

**Auto-detection**: Analyze PURPOSE and TASK fields to determine timeout

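The allocation rules above can be sketched as a small helper. The range table and the Codex multiplier come from the document; the complexity labels as function input are an assumption (in practice they would be auto-detected from PURPOSE/TASK).

```javascript
// Sketch of the timeout rules above. Base values are the documented lower
// bounds; mapping a task to a complexity label is assumed to happen upstream.
const BASE_TIMEOUT_MS = {
  simple: 300000,    // 5min  - analysis, search
  medium: 600000,    // 10min - refactoring, documentation
  complex: 1200000,  // 20min - implementation, migration
  heavy: 3600000     // 60min - large codebase, multi-file
};

function allocateTimeout(complexity, tool) {
  const base = BASE_TIMEOUT_MS[complexity] ?? BASE_TIMEOUT_MS.simple;
  if (tool === "codex") {
    // Codex multiplier: 3x allocated time, minimum 15min (900000ms)
    return Math.max(base * 3, 900000);
  }
  return Math.max(base, 300000); // never below the 5min floor
}

console.log(allocateTimeout("simple", "gemini")); // 300000
console.log(allocateTimeout("simple", "codex"));  // 900000
```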
### Permission Framework

@@ -523,4 +513,3 @@ RULES: $(cat ~/.claude/workflows/cli-templates/protocols/write-protocol.md) $(ca
- [ ] **Tool selected** - `--tool gemini|qwen|codex`
- [ ] **Template applied (REQUIRED)** - Use specific or universal fallback template
- [ ] **Constraints specified** - Scope, requirements
- [ ] **Timeout configured** - Based on complexity

@@ -1,35 +1,27 @@
## MCP Tools Usage
## Context Acquisition (MCP Tools Priority)

### smart_search - Code Search (REQUIRED - HIGHEST PRIORITY)
**For task context gathering and analysis, ALWAYS prefer MCP tools**:

**OVERRIDES**: All other search/discovery rules in other workflow files
1. **mcp__ace-tool__search_context** - HIGHEST PRIORITY for code discovery
   - Semantic search with real-time codebase index
   - Use for: finding implementations, understanding architecture, locating patterns
   - Example: `mcp__ace-tool__search_context(project_root_path="/path", query="authentication logic")`

**When**: ANY code discovery task, including:
- Find code, understand codebase structure, locate implementations
- Explore unknown locations
- Verify file existence before reading
- Pattern-based file discovery
2. **smart_search** - Fallback for structured search
   - Use `smart_search(query="...")` for keyword/regex search
   - Use `smart_search(action="find_files", pattern="*.ts")` for file discovery
   - Supports modes: `auto`, `hybrid`, `exact`, `ripgrep`

**Priority Rule**:
1. **Always use smart_search FIRST** for any code/file discovery
2. Only use Built-in Grep for single-file exact line search (after location confirmed)
3. Only use Built-in Read for known, confirmed file paths
3. **read_file** - Batch file reading
   - Read multiple files in parallel: `read_file(path="file1.ts")`, `read_file(path="file2.ts")`
   - Supports glob patterns: `read_file(path="src/**/*.config.ts")`

**Workflow** (search first, init if needed):
```javascript
// Step 1: Try search directly (works if index exists or uses ripgrep fallback)
smart_search(query="authentication logic")

// Step 2: Only if search warns "No CodexLens index found", then init
smart_search(action="init", path=".") // Creates FTS index only

// Note: For semantic/vector search, use "ccw view" dashboard to create vector index
**Priority Order**:
```
ACE search_context (semantic) → smart_search (structured) → read_file (batch read) → shell commands (fallback)
```

**Modes**: `auto` (intelligent routing), `hybrid` (semantic, needs vector index), `exact` (FTS), `ripgrep` (no index)

---

**NEVER** use shell commands (`cat`, `find`, `grep`) when MCP tools are available.
### read_file - Read File Contents

**When**: Read files found by smart_search
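The fallback chain above can be sketched as a tiny dispatcher. The three steps stand in for the MCP tools named in the priority order; the wrapper function and its argument shape are hypothetical, and only the ordering logic is the point here.

```javascript
// Sketch of the discovery fallback chain above. `tools` holds stand-ins for
// the MCP tools; a real integration would call the actual tool APIs.
async function discover(query, tools) {
  const chain = [
    () => tools.searchContext?.(query),  // 1. ACE search_context (semantic)
    () => tools.smartSearch?.(query),    // 2. smart_search (structured)
    () => tools.readFile?.(query)        // 3. read_file (known paths)
  ];
  for (const step of chain) {
    const result = await step();
    if (result && result.length) return result;
  }
  throw new Error("All MCP tools exhausted; shell commands are a last resort");
}

// Usage with stubbed tools: smartSearch answers when searchContext finds nothing
discover("authentication logic", {
  searchContext: () => [],
  smartSearch: q => [`match for ${q}`]
}).then(r => console.log(r[0])); // match for authentication logic
```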
@@ -21,8 +21,11 @@
- Graceful degradation
- Don't expose sensitive info

## Core Principles

**Incremental Progress**:
- Small, testable changes
- Commit working code frequently
@@ -43,11 +46,63 @@
- Maintain established patterns
- Test integration between subtasks

## System Optimization

**Direct Binary Calls**: Always call binaries directly in `functions.shell`, set `workdir`, avoid shell wrappers (`bash -lc`, `cmd /c`, etc.)

**Text Editing Priority**:
1. Use `apply_patch` tool for all routine text edits
2. Fall back to `sed` for single-line substitutions if unavailable
3. Avoid Python editing scripts unless both fail

**apply_patch invocation**:
```json
{
  "command": ["apply_patch", "*** Begin Patch\n*** Update File: path/to/file\n@@\n- old\n+ new\n*** End Patch\n"],
  "workdir": "<workdir>",
  "justification": "Brief reason"
}
```

**Windows UTF-8 Encoding** (before commands):
```powershell
[Console]::InputEncoding = [Text.UTF8Encoding]::new($false)
[Console]::OutputEncoding = [Text.UTF8Encoding]::new($false)
chcp 65001 > $null
```

## Context Acquisition (MCP Tools Priority)

**For task context gathering and analysis, ALWAYS prefer MCP tools**:

1. **mcp__ace-tool__search_context** - HIGHEST PRIORITY for code discovery
   - Semantic search with real-time codebase index
   - Use for: finding implementations, understanding architecture, locating patterns
   - Example: `mcp__ace-tool__search_context(project_root_path="/path", query="authentication logic")`

2. **smart_search** - Fallback for structured search
   - Use `smart_search(query="...")` for keyword/regex search
   - Use `smart_search(action="find_files", pattern="*.ts")` for file discovery
   - Supports modes: `auto`, `hybrid`, `exact`, `ripgrep`

3. **read_file** - Batch file reading
   - Read multiple files in parallel: `read_file(path="file1.ts")`, `read_file(path="file2.ts")`
   - Supports glob patterns: `read_file(path="src/**/*.config.ts")`

**Priority Order**:
```
ACE search_context (semantic) → smart_search (structured) → read_file (batch read) → shell commands (fallback)
```

**NEVER** use shell commands (`cat`, `find`, `grep`) when MCP tools are available.

## Execution Checklist

**Before**:
- [ ] Understand PURPOSE and TASK clearly
- [ ] Review CONTEXT files, find 3+ patterns
- [ ] Use ACE search_context first, fallback to smart_search for discovery
- [ ] Use read_file to batch read context files, find 3+ patterns
- [ ] Check RULES templates and constraints

**During**:

Binary file not shown.
378
.codex/prompts/compact.md
Normal file
@@ -0,0 +1,378 @@
---
description: Compact current session memory into structured text for session recovery
argument-hint: "[optional: session description]"
---

# Memory Compact Command (/memory:compact)

## 1. Overview

The `memory:compact` command **compresses current session working memory** into structured text optimized for **session recovery**, extracts critical information, and saves it to persistent storage via MCP `core_memory` tool.

**Core Philosophy**:
- **Session Recovery First**: Capture everything needed to resume work seamlessly
- **Minimize Re-exploration**: Include file paths, decisions, and state to avoid redundant analysis
- **Preserve Train of Thought**: Keep notes and hypotheses for complex debugging
- **Actionable State**: Record last action result and known issues

## 2. Parameters

- `"session description"` (Optional): Session description to supplement objective
  - Example: "completed core-memory module"
  - Example: "debugging JWT refresh - suspected memory leak"

## 3. Structured Output Format

```markdown
## Session ID
[WFS-ID if workflow session active, otherwise (none)]

## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]

## Objective
[High-level goal - the "North Star" of this session]

## Execution Plan
[CRITICAL: Embed the LATEST plan in its COMPLETE and DETAILED form]

### Source: [workflow | todo | user-stated | inferred]

<details>
<summary>Full Execution Plan (Click to expand)</summary>

[PRESERVE COMPLETE PLAN VERBATIM - DO NOT SUMMARIZE]
- ALL phases, tasks, subtasks
- ALL file paths (absolute)
- ALL dependencies and prerequisites
- ALL acceptance criteria
- ALL status markers ([x] done, [ ] pending)
- ALL notes and context

Example:
## Phase 1: Setup
- [x] Initialize project structure
  - Created D:\Claude_dms3\src\core\index.ts
  - Added dependencies: lodash, zod
- [ ] Configure TypeScript
  - Update tsconfig.json for strict mode

## Phase 2: Implementation
- [ ] Implement core API
  - Target: D:\Claude_dms3\src\api\handler.ts
  - Dependencies: Phase 1 complete
  - Acceptance: All tests pass

</details>

## Working Files (Modified)
[Absolute paths to actively modified files]
- D:\Claude_dms3\src\file1.ts (role: main implementation)
- D:\Claude_dms3\tests\file1.test.ts (role: unit tests)

## Reference Files (Read-Only)
[Absolute paths to context files - NOT modified but essential for understanding]
- D:\Claude_dms3\.claude\CLAUDE.md (role: project instructions)
- D:\Claude_dms3\src\types\index.ts (role: type definitions)
- D:\Claude_dms3\package.json (role: dependencies)

## Last Action
[Last significant action and its result/status]

## Decisions
- [Decision]: [Reasoning]
- [Decision]: [Reasoning]

## Constraints
- [User-specified limitation or preference]

## Dependencies
- [Added/changed packages or environment requirements]

## Known Issues
- [Deferred bug or edge case]

## Changes Made
- [Completed modification]

## Pending
- [Next step] or (none)

## Notes
[Unstructured thoughts, hypotheses, debugging trails]
```

## 4. Field Definitions

| Field | Purpose | Recovery Value |
|-------|---------|----------------|
| **Session ID** | Workflow session identifier (WFS-*) | Links memory to specific stateful task execution |
| **Project Root** | Absolute path to project directory | Enables correct path resolution in new sessions |
| **Objective** | Ultimate goal of the session | Prevents losing track of broader feature |
| **Execution Plan** | Complete plan from any source (verbatim) | Preserves full planning context, avoids re-planning |
| **Working Files** | Actively modified files (absolute paths) | Immediately identifies where work was happening |
| **Reference Files** | Read-only context files (absolute paths) | Eliminates re-exploration for critical context |
| **Last Action** | Final tool output/status | Immediate state awareness (success/failure) |
| **Decisions** | Architectural choices + reasoning | Prevents re-litigating settled decisions |
| **Constraints** | User-imposed limitations | Maintains personalized coding style |
| **Dependencies** | Package/environment changes | Prevents missing dependency errors |
| **Known Issues** | Deferred bugs/edge cases | Ensures issues aren't forgotten |
| **Changes Made** | Completed modifications | Clear record of what was done |
| **Pending** | Next steps | Immediate action items |
| **Notes** | Hypotheses, debugging trails | Preserves "train of thought" |

## 5. Execution Flow

### Step 1: Analyze Current Session

Extract the following from conversation history:

```javascript
const sessionAnalysis = {
  sessionId: "",    // WFS-* if workflow session active, null otherwise
  projectRoot: "",  // Absolute path: D:\Claude_dms3
  objective: "",    // High-level goal (1-2 sentences)
  executionPlan: {
    source: "workflow" | "todo" | "user-stated" | "inferred",
    content: ""     // Full plan content - ALWAYS preserve COMPLETE and DETAILED form
  },
  workingFiles: [],   // {absolutePath, role} - modified files
  referenceFiles: [], // {absolutePath, role} - read-only context files
  lastAction: "",     // Last significant action + result
  decisions: [],      // {decision, reasoning}
  constraints: [],    // User-specified limitations
  dependencies: [],   // Added/changed packages
  knownIssues: [],    // Deferred bugs
  changesMade: [],    // Completed modifications
  pending: [],        // Next steps
  notes: ""           // Unstructured thoughts
}
```

### Step 2: Generate Structured Text

```javascript
// Helper: Generate execution plan section
const generateExecutionPlan = (plan) => {
  const sourceLabels = {
    'workflow': 'workflow (IMPL_PLAN.md)',
    'todo': 'todo (TodoWrite)',
    'user-stated': 'user-stated',
    'inferred': 'inferred'
  };

  // CRITICAL: Preserve complete plan content verbatim - DO NOT summarize
  return `### Source: ${sourceLabels[plan.source] || plan.source}

<details>
<summary>Full Execution Plan (Click to expand)</summary>

${plan.content}

</details>`;
};

const structuredText = `## Session ID
${sessionAnalysis.sessionId || '(none)'}

## Project Root
${sessionAnalysis.projectRoot}

## Objective
${sessionAnalysis.objective}

## Execution Plan
${generateExecutionPlan(sessionAnalysis.executionPlan)}

## Working Files (Modified)
${sessionAnalysis.workingFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}

## Reference Files (Read-Only)
${sessionAnalysis.referenceFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}

## Last Action
${sessionAnalysis.lastAction}

## Decisions
${sessionAnalysis.decisions.map(d => `- ${d.decision}: ${d.reasoning}`).join('\n') || '(none)'}

## Constraints
${sessionAnalysis.constraints.map(c => `- ${c}`).join('\n') || '(none)'}

## Dependencies
${sessionAnalysis.dependencies.map(d => `- ${d}`).join('\n') || '(none)'}

## Known Issues
${sessionAnalysis.knownIssues.map(i => `- ${i}`).join('\n') || '(none)'}

## Changes Made
${sessionAnalysis.changesMade.map(c => `- ${c}`).join('\n') || '(none)'}

## Pending
${sessionAnalysis.pending.length > 0
  ? sessionAnalysis.pending.map(p => `- ${p}`).join('\n')
  : '(none)'}

## Notes
${sessionAnalysis.notes || '(none)'}`;
```

### Step 3: Import to Core Memory via MCP

Use the MCP `core_memory` tool to save the structured text:

```javascript
mcp__ccw-tools__core_memory({
  operation: "import",
  text: structuredText
})
```

Or via CLI (pipe structured text to import):

```bash
# Write structured text to temp file, then import
echo "$structuredText" | ccw core-memory import

# Or from a file
ccw core-memory import --file /path/to/session-memory.md
```

**Response Format**:
```json
{
  "operation": "import",
  "id": "CMEM-YYYYMMDD-HHMMSS",
  "message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```

### Step 4: Report Recovery ID

After successful import, **clearly display the Recovery ID** to the user:

```
╔════════════════════════════════════════════════════════════════════════════╗
║ ✓ Session Memory Saved                                                     ║
║                                                                            ║
║ Recovery ID: CMEM-YYYYMMDD-HHMMSS                                          ║
║                                                                            ║
║ To restore: "Please import memory <ID>"                                    ║
║ (MCP: core_memory export | CLI: ccw core-memory export --id <ID>)          ║
╚════════════════════════════════════════════════════════════════════════════╝
```

## 6. Quality Checklist

Before generating:
- [ ] Session ID captured if workflow session active (WFS-*)
- [ ] Project Root is absolute path (e.g., D:\Claude_dms3)
- [ ] Objective clearly states the "North Star" goal
- [ ] Execution Plan: COMPLETE plan preserved VERBATIM (no summarization)
- [ ] Plan Source: Clearly identified (workflow | todo | user-stated | inferred)
- [ ] Plan Details: ALL phases, tasks, file paths, dependencies, status markers included
- [ ] All file paths are ABSOLUTE (not relative)
- [ ] Working Files: 3-8 modified files with roles
- [ ] Reference Files: Key context files (CLAUDE.md, types, configs)
- [ ] Last Action captures final state (success/failure)
- [ ] Decisions include reasoning, not just choices
- [ ] Known Issues separates deferred from forgotten bugs
- [ ] Notes preserve debugging hypotheses if any

## 7. Path Resolution Rules

### Project Root Detection
1. Check current working directory from environment
2. Look for project markers: `.git/`, `package.json`, `.claude/`
3. Use the topmost directory containing these markers

|
||||
### Absolute Path Conversion
```javascript
// Convert relative to absolute
const toAbsolutePath = (relativePath, projectRoot) => {
  if (path.isAbsolute(relativePath)) return relativePath;
  return path.join(projectRoot, relativePath);
};

// Example: "src/api/auth.ts" → "D:\Claude_dms3\src\api\auth.ts"
```

### Reference File Categories
| Category | Examples | Priority |
|----------|----------|----------|
| Project Config | `.claude/CLAUDE.md`, `package.json`, `tsconfig.json` | High |
| Type Definitions | `src/types/*.ts`, `*.d.ts` | High |
| Related Modules | Parent/sibling modules with shared interfaces | Medium |
| Test Files | Corresponding test files for modified code | Medium |
| Documentation | `README.md`, `ARCHITECTURE.md` | Low |

## 8. Plan Detection (Priority Order)

### Priority 1: Workflow Session (IMPL_PLAN.md)
```javascript
// Check for active workflow session
const manifest = await mcp__ccw-tools__session_manager({
  operation: "list",
  location: "active"
});

if (manifest.sessions?.length > 0) {
  const session = manifest.sessions[0];
  const plan = await mcp__ccw-tools__session_manager({
    operation: "read",
    session_id: session.id,
    content_type: "plan"
  });
  sessionAnalysis.sessionId = session.id;
  sessionAnalysis.executionPlan.source = "workflow";
  sessionAnalysis.executionPlan.content = plan.content;
}
```

### Priority 2: TodoWrite (Current Session Todos)
```javascript
// Extract from conversation - look for TodoWrite tool calls
// Preserve COMPLETE todo list with all details
const todos = extractTodosFromConversation();
if (todos.length > 0) {
  sessionAnalysis.executionPlan.source = "todo";
  // Format todos with full context - preserve status markers
  sessionAnalysis.executionPlan.content = todos.map(t =>
    `- [${t.status === 'completed' ? 'x' : t.status === 'in_progress' ? '>' : ' '}] ${t.content}`
  ).join('\n');
}
```

### Priority 3: User-Stated Plan
```javascript
// Look for explicit plan statements in user messages:
// - "Here's my plan: 1. ... 2. ... 3. ..."
// - "I want to: first..., then..., finally..."
// - Numbered or bulleted lists describing steps
const userPlan = extractUserStatedPlan();
if (userPlan) {
  sessionAnalysis.executionPlan.source = "user-stated";
  sessionAnalysis.executionPlan.content = userPlan;
}
```

### Priority 4: Inferred Plan
```javascript
// If no explicit plan, infer from:
// - Task description and breakdown discussion
// - Sequence of actions taken
// - Outstanding work mentioned
const inferredPlan = inferPlanFromDiscussion();
if (inferredPlan) {
  sessionAnalysis.executionPlan.source = "inferred";
  sessionAnalysis.executionPlan.content = inferredPlan;
}
```

## 9. Notes

- **Timing**: Execute at task completion or before context switch
- **Frequency**: Once per independent task or milestone
- **Recovery**: New session can immediately continue with full context
- **Knowledge Graph**: Entity relationships auto-extracted for visualization
- **Absolute Paths**: Critical for cross-session recovery on different machines
.codex/prompts/issue-execute.md (new file)
---
description: Execute all solutions from issue queue with git commit after each task
argument-hint: ""
---

# Issue Execute (Codex Version)

## Core Principle

**Serial Execution**: Execute solutions ONE BY ONE from the issue queue via `ccw issue next`. For each solution, complete all tasks sequentially (implement → test → commit per task). Continue autonomously until the queue is empty.

## Execution Flow

```
INIT: Fetch first solution via ccw issue next

WHILE solution exists:
  1. Receive solution JSON from ccw issue next
  2. Execute all tasks in solution.tasks sequentially:
     FOR each task:
       - IMPLEMENT: Follow task.implementation steps
       - TEST: Run task.test commands
       - VERIFY: Check task.acceptance criteria
       - COMMIT: Stage files, commit with task.commit.message_template
  3. Report completion via ccw issue done <item_id>
  4. Fetch next solution via ccw issue next

WHEN queue empty:
  Output final summary
```

## Step 1: Fetch First Solution

Run this command to get your first solution:

```bash
ccw issue next
```

This returns JSON with the full solution definition:
- `item_id`: Solution identifier in queue (e.g., "S-1")
- `issue_id`: Parent issue ID (e.g., "ISS-20251227-001")
- `solution_id`: Solution ID (e.g., "SOL-20251227-001")
- `solution`: Full solution with all tasks
- `execution_hints`: Timing and executor hints

If the response contains `{ "status": "empty" }`, all solutions are complete - skip to the final summary.

## Step 2: Parse Solution Response

Expected solution structure:

```json
{
  "item_id": "S-1",
  "issue_id": "ISS-20251227-001",
  "solution_id": "SOL-20251227-001",
  "status": "pending",
  "solution": {
    "id": "SOL-20251227-001",
    "description": "Description of solution approach",
    "tasks": [
      {
        "id": "T1",
        "title": "Task title",
        "scope": "src/module/",
        "action": "Create|Modify|Fix|Refactor|Add",
        "description": "What to do",
        "modification_points": [
          { "file": "path/to/file.ts", "target": "function name", "change": "description" }
        ],
        "implementation": [
          "Step 1: Do this",
          "Step 2: Do that"
        ],
        "test": {
          "commands": ["npm test -- --filter=xxx"],
          "unit": ["Unit test requirement 1", "Unit test requirement 2"]
        },
        "regression": ["Verify existing tests still pass"],
        "acceptance": {
          "criteria": ["Criterion 1: Must pass", "Criterion 2: Must verify"],
          "verification": ["Run test command", "Manual verification step"]
        },
        "commit": {
          "type": "feat|fix|test|refactor",
          "scope": "module",
          "message_template": "feat(scope): description"
        },
        "depends_on": [],
        "estimated_minutes": 30,
        "priority": 1
      }
    ],
    "exploration_context": {
      "relevant_files": ["path/to/reference.ts"],
      "patterns": "Follow existing pattern in xxx",
      "integration_points": "Used by other modules"
    },
    "analysis": {
      "risk": "low|medium|high",
      "impact": "low|medium|high",
      "complexity": "low|medium|high"
    },
    "score": 0.95,
    "is_bound": true
  },
  "execution_hints": {
    "executor": "codex",
    "estimated_minutes": 180
  }
}
```

## Step 3: Execute Tasks Sequentially

Iterate through the `solution.tasks` array and execute each task:

### Phase A: IMPLEMENT

1. Read all `solution.exploration_context.relevant_files` to understand existing patterns
2. Follow `task.implementation` steps in order
3. Apply changes to `task.modification_points` files
4. Follow `solution.exploration_context.patterns` for code style consistency
5. Run `task.regression` checks if specified to ensure no breakage

**Output format:**
```
## Implementing: [task.title] (Task [N]/[Total])

**Scope**: [task.scope]
**Action**: [task.action]

**Steps**:
1. ✓ [implementation step 1]
2. ✓ [implementation step 2]
...

**Files Modified**:
- path/to/file1.ts
- path/to/file2.ts
```

### Phase B: TEST

1. Run all commands in `task.test.commands`
2. Verify unit tests pass (`task.test.unit`)
3. Run integration tests if specified (`task.test.integration`)

**If tests fail**: Fix the code and re-run. Do NOT proceed until tests pass.

**Output format:**
```
## Testing: [task.title]

**Test Results**:
- [x] Unit tests: PASSED
- [x] Integration tests: PASSED (or N/A)
```

### Phase C: VERIFY

Check that all `task.acceptance.criteria` are met using `task.acceptance.verification` steps:

```
## Verifying: [task.title]

**Acceptance Criteria**:
- [x] Criterion 1: Verified
- [x] Criterion 2: Verified
...

**Verification Steps**:
- [x] Run test command
- [x] Manual verification step

All criteria met: YES
```

**If any criterion fails**: Go back to the IMPLEMENT phase and fix.

### Phase D: COMMIT

After all phases pass, commit the changes for this task:

```bash
# Stage all modified files
git add path/to/file1.ts path/to/file2.ts ...

# Commit with task message template
git commit -m "$(cat <<'EOF'
[task.commit.message_template]

Solution-ID: [solution_id]
Issue-ID: [issue_id]
Task-ID: [task.id]
EOF
)"
```

**Output format:**
```
## Committed: [task.title]

**Commit**: [commit hash]
**Message**: [commit message]
**Files**: N files changed
```

### Repeat for Next Task

Continue to the next task in the `solution.tasks` array until all tasks are complete.

## Step 4: Report Completion

After ALL tasks in the solution are complete, report to the queue system:

```bash
ccw issue done <item_id> --result '{
  "files_modified": ["path1", "path2"],
  "tests_passed": true,
  "acceptance_passed": true,
  "committed": true,
  "commits": [
    { "task_id": "T1", "hash": "abc123" },
    { "task_id": "T2", "hash": "def456" }
  ],
  "summary": "[What was accomplished]"
}'
```

**If a solution failed and cannot be fixed:**

```bash
ccw issue done <item_id> --fail --reason "Task [task.id] failed: [details]"
```

## Step 5: Continue to Next Solution

Immediately fetch the next solution:

```bash
ccw issue next
```

**Output progress:**
```
✓ [N/M] Completed: [item_id] - [solution.approach]
→ Fetching next solution...
```

**DO NOT STOP.** Return to Step 2 and continue until the queue is empty.

## Final Summary

When `ccw issue next` returns `{ "status": "empty" }`:

```markdown
## Issue Queue Execution Complete

**Total Solutions Executed**: N
**Total Tasks Executed**: M

**All Commits**:
| # | Solution | Task | Commit |
|---|----------|------|--------|
| 1 | S-1 | T1 | abc123 |
| 2 | S-1 | T2 | def456 |
| 3 | S-2 | T1 | ghi789 |

**Files Modified**:
- path/to/file1.ts
- path/to/file2.ts

**Summary**:
[Overall what was accomplished]
```

## Execution Rules

1. **Never stop mid-queue** - Continue until the queue is empty
2. **One solution at a time** - Fully complete (all tasks + report) before moving on
3. **Sequential within solution** - Complete each task (including commit) before the next
4. **Tests MUST pass** - Do not proceed to commit if tests fail
5. **Commit after each task** - Each task gets its own commit
6. **Self-verify** - All acceptance criteria must pass before commit
7. **Report accurately** - Use `ccw issue done` after each solution
8. **Handle failures gracefully** - If a solution fails, report via `ccw issue done --fail` and continue to the next

## Error Handling

| Situation | Action |
|-----------|--------|
| `ccw issue next` returns empty | All done - output final summary |
| Tests fail | Fix code, re-run tests |
| Verification fails | Go back to implement phase |
| Git commit fails | Check staging, retry commit |
| `ccw issue done` fails | Log error, continue to next solution |
| Unrecoverable error | Call `ccw issue done --fail`, continue to next |

## CLI Command Reference

| Command | Purpose |
|---------|---------|
| `ccw issue next` | Fetch next solution from queue |
| `ccw issue done <id>` | Mark solution complete with result |
| `ccw issue done <id> --fail` | Mark solution failed with reason |

## Start Execution

Begin by running:

```bash
ccw issue next
```

Then follow the solution lifecycle for each solution until the queue is empty.
.codex/prompts/issue-plan.md (new file)
---
description: Plan issue(s) into bound solutions (writes solutions JSONL via ccw issue bind)
argument-hint: "<issue-id>[,<issue-id>,...] [--all-pending] [--batch-size 3]"
---

# Issue Plan (Codex Version)

## Goal

Create executable solution(s) for issue(s) and bind the selected solution to each issue using `ccw issue bind`.

This workflow is **planning + registration** (no implementation): it explores the codebase just enough to produce a high-quality task breakdown that can be executed later (e.g., by `issue-execute.md`).

## Inputs

- **Explicit issues**: comma-separated IDs, e.g. `ISS-123,ISS-124`
- **All pending**: `--all-pending` → plan all issues in `registered` status
- **Batch size**: `--batch-size N` (default `3`) → max issues per batch

## Output Requirements

For each issue:
- Register at least one solution and bind one solution to the issue (updates `.workflow/issues/issues.jsonl` and appends to `.workflow/issues/solutions/{issue-id}.jsonl`).
- Ensure tasks conform to `.claude/workflows/cli-templates/schemas/solution-schema.json`.
- Each task includes quantified `acceptance.criteria` and concrete `acceptance.verification`.

Return a final summary JSON:
```json
{
  "bound": [{ "issue_id": "...", "solution_id": "...", "task_count": 0 }],
  "pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "task_count": 0, "description": "..." }] }],
  "conflicts": [{ "file": "...", "issues": ["..."] }]
}
```

## Workflow

### Step 1: Resolve issue list

- If `--all-pending`:
  - Run `ccw issue list --status registered --json` and plan all returned issues.
- Else:
  - Parse IDs from user input (split by `,`), and ensure each issue exists:
    - `ccw issue init <issue-id> --title "Issue <issue-id>"` (safe if it already exists)

### Step 2: Load issue details

For each issue ID:
- `ccw issue status <issue-id> --json`
- Extract the issue title/context/labels and any discovery hints (affected files, snippets, etc. if present).

### Step 3: Minimal exploration (evidence-based)

- If the issue context names specific files or symbols: open them first.
- Otherwise:
  - Use `rg` to locate relevant code paths by keywords from the title/context.
  - Read 3+ similar patterns before proposing refactors or API changes.

### Step 4: Draft solutions and tasks (schema-driven)

Default to **one** solution per issue unless there are genuinely different approaches.

Task rules (from schema):
- `id`: `T1`, `T2`, ...
- `action`: one of `Create|Update|Implement|Refactor|Add|Delete|Configure|Test|Fix`
- `implementation`: step-by-step, executable instructions
- `test.commands`: include at least one command per task when feasible
- `acceptance.criteria`: testable statements
- `acceptance.verification`: concrete steps/commands mapping to criteria
- Prefer small, independently testable tasks; encode dependencies in `depends_on`.
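For illustration, a minimal task that satisfies these rules might look like this (all values are hypothetical):

```json
{
  "id": "T1",
  "title": "Add input validation to login handler",
  "action": "Update",
  "implementation": [
    "Step 1: Add a schema check at the top of handleLogin()",
    "Step 2: Return 400 with error details on failure"
  ],
  "test": { "commands": ["npm test -- --filter=login"] },
  "acceptance": {
    "criteria": ["Invalid payloads are rejected with HTTP 400"],
    "verification": ["Run npm test -- --filter=login"]
  },
  "depends_on": []
}
```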
### Step 5: Register & bind solutions via CLI

Create an import JSON file per solution (NOT JSONL), then bind it:

1. Write a file (example path):
   - `.workflow/issues/solutions/_imports/<issue-id>-<timestamp>.json`
2. File contents shape (minimum):
   ```json
   {
     "description": "High-level summary",
     "approach": "Technical approach",
     "tasks": []
   }
   ```
3. Register+bind in one step:
   - `ccw issue bind <issue-id> --solution <import-file>`

If you intentionally generated multiple solutions for the same issue:
- Register each via `ccw issue bind <issue-id> <solution-id> --solution <import-file>` (do NOT bind yet).
- Present the alternatives in `pending_selection` and stop for user choice.
- Bind the chosen solution with: `ccw issue bind <issue-id> <solution-id>`.

### Step 6: Detect cross-issue file conflicts (best-effort)

Across the issues planned in this run:
- Build a set of touched files from each solution's `modification_points.file` (and/or task `scope` when explicit files are missing).
- If the same file appears in multiple issues, add it to `conflicts` with all involved issue IDs.
- Recommend a safe execution order (sequential) when conflicts exist.
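The scan can be sketched as follows (assuming the planned solutions are already loaded into memory; `detectConflicts` is an illustrative helper, not a ccw API):

```javascript
// plans: [{ issueId, tasks: [{ modification_points: [{ file }] }] }]
const detectConflicts = (plans) => {
  const byFile = new Map();
  for (const { issueId, tasks } of plans) {
    for (const task of tasks) {
      for (const { file } of task.modification_points ?? []) {
        if (!byFile.has(file)) byFile.set(file, new Set());
        byFile.get(file).add(issueId);
      }
    }
  }
  // A conflict is any file touched by more than one issue
  return [...byFile]
    .filter(([, issues]) => issues.size > 1)
    .map(([file, issues]) => ({ file, issues: [...issues] }));
};
```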
## Done Criteria

- A bound solution exists for each issue unless explicitly deferred for user selection.
- All tasks validate against the solution schema fields (especially acceptance criteria + verification).
- The final summary JSON matches the required shape.
.codex/prompts/issue-queue.md (new file)
---
description: Form execution queue from bound solutions (orders solutions, detects conflicts, assigns groups)
argument-hint: "[--issue <id>]"
---

# Issue Queue (Codex Version)

## Goal

Create an ordered execution queue from all bound solutions. Analyze inter-solution file conflicts, calculate semantic priorities, and assign parallel/sequential execution groups.

This workflow is **ordering only** (no execution): it reads bound solutions, detects conflicts, and produces a queue file that `issue-execute.md` can consume.

## Inputs

- **All planned**: Default behavior → queue all issues with `planned` status and bound solutions
- **Specific issue**: `--issue <id>` → queue only that issue's solution

## Output Requirements

**Generate Files (EXACTLY 2):**
1. `.workflow/issues/queues/{queue-id}.json` - Full queue with solutions, conflicts, groups
2. `.workflow/issues/queues/index.json` - Update with new queue entry

**Return Summary:**
```json
{
  "queue_id": "QUE-YYYYMMDD-HHMMSS",
  "total_solutions": 3,
  "total_tasks": 12,
  "execution_groups": [{ "id": "P1", "type": "parallel", "count": 2 }],
  "conflicts_resolved": 1,
  "issues_queued": ["ISS-xxx", "ISS-yyy"]
}
```

## Workflow

### Step 1: Generate Queue ID

Generate the queue ID ONCE at start and reuse it throughout:

```bash
# Format: QUE-YYYYMMDD-HHMMSS (UTC)
QUEUE_ID="QUE-$(date -u +%Y%m%d-%H%M%S)"
```

### Step 2: Load Planned Issues

Get all issues with bound solutions:

```bash
ccw issue list --status planned --json
```

For each issue in the result:
- Extract `id`, `bound_solution_id`, `priority`
- Read solutions from `.workflow/issues/solutions/{issue-id}.jsonl`
- Find the bound solution by matching `solution.id === bound_solution_id`
- Collect `files_touched` from all tasks' `modification_points.file`

Build the solution list:
```json
[
  {
    "issue_id": "ISS-xxx",
    "solution_id": "SOL-xxx",
    "task_count": 3,
    "files_touched": ["src/auth.ts", "src/utils.ts"],
    "priority": "medium"
  }
]
```
### Step 3: Detect File Conflicts

Build a file → solutions mapping:

```javascript
fileModifications = {
  "src/auth.ts": ["SOL-001", "SOL-003"],
  "src/api.ts": ["SOL-002"]
}
```

A conflict exists when a file maps to multiple solutions. For each conflict:
- Record the file and the involved solutions
- Resolve it in Step 4

### Step 4: Resolve Conflicts & Build DAG

**Resolution Rules (in priority order):**
1. Higher issue priority first: `critical > high > medium > low`
2. Foundation solutions first: fewer dependencies
3. More tasks = higher priority: larger impact

For each file conflict:
- Apply the resolution rules to determine order
- Add a dependency edge: the later solution `depends_on` the earlier solution
- Record the rationale

**Semantic Priority Formula:**
```
Base: critical=0.9, high=0.7, medium=0.5, low=0.3
Boost: task_count>=5 → +0.1, task_count>=3 → +0.05
Final: clamp(base + boost, 0.0, 1.0)
```
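The formula maps directly to code (a sketch; unknown priority strings fall back to the `medium` base):

```javascript
// Semantic priority: base from issue priority, boost from task count,
// clamped to [0.0, 1.0].
const semanticPriority = (priority, taskCount) => {
  const base = { critical: 0.9, high: 0.7, medium: 0.5, low: 0.3 }[priority] ?? 0.5;
  const boost = taskCount >= 5 ? 0.1 : taskCount >= 3 ? 0.05 : 0;
  return Math.min(1.0, Math.max(0.0, base + boost));
};
```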
### Step 5: Assign Execution Groups

- **Parallel (P1, P2, ...)**: Solutions with NO file overlaps between them
- **Sequential (S1, S2, ...)**: Solutions that share files must run in order

Group assignment:
1. Start with all solutions in a potential parallel group
2. For each file conflict, move the later solution to a sequential group
3. Assign group IDs: P1 for the first parallel batch, S2 for the first sequential group, etc.
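A simplified sketch of the assignment (one parallel batch plus one sequential group; a real queue may need more groups when conflict chains are longer):

```javascript
// solutionIds are ordered by execution_order; conflictPairs holds
// [earlier, later] item_id pairs that touch the same file.
// The later item in each conflict is demoted to the sequential group.
const assignGroups = (solutionIds, conflictPairs) => {
  const sequential = new Set(conflictPairs.map(([, later]) => later));
  return solutionIds.map(id => ({
    item_id: id,
    execution_group: sequential.has(id) ? 'S2' : 'P1'
  }));
};
```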
### Step 6: Generate Queue Files

**Queue file structure** (`.workflow/issues/queues/{QUEUE_ID}.json`):

```json
{
  "id": "QUE-20251228-120000",
  "status": "active",
  "issue_ids": ["ISS-001", "ISS-002"],
  "solutions": [
    {
      "item_id": "S-1",
      "issue_id": "ISS-001",
      "solution_id": "SOL-001",
      "status": "pending",
      "execution_order": 1,
      "execution_group": "P1",
      "depends_on": [],
      "semantic_priority": 0.8,
      "assigned_executor": "codex",
      "files_touched": ["src/auth.ts"],
      "task_count": 3
    }
  ],
  "conflicts": [
    {
      "type": "file_conflict",
      "file": "src/auth.ts",
      "solutions": ["S-1", "S-3"],
      "resolution": "sequential",
      "resolution_order": ["S-1", "S-3"],
      "rationale": "S-1 creates auth module, S-3 extends it"
    }
  ],
  "execution_groups": [
    { "id": "P1", "type": "parallel", "solutions": ["S-1", "S-2"], "solution_count": 2 },
    { "id": "S2", "type": "sequential", "solutions": ["S-3"], "solution_count": 1 }
  ]
}
```

**Update index** (`.workflow/issues/queues/index.json`):

```json
{
  "active_queue_id": "QUE-20251228-120000",
  "queues": [
    {
      "id": "QUE-20251228-120000",
      "status": "active",
      "issue_ids": ["ISS-001", "ISS-002"],
      "total_solutions": 3,
      "completed_solutions": 0,
      "created_at": "2025-12-28T12:00:00Z"
    }
  ]
}
```

### Step 7: Update Issue Statuses

For each queued issue, update its status to `queued`:

```bash
ccw issue update <issue-id> --status queued
```

## Queue Item ID Format

- Solution items: `S-1`, `S-2`, `S-3`, ...
- Sequential numbering starting from 1

## Done Criteria

- [ ] Exactly 2 files generated: queue JSON + index update
- [ ] Queue has a valid DAG (no circular dependencies)
- [ ] All file conflicts resolved with rationale
- [ ] Semantic priority calculated for each solution (0.0-1.0)
- [ ] Execution groups assigned (P* for parallel, S* for sequential)
- [ ] Issue statuses updated to `queued`
- [ ] Summary JSON returned with correct shape

## Validation Rules

1. **No cycles**: If resolution creates a cycle, abort and report
2. **Parallel safety**: Solutions in the same P* group must have NO file overlaps
3. **Sequential order**: Solutions in an S* group must be in correct dependency order
4. **Single queue ID**: Use the same queue ID throughout (generated in Step 1)

## Error Handling

| Situation | Action |
|-----------|--------|
| No planned issues | Return empty queue summary |
| Circular dependency detected | Abort, report cycle details |
| Missing solution file | Skip issue, log warning |
| Index file missing | Create new index |

## Start Execution

Begin by listing planned issues:

```bash
ccw issue list --status planned --json
```

Then follow the workflow to generate the queue.
# Gemini Code Guidelines

## Code Quality Standards

### Code Quality
- Follow project's existing patterns
- Match import style and naming conventions
- Single responsibility per function/class
- DRY (Don't Repeat Yourself)
- YAGNI (You Aren't Gonna Need It)

### Testing
- Test all public functions
- Test edge cases and error conditions
- Mock external dependencies
- Target 80%+ coverage

### Error Handling
- Proper try-catch blocks
- Clear error messages
- Graceful degradation
- Don't expose sensitive info

## Core Principles

**Thoroughness**:
- Analyze ALL CONTEXT files completely
- Check cross-file patterns and dependencies
- Identify edge cases and quantify metrics

**Incremental Progress**:
- Small, testable changes
- Commit working code frequently
- Build on previous work (subtasks)

**Evidence-Based**:
- Quote relevant code with `file:line` references
- Link related patterns across files
- Support all claims with concrete examples
- Study 3+ similar patterns before implementing
- Match project style exactly
- Verify with existing code

**Actionable**:
- Clear, specific recommendations (not vague)
- Prioritized by impact
- Incremental changes over big rewrites

**Pragmatic**:
- Boring solutions over clever code
- Simple over complex
- Adapt to project reality

**Philosophy**:
- **Simple over complex** - Avoid over-engineering
- **Clear over clever** - Prefer obvious solutions
- **Learn from existing** - Reference project patterns
- **Pragmatic over dogmatic** - Adapt to project reality
- **Incremental progress** - Small, testable changes

**Context Continuity** (Multi-Task):
- Leverage resume for consistency
- Maintain established patterns
- Test integration between subtasks

## Execution Checklist

**Before**:
- [ ] Understand PURPOSE and TASK clearly
- [ ] Review CONTEXT files, find 3+ patterns
- [ ] Check RULES templates and constraints

**During**:
- [ ] Follow existing patterns exactly
- [ ] Write tests alongside code
- [ ] Run tests after every change
- [ ] Commit working code incrementally

**After**:
- [ ] All tests pass
- [ ] Coverage meets target
- [ ] Build succeeds
- [ ] All EXPECTED deliverables met
.gitignore
settings.json
*.mcp.json
.mcp.json
.ace-tool/
.mcp.json (deleted)
{
  "mcpServers": {
    "chrome-devtools": {
      "type": "stdio",
      "command": "npx",
      "args": [
        "chrome-devtools-mcp@latest"
      ],
      "env": {}
    },
    "ccw-tools": {
      "command": "npx",
      "args": [
        "-y",
        "ccw-mcp"
      ],
      "env": {
        "CCW_ENABLED_TOOLS": "write_file,edit_file,smart_search,core_memory"
      }
    }
  }
}
Install-Claude.ps1
install-remote.ps1
*.mcp.json
# ccw internal files
ccw/package.json
ccw/node_modules/
ccw/*.md
AGENTS.md (new file)
|
||||
# Codex Agent Execution Protocol
|
||||
|
||||
## Overview
|
||||
|
||||
**Role**: Autonomous development, implementation, and testing specialist
|
||||
|
||||
|
||||
## Prompt Structure
|
||||
|
||||
All prompts follow this 6-field format:
|
||||
|
||||
```
|
||||
PURPOSE: [development goal]
|
||||
TASK: [specific implementation task]
|
||||
MODE: [auto|write]
|
||||
CONTEXT: [file patterns]
|
||||
EXPECTED: [deliverables]
|
||||
RULES: [templates | additional constraints]
|
||||
```
|
||||
|
||||
**Subtask indicator**: `Subtask N of M: [title]` or `CONTINUE TO NEXT SUBTASK`
|
||||
|
||||
## MODE Definitions

### MODE: auto (default)

**Permissions**:
- Full file operations (create/modify/delete)
- Run tests and builds
- Commit code incrementally

**Execute**:
1. Parse PURPOSE and TASK
2. Analyze CONTEXT files - find 3+ similar patterns
3. Plan implementation following RULES
4. Generate code with tests
5. Run tests continuously
6. Commit working code incrementally
7. Validate EXPECTED deliverables
8. Report results (with context for the next subtask if multi-task)

**Constraint**: Must test every change

### MODE: write

**Permissions**:
- Focused file operations
- Create/modify specific files
- Run tests for validation

**Execute**:
1. Analyze CONTEXT files
2. Make targeted changes
3. Validate that tests pass
4. Report file changes

## Execution Protocol

### Core Requirements

**ALWAYS**:
- Parse all 6 fields (PURPOSE, TASK, MODE, CONTEXT, EXPECTED, RULES)
- Study CONTEXT files - find 3+ similar patterns before implementing
- Apply RULES (templates + constraints) exactly
- Test continuously after every change
- Commit incrementally with working code
- Match project style and patterns exactly
- List all created/modified files at the beginning of the output
- Use direct binary calls (avoid shell wrappers)
- Prefer apply_patch for text edits
- Configure Windows UTF-8 encoding for Chinese-language support

**NEVER**:
- Make assumptions without code verification
- Ignore existing patterns
- Skip tests
- Use clever tricks over boring solutions
- Over-engineer solutions
- Break existing code or backward compatibility
- Exceed 3 failed attempts without stopping

### RULES Processing

- Parse the RULES field to extract template content and constraints
- Recognize `|` as the separator: `template content | additional constraints`
- Apply ALL template guidelines as mandatory
- Apply ALL additional constraints as mandatory
- Treat rule violations as task failures

### Multi-Task Execution (Resume Pattern)

**First subtask**: Standard execution flow above

**Subsequent subtasks** (via `resume --last`):
- Recall context from previous subtasks
- Build on previous work (don't repeat it)
- Maintain consistency with established patterns
- Focus on the current subtask scope only
- Test integration with previous work
- Report context for the next subtask

## System Optimization

**Direct Binary Calls**: Always call binaries directly in `functions.shell`, set `workdir`, and avoid shell wrappers (`bash -lc`, `cmd /c`, etc.)

**Text Editing Priority**:
1. Use the `apply_patch` tool for all routine text edits
2. Fall back to `sed` for single-line substitutions if it is unavailable
3. Avoid Python editing scripts unless both fail
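A minimal sketch of the `sed` fallback in step 2 (the file and substitution are hypothetical; GNU `sed` syntax is shown, while BSD `sed` requires `-i ''`):

```shell
# Create a throwaway file to edit (illustration only)
printf 'const retries = 2;\n' > /tmp/ccw_sed_demo.js

# Single-line in-place substitution, the sed fallback for apply_patch
sed -i 's/retries = 2/retries = 3/' /tmp/ccw_sed_demo.js

cat /tmp/ccw_sed_demo.js
```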
**apply_patch invocation**:
```json
{
  "command": ["apply_patch", "*** Begin Patch\n*** Update File: path/to/file\n@@\n- old\n+ new\n*** End Patch\n"],
  "workdir": "<workdir>",
  "justification": "Brief reason"
}
```
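The escaped patch string in `command` expands to patch text of this shape:

```
*** Begin Patch
*** Update File: path/to/file
@@
- old
+ new
*** End Patch
```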
**Windows UTF-8 Encoding** (run before commands):
```powershell
[Console]::InputEncoding = [Text.UTF8Encoding]::new($false)
[Console]::OutputEncoding = [Text.UTF8Encoding]::new($false)
chcp 65001 > $null
```
## Output Standards

### Format Priority

**If the template defines an output format** → Follow the template format EXACTLY (all sections mandatory)

**If the template has no format** → Use the default format below based on task type

### Default Output Formats

#### Single Task Implementation

```markdown
# Implementation: [TASK Title]

## Changes
- Created: `path/to/file1.ext` (X lines)
- Modified: `path/to/file2.ext` (+Y/-Z lines)
- Deleted: `path/to/file3.ext`

## Summary
[2-3 sentence overview of what was implemented]

## Key Decisions
1. [Decision] - Rationale and reference to similar pattern
2. [Decision] - path/to/reference:line

## Implementation Details
[Evidence-based description with code references]

## Testing
- Tests written: X new tests
- Tests passing: Y/Z tests
- Coverage: N%

## Validation
✅ Tests: X passing
✅ Coverage: Y%
✅ Build: Success

## Next Steps
[Recommendations or future improvements]
```

#### Multi-Task Execution (with Resume)

**First Subtask**:
```markdown
# Subtask 1/N: [TASK Title]

## Changes
[List of file changes]

## Implementation
[Details with code references]

## Testing
✅ Tests: X passing
✅ Integration: Compatible with existing code

## Context for Next Subtask
- Key decisions: [established patterns]
- Files created: [paths and purposes]
- Integration points: [where the next subtask should connect]
```

**Subsequent Subtasks**:
```markdown
# Subtask N/M: [TASK Title]

## Changes
[List of file changes]

## Integration Notes
✅ Compatible with subtask N-1
✅ Maintains established patterns
✅ Tests pass with previous work

## Implementation
[Details with code references]

## Testing
✅ Tests: X passing
✅ Total coverage: Y%

## Context for Next Subtask
[If not the final subtask, provide context for continuation]
```

#### Partial Completion

```markdown
# Task Status: Partially Completed

## Completed
- [What worked successfully]
- Files: `path/to/completed.ext`

## Blocked
- **Issue**: [What failed]
- **Root Cause**: [Analysis of failure]
- **Attempted**: [Solutions tried - attempt X of 3]

## Required
[What's needed to proceed]

## Recommendation
[Suggested next steps or alternative approaches]
```
### Code References

**Format**: `path/to/file:line_number`

**Example**: `src/auth/jwt.ts:45` - Implemented token validation following the pattern from `src/auth/session.ts:78`

### Related Files Section

**Always include at the beginning of the output** - list ALL files analyzed, created, or modified:

```markdown
## Related Files
- `path/to/file1.ext` - [Role in implementation]
- `path/to/file2.ext` - [Reference pattern used]
- `path/to/file3.ext` - [Modified for X reason]
```

## Error Handling

### Three-Attempt Rule

**On the 3rd failed attempt**:
1. Stop execution
2. Report what was attempted, what failed, and the root cause
3. Request guidance or suggest alternatives

### Recovery Strategies

| Error Type | Response |
|------------|----------|
| **Syntax/Type** | Review errors → Fix → Re-run tests → Validate build |
| **Runtime** | Analyze stack trace → Add error handling → Test error cases |
| **Test Failure** | Debug in isolation → Review setup → Fix implementation/test |
| **Build Failure** | Check messages → Fix incrementally → Validate each fix |

## Quality Standards

### Code Quality
- Follow the project's existing patterns
- Match import style and naming conventions
- Single responsibility per function/class
- DRY (Don't Repeat Yourself)
- YAGNI (You Aren't Gonna Need It)

### Testing
- Test all public functions
- Test edge cases and error conditions
- Mock external dependencies
- Target 80%+ coverage

### Error Handling
- Proper try-catch blocks
- Clear error messages
- Graceful degradation
- Don't expose sensitive info

## Core Principles

**Incremental Progress**:
- Small, testable changes
- Commit working code frequently
- Build on previous work (subtasks)

**Evidence-Based**:
- Study 3+ similar patterns before implementing
- Match the project style exactly
- Verify against existing code

**Pragmatic**:
- Boring solutions over clever code
- Simple over complex
- Adapt to project reality

**Context Continuity** (Multi-Task):
- Leverage resume for consistency
- Maintain established patterns
- Test integration between subtasks

## Execution Checklist

**Before**:
- [ ] Understand PURPOSE and TASK clearly
- [ ] Review CONTEXT files, find 3+ patterns
- [ ] Check RULES templates and constraints

**During**:
- [ ] Follow existing patterns exactly
- [ ] Write tests alongside code
- [ ] Run tests after every change
- [ ] Commit working code incrementally

**After**:
- [ ] All tests pass
- [ ] Coverage meets target
- [ ] Build succeeds
- [ ] All EXPECTED deliverables met
`API_SETTINGS_IMPLEMENTATION.md` (new file, 196 lines)
# API Settings Page Implementation Complete

## Files Created

### 1. JavaScript File
**Location**: `ccw/src/templates/dashboard-js/views/api-settings.js` (28KB)

**Main features**:
- ✅ Provider Management
  - Add/edit/delete providers
  - Supports OpenAI, Anthropic, Google, Ollama, Azure, Mistral, DeepSeek, Custom
  - API key management (with environment-variable support)
  - Connection testing

- ✅ Endpoint Management
  - Create custom endpoints
  - Associate providers and models
  - Cache-policy configuration
  - Display CLI usage examples

- ✅ Cache Management
  - Global cache toggle
  - Cache statistics display
  - Clear-cache function

### 2. CSS Stylesheet
**Location**: `ccw/src/templates/dashboard-css/31-api-settings.css` (6.8KB)

**Styles include**:
- Card-based layout
- Form styles
- Progress bars
- Responsive design
- Empty-state display

### 3. Internationalization Support
**Location**: `ccw/src/templates/dashboard-js/i18n.js`

**Translations added**:
- English: 54 translation keys
- Chinese: 54 translation keys
- Covers all UI text, hints, and error messages

### 4. Configuration Updates

#### dashboard-generator.ts
- ✅ Added `31-api-settings.css` to the CSS module list
- ✅ Added `views/api-settings.js` to the JS module list

#### navigation.js
- ✅ Added `api-settings` route handling
- ✅ Added title-update logic

#### dashboard.html
- ✅ Added a navigation menu item (Settings icon)

## API Endpoints Used

The page uses the following (already existing) backend APIs:

### Provider APIs
- `GET /api/litellm-api/providers` - List all providers
- `POST /api/litellm-api/providers` - Create a provider
- `PUT /api/litellm-api/providers/:id` - Update a provider
- `DELETE /api/litellm-api/providers/:id` - Delete a provider
- `POST /api/litellm-api/providers/:id/test` - Test the connection

### Endpoint APIs
- `GET /api/litellm-api/endpoints` - List all endpoints
- `POST /api/litellm-api/endpoints` - Create an endpoint
- `PUT /api/litellm-api/endpoints/:id` - Update an endpoint
- `DELETE /api/litellm-api/endpoints/:id` - Delete an endpoint

### Model Discovery
- `GET /api/litellm-api/models/:providerType` - List the models supported by a provider

### Cache APIs
- `GET /api/litellm-api/cache/stats` - Get cache statistics
- `POST /api/litellm-api/cache/clear` - Clear the cache

### Config APIs
- `GET /api/litellm-api/config` - Get the full configuration
- `PUT /api/litellm-api/config/cache` - Update global cache settings

## Page Features

### Provider Management
```
+-- Provider Card ------------------------+
| OpenAI Production         [Edit] [Del]  |
| Type: openai                            |
| Key: sk-...abc                          |
| URL: https://api.openai.com/v1          |
| Status: ✓ Enabled                       |
+-----------------------------------------+
```

### Endpoint Management
```
+-- Endpoint Card ------------------------+
| GPT-4o Code Review        [Edit] [Del]  |
| ID: my-gpt4o                            |
| Provider: OpenAI Production             |
| Model: gpt-4-turbo                      |
| Cache: Enabled (60 min)                 |
| Usage: ccw cli -p "..." --model my-gpt4o|
+-----------------------------------------+
```

### Form Features
- **Provider Form**:
  - Type selection (8 provider types)
  - API key input (with show/hide toggle)
  - Environment-variable support
  - Custom base URL
  - Enable/disable toggle

- **Endpoint Form**:
  - Endpoint ID (used by the CLI)
  - Display name
  - Provider selection (loaded dynamically)
  - Model selection (loaded dynamically per provider)
  - Cache-policy configuration
    - TTL (minutes)
    - Maximum size (KB)
    - Auto-cache file patterns

## Usage Flow

### 1. Add a Provider
1. Click "Add Provider"
2. Select the provider type (e.g. OpenAI)
3. Enter a display name
4. Enter the API key (or use an environment variable)
5. Optional: enter a custom API base URL
6. Save

### 2. Create a Custom Endpoint
1. Click "Add Endpoint"
2. Enter an endpoint ID (used by the CLI)
3. Enter a display name
4. Select a provider
5. Select a model (the provider's supported models load automatically)
6. Optional: configure a cache policy
7. Save

### 3. Use the Endpoint
```bash
ccw cli -p "Analyze this code..." --model my-gpt4o
```

## Code Quality

- ✅ Follows the existing code style
- ✅ Uses i18n functions for internationalization
- ✅ Responsive design (mobile-friendly)
- ✅ Full form validation
- ✅ User-friendly error messages
- ✅ Uses Lucide icons
- ✅ Modals reuse existing styles
- ✅ Fully integrated with the backend API

## Testing Suggestions

1. **Basic functionality**:
   - Add/edit/delete providers
   - Add/edit/delete endpoints
   - Clear the cache

2. **Form validation**:
   - Required-field validation
   - API key show/hide
   - Environment-variable toggle

3. **Data loading**:
   - Dynamic model-list loading
   - Cache statistics display
   - Empty-state display

4. **Internationalization**:
   - Switch languages (English/Chinese)
   - Verify that all text displays correctly

## Next Steps

The page is complete and integrated into the project. After starting the CCW Dashboard:
1. The navigation bar shows an "API Settings" menu item (Settings icon)
2. Click it to access all features
3. All operations sync to the configuration file in real time

## Notes

- The page uses the existing LiteLLM API routes (`litellm-api-routes.ts`)
- Configuration is saved in the project's LiteLLM config file
- Environment-variable references use the format: `${VARIABLE_NAME}`
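For example, a provider entry might reference its key through an environment variable rather than storing it inline (the field names below are illustrative; the actual config schema may differ):

```json
{
  "name": "OpenAI Production",
  "type": "openai",
  "api_key": "${OPENAI_API_KEY}",
  "base_url": "https://api.openai.com/v1"
}
```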
- API keys are automatically masked when displayed (showing only the first 4 and last 4 characters)
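A minimal sketch of that masking rule (the helper name is hypothetical and uses bash substring expansion; the dashboard's actual implementation lives in `api-settings.js`):

```shell
# Mask an API key: keep the first 4 and last 4 characters (bash ${var:offset:length} syntax)
mask_key() {
  local key="$1"
  printf '%s...%s\n' "${key:0:4}" "${key: -4}"
}

mask_key "sk-1234567890abcdef"
```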
`CHANGELOG.md` (excerpt)

All notable changes to Claude Code Workflow (CCW) will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [6.3.11] - 2025-12-28

### 🔧 Issue System Enhancements | Issue系统增强

#### CLI Improvements | CLI改进
- **Added**: `ccw issue update <id> --status <status>` command for pure field updates
- **Added**: Support for `--priority`, `--title`, `--description` in the update command
- **Added**: Automatic timestamp setting based on status (planned_at, queued_at, completed_at)

#### Issue Plan Command | Issue Plan命令
- **Changed**: Agent execution from sequential to parallel (max 10 concurrent)
- **Added**: Multi-solution user selection prompt with clear notification
- **Added**: Explicit binding check (`solutions.length === 1`) before auto-bind

#### Issue Queue Command | Issue Queue命令
- **Fixed**: Queue ID generation moved from agent to command (avoids duplicate IDs)
- **Fixed**: Strict output file control (exactly 2 files per execution)
- **Added**: Clear documentation for `update` vs `done`/`queue add` usage

#### Discovery System | Discovery系统
- **Enhanced**: Discovery progress reading with new schema support
- **Enhanced**: Discovery index reading and issue exporting

## [6.3.9] - 2025-12-27

### 🔧 Issue System Consistency | Issue系统一致性修复

#### Schema Unification | Schema统一
- **Upgraded**: `solution-schema.json` to the Rich Plan model with full lifecycle fields
- **Added**: `test`, `regression`, `commit`, `lifecycle_status` objects to the task schema
- **Changed**: `acceptance` from string[] to an object `{criteria[], verification[]}`
- **Added**: `analysis` and `score` fields for multi-solution evaluation
- **Removed**: Redundant `issue-task-jsonl-schema.json` and `solutions-jsonl-schema.json`
- **Fixed**: `queue-schema.json` field naming (`queue_id` → `item_id`)

#### Agent Updates | Agent更新
- **Added**: Multi-solution generation support based on complexity
- **Added**: Search tool fallback chain (ACE → smart_search → Grep → rg → Glob)
- **Added**: `lifecycle_requirements` propagation from issue to tasks
- **Added**: Priority mapping formula (1-5 → 0.0-1.0 semantic priority)
- **Fixed**: Task decomposition to match the Rich Plan schema

#### Type Safety | 类型安全
- **Added**: `QueueConflict` and `ExecutionGroup` interfaces to `issue.ts`
- **Fixed**: `conflicts` array typing (from `any[]` to `QueueConflict[]`)

## [6.2.0] - 2025-12-21

### 🎯 Native CodexLens & Dashboard Revolution | 原生CodexLens与Dashboard革新
`README.md` (excerpt)
<div align="center">

[](https://github.com/catlog22/Claude-Code-Workflow/releases)
[](https://github.com/catlog22/Claude-Code-Workflow/releases)
[](https://www.npmjs.com/package/claude-code-workflow)
[](LICENSE)
[]()

CCW is built on a set of core principles that distinguish it from traditional AI

## ⚙️ Installation

### **📋 Requirements**

| Platform | Node.js | Additional |
|----------|---------|------------|
| Windows | 20.x or 22.x LTS (recommended) | Node 23+ requires [Visual Studio Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) |
| macOS | 18.x+ | Xcode Command Line Tools |
| Linux | 18.x+ | build-essential |

> **Note**: The `better-sqlite3` dependency requires native compilation. Using Node.js LTS versions avoids build issues.

### **📦 npm Install (Recommended)**

Install globally via npm: