mirror of
https://github.com/catlog22/Claude-Code-Workflow.git
synced 2026-02-05 01:50:27 +08:00
feat: add workflow management commands and utilities
- Implemented workflow installation, listing, and syncing commands in `workflow.ts`.
- Created utility functions for project root detection and package version retrieval in `project-root.ts`.
- Added update checker functionality to notify users of new package versions in `update-checker.ts`.
- Developed unit tests for project root utilities and update checker to ensure functionality and version comparison accuracy.
666
.claude/commands/workflow/debug-with-file.md
Normal file
@@ -0,0 +1,666 @@
---
name: debug-with-file
description: Interactive hypothesis-driven debugging with documented exploration, understanding evolution, and Gemini-assisted correction
argument-hint: "\"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---

# Workflow Debug-With-File Command (/workflow:debug-with-file)

## Overview

Enhanced evidence-based debugging with a **documented exploration process**. Records how understanding evolves, consolidates insights, and uses Gemini to correct misunderstandings.

**Core workflow**: Explore → Document → Log → Analyze → Correct Understanding → Fix → Verify

**Key enhancements over /workflow:debug**:

- **understanding.md**: Timeline of exploration and learning
- **Gemini-assisted correction**: Validates and corrects hypotheses
- **Consolidation**: Simplifies proven-wrong understanding to avoid clutter
- **Learning retention**: Preserves what was learned, even from failed attempts

## Usage

```bash
/workflow:debug-with-file <bug-description>

# Arguments
<bug-description>    Bug description, error message, or stack trace (required)
```

## Execution Process

```
Session Detection:
├─ Check if a debug session exists for this bug
├─ EXISTS + understanding.md exists → Continue mode
└─ NOT_FOUND → Explore mode

Explore Mode:
├─ Locate error source in codebase
├─ Document initial understanding in understanding.md
├─ Generate testable hypotheses with Gemini validation
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction

Analyze Mode:
├─ Parse debug.log, validate each hypothesis
├─ Use Gemini to analyze evidence and correct understanding
├─ Update understanding.md with:
│  ├─ New evidence
│  ├─ Corrected misunderstandings (strikethrough + correction)
│  └─ Consolidated current understanding
└─ Decision:
   ├─ Confirmed → Fix root cause
   ├─ Inconclusive → Add more logging, iterate
   └─ All rejected → Gemini-assisted new hypotheses

Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Document final understanding + lessons learned
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```

## Implementation

### Session Setup & Mode Detection

```javascript
// Shift to UTC+8 and label the offset explicitly (toISOString alone would append a misleading "Z")
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString().replace('Z', '+08:00')

const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)

const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
const understandingPath = `${sessionFolder}/understanding.md`
const hypothesesPath = `${sessionFolder}/hypotheses.json`

// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const hasUnderstanding = sessionExists && fs.existsSync(understandingPath)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0

const mode = logHasContent ? 'analyze' : (hasUnderstanding ? 'continue' : 'explore')

if (!sessionExists) {
  bash(`mkdir -p ${sessionFolder}`)
}
```
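The slug/session-ID derivation above can be exercised standalone. A minimal sketch with a hypothetical sample description and date (`slugify` is just the inline expression factored out):

```javascript
// Factored-out slug derivation from the setup block above
const slugify = (desc) =>
  desc.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)

// Hypothetical sample bug description and date
const bugSlug = slugify('TypeError: cannot read config')
const sessionId = `DBG-${bugSlug}-2025-01-21`
console.log(sessionId) // → DBG-typeerror-cannot-read-config-2025-01-21
```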

---

### Explore Mode

**Step 1.1: Locate Error Source**

```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords(bug_description)

// Search codebase for error locations
const searchResults = []
for (const keyword of keywords) {
  const results = Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
  searchResults.push({ keyword, results })
}

// Identify affected files and functions
const affectedLocations = analyzeSearchResults(searchResults)
```
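`extractErrorKeywords` is referenced but not defined in this spec. A minimal sketch of what it might do, assuming quoted fragments and error class names are the useful grep targets:

```javascript
// Hypothetical helper: pull quoted strings and error-class tokens out of a bug description
function extractErrorKeywords(description) {
  const keywords = new Set()
  // Quoted fragments are usually literal messages worth grepping for
  for (const [, quoted] of description.matchAll(/["'`]([^"'`]{3,})["'`]/g)) {
    keywords.add(quoted)
  }
  // Error class names like TypeError, NullPointerException, or codes like ENOENT
  for (const [token] of description.matchAll(/\b[A-Z][A-Za-z]*(?:Error|Exception)\b|\bE[A-Z]{3,}\b/g)) {
    keywords.add(token)
  }
  return [...keywords]
}

console.log(extractErrorKeywords(`TypeError: cannot read 'config' of undefined`))
```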

**Step 1.2: Document Initial Understanding**

Create `understanding.md` with an exploration timeline:

```markdown
# Understanding Document

**Session ID**: ${sessionId}
**Bug Description**: ${bug_description}
**Started**: ${getUtc8ISOString()}

---

## Exploration Timeline

### Iteration 1 - Initial Exploration (${timestamp})

#### Current Understanding

Based on bug description and initial code search:

- Error pattern: ${errorPattern}
- Affected areas: ${affectedLocations.map(l => l.file).join(', ')}
- Initial hypothesis: ${initialThoughts}

#### Evidence from Code Search

${searchResults.map(r => `
**Keyword: "${r.keyword}"**
- Found in: ${r.results.files.join(', ')}
- Key findings: ${r.insights}
`).join('\n')}

#### Next Steps

- Generate testable hypotheses
- Add instrumentation
- Await reproduction

---

## Current Consolidated Understanding

${initialConsolidatedUnderstanding}
```

**Step 1.3: Gemini-Assisted Hypothesis Generation**

```bash
ccw cli -p "
PURPOSE: Generate debugging hypotheses for: ${bug_description}
Success criteria: Testable hypotheses with clear evidence criteria

TASK:
• Analyze error pattern and code search results
• Identify 3-5 most likely root causes
• For each hypothesis, specify:
  - What might be wrong
  - What evidence would confirm/reject it
  - Where to add instrumentation
• Rank by likelihood

MODE: analysis

CONTEXT: @${sessionFolder}/understanding.md | Search results in understanding.md

EXPECTED:
- Structured hypothesis list (JSON format)
- Each hypothesis with: id, description, testable_condition, logging_point, evidence_criteria
- Likelihood ranking (1=most likely)

CONSTRAINTS: Focus on testable conditions
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```

Save Gemini output to `hypotheses.json`:

```json
{
  "iteration": 1,
  "timestamp": "2025-01-21T10:00:00+08:00",
  "hypotheses": [
    {
      "id": "H1",
      "description": "Data structure mismatch - expected key not present",
      "testable_condition": "Check if target key exists in dict",
      "logging_point": "file.py:func:42",
      "evidence_criteria": {
        "confirm": "data shows missing key",
        "reject": "key exists with valid value"
      },
      "likelihood": 1,
      "status": "pending"
    }
  ],
  "gemini_insights": "...",
  "corrected_assumptions": []
}
```
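The `status` and `likelihood` fields drive iteration order. A small illustrative helper (not part of the spec) showing how pending hypotheses might be selected, most likely first:

```javascript
// Hypothetical consumer of hypotheses.json: pending entries only, most likely first
function pendingByLikelihood(doc) {
  return doc.hypotheses
    .filter(h => h.status === 'pending')
    .sort((a, b) => a.likelihood - b.likelihood)
    .map(h => h.id)
}

const doc = {
  hypotheses: [
    { id: 'H2', likelihood: 2, status: 'pending' },
    { id: 'H1', likelihood: 1, status: 'pending' },
    { id: 'H3', likelihood: 3, status: 'rejected' }
  ]
}
console.log(pendingByLikelihood(doc)) // → [ 'H1', 'H2' ]
```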

**Step 1.4: Add NDJSON Instrumentation**

For each hypothesis, add logging (same as the original debug command).

**Step 1.5: Update understanding.md**

Append a hypothesis section:

```markdown
#### Hypotheses Generated (Gemini-Assisted)

${hypotheses.map(h => `
**${h.id}** (Likelihood: ${h.likelihood}): ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
- Evidence to confirm: ${h.evidence_criteria.confirm}
- Evidence to reject: ${h.evidence_criteria.reject}
`).join('\n')}

**Gemini Insights**: ${geminiInsights}
```

---

### Analyze Mode

**Step 2.1: Parse Debug Log**

```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
  .filter(l => l.trim())
  .map(l => JSON.parse(l))

// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')
```
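`groupBy` is not a JavaScript built-in; a minimal sketch of the helper assumed above, collecting log entries into arrays keyed by a field:

```javascript
// Collect items into { keyValue: [items...] } by the given field
function groupBy(items, key) {
  const groups = {}
  for (const item of items) {
    const k = item[key] ?? 'unknown'
    ;(groups[k] ??= []).push(item)
  }
  return groups
}

const byHypothesis = groupBy(
  [{ hid: 'H1', v: 1 }, { hid: 'H2', v: 2 }, { hid: 'H1', v: 3 }],
  'hid'
)
console.log(Object.keys(byHypothesis)) // → [ 'H1', 'H2' ]
```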

**Step 2.2: Gemini-Assisted Evidence Analysis**

```bash
ccw cli -p "
PURPOSE: Analyze debug log evidence to validate/correct hypotheses for: ${bug_description}
Success criteria: Clear verdict per hypothesis + corrected understanding

TASK:
• Parse log entries by hypothesis
• Evaluate evidence against expected criteria
• Determine verdict: confirmed | rejected | inconclusive
• Identify incorrect assumptions from previous understanding
• Suggest corrections to understanding

MODE: analysis

CONTEXT:
@${debugLogPath}
@${understandingPath}
@${hypothesesPath}

EXPECTED:
- Per-hypothesis verdict with reasoning
- Evidence summary
- List of incorrect assumptions with corrections
- Updated consolidated understanding
- Root cause if confirmed, or next investigation steps

CONSTRAINTS: Evidence-based reasoning only, no speculation
" --tool gemini --mode analysis --rule analysis-diagnose-bug-root-cause
```

**Step 2.3: Update Understanding with Corrections**

Append a new iteration to `understanding.md`:

```markdown
### Iteration ${n} - Evidence Analysis (${timestamp})

#### Log Analysis Results

${results.map(r => `
**${r.id}**: ${r.verdict.toUpperCase()}
- Evidence: ${JSON.stringify(r.evidence)}
- Reasoning: ${r.reason}
`).join('\n')}

#### Corrected Understanding

Previous misunderstandings identified and corrected:

${corrections.map(c => `
- ~~${c.wrong}~~ → ${c.corrected}
  - Why wrong: ${c.reason}
  - Evidence: ${c.evidence}
`).join('\n')}

#### New Insights

${newInsights.join('\n- ')}

#### Gemini Analysis

${geminiAnalysis}

${confirmedHypothesis ? `
#### Root Cause Identified

**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}

Evidence supporting this conclusion:
${confirmedHypothesis.supportingEvidence}
` : `
#### Next Steps

${nextSteps}
`}

---

## Current Consolidated Understanding (Updated)

${consolidatedUnderstanding}
```

**Step 2.4: Consolidate Understanding**

At the bottom of `understanding.md`, update the consolidated section:

- Remove or simplify proven-wrong assumptions
- Keep them in strikethrough for reference
- Focus on current valid understanding
- Avoid repeating details from the timeline

```markdown
## Current Consolidated Understanding

### What We Know

- ${validUnderstanding1}
- ${validUnderstanding2}

### What Was Disproven

- ~~Initial assumption: ${wrongAssumption}~~ (Evidence: ${disproofEvidence})

### Current Investigation Focus

${currentFocus}

### Remaining Questions

- ${openQuestion1}
- ${openQuestion2}
```

**Step 2.5: Update hypotheses.json**

```json
{
  "iteration": 2,
  "timestamp": "2025-01-21T10:15:00+08:00",
  "hypotheses": [
    {
      "id": "H1",
      "status": "rejected",
      "verdict_reason": "Evidence shows key exists with valid value",
      "evidence": {...}
    },
    {
      "id": "H2",
      "status": "confirmed",
      "verdict_reason": "Log data confirms timing issue",
      "evidence": {...}
    }
  ],
  "gemini_corrections": [
    {
      "wrong_assumption": "...",
      "corrected_to": "...",
      "reason": "..."
    }
  ]
}
```

---

### Fix & Verification

**Step 3.1: Apply Fix**

(Same as the original debug command.)

**Step 3.2: Document Resolution**

Append to `understanding.md`:

```markdown
### Iteration ${n} - Resolution (${timestamp})

#### Fix Applied

- Modified files: ${modifiedFiles.join(', ')}
- Fix description: ${fixDescription}
- Root cause addressed: ${rootCause}

#### Verification Results

${verificationResults}

#### Lessons Learned

What we learned from this debugging session:

1. ${lesson1}
2. ${lesson2}
3. ${lesson3}

#### Key Insights for Future

- ${insight1}
- ${insight2}
```

**Step 3.3: Cleanup**

Remove debug instrumentation (same as the original command).

---

## Session Folder Structure

```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log          # NDJSON log (execution evidence)
├── understanding.md   # NEW: Exploration timeline + consolidated understanding
├── hypotheses.json    # NEW: Hypothesis history with verdicts
└── resolution.md      # Optional: Final summary
```

## Understanding Document Template

```markdown
# Understanding Document

**Session ID**: DBG-xxx-2025-01-21
**Bug Description**: [original description]
**Started**: 2025-01-21T10:00:00+08:00

---

## Exploration Timeline

### Iteration 1 - Initial Exploration (2025-01-21 10:00)

#### Current Understanding
...

#### Evidence from Code Search
...

#### Hypotheses Generated (Gemini-Assisted)
...

### Iteration 2 - Evidence Analysis (2025-01-21 10:15)

#### Log Analysis Results
...

#### Corrected Understanding
- ~~[wrong]~~ → [corrected]

#### Gemini Analysis
...

---

## Current Consolidated Understanding

### What We Know
- [valid understanding points]

### What Was Disproven
- ~~[disproven assumptions]~~

### Current Investigation Focus
[current focus]

### Remaining Questions
- [open questions]
```

## Iteration Flow

```
First Call (/workflow:debug-with-file "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Document initial understanding in understanding.md
├─ Use Gemini to generate hypotheses
├─ Add logging instrumentation
└─ Await user reproduction

After Reproduction (/workflow:debug-with-file "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, use Gemini to evaluate hypotheses
├─ Update understanding.md with:
│  ├─ Evidence analysis results
│  ├─ Corrected misunderstandings (strikethrough)
│  ├─ New insights
│  └─ Updated consolidated understanding
├─ Update hypotheses.json with verdicts
└─ Decision:
   ├─ Confirmed → Fix → Document resolution
   ├─ Inconclusive → Add logging, document next steps
   └─ All rejected → Gemini-assisted new hypotheses

Output:
├─ .workflow/.debug/DBG-{slug}-{date}/debug.log
├─ .workflow/.debug/DBG-{slug}-{date}/understanding.md (evolving document)
└─ .workflow/.debug/DBG-{slug}-{date}/hypotheses.json (history)
```

## Gemini Integration Points

### 1. Hypothesis Generation (Explore Mode)

**Purpose**: Generate evidence-based, testable hypotheses

**Prompt Pattern**:
```
PURPOSE: Generate debugging hypotheses + evidence criteria
TASK: Analyze error + code → testable hypotheses with clear pass/fail criteria
CONTEXT: @understanding.md (search results)
EXPECTED: JSON with hypotheses, likelihood ranking, evidence criteria
```

### 2. Evidence Analysis (Analyze Mode)

**Purpose**: Validate hypotheses and correct misunderstandings

**Prompt Pattern**:
```
PURPOSE: Analyze debug log evidence + correct understanding
TASK: Evaluate each hypothesis → identify wrong assumptions → suggest corrections
CONTEXT: @debug.log @understanding.md @hypotheses.json
EXPECTED: Verdicts + corrections + updated consolidated understanding
```

### 3. New Hypothesis Generation (After All Rejected)

**Purpose**: Generate new hypotheses based on what was disproven

**Prompt Pattern**:
```
PURPOSE: Generate new hypotheses given disproven assumptions
TASK: Review rejected hypotheses → identify knowledge gaps → new investigation angles
CONTEXT: @understanding.md (with disproven section) @hypotheses.json
EXPECTED: New hypotheses avoiding previously rejected paths
```

## Error Correction Mechanism

### Correction Format in understanding.md

```markdown
#### Corrected Understanding

- ~~Assumed dict key "config" was missing~~ → Key exists, but value is None
  - Why wrong: Only checked existence, not value validity
  - Evidence: H1 log shows {"config": null, "exists": true}

- ~~Thought error occurred in initialization~~ → Error happens during runtime update
  - Why wrong: Stack trace misread as init code
  - Evidence: H2 timestamp shows 30s after startup
```

### Consolidation Rules

When updating "Current Consolidated Understanding":

1. **Simplify disproven items**: Move to "What Was Disproven" with a single-line summary
2. **Keep valid insights**: Promote confirmed findings to "What We Know"
3. **Avoid duplication**: Don't repeat timeline details in the consolidated section
4. **Focus on current state**: What do we know NOW, not the journey
5. **Preserve key corrections**: Keep important wrong→right transformations for learning

**Bad (cluttered)**:
```markdown
## Current Consolidated Understanding

In iteration 1 we thought X, but in iteration 2 we found Y, then in iteration 3...
Also we checked A and found B, and then we checked C...
```

**Good (consolidated)**:
```markdown
## Current Consolidated Understanding

### What We Know
- Error occurs during runtime update, not initialization
- Config value is None (not missing key)

### What Was Disproven
- ~~Initialization error~~ (Timing evidence)
- ~~Missing key hypothesis~~ (Key exists)

### Current Investigation Focus
Why is config value None during update?
```

## Post-Completion Expansion

After completion, ask the user whether to expand the session into issues along any of four dimensions (test/enhance/refactor/doc); for each selected dimension, invoke `/issue:new "{summary} - {dimension}"`.

---

## Error Handling

| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Use Gemini to generate new hypotheses based on disproven assumptions |
| Fix doesn't work | Document the failed fix attempt, iterate with refined understanding |
| >5 iterations | Review consolidated understanding, escalate to `/workflow:lite-fix` with full context |
| Gemini unavailable | Fall back to manual hypothesis generation, document without Gemini insights |
| Understanding too long | Consolidate aggressively, archive old iterations to a separate file |

## Comparison with /workflow:debug

| Feature | /workflow:debug | /workflow:debug-with-file |
|---------|-----------------|---------------------------|
| NDJSON logging | ✅ | ✅ |
| Hypothesis generation | Manual | Gemini-assisted |
| Exploration documentation | ❌ | ✅ understanding.md |
| Understanding evolution | ❌ | ✅ Timeline + corrections |
| Error correction | ❌ | ✅ Strikethrough + reasoning |
| Consolidated learning | ❌ | ✅ Current understanding section |
| Hypothesis history | ❌ | ✅ hypotheses.json |
| Gemini validation | ❌ | ✅ At key decision points |
## Usage Recommendations

Use `/workflow:debug-with-file` when:

- The bug is complex and requires multiple investigation rounds
- Learning from the debugging process is valuable
- The team needs to understand the debugging rationale
- The bug might recur, so documentation helps prevention

Use `/workflow:debug` when:

- The bug is simple and quick to fix
- It is a one-off issue
- Documentation overhead is not needed
@@ -13,6 +13,7 @@ import { memoryCommand } from './commands/memory.js';
 import { coreMemoryCommand } from './commands/core-memory.js';
 import { hookCommand } from './commands/hook.js';
 import { issueCommand } from './commands/issue.js';
+import { workflowCommand } from './commands/workflow.js';
 import { readFileSync, existsSync } from 'fs';
 import { fileURLToPath } from 'url';
 import { dirname, join } from 'path';
@@ -300,5 +301,13 @@ export function run(argv: string[]): void {
     .option('--queue <queue-id>', 'Target queue ID for multi-queue operations')
     .action((subcommand, args, options) => issueCommand(subcommand, args, options));
 
+  // Workflow command - Workflow installation and management
+  program
+    .command('workflow [subcommand] [args...]')
+    .description('Workflow installation and management (install, list, sync)')
+    .option('-f, --force', 'Force installation without prompts')
+    .option('--source <source>', 'Install specific source only')
+    .action((subcommand, args, options) => workflowCommand(subcommand, args, options));
+
   program.parse(argv);
 }
@@ -98,8 +98,9 @@ function broadcastStreamEvent(eventType: string, payload: Record<string, unknown
   req.on('socket', (socket) => {
     socket.unref();
   });
-  req.on('error', () => {
-    // Silently ignore errors for streaming events
+  req.on('error', (err) => {
+    // Log errors for debugging - helps diagnose hook communication issues
+    console.error(`[Hook] Failed to send ${eventType}:`, (err as Error).message);
   });
   req.on('timeout', () => {
     req.destroy();
@@ -1,6 +1,7 @@
 import { serveCommand } from './serve.js';
 import { launchBrowser } from '../utils/browser-launcher.js';
 import { validatePath } from '../utils/path-resolver.js';
+import { checkForUpdates } from '../utils/update-checker.js';
 import chalk from 'chalk';
 
 interface ViewOptions {
@@ -68,6 +69,9 @@ async function switchWorkspace(port: number, path: string): Promise<SwitchWorksp
  * @param {Object} options - Command options
  */
 export async function viewCommand(options: ViewOptions): Promise<void> {
+  // Check for updates (fire-and-forget, non-blocking)
+  checkForUpdates().catch(() => { /* ignore errors */ });
+
   const port = options.port || 3456;
   const host = options.host || '127.0.0.1';
   const browserHost = host === '0.0.0.0' || host === '::' ? 'localhost' : host;
348
ccw/src/commands/workflow.ts
Normal file
@@ -0,0 +1,348 @@
import { existsSync, mkdirSync, readdirSync, statSync, copyFileSync, readFileSync, writeFileSync } from 'fs';
import { join, basename, dirname } from 'path';
import { homedir } from 'os';
import inquirer from 'inquirer';
import chalk from 'chalk';
import { showHeader, createSpinner, info, warning, error, summaryBox, divider } from '../utils/ui.js';
import { getPackageRoot as findPackageRoot, getPackageVersion } from '../utils/project-root.js';

// Workflow source directories (relative to package root)
const WORKFLOW_SOURCES = [
  { name: '.claude/workflows', description: 'Claude workflows' },
  { name: '.claude/scripts', description: 'Claude scripts' },
  { name: '.claude/templates', description: 'Claude templates' },
  { name: '.codex/prompts', description: 'Codex prompts' },
  { name: '.gemini', description: 'Gemini configuration' },
  { name: '.qwen', description: 'Qwen configuration' }
];

interface WorkflowOptions {
  force?: boolean;
  all?: boolean;
  source?: string;
}

interface CopyStats {
  files: number;
  directories: number;
  updated: number;
  skipped: number;
}

/**
 * Get package root directory using robust path resolution
 */
function getPackageRoot(): string {
  return findPackageRoot();
}

/**
 * Get workflow installation target directory
 */
function getWorkflowTargetDir(): string {
  return homedir();
}

/**
 * Get package version
 */
function getVersion(): string {
  return getPackageVersion();
}

/**
 * Custom error with file path context
 */
class FileOperationError extends Error {
  constructor(message: string, public filePath: string, public operation: string) {
    super(`${operation} failed for ${filePath}: ${message}`);
    this.name = 'FileOperationError';
  }
}

/**
 * Copy directory recursively with stats tracking and detailed error handling
 */
async function copyDirectory(
  src: string,
  dest: string,
  stats: CopyStats = { files: 0, directories: 0, updated: 0, skipped: 0 }
): Promise<CopyStats> {
  if (!existsSync(src)) {
    return stats;
  }

  // Create destination directory with error context
  if (!existsSync(dest)) {
    try {
      mkdirSync(dest, { recursive: true });
      stats.directories++;
    } catch (err) {
      const e = err as Error;
      throw new FileOperationError(e.message, dest, 'Create directory');
    }
  }

  const entries = readdirSync(src);

  for (const entry of entries) {
    // Skip settings files
    if (entry === 'settings.json' || entry === 'settings.local.json') {
      stats.skipped++;
      continue;
    }

    const srcPath = join(src, entry);
    const destPath = join(dest, entry);

    try {
      const stat = statSync(srcPath);

      if (stat.isDirectory()) {
        await copyDirectory(srcPath, destPath, stats);
      } else {
        // Check if file needs update (use binary comparison for non-text files)
        if (existsSync(destPath)) {
          try {
            const srcContent = readFileSync(srcPath);
            const destContent = readFileSync(destPath);
            if (srcContent.equals(destContent)) {
              stats.skipped++;
              continue;
            }
            stats.updated++;
          } catch {
            // If comparison fails, proceed with copy
            stats.updated++;
          }
        }
        copyFileSync(srcPath, destPath);
        stats.files++;
      }
    } catch (err) {
      if (err instanceof FileOperationError) {
        throw err; // Re-throw our custom errors
      }
      const e = err as Error;
      throw new FileOperationError(e.message, srcPath, 'Copy file');
    }
  }

  return stats;
}

/**
 * List installed workflows
 */
async function listWorkflows(): Promise<void> {
  const targetDir = getWorkflowTargetDir();

  console.log(chalk.blue.bold('\n Installed Workflows\n'));

  let hasWorkflows = false;

  for (const source of WORKFLOW_SOURCES) {
    const targetPath = join(targetDir, source.name);

    if (existsSync(targetPath)) {
      hasWorkflows = true;
      const files = readdirSync(targetPath, { recursive: true });
      const fileCount = files.filter(f => {
        const fullPath = join(targetPath, f.toString());
        return existsSync(fullPath) && statSync(fullPath).isFile();
      }).length;

      console.log(chalk.cyan(`  ${source.name}`));
      console.log(chalk.gray(`    Path: ${targetPath}`));
      console.log(chalk.gray(`    Files: ${fileCount}`));
      console.log('');
    }
  }

  if (!hasWorkflows) {
    info('No workflows installed. Run: ccw workflow install');
  }
}

/**
 * Install workflows to user home directory
 */
async function installWorkflows(options: WorkflowOptions): Promise<void> {
  const version = getVersion();
  showHeader(version);

  const sourceDir = getPackageRoot();
  const targetDir = getWorkflowTargetDir();

  // Filter sources if a specific source was requested
  let sources = WORKFLOW_SOURCES;
  if (options.source) {
    sources = WORKFLOW_SOURCES.filter(s => s.name.includes(options.source!));
    if (sources.length === 0) {
      error(`Unknown source: ${options.source}`);
      info(`Available sources: ${WORKFLOW_SOURCES.map(s => s.name).join(', ')}`);
      return;
    }
  }

  // Validate source directories exist
  const availableSources = sources.filter(s => existsSync(join(sourceDir, s.name)));

  if (availableSources.length === 0) {
    error('No workflow sources found to install.');
    error(`Expected directories in: ${sourceDir}`);
    return;
  }

  console.log('');
  info(`Found ${availableSources.length} workflow sources to install:`);
  availableSources.forEach(s => {
    console.log(chalk.gray(`  - ${s.name} (${s.description})`));
  });

  divider();

  // Confirm installation
  if (!options.force) {
    const { proceed } = await inquirer.prompt([{
      type: 'confirm',
      name: 'proceed',
      message: `Install workflows to ${targetDir}?`,
      default: true
    }]);

    if (!proceed) {
      info('Installation cancelled');
      return;
    }
  }

  // Perform installation
  console.log('');
  const spinner = createSpinner('Installing workflows...').start();

  const totalStats: CopyStats = { files: 0, directories: 0, updated: 0, skipped: 0 };

  try {
    for (const source of availableSources) {
      const srcPath = join(sourceDir, source.name);
      const destPath = join(targetDir, source.name);

      spinner.text = `Installing ${source.name}...`;
      const stats = await copyDirectory(srcPath, destPath);

      totalStats.files += stats.files;
      totalStats.directories += stats.directories;
      totalStats.updated += stats.updated;
      totalStats.skipped += stats.skipped;
    }

    // Write version marker
    const versionPath = join(targetDir, '.claude', 'workflow-version.json');
    if (existsSync(dirname(versionPath))) {
      const versionData = {
        version,
        installedAt: new Date().toISOString(),
        installer: 'ccw workflow'
      };
      writeFileSync(versionPath, JSON.stringify(versionData, null, 2), 'utf8');
    }

    spinner.succeed('Workflow installation complete!');

  } catch (err) {
    spinner.fail('Installation failed');
    const errMsg = err as Error;
    error(errMsg.message);
    return;
  }

  // Show summary
  console.log('');
  const summaryLines = [
    chalk.green.bold('\u2713 Workflow Installation Successful'),
    '',
    chalk.white(`Target: ${chalk.cyan(targetDir)}`),
    chalk.white(`Version: ${chalk.cyan(version)}`),
    '',
    chalk.gray(`New files: ${totalStats.files}`),
    chalk.gray(`Updated: ${totalStats.updated}`),
    chalk.gray(`Skipped: ${totalStats.skipped}`),
    chalk.gray(`Directories: ${totalStats.directories}`)
  ];

  summaryBox({
    title: ' Workflow Summary ',
    lines: summaryLines,
    borderColor: 'green'
  });

  // Show next steps
  console.log('');
  info('Next steps:');
  console.log(chalk.gray('  1. Restart Claude Code or your IDE'));
  console.log(chalk.gray('  2. Workflows are now available globally'));
  console.log(chalk.gray('  3. Run: ccw workflow list - to see installed workflows'));
|
||||
console.log('');
|
||||
}
|
||||
|
||||
/**
|
||||
* Sync workflows (update existing installation)
|
||||
*/
|
||||
async function syncWorkflows(options: WorkflowOptions): Promise<void> {
|
||||
info('Syncing workflows (same as install with updates)...');
|
||||
await installWorkflows({ ...options, force: false });
|
||||
}
|
||||
|
||||
/**
|
||||
* Show workflow help
|
||||
*/
|
||||
function showWorkflowHelp(): void {
|
||||
console.log(chalk.blue.bold('\n CCW Workflow Manager\n'));
|
||||
console.log(chalk.white(' Usage:'));
|
||||
console.log(chalk.gray(' ccw workflow <command> [options]'));
|
||||
console.log('');
|
||||
console.log(chalk.white(' Commands:'));
|
||||
console.log(chalk.cyan(' install') + chalk.gray(' Install workflows to global directory (~/)'));
|
||||
console.log(chalk.cyan(' list') + chalk.gray(' List installed workflows'));
|
||||
console.log(chalk.cyan(' sync') + chalk.gray(' Sync/update workflows from package'));
|
||||
console.log('');
|
||||
console.log(chalk.white(' Options:'));
|
||||
console.log(chalk.gray(' -f, --force Force installation without prompts'));
|
||||
console.log(chalk.gray(' --source Install specific source only'));
|
||||
console.log('');
|
||||
console.log(chalk.white(' Examples:'));
|
||||
console.log(chalk.gray(' ccw workflow install # Install all workflows'));
|
||||
console.log(chalk.gray(' ccw workflow install -f # Force install'));
|
||||
console.log(chalk.gray(' ccw workflow install --source .claude/workflows'));
|
||||
console.log(chalk.gray(' ccw workflow list # List installed'));
|
||||
console.log(chalk.gray(' ccw workflow sync # Update workflows'));
|
||||
console.log('');
|
||||
}
|
||||
|
||||
/**
|
||||
* Main workflow command handler
|
||||
*/
|
||||
export async function workflowCommand(
|
||||
subcommand?: string,
|
||||
args?: string[],
|
||||
options: WorkflowOptions = {}
|
||||
): Promise<void> {
|
||||
switch (subcommand) {
|
||||
case 'install':
|
||||
await installWorkflows(options);
|
||||
break;
|
||||
case 'list':
|
||||
case 'ls':
|
||||
await listWorkflows();
|
||||
break;
|
||||
case 'sync':
|
||||
case 'update':
|
||||
await syncWorkflows(options);
|
||||
break;
|
||||
case 'help':
|
||||
default:
|
||||
showWorkflowHelp();
|
||||
break;
|
||||
}
|
||||
}
|
||||
@@ -113,8 +113,20 @@ export function updateActiveExecution(event: {
       activeExec.output += output;
     }
   } else if (type === 'completed') {
-    // Remove from active executions
-    activeExecutions.delete(executionId);
+    // Mark as completed instead of immediately deleting:
+    // keep the execution visible for 5 minutes to allow page refreshes to see it
+    const activeExec = activeExecutions.get(executionId);
+    if (activeExec) {
+      activeExec.status = success ? 'completed' : 'error';
+
+      // Auto-cleanup after 5 minutes
+      setTimeout(() => {
+        activeExecutions.delete(executionId);
+        console.log(`[ActiveExec] Auto-cleaned completed execution: ${executionId}`);
+      }, 5 * 60 * 1000);
+
+      console.log(`[ActiveExec] Marked as ${activeExec.status}, will auto-clean in 5 minutes`);
+    }
   }
 }
@@ -6,21 +6,100 @@
// State
let versionCheckData = null;
let versionBannerDismissed = false;
let autoUpdateEnabled = true; // Default to enabled

/**
 * Initialize version check on page load
 */
async function initVersionCheck() {
  // Load auto-update setting from localStorage
  const stored = localStorage.getItem('ccw.autoUpdate');
  autoUpdateEnabled = stored !== null ? stored === 'true' : true;

  // Update toggle checkbox state
  updateAutoUpdateToggleUI();

  // Check version after a short delay to not block initial render
  setTimeout(async () => {
    if (autoUpdateEnabled) {
      await checkForUpdates();
    }
  }, 2000);
}

/**
 * Toggle auto-update setting (called from checkbox change event)
 */
function toggleAutoUpdate() {
  const checkbox = document.getElementById('autoUpdateToggle');
  if (!checkbox) return;

  autoUpdateEnabled = checkbox.checked;
  localStorage.setItem('ccw.autoUpdate', autoUpdateEnabled.toString());

  // Show notification
  if (autoUpdateEnabled) {
    addGlobalNotification('success', 'Auto-update enabled', 'Version check will run automatically', 'version-check');
    // Run check immediately if just enabled
    checkForUpdates();
  } else {
    addGlobalNotification('info', 'Auto-update disabled', 'Version check is turned off', 'version-check');
    // Dismiss banner if visible
    dismissUpdateBanner();
  }
}

/**
 * Check for updates immediately (called from "Check Now" button)
 */
async function checkForUpdatesNow() {
  const btn = document.getElementById('checkUpdateNow');
  if (btn) {
    // Add loading animation
    btn.classList.add('animate-spin');
    btn.disabled = true;
  }

  // Force check regardless of toggle state
  const originalState = autoUpdateEnabled;
  autoUpdateEnabled = true;

  try {
    await checkForUpdates();
    addGlobalNotification('success', 'Update check complete', 'Checked for latest version', 'version-check');
  } catch (err) {
    addGlobalNotification('error', 'Update check failed', err.message, 'version-check');
  } finally {
    // Restore original state
    autoUpdateEnabled = originalState;

    if (btn) {
      btn.classList.remove('animate-spin');
      btn.disabled = false;
    }
  }
}

/**
 * Update auto-update toggle checkbox state
 */
function updateAutoUpdateToggleUI() {
  const checkbox = document.getElementById('autoUpdateToggle');
  if (!checkbox) return;

  checkbox.checked = autoUpdateEnabled;
}

/**
 * Check for package updates
 */
async function checkForUpdates() {
  // Respect the toggle setting
  if (!autoUpdateEnabled) {
    console.log('Version check skipped: auto-update is disabled');
    return;
  }

  try {
    const res = await fetch('/api/version-check');
    if (!res.ok) return;
@@ -165,3 +165,10 @@ npm install -g ' + versionCheckData.packageName + '@latest\n\
function getVersionInfo() {
  return versionCheckData;
}

/**
 * Check if auto-update is enabled
 */
function isAutoUpdateEnabled() {
  return autoUpdateEnabled;
}
@@ -42,6 +42,8 @@ const i18n = {
  'header.recentProjects': 'Recent Projects',
  'header.browse': 'Browse...',
  'header.refreshWorkspace': 'Refresh workspace',
  'header.checkUpdateNow': 'Check for updates now',
  'header.autoUpdate': 'Auto-update check',
  'header.toggleTheme': 'Toggle theme',
  'header.language': 'Language',
  'header.cliStream': 'CLI Stream Viewer',
@@ -2391,6 +2393,8 @@ const i18n = {
  'header.recentProjects': '最近项目',
  'header.browse': '浏览...',
  'header.refreshWorkspace': '刷新工作区',
  'header.checkUpdateNow': '立即检查更新',
  'header.autoUpdate': '自动更新检查',
  'header.toggleTheme': '切换主题',
  'header.language': '语言',
  'header.cliStream': 'CLI 流式输出',
@@ -234,6 +234,52 @@
  animation: spin 1s linear infinite;
}

/* Auto-Update Toggle Switch */
.auto-update-switch {
  position: relative;
  display: inline-block;
  width: 32px;
  height: 18px;
  cursor: pointer;
}
.auto-update-switch input {
  opacity: 0;
  width: 0;
  height: 0;
}
.auto-update-slider {
  position: absolute;
  cursor: pointer;
  top: 0;
  left: 0;
  right: 0;
  bottom: 0;
  background-color: hsl(var(--muted));
  transition: 0.3s;
  border-radius: 18px;
}
.auto-update-slider:before {
  position: absolute;
  content: "";
  height: 12px;
  width: 12px;
  left: 3px;
  bottom: 3px;
  background-color: hsl(var(--muted-foreground));
  transition: 0.3s;
  border-radius: 50%;
}
.auto-update-switch input:checked + .auto-update-slider {
  background-color: hsl(var(--success));
}
.auto-update-switch input:checked + .auto-update-slider:before {
  transform: translateX(14px);
  background-color: white;
}
.auto-update-switch:hover .auto-update-slider {
  opacity: 0.9;
}

/* Injected from dashboard-css/*.css modules */
{{CSS_CONTENT}}
</style>
@@ -296,6 +342,22 @@
            <path d="M16 21h5v-5"/>
          </svg>
        </button>
        <!-- Auto-Update Controls -->
        <div class="flex items-center gap-2 border-l border-border pl-2">
          <!-- Check Now Button -->
          <button class="p-1.5 text-muted-foreground hover:text-foreground hover:bg-hover rounded" id="checkUpdateNow" data-i18n-title="header.checkUpdateNow" title="Check for updates now" onclick="checkForUpdatesNow()">
            <svg width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
              <path d="M21 15v4a2 2 0 0 1-2 2H5a2 2 0 0 1-2-2v-4"/>
              <polyline points="7 10 12 15 17 10"/>
              <line x1="12" y1="15" x2="12" y2="3"/>
            </svg>
          </button>
          <!-- Auto-Update Toggle Switch -->
          <label class="auto-update-switch" data-i18n-title="header.autoUpdate" title="Auto-update check">
            <input type="checkbox" id="autoUpdateToggle" onchange="toggleAutoUpdate()" checked>
            <span class="auto-update-slider"></span>
          </label>
        </div>
      </div>

      <!-- Language Toggle -->
ccw/src/utils/project-root.ts (new file, 73 lines)
@@ -0,0 +1,73 @@
import { existsSync, readFileSync } from 'fs';
import { dirname, join } from 'path';
import { fileURLToPath } from 'url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

interface PackageInfo {
  name: string;
  version: string;
  [key: string]: unknown;
}

/**
 * Find project root by searching upward for package.json.
 * More robust than hardcoded relative paths.
 */
export function findProjectRoot(startDir: string = __dirname): string | null {
  let currentDir = startDir;
  let previousDir = '';

  // Traverse up until we find package.json or reach the filesystem root
  while (currentDir !== previousDir) {
    const pkgPath = join(currentDir, 'package.json');
    if (existsSync(pkgPath)) {
      try {
        const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
        // Verify this is our package (not a nested node_modules package)
        if (pkg.name === 'claude-code-workflow' || pkg.bin?.ccw) {
          return currentDir;
        }
      } catch {
        // Invalid JSON, continue searching
      }
    }
    previousDir = currentDir;
    currentDir = dirname(currentDir);
  }

  return null;
}

/**
 * Load package.json from project root.
 * Returns null if not found or invalid.
 */
export function loadPackageInfo(): PackageInfo | null {
  const projectRoot = findProjectRoot();
  if (!projectRoot) return null;

  const pkgPath = join(projectRoot, 'package.json');
  try {
    const content = readFileSync(pkgPath, 'utf8');
    return JSON.parse(content) as PackageInfo;
  } catch {
    return null;
  }
}

/**
 * Get package version from project root
 */
export function getPackageVersion(): string {
  const pkg = loadPackageInfo();
  return pkg?.version || '1.0.0';
}

/**
 * Get package root directory
 */
export function getPackageRoot(): string {
  return findProjectRoot() || join(__dirname, '..', '..', '..');
}
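The upward walk in `findProjectRoot` terminates because `dirname('/')` returns `'/'`, so `currentDir` stops changing at the filesystem root. A minimal sketch of the same loop against an in-memory path map instead of the real filesystem (`findUp` and the sample paths are illustrative, not part of the module):

```typescript
import { posix } from 'path';

// Maps absolute package.json paths to the package name declared inside them.
type FileMap = Record<string, string>;

// Walk upward from startDir until a package.json declaring the expected
// name is found, or dirname() stops making progress (filesystem root).
function findUp(startDir: string, files: FileMap, expected: string): string | null {
  let current = startDir;
  let previous = '';
  while (current !== previous) {
    const pkgPath = posix.join(current, 'package.json');
    if (files[pkgPath] === expected) return current;
    previous = current;
    current = posix.dirname(current);
  }
  return null;
}

const files: FileMap = {
  '/repo/package.json': 'claude-code-workflow',
  '/repo/node_modules/dep/package.json': 'dep' // nested package is skipped
};

console.log(findUp('/repo/node_modules/dep/src', files, 'claude-code-workflow')); // → /repo
```

The name check is what keeps the search from stopping at a nested `node_modules` package, which a plain nearest-package.json walk would do.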
ccw/src/utils/update-checker.ts (new file, 178 lines)
@@ -0,0 +1,178 @@
import { existsSync, readFileSync, writeFileSync, mkdirSync } from 'fs';
import { join } from 'path';
import { homedir } from 'os';
import chalk from 'chalk';
import { loadPackageInfo } from './project-root.js';

interface CacheData {
  lastCheck: number;
  latestVersion: string | null;
}

const CHECK_INTERVAL = 24 * 60 * 60 * 1000; // 24 hours
const CACHE_DIR = join(homedir(), '.config', 'ccw');
const CACHE_FILE = join(CACHE_DIR, 'update-check.json');

/**
 * Load cached update-check data
 */
function loadCache(): CacheData | null {
  try {
    if (existsSync(CACHE_FILE)) {
      return JSON.parse(readFileSync(CACHE_FILE, 'utf8'));
    }
    return null;
  } catch {
    return null;
  }
}

/**
 * Save update-check data to cache
 */
function saveCache(data: CacheData): void {
  try {
    if (!existsSync(CACHE_DIR)) {
      mkdirSync(CACHE_DIR, { recursive: true });
    }
    writeFileSync(CACHE_FILE, JSON.stringify(data, null, 2));
  } catch {
    // Ignore cache write errors
  }
}

/**
 * Parse a semver version into components.
 * Handles: 1.2.3, v1.2.3, 1.2.3-alpha.1, 1.2.3-beta, 1.2.3-rc.2
 */
function parseVersion(version: string): { major: number; minor: number; patch: number; prerelease: string[] } {
  const cleaned = version.replace(/^v/, '');
  const [mainPart, prereleasePart] = cleaned.split('-');
  const parts = mainPart.split('.');
  const major = parseInt(parts[0], 10) || 0;
  const minor = parseInt(parts[1], 10) || 0;
  const patch = parseInt(parts[2], 10) || 0;
  const prerelease = prereleasePart ? prereleasePart.split('.') : [];

  return { major, minor, patch, prerelease };
}

/**
 * Compare two semver versions.
 * Returns: 1 if a > b, -1 if a < b, 0 if equal.
 * Properly handles prerelease versions (1.0.0-alpha < 1.0.0).
 */
function compareVersions(a: string, b: string): number {
  const vA = parseVersion(a);
  const vB = parseVersion(b);

  // Compare major.minor.patch
  if (vA.major !== vB.major) return vA.major > vB.major ? 1 : -1;
  if (vA.minor !== vB.minor) return vA.minor > vB.minor ? 1 : -1;
  if (vA.patch !== vB.patch) return vA.patch > vB.patch ? 1 : -1;

  // Handle prerelease: no prerelease > has prerelease
  // e.g., 1.0.0 > 1.0.0-alpha
  if (vA.prerelease.length === 0 && vB.prerelease.length > 0) return 1;
  if (vA.prerelease.length > 0 && vB.prerelease.length === 0) return -1;

  // Compare prerelease identifiers
  const maxLen = Math.max(vA.prerelease.length, vB.prerelease.length);
  for (let i = 0; i < maxLen; i++) {
    const partA = vA.prerelease[i];
    const partB = vB.prerelease[i];

    // Missing part is less (1.0.0-alpha < 1.0.0-alpha.1)
    if (partA === undefined) return -1;
    if (partB === undefined) return 1;

    // Numeric comparison if both are numbers
    const numA = parseInt(partA, 10);
    const numB = parseInt(partB, 10);
    if (!isNaN(numA) && !isNaN(numB)) {
      if (numA !== numB) return numA > numB ? 1 : -1;
    } else {
      // String comparison
      if (partA !== partB) return partA > partB ? 1 : -1;
    }
  }

  return 0;
}

/**
 * Fetch the latest version from the npm registry
 */
async function fetchLatestVersion(packageName: string): Promise<string | null> {
  try {
    const controller = new AbortController();
    const timeoutId = setTimeout(() => controller.abort(), 5000); // 5s timeout

    const response = await fetch(`https://registry.npmjs.org/${packageName}/latest`, {
      signal: controller.signal,
      headers: { 'Accept': 'application/json' }
    });

    clearTimeout(timeoutId);

    if (!response.ok) return null;

    const data = await response.json() as { version?: string };
    return data.version || null;
  } catch {
    return null;
  }
}

/**
 * Check for npm package updates and notify the user.
 * Non-blocking, with caching to avoid frequent requests.
 */
export async function checkForUpdates(): Promise<void> {
  try {
    const pkg = loadPackageInfo();
    if (!pkg) return;

    // Check cache first
    const cache = loadCache();
    const now = Date.now();

    let latestVersion: string | null = null;

    // Use cached version if within check interval
    if (cache && (now - cache.lastCheck) < CHECK_INTERVAL) {
      latestVersion = cache.latestVersion;
    } else {
      // Fetch from npm registry
      latestVersion = await fetchLatestVersion(pkg.name);

      // Update cache
      saveCache({
        lastCheck: now,
        latestVersion
      });
    }

    // Compare and notify (only for stable releases, ignore prerelease)
    if (latestVersion && compareVersions(latestVersion, pkg.version) > 0) {
      console.log('');
      console.log(chalk.yellow.bold('  \u26a0 New version available!'));
      console.log(chalk.gray(`  Current: ${pkg.version} \u2192 Latest: ${chalk.green(latestVersion)}`));
      console.log(chalk.cyan(`  Run: npm update -g ${pkg.name}`));
      console.log('');
    }
  } catch {
    // Silently ignore update-check errors
  }
}

/**
 * Check for updates and notify (async, non-blocking).
 * Call this at the start of commands that should show update notifications.
 */
export function notifyUpdates(): void {
  // Run in background, don't block main execution
  checkForUpdates().catch(() => {
    // Ignore errors
  });
}
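`checkForUpdates` only hits the registry when the cache is older than `CHECK_INTERVAL`; the decision reduces to a single timestamp comparison. A small sketch of that gate as a pure helper (`isCacheFresh` is our name for illustration, not an export of the module):

```typescript
const CHECK_INTERVAL = 24 * 60 * 60 * 1000; // 24 hours, matching the module

// True when the cached result is still within the check interval,
// i.e. no network request is needed yet.
function isCacheFresh(lastCheckMs: number, nowMs: number, intervalMs: number = CHECK_INTERVAL): boolean {
  return (nowMs - lastCheckMs) < intervalMs;
}

console.log(isCacheFresh(0, CHECK_INTERVAL - 1)); // true: one ms before expiry
console.log(isCacheFresh(0, CHECK_INTERVAL));     // false: stale exactly at the interval
```

Because the comparison is strict, a cache written exactly 24 hours ago counts as stale and triggers a fresh registry fetch.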
ccw/tests/project-root.test.ts (new file, 159 lines)
@@ -0,0 +1,159 @@
/**
 * Unit tests for the project-root utility module.
 *
 * Tests path-resolution logic for finding the project root directory.
 * Note: these tests work with the actual filesystem rather than mocks
 * because ESM module caching makes fs mocking unreliable.
 */

import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import { fileURLToPath } from 'node:url';
import { dirname, join, resolve } from 'node:path';
import { existsSync, readFileSync } from 'node:fs';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

// Import the actual (built) module
import { findProjectRoot, loadPackageInfo, getPackageVersion, getPackageRoot } from '../dist/utils/project-root.js';

describe('project-root: findProjectRoot', () => {
  it('should find project root from tests directory', () => {
    const result = findProjectRoot(__dirname);

    // Should find the actual project root
    assert.ok(result, 'Should find a project root');

    // Verify it has the expected package.json
    const pkgPath = join(result, 'package.json');
    assert.ok(existsSync(pkgPath), 'Should have package.json at root');

    const pkg = JSON.parse(readFileSync(pkgPath, 'utf8'));
    assert.ok(
      pkg.name === 'claude-code-workflow' || pkg.bin?.ccw,
      'Should find correct project by name or bin'
    );
  });

  it('should find project root from deeply nested directory', () => {
    const deepDir = join(__dirname, 'integration', 'cli-executor');
    const result = findProjectRoot(deepDir);

    assert.ok(result, 'Should find project root from deep directory');
  });

  it('should find project root from src directory', () => {
    const srcDir = join(__dirname, '..', 'src', 'utils');
    const result = findProjectRoot(srcDir);

    assert.ok(result, 'Should find project root from src');
  });

  it('should return consistent result regardless of starting point', () => {
    const fromTests = findProjectRoot(__dirname);
    const fromSrc = findProjectRoot(join(__dirname, '..', 'src'));
    const fromCommands = findProjectRoot(join(__dirname, '..', 'src', 'commands'));

    assert.equal(fromTests, fromSrc, 'Should find same root from tests and src');
    assert.equal(fromSrc, fromCommands, 'Should find same root from src and commands');
  });
});

describe('project-root: loadPackageInfo', () => {
  it('should load package info successfully', () => {
    const pkg = loadPackageInfo();

    assert.ok(pkg, 'Should return package info');
    assert.ok(pkg.name, 'Should have name field');
    assert.ok(pkg.version, 'Should have version field');
  });

  it('should return correct package name', () => {
    const pkg = loadPackageInfo();

    // The index signature types `bin` as unknown, so narrow it before access
    assert.ok(
      pkg?.name === 'claude-code-workflow' || (pkg?.bin as { ccw?: string } | undefined)?.ccw,
      'Should return the correct project package'
    );
  });

  it('should include version field', () => {
    const pkg = loadPackageInfo();

    assert.ok(pkg?.version, 'Should have version');
    assert.match(pkg!.version, /^\d+\.\d+\.\d+/, 'Version should be semver format');
  });
});

describe('project-root: getPackageVersion', () => {
  it('should return version string', () => {
    const version = getPackageVersion();

    assert.ok(version, 'Should return a version');
    assert.equal(typeof version, 'string', 'Version should be a string');
  });

  it('should return valid semver format', () => {
    const version = getPackageVersion();

    // Basic semver pattern: X.Y.Z or X.Y.Z-prerelease
    assert.match(version, /^\d+\.\d+\.\d+/, 'Should be semver format');
  });

  it('should return consistent version', () => {
    const v1 = getPackageVersion();
    const v2 = getPackageVersion();

    assert.equal(v1, v2, 'Should return same version on multiple calls');
  });
});

describe('project-root: getPackageRoot', () => {
  it('should return project root path', () => {
    const root = getPackageRoot();

    assert.ok(root, 'Should return a path');
    assert.equal(typeof root, 'string', 'Should be a string');
  });

  it('should return existing directory', () => {
    const root = getPackageRoot();

    assert.ok(existsSync(root), 'Root directory should exist');
  });

  it('should contain package.json', () => {
    const root = getPackageRoot();
    const pkgPath = join(root, 'package.json');

    assert.ok(existsSync(pkgPath), 'Should have package.json');
  });

  it('should return absolute path', () => {
    const root = getPackageRoot();

    assert.equal(root, resolve(root), 'Should be absolute path');
  });
});

describe('project-root: integration', () => {
  it('should have consistent data across functions', () => {
    const root = getPackageRoot();
    const pkg = loadPackageInfo();
    const version = getPackageVersion();

    // Read package.json directly for comparison
    const directPkg = JSON.parse(readFileSync(join(root, 'package.json'), 'utf8'));

    assert.equal(pkg?.version, directPkg.version, 'loadPackageInfo should match direct read');
    assert.equal(version, directPkg.version, 'getPackageVersion should match direct read');
  });

  it('should find root matching package.json location', () => {
    const root = getPackageRoot();
    const foundRoot = findProjectRoot(__dirname);

    assert.equal(root, foundRoot, 'getPackageRoot and findProjectRoot should match');
  });
});
ccw/tests/update-checker.test.ts (new file, 225 lines)
@@ -0,0 +1,225 @@
/**
 * Unit tests for the update-checker utility module.
 *
 * Tests version-comparison logic with semver support, including prerelease versions.
 */

import { describe, it } from 'node:test';
import assert from 'node:assert/strict';

// We need to test the compareVersions function, which is not exported,
// so we keep a standalone copy here for testing purposes.
function parseVersion(version: string): { major: number; minor: number; patch: number; prerelease: string[] } {
  const cleaned = version.replace(/^v/, '');
  const [mainPart, prereleasePart] = cleaned.split('-');
  const parts = mainPart.split('.');
  const major = parseInt(parts[0], 10) || 0;
  const minor = parseInt(parts[1], 10) || 0;
  const patch = parseInt(parts[2], 10) || 0;
  const prerelease = prereleasePart ? prereleasePart.split('.') : [];

  return { major, minor, patch, prerelease };
}

function compareVersions(a: string, b: string): number {
  const vA = parseVersion(a);
  const vB = parseVersion(b);

  // Compare major.minor.patch
  if (vA.major !== vB.major) return vA.major > vB.major ? 1 : -1;
  if (vA.minor !== vB.minor) return vA.minor > vB.minor ? 1 : -1;
  if (vA.patch !== vB.patch) return vA.patch > vB.patch ? 1 : -1;

  // Handle prerelease: no prerelease > has prerelease
  // e.g., 1.0.0 > 1.0.0-alpha
  if (vA.prerelease.length === 0 && vB.prerelease.length > 0) return 1;
  if (vA.prerelease.length > 0 && vB.prerelease.length === 0) return -1;

  // Compare prerelease identifiers
  const maxLen = Math.max(vA.prerelease.length, vB.prerelease.length);
  for (let i = 0; i < maxLen; i++) {
    const partA = vA.prerelease[i];
    const partB = vB.prerelease[i];

    // Missing part is less (1.0.0-alpha < 1.0.0-alpha.1)
    if (partA === undefined) return -1;
    if (partB === undefined) return 1;

    // Numeric comparison if both are numbers
    const numA = parseInt(partA, 10);
    const numB = parseInt(partB, 10);
    if (!isNaN(numA) && !isNaN(numB)) {
      if (numA !== numB) return numA > numB ? 1 : -1;
    } else {
      // String comparison
      if (partA !== partB) return partA > partB ? 1 : -1;
    }
  }

  return 0;
}

describe('update-checker: parseVersion', () => {
  it('should parse basic version', () => {
    const result = parseVersion('1.2.3');
    assert.equal(result.major, 1);
    assert.equal(result.minor, 2);
    assert.equal(result.patch, 3);
    assert.deepEqual(result.prerelease, []);
  });

  it('should parse version with v prefix', () => {
    const result = parseVersion('v1.2.3');
    assert.equal(result.major, 1);
    assert.equal(result.minor, 2);
    assert.equal(result.patch, 3);
  });

  it('should parse version with alpha prerelease', () => {
    const result = parseVersion('1.0.0-alpha');
    assert.equal(result.major, 1);
    assert.equal(result.minor, 0);
    assert.equal(result.patch, 0);
    assert.deepEqual(result.prerelease, ['alpha']);
  });

  it('should parse version with numeric prerelease', () => {
    const result = parseVersion('1.0.0-alpha.1');
    assert.deepEqual(result.prerelease, ['alpha', '1']);
  });

  it('should parse version with rc prerelease', () => {
    const result = parseVersion('2.5.0-rc.3');
    assert.equal(result.major, 2);
    assert.equal(result.minor, 5);
    assert.equal(result.patch, 0);
    assert.deepEqual(result.prerelease, ['rc', '3']);
  });

  it('should handle missing patch version', () => {
    const result = parseVersion('1.2');
    assert.equal(result.major, 1);
    assert.equal(result.minor, 2);
    assert.equal(result.patch, 0);
  });

  it('should handle missing minor and patch', () => {
    const result = parseVersion('3');
    assert.equal(result.major, 3);
    assert.equal(result.minor, 0);
    assert.equal(result.patch, 0);
  });
});

describe('update-checker: compareVersions', () => {
  describe('major version comparison', () => {
    it('should return 1 when first major is greater', () => {
      assert.equal(compareVersions('2.0.0', '1.0.0'), 1);
      assert.equal(compareVersions('3.5.2', '2.8.9'), 1);
    });

    it('should return -1 when first major is less', () => {
      assert.equal(compareVersions('1.0.0', '2.0.0'), -1);
      assert.equal(compareVersions('1.9.9', '2.0.0'), -1);
    });
  });

  describe('minor version comparison', () => {
    it('should return 1 when major equal and first minor is greater', () => {
      assert.equal(compareVersions('1.2.0', '1.1.0'), 1);
      assert.equal(compareVersions('3.5.0', '3.4.9'), 1);
    });

    it('should return -1 when major equal and first minor is less', () => {
      assert.equal(compareVersions('1.1.0', '1.2.0'), -1);
      assert.equal(compareVersions('2.3.5', '2.4.0'), -1);
    });
  });

  describe('patch version comparison', () => {
    it('should return 1 when major.minor equal and first patch is greater', () => {
      assert.equal(compareVersions('1.2.5', '1.2.3'), 1);
      assert.equal(compareVersions('2.0.1', '2.0.0'), 1);
    });

    it('should return -1 when major.minor equal and first patch is less', () => {
      assert.equal(compareVersions('1.2.3', '1.2.5'), -1);
      assert.equal(compareVersions('3.1.0', '3.1.2'), -1);
    });
  });

  describe('equal versions', () => {
    it('should return 0 for identical versions', () => {
      assert.equal(compareVersions('1.2.3', '1.2.3'), 0);
      assert.equal(compareVersions('v1.2.3', '1.2.3'), 0);
    });

    it('should return 0 for equal versions with missing parts', () => {
      assert.equal(compareVersions('1.2', '1.2.0'), 0);
      assert.equal(compareVersions('2', '2.0.0'), 0);
    });
  });

  describe('prerelease version comparison', () => {
    it('should treat stable version as greater than prerelease', () => {
      assert.equal(compareVersions('1.0.0', '1.0.0-alpha'), 1);
      assert.equal(compareVersions('1.0.0', '1.0.0-beta'), 1);
      assert.equal(compareVersions('1.0.0', '1.0.0-rc.1'), 1);
    });

    it('should treat prerelease version as less than stable', () => {
      assert.equal(compareVersions('1.0.0-alpha', '1.0.0'), -1);
      assert.equal(compareVersions('1.0.0-beta', '1.0.0'), -1);
      assert.equal(compareVersions('2.5.0-rc.2', '2.5.0'), -1);
    });

    it('should compare prerelease identifiers alphabetically', () => {
      assert.equal(compareVersions('1.0.0-beta', '1.0.0-alpha'), 1);
      assert.equal(compareVersions('1.0.0-alpha', '1.0.0-beta'), -1);
|
||||
assert.equal(compareVersions('1.0.0-rc', '1.0.0-beta'), 1);
|
||||
});
|
||||
|
||||
it('should compare numeric prerelease identifiers numerically', () => {
|
||||
assert.equal(compareVersions('1.0.0-alpha.2', '1.0.0-alpha.1'), 1);
|
||||
assert.equal(compareVersions('1.0.0-alpha.1', '1.0.0-alpha.2'), -1);
|
||||
assert.equal(compareVersions('1.0.0-beta.10', '1.0.0-beta.2'), 1);
|
||||
});
|
||||
|
||||
it('should handle missing prerelease parts', () => {
|
||||
assert.equal(compareVersions('1.0.0-alpha.1', '1.0.0-alpha'), 1);
|
||||
assert.equal(compareVersions('1.0.0-alpha', '1.0.0-alpha.1'), -1);
|
||||
});
|
||||
|
||||
it('should handle complex prerelease comparisons', () => {
|
||||
assert.equal(compareVersions('1.0.0-alpha.1', '1.0.0-alpha.1'), 0);
|
||||
assert.equal(compareVersions('1.0.0-rc.1', '1.0.0-beta.10'), 1);
|
||||
assert.equal(compareVersions('2.0.0-beta.1', '2.0.0-alpha.9'), 1);
|
||||
});
|
||||
});
|
||||
|
||||
describe('real-world version scenarios', () => {
|
||||
it('should correctly order common npm package versions', () => {
|
||||
const versions = [
|
||||
'1.0.0-alpha',
|
||||
'1.0.0-alpha.1',
|
||||
'1.0.0-beta',
|
||||
'1.0.0-beta.2',
|
||||
'1.0.0-rc.1',
|
||||
'1.0.0',
|
||||
'1.0.1',
|
||||
'1.1.0',
|
||||
'2.0.0'
|
||||
];
|
||||
|
||||
for (let i = 0; i < versions.length - 1; i++) {
|
||||
const result = compareVersions(versions[i + 1], versions[i]);
|
||||
assert.equal(result, 1, `Expected ${versions[i + 1]} > ${versions[i]}`);
|
||||
}
|
||||
});
|
||||
|
||||
it('should handle version with v prefix correctly', () => {
|
||||
assert.equal(compareVersions('v2.0.0', '1.9.9'), 1);
|
||||
assert.equal(compareVersions('v1.0.0-beta', 'v1.0.0'), -1);
|
||||
});
|
||||
});
|
||||
});
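
The suite above pins down the expected semantics of `parseVersion` and `compareVersions`: numeric `major.minor.patch` comparison, missing parts defaulting to 0, an optional `v` prefix, stable releases outranking prereleases, and prerelease identifiers compared numerically when both are numeric and alphabetically otherwise. The actual implementations (presumably in `update-checker.ts`) are not shown in this diff; the following is a minimal sketch written only to satisfy these assertions, with simplified semver prerelease rules:

```javascript
// Hypothetical sketch -- the real functions in update-checker.ts may differ.
function parseVersion(version) {
  // Strip an optional leading "v", then split off the prerelease suffix.
  const [core, prerelease] = version.replace(/^v/, '').split('-');
  const [major = 0, minor = 0, patch = 0] = core.split('.').map(Number);
  return {
    major,
    minor,
    patch,
    prerelease: prerelease ? prerelease.split('.') : []
  };
}

function compareVersions(a, b) {
  const va = parseVersion(a);
  const vb = parseVersion(b);
  // Compare major.minor.patch numerically, left to right.
  for (const key of ['major', 'minor', 'patch']) {
    if (va[key] !== vb[key]) return va[key] > vb[key] ? 1 : -1;
  }
  // A stable version outranks any prerelease of the same core version.
  if (va.prerelease.length === 0 && vb.prerelease.length > 0) return 1;
  if (va.prerelease.length > 0 && vb.prerelease.length === 0) return -1;
  // Compare prerelease identifiers pairwise: numerically when both are
  // numeric, alphabetically otherwise; a missing identifier sorts lower.
  const len = Math.max(va.prerelease.length, vb.prerelease.length);
  for (let i = 0; i < len; i++) {
    const pa = va.prerelease[i];
    const pb = vb.prerelease[i];
    if (pa === undefined) return -1; // e.g. '1.0.0-alpha' < '1.0.0-alpha.1'
    if (pb === undefined) return 1;
    const na = Number(pa);
    const nb = Number(pb);
    if (!Number.isNaN(na) && !Number.isNaN(nb)) {
      if (na !== nb) return na > nb ? 1 : -1;
    } else if (pa !== pb) {
      return pa > pb ? 1 : -1;
    }
  }
  return 0;
}
```

Note the explicit numeric branch: without it, a plain string comparison would order `'beta.10'` before `'beta.2'`, which is exactly the case the numeric-prerelease tests guard against.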