Compare commits

...

15 Commits

Author SHA1 Message Date
catlog22
3c28c61bea fix: update minimal documentation output guideline for clarity 2025-11-22 23:21:26 +08:00
catlog22
b0b99a4217 Enhance installation and uninstallation processes
- Added optional quiet mode for backup notifications to reduce output clutter.
- Improved file merging with progress reporting and summary of backed up files.
- Implemented a cleanup function to move old installations to a backup directory before new installations.
- Enhanced manifest handling to distinguish between Global and Path installations.
- Updated uninstallation process to preserve global files if a Global installation exists.
- Improved error handling and user feedback during file operations.
2025-11-22 23:20:39 +08:00
catlog22
4f533f6fd5 feat(lite-execute): intelligent task grouping with parallel/sequential execution
Major improvements:
- Implement dependency-aware task grouping algorithm
  * Same file modifications → sequential batch
  * Different files + no dependencies → parallel batch
  * Keyword-based dependency inference (use/integrate/call/import)
  * Batch limits: Agent 7 tasks, CLI 4 tasks

- Add parallel execution strategy
  * All parallel batches launch concurrently (single Claude message)
  * Sequential batches execute in order
  * No concurrency limit for CLI tools
  * Execution flow: parallel-first → sequential-after

- Enhance code review with task.json acceptance criteria
  * Unified review template for all tools (Gemini/Qwen/Codex/Agent)
  * task.json in CONTEXT field for CLI tools
  * Explicit acceptance criteria verification

- Clarify execution requirements
  * Remove ambiguous strategy instructions
  * Add explicit "MUST complete ALL tasks" requirement
  * Prevent CLI tools from misinterpreting instructions as permitting partial execution

- Simplify documentation (~260 lines reduced)
  * Remove redundant examples and explanations
  * Keep core algorithm logic with inline comments
  * Eliminate duplicate descriptions across sections

Breaking changes: None (backward compatible)
2025-11-22 21:05:43 +08:00
catlog22
530c348e95 feat: enhance task execution with intelligent grouping and dependency analysis 2025-11-22 20:57:16 +08:00
catlog22
a98b26b111 feat: add session folder structure for lite-plan artifacts
- Create dedicated session folder (.workflow/.lite-plan/{task-slug}-{timestamp}/)
  for each lite-plan execution to organize all planning artifacts
- Always export task.json (removed optional export question from Phase 4)
- Save exploration.json, plan.json, and task.json to session folder
- Add session artifact paths to executionContext for lite-execute delegation
- Update lite-execute to use artifact file paths for CLI/agent context
- Enable CLI tools (Gemini/Qwen/Codex) and agents to access detailed
  planning context via file references

Benefits:
- Clean separation between different task executions
- All artifacts automatically saved for reusability
- Enhanced context available for execution phase
- Natural audit trail of planning sessions
2025-11-22 20:32:01 +08:00
catlog22
9f7e33cbde Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2025-11-22 20:22:21 +08:00
catlog22
a25464ce28 Merge pull request #29 from catlog22/claude/analyze-workflow-logic-015GvXNP62281T3ogUSaJdP4
Analyze workflow execution logic and document findings
2025-11-20 20:16:28 +08:00
catlog22
0a3f2a5b03 Merge pull request #28 from catlog22/claude/check-active-workflow-files-01S7quRK9xHWStgpQ9WeTnF2
Check for .active identifier in workflow files
2025-11-20 20:15:39 +08:00
Claude
1929b7f72d docs: remove obsolete .active marker file references
Remove outdated references to .active-session marker files that are no longer used in the workflow implementation. The system now uses directory-based session management where active sessions are identified by their location in .workflow/active/ directory.

Changes:
- WORKFLOW_DIAGRAMS.md: Replace .active-session marker with actual directory structure
- COMMAND_SPEC.md: Update session:complete description to reflect directory-based archival

The .archiving marker is still valid and used for transactional session completion.
2025-11-20 12:13:45 +00:00
Claude
b8889d99c9 refactor: simplify session selection logic expression
- Condense Case A/B/C headers for clarity
- Merge bash if-else into single line using && ||
- Combine steps 1-4 in Case C into compact flow
- Remove redundant explanations while keeping key info
- Reduce from 64 lines to 39 lines (39% reduction)
2025-11-20 12:12:56 +00:00
Claude
a79a3221ce fix: enhance workflow execute session selection with user interaction
- Add detailed session discovery logic with 3 cases (none, single, multiple)
- Implement AskUserQuestion for multiple active sessions
- Display rich session metadata (project name, task progress, completion %)
- Support flexible user input (session number, full ID, or partial ID)
- Maintain backward compatibility for single-session scenarios
- Improve user experience with clear confirmation messages

This minimal change addresses session conflict issues without complex
locking mechanisms, focusing on explicit user choice when ambiguity exists.
2025-11-20 12:08:35 +00:00
catlog22
67c18d1b03 Merge pull request #27 from catlog22/claude/cleanup-root-docs-01SAjhmQa3Wc4EWxZoCnFb77
Remove unnecessary documentation from root directory
2025-11-20 19:55:15 +08:00
catlog22
8d828e8762 Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2025-11-20 19:44:31 +08:00
catlog22
07caf20e0d Merge branch 'main' of https://github.com/catlog22/Claude-Code-Workflow 2025-11-20 18:51:24 +08:00
catlog22
d7bee9bdf2 docs: clarify path parameter description in /memory:docs command
Improve the path parameter documentation to eliminate ambiguity:
- Change "Target directory" to "Source directory to analyze"
- Explicitly state documentation is generated in .workflow/docs/{project_name}/ at workspace root
- Emphasize docs are NOT created within the source path itself
- Add concrete example showing path mirroring behavior

This resolves potential confusion about where documentation files are created.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-11-20 15:48:06 +08:00
9 changed files with 1192 additions and 602 deletions

View File

@@ -44,7 +44,11 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
/memory:docs [path] [--tool <gemini|qwen|codex>] [--mode <full|partial>] [--cli-execute]
```
- **path**: Target directory (default: current directory)
- **path**: Source directory to analyze (default: current directory)
- Specifies the source code directory to be documented
- Documentation is generated in a separate `.workflow/docs/{project_name}/` directory at the workspace root, **not** within the source `path` itself
- The source path's structure is mirrored within the project-specific documentation folder
- Example: analyzing `src/modules` produces documentation at `.workflow/docs/{project_name}/src/modules/`
- **--mode**: Documentation generation mode (default: full)
- `full`: Complete documentation (modules + README + ARCHITECTURE + EXAMPLES + HTTP API)
- `partial`: Module documentation only (API.md + README.md)

View File

@@ -54,13 +54,64 @@ Orchestrates autonomous workflow execution through systematic task discovery, ag
### Phase 1: Discovery
**Applies to**: Normal mode only (skipped in resume mode)
**Process**:
1. **Check Active Sessions**: Find sessions in `.workflow/active/` directory
2. **Select Session**: If multiple found, prompt user selection
3. **Load Session Metadata**: Read `workflow-session.json` ONLY (minimal context)
4. **DO NOT read task JSONs yet** - defer until execution phase
**Purpose**: Find and select active workflow session with user confirmation when multiple sessions exist
**Resume Mode**: This phase is completely skipped when `--resume-session="session-id"` flag is provided.
**Process**:
#### Step 1.1: Count Active Sessions
```bash
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | wc -l)
```
#### Step 1.2: Handle Session Selection
**Case A: No Sessions** (count = 0)
```
ERROR: No active workflow sessions found
Run /workflow:plan "task description" to create a session
```
**Case B: Single Session** (count = 1)
```bash
bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
```
Auto-select and continue to Phase 2.
**Case C: Multiple Sessions** (count > 1)
List sessions with metadata and prompt user selection:
```bash
bash(for dir in .workflow/active/WFS-*/; do
session=$(basename "$dir")
project=$(jq -r '.project // "Unknown"' "$dir/workflow-session.json" 2>/dev/null)
total=$(grep -c "^- \[" "$dir/TODO_LIST.md" 2>/dev/null) || total=0
completed=$(grep -c "^- \[x\]" "$dir/TODO_LIST.md" 2>/dev/null) || completed=0
[ "$total" -gt 0 ] && progress=$((completed * 100 / total)) || progress=0
echo "${session} | ${project} | ${completed}/${total} tasks (${progress}%)"
done)
```
Use AskUserQuestion to present formatted options:
```
Multiple active workflow sessions detected. Please select one:
1. WFS-auth-system | Authentication System | 3/5 tasks (60%)
2. WFS-payment-module | Payment Integration | 0/8 tasks (0%)
Enter number, full session ID, or partial match:
```
Parse user input (supports: number "1", full ID "WFS-auth-system", or partial "auth"), validate selection, and continue to Phase 2.
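The three accepted input forms can be resolved with a helper along these lines. A hedged sketch; `resolveSession` is an illustrative name, and ambiguous partial matches return null so the command can re-prompt.

```javascript
// Resolve user input against the displayed session list:
// a 1-based number, an exact session ID, or an unambiguous partial match.
function resolveSession(input, sessions) {
  const trimmed = input.trim();
  // Numeric choice: 1-based index into the displayed list
  if (/^\d+$/.test(trimmed)) {
    const idx = parseInt(trimmed, 10) - 1;
    return sessions[idx] ?? null;
  }
  // Exact session ID match
  if (sessions.includes(trimmed)) return trimmed;
  // Partial match: accept only if exactly one session matches
  const partial = sessions.filter(s => s.includes(trimmed));
  return partial.length === 1 ? partial[0] : null;
}

const sessions = ["WFS-auth-system", "WFS-payment-module"];
console.log(resolveSession("1", sessions));    // WFS-auth-system
console.log(resolveSession("auth", sessions)); // WFS-auth-system
```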
#### Step 1.3: Load Session Metadata
```bash
bash(cat .workflow/active/${sessionId}/workflow-session.json)
```
**Output**: Store session metadata in memory
**DO NOT read task JSONs yet** - defer until execution phase (lazy loading)
**Resume Mode**: This entire phase is skipped when `--resume-session="session-id"` flag is provided.
### Phase 2: Planning Document Analysis
**Applies to**: Normal mode only (skipped in resume mode)

View File

@@ -185,86 +185,104 @@ Execution Complete
previousExecutionResults = []
```
### Step 2: Create TodoWrite Execution List
### Step 2: Task Grouping & Batch Creation
**Operations**:
- Create execution tracking from task list
- Typically single execution call for all tasks
- Split into multiple calls if task list very large (>10 tasks)
**Execution Call Creation**:
**Dependency Analysis & Grouping Algorithm**:
```javascript
function createExecutionCalls(tasks) {
const taskTitles = tasks.map(t => t.title || t)
// Infer dependencies: same file → sequential, keywords (use/integrate) → sequential
function inferDependencies(tasks) {
return tasks.map((task, i) => {
const deps = []
const file = task.file || task.title.match(/in\s+([^\s:]+)/)?.[1]
const keywords = (task.description || task.title).toLowerCase()
// Single call for ≤10 tasks (most common)
if (tasks.length <= 10) {
return [{
method: executionMethod === "Codex" ? "Codex" : "Agent",
taskSummary: taskTitles.length <= 3
? taskTitles.join(', ')
: `${taskTitles.slice(0, 2).join(', ')}, and ${taskTitles.length - 2} more`,
tasks: tasks
}]
}
// Split into multiple calls for >10 tasks
const callSize = 5
const calls = []
for (let i = 0; i < tasks.length; i += callSize) {
const batchTasks = tasks.slice(i, i + callSize)
const batchTitles = batchTasks.map(t => t.title || t)
calls.push({
method: executionMethod === "Codex" ? "Codex" : "Agent",
taskSummary: `Tasks ${i + 1}-${Math.min(i + callSize, tasks.length)}: ${batchTitles[0]}...`,
tasks: batchTasks
})
}
return calls
for (let j = 0; j < i; j++) {
const prevFile = tasks[j].file || tasks[j].title.match(/in\s+([^\s:]+)/)?.[1]
if (file && prevFile === file) deps.push(j) // Same file
else if (/use|integrate|call|import/.test(keywords)) deps.push(j) // Keyword dependency
}
return { ...task, taskIndex: i, dependencies: deps }
})
}
// Create execution calls with IDs
executionCalls = createExecutionCalls(planObject.tasks).map((call, index) => ({
...call,
id: `[${call.method}-${index+1}]`
}))
// Group into batches: independent → parallel [P1,P2...], dependent → sequential [S1,S2...]
function createExecutionCalls(tasks, executionMethod) {
const tasksWithDeps = inferDependencies(tasks)
const maxBatch = executionMethod === "Codex" ? 4 : 7
const calls = []
const processed = new Set()
// Parallel: independent tasks, different files, max batch size
const parallelGroups = []
tasksWithDeps.forEach(t => {
if (t.dependencies.length === 0 && !processed.has(t.taskIndex)) {
const group = [t]
processed.add(t.taskIndex)
tasksWithDeps.forEach(o => {
if (!o.dependencies.length && !processed.has(o.taskIndex) &&
group.length < maxBatch && t.file !== o.file) {
group.push(o)
processed.add(o.taskIndex)
}
})
parallelGroups.push(group)
}
})
// Sequential: dependent tasks, batch when deps satisfied
const remaining = tasksWithDeps.filter(t => !processed.has(t.taskIndex))
while (remaining.length > 0) {
const batch = remaining.filter((t, i) =>
i < maxBatch && t.dependencies.every(d => processed.has(d))
)
if (!batch.length) break
batch.forEach(t => processed.add(t.taskIndex))
calls.push({ executionType: "sequential", groupId: `S${calls.length + 1}`, tasks: batch })
remaining.splice(0, remaining.length, ...remaining.filter(t => !processed.has(t.taskIndex)))
}
// Combine results
return [
...parallelGroups.map((g, i) => ({
method: executionMethod, executionType: "parallel", groupId: `P${i+1}`,
taskSummary: g.map(t => t.title).join(' | '), tasks: g
})),
...calls.map(c => ({ ...c, method: executionMethod, taskSummary: c.tasks.map(t => t.title).join(' → ') }))
]
}
executionCalls = createExecutionCalls(planObject.tasks, executionMethod).map(c => ({ ...c, id: `[${c.groupId}]` }))
// Create TodoWrite list
TodoWrite({
todos: executionCalls.map(call => ({
content: `${call.id} (${call.taskSummary})`,
todos: executionCalls.map(c => ({
content: `${c.executionType === "parallel" ? "⚡" : "→"} ${c.id} (${c.tasks.length} tasks)`,
status: "pending",
activeForm: `Executing ${call.id} (${call.taskSummary})`
activeForm: `Executing ${c.id}`
}))
})
```
**Example Execution Lists**:
```
Single call (typical):
[ ] [Agent-1] (Create AuthService, Add JWT utilities, Implement middleware)
Few tasks:
[ ] [Codex-1] (Create AuthService, Add JWT utilities, and 3 more)
Large task sets (>10):
[ ] [Agent-1] (Tasks 1-5: Create AuthService, Add JWT utilities, ...)
[ ] [Agent-2] (Tasks 6-10: Create tests, Update docs, ...)
```
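The grouping rules above (same file → sequential, different files with no inferred dependencies → parallel, keyword-based dependency inference) can be demonstrated with a simplified, runnable sketch. This is not the full algorithm from the diff — ordering of dependent batches is omitted — and the field names (`file`, `title`) follow the document's task shape.

```javascript
// Simplified grouping demo: tasks touching an already-seen file, or whose
// title contains a dependency keyword (use/integrate/call/import), become
// sequential; the rest are chunked into parallel batches of maxBatch.
function groupTasks(tasks, maxBatch) {
  const sequential = [];
  const independent = [];
  const seenFiles = new Set();
  for (const t of tasks) {
    const hasKeyword = /use|integrate|call|import/.test(t.title.toLowerCase());
    if (seenFiles.has(t.file) || hasKeyword) sequential.push(t);
    else independent.push(t);
    seenFiles.add(t.file);
  }
  // Chunk independent tasks into parallel batches of at most maxBatch
  const parallel = [];
  for (let i = 0; i < independent.length; i += maxBatch) {
    parallel.push(independent.slice(i, i + maxBatch));
  }
  return { parallel, sequential };
}

const demo = groupTasks([
  { title: "Create AuthService", file: "auth.ts" },
  { title: "Add JWT utilities", file: "jwt.ts" },
  { title: "Use AuthService in routes", file: "routes.ts" },
], 7); // Agent batch limit per the document
console.log(demo.parallel.length);   // 1
console.log(demo.sequential.length); // 1
```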
### Step 3: Launch Execution
**IMPORTANT**: CLI execution MUST run in foreground (no background execution)
**Execution Loop**:
**Execution Flow**: Parallel batches concurrently → Sequential batches in order
```javascript
for (currentIndex = 0; currentIndex < executionCalls.length; currentIndex++) {
const currentCall = executionCalls[currentIndex]
const parallel = executionCalls.filter(c => c.executionType === "parallel")
const sequential = executionCalls.filter(c => c.executionType === "sequential")
// Update TodoWrite: mark current call in_progress
// Launch execution with previousExecutionResults context
// After completion: collect result, add to previousExecutionResults
// Update TodoWrite: mark current call completed
// Phase 1: Launch all parallel batches (single message with multiple tool calls)
if (parallel.length > 0) {
TodoWrite({ todos: executionCalls.map(c => ({ status: c.executionType === "parallel" ? "in_progress" : "pending" })) })
parallelResults = await Promise.all(parallel.map(c => executeBatch(c)))
previousExecutionResults.push(...parallelResults)
TodoWrite({ todos: executionCalls.map(c => ({ status: parallel.includes(c) ? "completed" : "pending" })) })
}
// Phase 2: Execute sequential batches one by one
for (const call of sequential) {
TodoWrite({ todos: executionCalls.map(c => ({ status: c === call ? "in_progress" : "..." })) })
result = await executeBatch(call)
previousExecutionResults.push(result)
    TodoWrite({ todos: executionCalls.map(c => ({ status: c === call ? "completed" : c.status })) })
}
```
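The parallel-first → sequential-after flow sketched above reduces to a small runnable core. `executeBatch` is stubbed here; in the real command it launches an agent or CLI call.

```javascript
// Phase 1: every parallel batch is launched concurrently (the document
// specifies a single Claude message with multiple tool calls).
// Phase 2: sequential batches run strictly in order, each seeing the
// accumulated previous results.
async function runBatches(calls, executeBatch) {
  const results = [];
  const parallel = calls.filter(c => c.executionType === "parallel");
  const sequential = calls.filter(c => c.executionType === "sequential");
  results.push(...await Promise.all(parallel.map(executeBatch)));
  for (const call of sequential) {
    results.push(await executeBatch(call));
  }
  return results;
}

const calls = [
  { groupId: "P1", executionType: "parallel" },
  { groupId: "S1", executionType: "sequential" },
];
runBatches(calls, async c => `${c.groupId} done`).then(r => console.log(r));
// [ 'P1 done', 'S1 done' ]
```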
@@ -323,12 +341,17 @@ ${result.notes ? `Notes: ${result.notes}` : ''}
${clarificationContext ? `\n## Clarifications\n${JSON.stringify(clarificationContext, null, 2)}` : ''}
## Instructions
- Reference original request to ensure alignment
- Review previous results to understand completed work
- Build on previous work, avoid duplication
- Test functionality as you implement
- Complete all assigned tasks
${executionContext?.session?.artifacts ? `\n## Planning Artifacts
Detailed planning context available in:
${executionContext.session.artifacts.exploration ? `- Exploration: ${executionContext.session.artifacts.exploration}` : ''}
- Plan: ${executionContext.session.artifacts.plan}
- Task: ${executionContext.session.artifacts.task}
Read these files for detailed architecture, patterns, and constraints.` : ''}
## Requirements
MUST complete ALL ${planObject.tasks.length} tasks listed above in this single execution.
Return only after all tasks are fully implemented and tested.
`
)
```
@@ -341,6 +364,11 @@ When to use:
- `executionMethod = "Codex"`
- `executionMethod = "Auto" AND complexity = "Medium" or "High"`
**Artifact Path Delegation**:
- Include artifact file paths in CLI prompt for enhanced context
- Codex can read artifact files for detailed planning information
- Example: Reference exploration.json for architecture patterns
Command format:
```bash
function formatTaskForCodex(task, index) {
@@ -390,12 +418,18 @@ Constraints: ${explorationContext.constraints || 'None'}
${clarificationContext ? `\n### User Clarifications\n${Object.entries(clarificationContext).map(([q, a]) => `${q}: ${a}`).join('\n')}` : ''}
## Execution Instructions
- Reference original request to ensure alignment
- Review previous results for context continuity
- Build on previous work, don't duplicate completed tasks
- Complete all assigned tasks in single execution
- Test functionality as you implement
${executionContext?.session?.artifacts ? `\n### Planning Artifact Files
Detailed planning context available in session folder:
${executionContext.session.artifacts.exploration ? `- Exploration: ${executionContext.session.artifacts.exploration}` : ''}
- Plan: ${executionContext.session.artifacts.plan}
- Task: ${executionContext.session.artifacts.task}
Read these files for complete architecture details, code patterns, and integration constraints.
` : ''}
## Requirements
MUST complete ALL ${planObject.tasks.length} tasks listed above in this single execution.
Return only after all tasks are fully implemented and tested.
Complexity: ${planObject.complexity}
" --skip-git-repo-check -s danger-full-access
@@ -414,105 +448,72 @@ bash_result = Bash(
**Result Collection**: After completion, analyze output and collect result following `executionResult` structure
### Step 4: Track Execution Progress
### Step 4: Progress Tracking
**Real-time TodoWrite Updates** at execution call level:
```javascript
// When call starts
TodoWrite({
todos: [
{ content: "[Agent-1] (Implement auth + Create JWT utils)", status: "in_progress", activeForm: "..." },
{ content: "[Agent-2] (Add middleware + Update routes)", status: "pending", activeForm: "..." }
]
})
// When call completes
TodoWrite({
todos: [
{ content: "[Agent-1] (Implement auth + Create JWT utils)", status: "completed", activeForm: "..." },
{ content: "[Agent-2] (Add middleware + Update routes)", status: "in_progress", activeForm: "..." }
]
})
```
**User Visibility**:
- User sees execution call progress (not individual task progress)
- Current execution highlighted as "in_progress"
- Completed executions marked with checkmark
- Each execution shows task summary for context
Progress tracked at batch level (not individual task level). Icons: ⚡ (parallel, concurrent), → (sequential, one-by-one)
### Step 5: Code Review (Optional)
**Skip Condition**: Only run if `codeReviewTool ≠ "Skip"`
**Operations**:
- Agent Review: Current agent performs direct review
- Gemini Review: Execute gemini CLI with review prompt
- Custom tool: Execute specified CLI tool (qwen, codex, etc.)
**Review Focus**: Verify implementation against task.json acceptance criteria
- Read task.json from session artifacts for acceptance criteria
- Check each acceptance criterion is fulfilled
- Validate code quality and identify issues
- Ensure alignment with planned approach
**Command Formats**:
**Operations**:
- Agent Review: Current agent performs direct review (read task.json for acceptance criteria)
- Gemini Review: Execute gemini CLI with review prompt (task.json in CONTEXT)
- Custom tool: Execute specified CLI tool (qwen, codex, etc.) with task.json reference
**Unified Review Template** (All tools use same standard):
**Review Criteria**:
- **Acceptance Criteria**: Verify each criterion from task.json `context.acceptance`
- **Code Quality**: Analyze quality, identify issues, suggest improvements
- **Plan Alignment**: Validate implementation matches planned approach
**Shared Prompt Template** (used by all CLI tools):
```
PURPOSE: Code review for implemented changes against task.json acceptance criteria
TASK: • Verify task.json acceptance criteria fulfillment • Analyze code quality • Identify issues • Suggest improvements • Validate plan adherence
MODE: analysis
CONTEXT: @**/* @{task.json} @{plan.json} [@{exploration.json}] | Memory: Review lite-execute changes against task.json requirements
EXPECTED: Quality assessment with acceptance criteria verification, issue identification, and recommendations. Explicitly check each acceptance criterion from task.json.
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on task.json acceptance criteria and plan adherence | analysis=READ-ONLY
```
**Tool-Specific Execution** (Apply shared prompt template above):
```bash
# Agent Review: Direct agent review (no CLI)
# Uses analysis prompt and TodoWrite tools directly
# Method 1: Agent Review (current agent)
# - Read task.json: ${executionContext.session.artifacts.task}
# - Apply unified review criteria (see Shared Prompt Template)
# - Report findings directly
# Gemini Review:
gemini -p "
PURPOSE: Code review for implemented changes
TASK: • Analyze quality • Identify issues • Suggest improvements
MODE: analysis
CONTEXT: @**/* | Memory: Review lite-execute changes
EXPECTED: Quality assessment with recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on recent changes | analysis=READ-ONLY
"
# Method 2: Gemini Review (recommended)
gemini -p "[Shared Prompt Template with artifacts]"
# CONTEXT includes: @**/* @${task.json} @${plan.json} [@${exploration.json}]
# Qwen Review (custom tool via "Other"):
qwen -p "
PURPOSE: Code review for implemented changes
TASK: • Analyze quality • Identify issues • Suggest improvements
MODE: analysis
CONTEXT: @**/* | Memory: Review lite-execute changes
EXPECTED: Quality assessment with recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-quality.txt) | Focus on recent changes | analysis=READ-ONLY
"
# Method 3: Qwen Review (alternative)
qwen -p "[Shared Prompt Template with artifacts]"
# Same prompt as Gemini, different execution engine
# Codex Review (custom tool via "Other"):
codex --full-auto exec "Review recent code changes for quality, potential issues, and improvements" --skip-git-repo-check -s danger-full-access
# Method 4: Codex Review (autonomous)
codex --full-auto exec "[Verify task.json acceptance criteria at ${task.json}]" --skip-git-repo-check -s danger-full-access
```
**Implementation Note**: Replace `[Shared Prompt Template with artifacts]` placeholder with actual template content, substituting:
- `@{task.json}` → `@${executionContext.session.artifacts.task}`
- `@{plan.json}` → `@${executionContext.session.artifacts.plan}`
- `[@{exploration.json}]` → `@${executionContext.session.artifacts.exploration}` (if exists)
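The substitution above amounts to assembling the CONTEXT field from the artifact paths, skipping the optional exploration file. A hedged sketch; `buildReviewContext` is an illustrative helper name, and the example paths are hypothetical.

```javascript
// Build the CONTEXT field of the shared review prompt:
// @**/* plus the task.json and plan.json artifacts, plus exploration.json
// only when the exploration phase actually ran.
function buildReviewContext(artifacts) {
  const parts = ["@**/*", `@${artifacts.task}`, `@${artifacts.plan}`];
  if (artifacts.exploration) parts.push(`@${artifacts.exploration}`);
  return parts.join(" ");
}

console.log(buildReviewContext({
  task: ".workflow/.lite-plan/add-auth-2025-11-22/task.json",
  plan: ".workflow/.lite-plan/add-auth-2025-11-22/plan.json",
  exploration: null,
}));
```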
## Best Practices
### Execution Intelligence
1. **Context Continuity**: Each execution call receives previous results
- Prevents duplication across multiple executions
- Maintains coherent implementation flow
- Builds on completed work
2. **Execution Call Tracking**: Progress at call level, not task level
- Each call handles all or subset of tasks
- Clear visibility of current execution
- Simple progress updates
3. **Flexible Execution**: Multiple input modes supported
- In-memory: Seamless lite-plan integration
- Prompt: Quick standalone execution
- File: Intelligent format detection
- Enhanced Task JSON (lite-plan export): Full plan extraction
- Plain text: Uses as prompt
### Task Management
1. **Live Progress Updates**: Real-time TodoWrite tracking
- Execution calls created before execution starts
- Updated as executions progress
- Clear completion status
2. **Simple Execution**: Straightforward task handling
- All tasks in single call (typical)
- Split only for very large task sets (>10)
- Agent/Codex determines optimal execution order
**Input Modes**: In-memory (lite-plan), prompt (standalone), file (JSON/text)
**Batch Limits**: Agent 7 tasks, CLI 4 tasks
**Execution**: Parallel batches use single Claude message with multiple tool calls (no concurrency limit)
## Error Handling
@@ -546,10 +547,26 @@ Passed from lite-plan via global variable:
clarificationContext: {...} | null,
executionMethod: "Agent" | "Codex" | "Auto",
codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
originalUserInput: string
originalUserInput: string,
// Session artifacts location (saved by lite-plan)
session: {
id: string, // Session identifier: {taskSlug}-{shortTimestamp}
folder: string, // Session folder path: .workflow/.lite-plan/{session-id}
artifacts: {
exploration: string | null, // exploration.json path (if exploration performed)
plan: string, // plan.json path (always present)
task: string // task.json path (always exported)
}
}
}
```
**Artifact Usage**:
- Artifact files contain detailed planning context
- Pass artifact paths to CLI tools and agents for enhanced context
- See execution options below for usage examples
### executionResult (Output)
Collected after each execution call completes:

View File

@@ -130,6 +130,13 @@ needsExploration = (
**Exploration Execution** (if needed):
```javascript
// Generate session identifiers for artifact storage
const taskSlug = task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
const shortTimestamp = timestamp.substring(0, 19).replace('T', '-') // YYYY-MM-DD-HH-mm-ss
const sessionId = `${taskSlug}-${shortTimestamp}`
const sessionFolder = `.workflow/.lite-plan/${sessionId}`
Task(
subagent_type="cli-explore-agent",
description="Analyze codebase for task context",
@@ -149,9 +156,14 @@ Task(
Output Format: JSON-like structured object
`
)
// Save exploration results for CLI/agent access in lite-execute
const explorationFile = `${sessionFolder}/exploration.json`
Write(explorationFile, JSON.stringify(explorationContext, null, 2))
```
**Output**: `explorationContext` (see Data Structures section)
**Output**: `explorationContext` (in-memory, see Data Structures section)
**Artifact**: Saved to `{sessionFolder}/exploration.json` for CLI/agent use
**Progress Tracking**:
- Mark Phase 1 completed
@@ -228,6 +240,14 @@ Current Claude generates plan directly:
- Estimated Time: Total implementation time
- Recommended Execution: "Agent"
```javascript
// Save planning results to session folder (same as Option B)
const planFile = `${sessionFolder}/plan.json`
Write(planFile, JSON.stringify(planObject, null, 2))
```
**Artifact**: Saved to `{sessionFolder}/plan.json` for CLI/agent use
**Option B: Agent-Based Planning (Medium/High Complexity)**
Delegate to cli-lite-planning-agent:
@@ -270,9 +290,14 @@ Task(
Format: "{Action} in {file_path}: {details} following {pattern}"
`
)
// Save planning results to session folder
const planFile = `${sessionFolder}/plan.json`
Write(planFile, JSON.stringify(planObject, null, 2))
```
**Output**: `planObject` (see Data Structures section)
**Artifact**: Saved to `{sessionFolder}/plan.json` for CLI/agent use
**Progress Tracking**:
- Mark Phase 3 completed
@@ -315,7 +340,7 @@ ${i+1}. **${task.title}** (${task.file})
**Step 4.2: Collect User Confirmation**
Four questions via single AskUserQuestion call:
Three questions via single AskUserQuestion call:
```javascript
AskUserQuestion({
@@ -353,15 +378,6 @@ Confirm plan? (Multi-select: can supplement via "Other")`,
{ label: "Agent Review", description: "@code-reviewer agent" },
{ label: "Skip", description: "No review" }
]
},
{
question: "Export plan to Enhanced Task JSON file?\n\nAllows reuse with lite-execute later.",
header: "Export JSON",
multiSelect: false,
options: [
{ label: "Yes", description: "Export to JSON (recommended for complex tasks)" },
{ label: "No", description: "Keep in-memory only" }
]
}
]
})
@@ -384,10 +400,6 @@ Code Review (after execution):
├─ Gemini Review → gemini CLI analysis
├─ Agent Review → Current Claude review
└─ Other → Custom tool (e.g., qwen, codex)
Export JSON:
├─ Yes → Export to .workflow/lite-plans/plan-{timestamp}.json
└─ No → In-memory only
```
**Progress Tracking**:
@@ -398,48 +410,48 @@ Export JSON:
### Phase 5: Dispatch to Execution
**Step 5.1: Export Enhanced Task JSON (Optional)**
**Step 5.1: Export Enhanced Task JSON**
Only execute if `userSelection.export_task_json === "Yes"`:
Always export Enhanced Task JSON to session folder:
```javascript
if (userSelection.export_task_json === "Yes") {
const timestamp = new Date().toISOString().replace(/[:.]/g, '-')
const taskId = `LP-${timestamp}`
const filename = `.workflow/lite-plans/${taskId}.json`
const taskId = `LP-${shortTimestamp}`
const filename = `${sessionFolder}/task.json`
const enhancedTaskJson = {
id: taskId,
title: original_task_description,
status: "pending",
const enhancedTaskJson = {
id: taskId,
title: original_task_description,
status: "pending",
meta: {
type: "planning",
created_at: new Date().toISOString(),
complexity: planObject.complexity,
estimated_time: planObject.estimated_time,
recommended_execution: planObject.recommended_execution,
workflow: "lite-plan"
meta: {
type: "planning",
created_at: new Date().toISOString(),
complexity: planObject.complexity,
estimated_time: planObject.estimated_time,
recommended_execution: planObject.recommended_execution,
workflow: "lite-plan",
session_id: sessionId,
session_folder: sessionFolder
},
context: {
requirements: [original_task_description],
plan: {
summary: planObject.summary,
approach: planObject.approach,
tasks: planObject.tasks
},
context: {
requirements: [original_task_description],
plan: {
summary: planObject.summary,
approach: planObject.approach,
tasks: planObject.tasks
},
exploration: explorationContext || null,
clarifications: clarificationContext || null,
focus_paths: explorationContext?.relevant_files || [],
acceptance: planObject.tasks.flatMap(t => t.acceptance)
}
exploration: explorationContext || null,
clarifications: clarificationContext || null,
focus_paths: explorationContext?.relevant_files || [],
acceptance: planObject.tasks.flatMap(t => t.acceptance)
}
Write(filename, JSON.stringify(enhancedTaskJson, null, 2))
console.log(`Enhanced Task JSON exported to: ${filename}`)
console.log(`Reuse with: /workflow:lite-execute ${filename}`)
}
Write(filename, JSON.stringify(enhancedTaskJson, null, 2))
console.log(`Enhanced Task JSON exported to: ${filename}`)
console.log(`Session folder: ${sessionFolder}`)
console.log(`Reuse with: /workflow:lite-execute ${filename}`)
```
**Step 5.2: Store Execution Context**
@@ -451,7 +463,18 @@ executionContext = {
clarificationContext: clarificationContext || null,
executionMethod: userSelection.execution_method,
codeReviewTool: userSelection.code_review_tool,
originalUserInput: original_task_description
originalUserInput: original_task_description,
// Session artifacts location
session: {
id: sessionId,
folder: sessionFolder,
artifacts: {
exploration: explorationContext ? `${sessionFolder}/exploration.json` : null,
plan: `${sessionFolder}/plan.json`,
task: `${sessionFolder}/task.json` // Always exported
}
}
}
```
@@ -462,7 +485,11 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
```
**Execution Handoff**:
- lite-execute reads `executionContext` variable from memory
- `executionContext.session.artifacts` contains file paths to saved planning artifacts:
- `exploration` - exploration.json (if exploration performed)
- `plan` - plan.json (always exists)
- `task` - task.json (always exported)
- All execution logic handled by lite-execute
- lite-plan completes after successful handoff
@@ -502,7 +529,7 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
- Plan confirmation (multi-select with supplements)
- Execution method selection
- Code review tool selection (custom via "Other")
- Enhanced Task JSON always exported to session folder
- Allows plan refinement without re-selecting execution method
### Task Management
@@ -519,11 +546,11 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
- Medium: 5-7 tasks (detailed)
- High: 7-10 tasks (comprehensive)
3. **Session Artifact Management**:
- All planning artifacts saved to dedicated session folder
- Enhanced Task JSON always exported for reusability
- Plan context passed to execution via memory and files
- Clean organization with session-based folder structure
### Planning Standards
@@ -550,6 +577,39 @@ SlashCommand(command="/workflow:lite-execute --in-memory")
| Phase 4 Confirmation Timeout | No user response for > 5 minutes | Save context to temp variable, display resume instructions, exit gracefully |
| Phase 4 Modification Loop | User requests modify > 3 times | Suggest breaking task into smaller pieces or using `/workflow:plan` |
## Session Folder Structure
Each lite-plan execution creates a dedicated session folder to organize all artifacts:
```
.workflow/.lite-plan/{task-slug}-{short-timestamp}/
├── exploration.json # Exploration results (if exploration performed)
├── plan.json # Planning results (always created)
└── task.json # Enhanced Task JSON (always created)
```
**Folder Naming Convention**:
- `{task-slug}`: First 40 characters of task description, lowercased, non-alphanumeric replaced with `-`
- `{short-timestamp}`: YYYY-MM-DD-HH-mm-ss format
- Example: `.workflow/.lite-plan/implement-user-auth-jwt-2025-01-15-14-30-45/`
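The naming convention above can be sketched as a small shell pipeline; this is an illustrative approximation, not the actual lite-plan implementation, and the `task` variable value is invented:

```shell
# Hypothetical sketch of the session folder naming scheme described above
task="Implement user auth with JWT tokens"
# First 40 chars, lowercased, non-alphanumeric replaced with '-'
slug=$(printf '%s' "$task" | cut -c1-40 | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g')
timestamp=$(date +"%Y-%m-%d-%H-%M-%S")
echo ".workflow/.lite-plan/${slug}-${timestamp}/"
```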
**File Contents**:
- `exploration.json`: Complete explorationContext object (if exploration performed, see Data Structures)
- `plan.json`: Complete planObject (always created, see Data Structures)
- `task.json`: Enhanced Task JSON with all context (always created, see Data Structures)
**Access Patterns**:
- **lite-plan**: Creates folder and writes all artifacts during execution, passes paths via `executionContext.session.artifacts`
- **lite-execute**: Reads artifact paths from `executionContext.session.artifacts` (see lite-execute.md for usage details)
- **User**: Can inspect artifacts for debugging or reference
- **Reuse**: Pass `task.json` path to `/workflow:lite-execute {path}` for re-execution
**Benefits**:
- Clean separation between different task executions
- Easy to find and inspect artifacts for specific tasks
- Natural history/audit trail of planning sessions
- Supports concurrent lite-plan executions without conflicts
## Data Structures
### explorationContext
@@ -621,7 +681,18 @@ Context passed to lite-execute via --in-memory (Phase 5):
clarificationContext: {...} | null, // User responses from Phase 2
executionMethod: "Agent" | "Codex" | "Auto",
codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
originalUserInput: string, // User's original task description
// Session artifacts location (for lite-execute to access saved files)
session: {
id: string, // Session identifier: {taskSlug}-{shortTimestamp}
folder: string, // Session folder path: .workflow/.lite-plan/{session-id}
artifacts: {
exploration: string | null, // exploration.json path (if exploration performed)
plan: string, // plan.json path (always present)
task: string // task.json path (always exported)
}
}
}
```


@@ -29,6 +29,7 @@ For all CLI tool usage, command syntax, and integration guidelines:
- **Clear intent over clever code** - Be boring and obvious
- **Follow existing code style** - Match import patterns, naming conventions, and formatting of existing codebase
- **No unsolicited reports** - Task summaries can be performed internally, but NEVER generate additional reports, documentation files, or summary files without explicit user permission
- **Minimal documentation output** - Avoid unnecessary documentation. If required, save to .workflow/.scratchpad/
### Simplicity Means


@@ -180,7 +180,7 @@ Commands for creating, listing, and managing workflow sessions.
- **Syntax**: `/workflow:session:complete [--detailed]`
- **Parameters**:
- `--detailed` (Flag): Shows a more detailed completion summary.
- **Responsibilities**: Marks the currently active session as "completed", records timestamps, and moves the session from `.workflow/active/` to `.workflow/archives/`.
- **Agent Calls**: None.
- **Example**:
```bash

File diff suppressed because it is too large.


@@ -225,6 +225,7 @@ function get_backup_directory() {
function backup_file_to_folder() {
local file_path="$1"
local backup_folder="$2"
local quiet="${3:-}" # Optional quiet mode
if [ ! -f "$file_path" ]; then
return 1
@@ -249,10 +250,16 @@ function backup_file_to_folder() {
local backup_file_path="${backup_sub_dir}/${file_name}"
if cp "$file_path" "$backup_file_path"; then
# Only output if not in quiet mode
if [ "$quiet" != "quiet" ]; then
write_color "Backed up: $file_name" "$COLOR_INFO"
fi
return 0
else
# Always show warnings, even in quiet mode
write_color "WARNING: Failed to backup file $file_path" "$COLOR_WARNING"
return 1
fi
}
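The optional third argument used by `backup_file_to_folder` follows a common bash pattern: default an unset positional parameter to empty and compare against a sentinel. A minimal self-contained sketch, where `log_maybe` is a hypothetical stand-in for the function's output logic:

```shell
# Minimal sketch of the optional "quiet" positional-argument pattern
log_maybe() {
    local msg="$1"
    local quiet="${2:-}"   # empty unless the caller passes "quiet"
    if [ "$quiet" != "quiet" ]; then
        echo "$msg"
    fi
}

log_maybe "Backed up: settings.json"          # prints the message
log_maybe "Backed up: settings.json" quiet    # prints nothing
```

Because the parameter defaults to empty, existing call sites that pass only two arguments keep their verbose behavior unchanged.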
@@ -443,14 +450,25 @@ function merge_directory_contents() {
return 1
fi
# Create destination directory if it doesn't exist
if [ ! -d "$destination" ]; then
mkdir -p "$destination"
write_color "Created destination directory: $destination" "$COLOR_INFO"
fi
# Count total files first
local total_files=$(find "$source" -type f | wc -l)
local merged_count=0
local skipped_count=0
local backed_up_count=0
local processed_count=0
write_color "Processing $total_files files in $description..." "$COLOR_INFO"
# Find all files recursively
while IFS= read -r -d '' file; do
((processed_count++))
local relative_path="${file#$source/}"
local dest_path="${destination}/${relative_path}"
local dest_dir=$(dirname "$dest_path")
@@ -458,41 +476,58 @@ function merge_directory_contents() {
mkdir -p "$dest_dir"
if [ -f "$dest_path" ]; then
local file_name=$(basename "$relative_path")
# Use BackupAll mode for automatic backup without confirmation (default behavior)
if [ "$BACKUP_ALL" = true ] && [ "$NO_BACKUP" = false ]; then
if [ -n "$backup_folder" ]; then
# Quiet backup - no individual file output
if backup_file_to_folder "$dest_path" "$backup_folder" "quiet"; then
((backed_up_count++))
fi
fi
cp "$file" "$dest_path"
((merged_count++))
elif [ "$NO_BACKUP" = true ]; then
# No backup mode - ask for confirmation
if confirm_action "File '$relative_path' already exists. Replace it? (NO BACKUP)" false; then
cp "$file" "$dest_path"
((merged_count++))
else
write_color "Skipped $file_name (no backup)" "$COLOR_WARNING"
((skipped_count++))
fi
elif confirm_action "File '$relative_path' already exists. Replace it?" false; then
if [ -n "$backup_folder" ]; then
# Quiet backup - no individual file output
if backup_file_to_folder "$dest_path" "$backup_folder" "quiet"; then
((backed_up_count++))
fi
fi
cp "$file" "$dest_path"
((merged_count++))
else
write_color "Skipped $file_name" "$COLOR_WARNING"
((skipped_count++))
fi
else
cp "$file" "$dest_path"
((merged_count++))
fi
# Show progress every 20 files
if [ $((processed_count % 20)) -eq 0 ] || [ "$processed_count" -eq "$total_files" ]; then
local percent=$((processed_count * 100 / total_files))
echo -ne "\rMerging $description: $processed_count/$total_files files ($percent%)..."
fi
done < <(find "$source" -type f -print0)
# Clear progress line
echo -ne "\r\033[K"
# Show summary
if [ "$backed_up_count" -gt 0 ]; then
write_color "✓ Merged $merged_count files ($backed_up_count backed up), skipped $skipped_count files" "$COLOR_SUCCESS"
else
write_color "✓ Merged $merged_count files, skipped $skipped_count files" "$COLOR_SUCCESS"
fi
return 0
}
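The progress reporting in `merge_directory_contents` relies on two terminal conventions: `\r` returns the cursor to the start of the line so the next `printf` overwrites it, and `\033[K` erases from the cursor to the end of the line. A self-contained sketch of the technique, with an invented file count:

```shell
# Sketch of the in-place progress line used above (file count is illustrative)
total=45
processed=0
while [ "$processed" -lt "$total" ]; do
    processed=$((processed + 1))
    # Update every 20 files and on the final file
    if [ $((processed % 20)) -eq 0 ] || [ "$processed" -eq "$total" ]; then
        percent=$((processed * 100 / total))
        printf '\rMerging: %d/%d files (%d%%)...' "$processed" "$total" "$percent"
    fi
done
printf '\r\033[K'   # clear the progress line before printing the summary
echo "Done: $processed files"
```

Throttling updates to every 20 files keeps the loop fast on large trees, since each terminal write is far more expensive than the arithmetic.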
@@ -508,6 +543,10 @@ function install_global() {
write_color "Global installation path: $user_home" "$COLOR_INFO"
# Clean up old installation before proceeding (fast move operation)
echo ""
move_old_installation "$user_home" "Global"
# Initialize manifest
local manifest_file=$(new_install_manifest "Global" "$user_home")
@@ -627,7 +666,7 @@ function install_global() {
create_version_json "$global_claude_dir" "Global"
# Save installation manifest
save_install_manifest "$manifest_file" "$user_home" "Global"
return 0
}
@@ -642,6 +681,10 @@ function install_path() {
local global_claude_dir="${user_home}/.claude"
write_color "Global path: $user_home" "$COLOR_INFO"
# Clean up old installation before proceeding (fast move operation)
echo ""
move_old_installation "$target_dir" "Path"
# Initialize manifest
local manifest_file=$(new_install_manifest "Path" "$target_dir")
@@ -700,11 +743,15 @@ function install_path() {
fi
done
# Global components - exclude local folders (use same efficient method as Global mode)
write_color "Installing global components to $global_claude_dir..." "$COLOR_INFO"
local merged_count=0
# Create temporary directory for global files only
local temp_global_dir="/tmp/claude-global-$$"
mkdir -p "$temp_global_dir"
# Copy global files to temp directory (excluding local folders)
write_color "Preparing global components..." "$COLOR_INFO"
while IFS= read -r -d '' file; do
local relative_path="${file#$source_claude_dir/}"
local top_folder=$(echo "$relative_path" | cut -d'/' -f1)
@@ -714,37 +761,28 @@ function install_path() {
continue
fi
local temp_dest_path="${temp_global_dir}/${relative_path}"
local temp_dest_dir=$(dirname "$temp_dest_path")
mkdir -p "$temp_dest_dir"
cp "$file" "$temp_dest_path"
done < <(find "$source_claude_dir" -type f -print0)
# Use bulk merge method (same as Global mode - fast!)
if merge_directory_contents "$temp_global_dir" "$global_claude_dir" "global components" "$backup_folder"; then
# Track global files in manifest using bulk method (fast!)
add_manifest_entry "$manifest_file" "$global_claude_dir" "Directory"
# Track files from TEMP directory
while IFS= read -r -d '' source_file; do
local relative_path="${source_file#$temp_global_dir}"
local target_path="${global_claude_dir}${relative_path}"
add_manifest_entry "$manifest_file" "$target_path" "File"
done < <(find "$temp_global_dir" -type f -print0)
fi
# Clean up temp directory
rm -rf "$temp_global_dir"
# Handle CLAUDE.md file in global .claude directory
local global_claude_md="${global_claude_dir}/CLAUDE.md"
@@ -822,7 +860,7 @@ function install_path() {
create_version_json "$global_claude_dir" "Global"
# Save installation manifest
save_install_manifest "$manifest_file" "$target_dir" "Path"
return 0
}
@@ -911,8 +949,15 @@ function new_install_manifest() {
mkdir -p "$MANIFEST_DIR"
# Distinguish between Global and Path installations with clear naming
local timestamp=$(date +"%Y%m%d-%H%M%S")
local mode_prefix
if [ "$installation_mode" = "Global" ]; then
mode_prefix="manifest-global"
else
mode_prefix="manifest-path"
fi
local manifest_id="${mode_prefix}-${timestamp}"
# Create manifest file path
local manifest_file="${MANIFEST_DIR}/${manifest_id}.json"
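With the mode-prefixed scheme above, the resulting filename makes the installation mode visible at a glance. An illustrative run with a fixed timestamp (the script itself uses `date +"%Y%m%d-%H%M%S"`):

```shell
# Illustrative values for the mode-prefixed manifest naming shown above
installation_mode="Global"
timestamp="20250115-143045"   # fixed here for illustration
if [ "$installation_mode" = "Global" ]; then
    mode_prefix="manifest-global"
else
    mode_prefix="manifest-path"
fi
echo "${mode_prefix}-${timestamp}.json"   # prints manifest-global-20250115-143045.json
```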
@@ -976,7 +1021,8 @@ EOF
function remove_old_manifests_for_path() {
local installation_path="$1"
local installation_mode="$2"
local current_manifest_file="$3" # Optional: exclude this file from deletion
if [ ! -d "$MANIFEST_DIR" ]; then
return 0
@@ -986,7 +1032,8 @@ function remove_old_manifests_for_path() {
local target_path=$(echo "$installation_path" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]')
local removed_count=0
# Find and remove old manifests for the same installation path and mode
# Support both new (manifest-*) and old (install-*) format
while IFS= read -r -d '' file; do
# Skip the current manifest file if specified
if [ -n "$current_manifest_file" ] && [ "$file" = "$current_manifest_file" ]; then
@@ -994,19 +1041,20 @@ function remove_old_manifests_for_path() {
fi
local manifest_path=$(jq -r '.installation_path // ""' "$file" 2>/dev/null)
local manifest_mode=$(jq -r '.installation_mode // "Global"' "$file" 2>/dev/null)
if [ -n "$manifest_path" ]; then
# Normalize manifest path
local normalized_manifest_path=$(echo "$manifest_path" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]')
# Only remove if BOTH path and mode match
if [ "$normalized_manifest_path" = "$target_path" ] && [ "$manifest_mode" = "$installation_mode" ]; then
rm -f "$file"
write_color "Removed old manifest: $(basename "$file")" "$COLOR_INFO"
((removed_count++))
fi
fi
done < <(find "$MANIFEST_DIR" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f -print0 2>/dev/null)
if [ "$removed_count" -gt 0 ]; then
write_color "Removed $removed_count old manifest(s) for installation path: $installation_path" "$COLOR_SUCCESS"
@@ -1018,10 +1066,11 @@ function remove_old_manifests_for_path() {
function save_install_manifest() {
local manifest_file="$1"
local installation_path="$2"
local installation_mode="$3"
# Remove old manifests for the same installation path and mode (excluding current one)
if [ -n "$installation_path" ] && [ -n "$installation_mode" ]; then
remove_old_manifests_for_path "$installation_path" "$installation_mode" "$manifest_file"
fi
if [ -f "$manifest_file" ]; then
@@ -1045,10 +1094,16 @@ function migrate_legacy_manifest() {
# Create manifest directory if it doesn't exist
mkdir -p "$MANIFEST_DIR"
# Read legacy manifest and generate new manifest ID with new naming convention
local mode=$(jq -r '.installation_mode // "Global"' "$legacy_manifest")
local timestamp=$(date +"%Y%m%d-%H%M%S")
local mode_prefix
if [ "$mode" = "Global" ]; then
mode_prefix="manifest-global"
else
mode_prefix="manifest-path"
fi
local manifest_id="${mode_prefix}-${timestamp}-migrated"
# Create new manifest file
local new_manifest="${MANIFEST_DIR}/${manifest_id}.json"
@@ -1072,8 +1127,8 @@ function get_all_install_manifests() {
return
fi
# Check if any manifest files exist (both new and old formats)
local manifest_count=$(find "$MANIFEST_DIR" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f 2>/dev/null | wc -l)
if [ "$manifest_count" -eq 0 ]; then
echo "[]"
@@ -1102,7 +1157,7 @@ function get_all_install_manifests() {
manifest_content=$(echo "$manifest_content" | jq --argjson fc "$files_count" --argjson dc "$dirs_count" '. + {files_count: $fc, directories_count: $dc}')
all_manifests+="$manifest_content"
done < <(find "$MANIFEST_DIR" \( -name "manifest-*.json" -o -name "install-*.json" \) -type f -print0 | sort -z)
all_manifests+="]"
@@ -1128,6 +1183,112 @@ function get_all_install_manifests() {
echo "$latest_manifests"
}
function move_old_installation() {
local installation_path="$1"
local installation_mode="$2"
write_color "Checking for previous installation..." "$COLOR_INFO"
# Find existing manifest for this installation path and mode
local manifests_json=$(get_all_install_manifests)
local target_path=$(echo "$installation_path" | sed 's:/*$::' | tr '[:upper:]' '[:lower:]')
local old_manifest=$(echo "$manifests_json" | jq --arg path "$target_path" --arg mode "$installation_mode" '
.[] | select(
(.installation_path | ascii_downcase | sub("/+$"; "")) == $path and
.installation_mode == $mode
)
')
if [ -z "$old_manifest" ] || [ "$old_manifest" = "null" ]; then
write_color "No previous $installation_mode installation found at this path" "$COLOR_INFO"
return 0
fi
local install_date=$(echo "$old_manifest" | jq -r '.installation_date')
local files_count=$(echo "$old_manifest" | jq -r '.files_count')
local dirs_count=$(echo "$old_manifest" | jq -r '.directories_count')
write_color "Found previous installation from $install_date" "$COLOR_INFO"
write_color "Files: $files_count, Directories: $dirs_count" "$COLOR_INFO"
# Create backup folder
local timestamp=$(date +"%Y%m%d-%H%M%S")
local backup_dir="${installation_path}/claude-backup-old-${timestamp}"
mkdir -p "$backup_dir"
write_color "Created backup folder: $backup_dir" "$COLOR_SUCCESS"
local moved_files=0
local removed_dirs=0
local failed_items=()
# Move files first (from manifest)
write_color "Moving old installation files to backup..." "$COLOR_INFO"
while IFS= read -r file_path; do
if [ -z "$file_path" ] || [ "$file_path" = "null" ]; then
continue
fi
if [ -f "$file_path" ]; then
# Calculate relative path from installation root
local relative_path="${file_path#$installation_path}"
relative_path="${relative_path#/}"
if [ -z "$relative_path" ]; then
relative_path=$(basename "$file_path")
fi
local backup_dest_dir=$(dirname "${backup_dir}/${relative_path}")
mkdir -p "$backup_dest_dir"
if mv "$file_path" "${backup_dest_dir}/" 2>/dev/null; then
((moved_files++))
else
write_color " WARNING: Failed to move file: $file_path" "$COLOR_WARNING"
failed_items+=("$file_path")
fi
fi
done <<< "$(echo "$old_manifest" | jq -r '.files[].path')"
# Remove empty directories (in reverse order to handle nested dirs)
write_color "Cleaning up empty directories..." "$COLOR_INFO"
while IFS= read -r dir_path; do
if [ -z "$dir_path" ] || [ "$dir_path" = "null" ]; then
continue
fi
if [ -d "$dir_path" ]; then
# Check if directory is empty
if [ -z "$(ls -A "$dir_path" 2>/dev/null)" ]; then
if rmdir "$dir_path" 2>/dev/null; then
write_color " Removed empty directory: $dir_path" "$COLOR_INFO"
((removed_dirs++))
fi
else
write_color " Directory not empty (preserved): $dir_path" "$COLOR_INFO"
fi
fi
done <<< "$(echo "$old_manifest" | jq -r '.directories[].path' | awk '{ print length, $0 }' | sort -rn | cut -d' ' -f2-)"
# Note: Old manifest will be automatically removed by save_install_manifest
# via remove_old_manifests_for_path to ensure robust cleanup
echo ""
write_color "Old installation cleanup summary:" "$COLOR_INFO"
echo " Files moved: $moved_files"
echo " Directories removed: $removed_dirs"
echo " Backup location: $backup_dir"
if [ ${#failed_items[@]} -gt 0 ]; then
write_color " Failed items: ${#failed_items[@]}" "$COLOR_WARNING"
fi
echo ""
# Cleanup complete; backup location was reported in the summary above
return 0
}
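The directory cleanup above sorts paths deepest-first before removal, so nested directories are emptied before their parents. The `awk | sort -rn | cut` idiom can be isolated in a self-contained sketch (the example paths are invented):

```shell
# Sketch of the deepest-first ordering used for directory cleanup:
# prefix each path with its length, sort numerically in reverse, strip the prefix
dirs=$(printf '%s\n' "/a" "/a/b/c" "/a/b")
ordered=$(printf '%s\n' "$dirs" | awk '{ print length, $0 }' | sort -rn | cut -d' ' -f2-)
echo "$ordered"
```

Longer paths are (for nested directories) deeper paths, so `rmdir` sees `/a/b/c` before `/a/b` before `/a` and never fails on a still-populated parent.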
# ============================================================================
# UNINSTALLATION FUNCTIONS
# ============================================================================
@@ -1173,26 +1334,50 @@ function uninstall_claude_workflow() {
if [ "$manifests_count" -eq 1 ]; then
selected_manifest=$(echo "$manifests_json" | jq '.[0]')
write_color "Only one installation found, will uninstall:" "$COLOR_INFO"
# Read version from version.json
local install_path=$(echo "$selected_manifest" | jq -r '.installation_path // ""')
local install_mode=$(echo "$selected_manifest" | jq -r '.installation_mode // "Unknown"')
local version_str="Version Unknown"
# Determine version.json path
local version_json_path="${install_path}/.claude/version.json"
if [ -f "$version_json_path" ]; then
local ver=$(jq -r '.version // ""' "$version_json_path" 2>/dev/null)
if [ -n "$ver" ] && [ "$ver" != "unknown" ]; then
version_str="v$ver"
fi
fi
write_color "Found installation: $version_str - $install_path" "$COLOR_INFO"
else
# Multiple manifests - let user choose (simplified: only version and path)
local options=()
for i in $(seq 0 $((manifests_count - 1))); do
local m=$(echo "$manifests_json" | jq ".[$i]")
local path_info=$(echo "$m" | jq -r '.installation_path // ""')
local install_mode=$(echo "$m" | jq -r '.installation_mode // "Unknown"')
local version_str="Version Unknown"
if [ -n "$path_info" ]; then
# Read version from version.json
local version_json_path="${path_info}/.claude/version.json"
if [ -f "$version_json_path" ]; then
local ver=$(jq -r '.version // ""' "$version_json_path" 2>/dev/null)
if [ -n "$ver" ] && [ "$ver" != "unknown" ]; then
version_str="v$ver"
fi
fi
fi
local path_str="Path Unknown"
if [ -n "$path_info" ]; then
path_str="$path_info"
fi
options+=("$((i + 1)). $version_str - $path_str")
done
options+=("Cancel - Don't uninstall anything")
@@ -1210,16 +1395,24 @@ function uninstall_claude_workflow() {
selected_manifest=$(echo "$manifests_json" | jq ".[$selected_index]")
fi
# Display selected installation info (simplified: only version and path)
local final_path=$(echo "$selected_manifest" | jq -r '.installation_path // ""')
local final_mode=$(echo "$selected_manifest" | jq -r '.installation_mode // "Unknown"')
local final_version="Version Unknown"
# Read version from version.json
local final_version_path="${final_path}/.claude/version.json"
if [ -f "$final_version_path" ]; then
local ver=$(jq -r '.version // ""' "$final_version_path" 2>/dev/null)
if [ -n "$ver" ] && [ "$ver" != "unknown" ]; then
final_version="v$ver"
fi
fi
echo ""
write_color "Uninstallation Target:" "$COLOR_INFO"
echo " $final_version"
echo " Path: $final_path"
echo ""
# Confirm uninstallation
@@ -1229,55 +1422,64 @@ function uninstall_claude_workflow() {
fi
local removed_files=0
local failed_items=()
local skipped_files=0
# Check if this is a Path mode uninstallation and if Global installation exists
local is_path_mode=false
local has_global_installation=false
if [ "$final_mode" = "Path" ]; then
is_path_mode=true
# Check if any Global installation manifest exists
if [ -d "$MANIFEST_DIR" ]; then
local global_manifest_count=$(find "$MANIFEST_DIR" -name "manifest-global-*.json" -type f 2>/dev/null | wc -l)
if [ "$global_manifest_count" -gt 0 ]; then
has_global_installation=true
write_color "Found Global installation, global files will be preserved" "$COLOR_WARNING"
echo ""
fi
fi
fi
# Only remove files listed in manifest - do NOT remove directories
write_color "Removing installed files..." "$COLOR_INFO"
local files_array=$(echo "$selected_manifest" | jq -c '.files[]' 2>/dev/null)
if [ -n "$files_array" ]; then
while IFS= read -r file_entry; do
local file_path=$(echo "$file_entry" | jq -r '.path')
# For Path mode uninstallation, skip global files if Global installation exists
if [ "$is_path_mode" = true ] && [ "$has_global_installation" = true ]; then
local global_claude_dir="${HOME}/.claude"
# Skip files under global .claude directory
if [[ "$file_path" == "$global_claude_dir"* ]]; then
((skipped_files++))
continue
fi
fi
if [ -f "$file_path" ]; then
if rm -f "$file_path" 2>/dev/null; then
((removed_files++))
else
write_color " WARNING: Failed to remove: $file_path" "$COLOR_WARNING"
failed_items+=("$file_path")
fi
fi
done <<< "$files_array"
fi
# Display removal summary
if [ "$skipped_files" -gt 0 ]; then
write_color "Removed $removed_files files, skipped $skipped_files global files" "$COLOR_SUCCESS"
else
write_color "Removed $removed_files files" "$COLOR_SUCCESS"
fi
# Remove manifest file
local manifest_file=$(echo "$selected_manifest" | jq -r '.manifest_file')
@@ -1295,7 +1497,12 @@ function uninstall_claude_workflow() {
write_color "========================================" "$COLOR_INFO"
write_color "Uninstallation Summary:" "$COLOR_INFO"
echo " Files removed: $removed_files"
if [ "$skipped_files" -gt 0 ]; then
echo " Files skipped (global files preserved): $skipped_files"
echo ""
write_color "Note: $skipped_files global files were preserved due to existing Global installation" "$COLOR_INFO"
fi
if [ ${#failed_items[@]} -gt 0 ]; then
echo ""
@@ -1307,7 +1514,11 @@ function uninstall_claude_workflow() {
echo ""
if [ ${#failed_items[@]} -eq 0 ]; then
if [ "$skipped_files" -gt 0 ]; then
write_color "✓ Uninstallation complete! Removed $removed_files files, preserved $skipped_files global files." "$COLOR_SUCCESS"
else
write_color "✓ Claude Code Workflow has been successfully uninstalled!" "$COLOR_SUCCESS"
fi
else
write_color "Uninstallation completed with warnings." "$COLOR_WARNING"
write_color "Please manually remove the failed items listed above." "$COLOR_INFO"


@@ -14,9 +14,10 @@ graph TB
end
subgraph "Session Management"
SESSION["workflow-session.json"]
ACTIVE_DIR[".workflow/active/"]
ARCHIVE_DIR[".workflow/archives/"]
end
subgraph "Task System"
@@ -124,9 +125,7 @@ stateDiagram-v2
CreateStructure --> CreateJSON: Create workflow-session.json
CreateJSON --> CreatePlan: Create IMPL_PLAN.md
CreatePlan --> CreateTasks: Create .task/ directory
CreateTasks --> Active: Session Ready in .workflow/active/
Active --> Paused: Switch to Another Session
Active --> Working: Execute Tasks