Compare commits


2 Commits

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| copilot-swe-agent[bot] | de3d15c099 | Fix refresh error: Replace const reassignment with property deletion (Co-authored-by: catlog22 <28037070+catlog22@users.noreply.github.com>) | 2025-12-08 06:29:27 +00:00 |
| copilot-swe-agent[bot] | 9232947fb0 | Initial plan | 2025-12-08 06:26:15 +00:00 |
614 changed files with 26350 additions and 197885 deletions

View File

@@ -1,33 +0,0 @@
# Claude Instructions
- **CLI Tools Usage**: @~/.claude/workflows/cli-tools-usage.md
- **Coding Philosophy**: @~/.claude/workflows/coding-philosophy.md
- **Context Requirements**: @~/.claude/workflows/context-tools-ace.md
- **File Modification**: @~/.claude/workflows/file-modification.md
- **CLI Endpoints Config**: @.claude/cli-tools.json
## CLI Endpoints
**Strictly follow the @.claude/cli-tools.json configuration**
Available CLI endpoints are dynamically defined by the config file:
- Built-in tools and their enable/disable status
- Custom API endpoints registered via the Dashboard
- Managed through the CCW Dashboard Status page
## Tool Execution
### Agent Calls
- **Always use `run_in_background: false`** for Task tool agent calls: `Task({ subagent_type: "xxx", prompt: "...", run_in_background: false })` to ensure synchronous execution and immediate result visibility
- **TaskOutput usage**: Only use `TaskOutput({ task_id: "xxx", block: false })` + sleep loop to poll completion status. NEVER read intermediate output during agent/CLI execution - wait for final result only
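A minimal sketch of that poll-and-sleep loop (the `status` field and the 10-second interval are illustrative assumptions, not a verified API):
```javascript
// Hypothetical polling loop: check status, sleep, repeat until done.
let result = TaskOutput({ task_id: taskId, block: false })
while (result.status === "running") {
  Bash({ command: "sleep 10" })                          // wait between polls
  result = TaskOutput({ task_id: taskId, block: false })
}
// Read only the final result; never inspect intermediate output.
```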
### CLI Tool Calls (ccw cli)
- **Always use `run_in_background: true`** for Bash tool when calling ccw cli:
```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
```
- **After CLI call**: If no other tasks, stop immediately - let CLI execute in background, do NOT poll with TaskOutput
## Code Diagnostics
- **Prefer `mcp__ide__getDiagnostics`** for code error checking over shell-based TypeScript compilation

View File

@@ -1,4 +0,0 @@
{
"interval": "manual",
"tool": "gemini"
}

View File

@@ -203,13 +203,7 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
"id": "IMPL-N",
"title": "Descriptive task name",
"status": "pending|active|completed|blocked",
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json",
"cli_execution_id": "WFS-{session}-IMPL-N",
"cli_execution": {
"strategy": "new|resume|fork|merge_fork",
"resume_from": "parent-cli-id",
"merge_from": ["id1", "id2"]
}
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
}
```
@@ -222,50 +216,6 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
- `title`: Descriptive task name summarizing the work
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
- `cli_execution_id`: Unique CLI conversation ID (format: `{session_id}-{task_id}`)
- `cli_execution`: CLI execution strategy based on task dependencies
- `strategy`: Execution pattern (`new`, `resume`, `fork`, `merge_fork`)
- `resume_from`: Parent task's cli_execution_id (for resume/fork)
- `merge_from`: Array of parent cli_execution_ids (for merge_fork)
**CLI Execution Strategy Rules** (MANDATORY - apply to all tasks):
| Dependency Pattern | Strategy | CLI Command Pattern |
|--------------------|----------|---------------------|
| No `depends_on` | `new` | `--id {cli_execution_id}` |
| 1 parent, parent has 1 child | `resume` | `--resume {resume_from}` |
| 1 parent, parent has N children | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| N parents | `merge_fork` | `--resume {merge_from.join(',')} --id {cli_execution_id}` |
**Strategy Selection Algorithm**:
```javascript
function computeCliStrategy(task, allTasks) {
const deps = task.context?.depends_on || []
const childCount = allTasks.filter(t =>
t.context?.depends_on?.includes(task.id)
).length
if (deps.length === 0) {
return { strategy: "new" }
} else if (deps.length === 1) {
const parentTask = allTasks.find(t => t.id === deps[0])
const parentChildCount = allTasks.filter(t =>
t.context?.depends_on?.includes(deps[0])
).length
if (parentChildCount === 1) {
return { strategy: "resume", resume_from: parentTask.cli_execution_id }
} else {
return { strategy: "fork", resume_from: parentTask.cli_execution_id }
}
} else {
const mergeFrom = deps.map(depId =>
allTasks.find(t => t.id === depId).cli_execution_id
)
return { strategy: "merge_fork", merge_from: mergeFrom }
}
}
```
#### Meta Object
@@ -275,13 +225,7 @@ function computeCliStrategy(task, allTasks) {
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
"execution_group": "parallel-abc123|null",
"module": "frontend|backend|shared|null",
"execution_config": {
"method": "agent|hybrid|cli",
"cli_tool": "codex|gemini|qwen|auto",
"enable_resume": true,
"previous_cli_id": "string|null"
}
"module": "frontend|backend|shared|null"
}
}
```
@@ -291,11 +235,6 @@ function computeCliStrategy(task, allTasks) {
- `agent`: Assigned agent for execution
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
- `execution_config`: CLI execution settings (from userConfig in task-generate-agent)
- `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
- `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, or `auto`
- `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
- `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
@@ -453,7 +392,7 @@ function computeCliStrategy(task, allTasks) {
// Pattern: Project structure analysis
{
"step": "analyze_project_architecture",
"commands": ["bash(ccw tool exec get_modules_by_depth '{}')"],
"commands": ["bash(~/.claude/scripts/get_modules_by_depth.sh)"],
"output_to": "project_architecture"
},
@@ -470,14 +409,14 @@ function computeCliStrategy(task, allTasks) {
// Pattern: Gemini CLI deep analysis
{
"step": "gemini_analyze_[aspect]",
"command": "ccw cli -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY' --tool gemini --mode analysis --cd [path]",
"command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
"output_to": "analysis_result"
},
// Pattern: Qwen CLI analysis (fallback/alternative)
{
"step": "qwen_analyze_[aspect]",
"command": "ccw cli -p '[similar to gemini pattern]' --tool qwen --mode analysis --cd [path]",
"command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
"output_to": "analysis_result"
},
@@ -518,7 +457,7 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
4. **Command Composition Patterns**:
- **Single command**: `bash([simple_search])`
- **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
- **CLI analysis**: `ccw cli -p '[prompt]' --tool gemini --mode analysis --cd [path]`
- **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
- **MCP integration**: `mcp__[tool]__[function]([params])`
**Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
@@ -540,12 +479,11 @@ The `implementation_approach` supports **two execution modes** based on the pres
- Specified command executes the step directly
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
- **Required fields**: Same as default mode **PLUS** `command`, `resume_from` (optional)
- **Command patterns** (with resume support):
- `ccw cli -p '[prompt]' --tool codex --mode write --cd [path]`
- `ccw cli -p '[prompt]' --resume ${previousCliId} --tool codex --mode write` (resume from previous)
- `ccw cli -p '[prompt]' --tool gemini --mode write --cd [path]` (write mode)
- **Resume mechanism**: When step depends on previous CLI execution, include `--resume` with previous execution ID
- **Required fields**: Same as default mode **PLUS** `command`
- **Command patterns**:
- `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
- `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
- `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
**Semantic CLI Tool Selection**:
@@ -562,12 +500,12 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
**Task-Based Selection** (when no explicit user preference):
- **Implementation/coding**: Codex preferred for autonomous development
- **Analysis/exploration**: Gemini preferred for large context analysis
- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
- **Documentation**: Gemini/Qwen with write mode (`--approval-mode yolo`)
- **Testing**: Depends on complexity - simple=agent, complex=Codex
**Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
- Agent orchestrates task execution
- When step has `command` field, agent executes it via CCW CLI
- When step has `command` field, agent executes it via Bash
- When step has no `command` field, agent implements directly
- This maintains agent control while leveraging CLI tool power
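A minimal sketch of this per-step dispatch, assuming a step object with an optional `command` field (`executeViaBash` and `implementDirectly` are hypothetical helpers):
```javascript
// Sketch of the dispatch rule: CLI command if present, agent otherwise.
for (const step of task.flow_control.implementation_approach) {
  if (step.command) {
    executeViaBash(step.command)   // e.g. bash(cd [path] && gemini -p '...')
  } else {
    implementDirectly(step)        // agent uses its own editing tools
  }
}
```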
@@ -621,26 +559,11 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
"step": 3,
"title": "Execute implementation using CLI tool",
"description": "Use Codex/Gemini for complex autonomous execution",
"command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
"command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
"modification_points": ["[Same as default mode]"],
"logic_flow": ["[Same as default mode]"],
"depends_on": [1, 2],
"output": "cli_implementation",
"cli_output_id": "step3_cli_id" // Store execution ID for resume
},
// === CLI MODE with Resume: Continue from previous CLI execution ===
{
"step": 4,
"title": "Continue implementation with context",
"description": "Resume from previous step with accumulated context",
"command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
"resume_from": "step3_cli_id", // Reference previous step's CLI ID
"modification_points": ["[Continue from step 3]"],
"logic_flow": ["[Build on previous output]"],
"depends_on": [3],
"output": "continued_implementation",
"cli_output_id": "step4_cli_id"
"output": "cli_implementation"
}
]
```
@@ -836,8 +759,6 @@ Use `analysis_results.complexity` or task count to determine structure:
- Use provided context package: Extract all information from structured context
- Respect memory-first rule: Use provided content (already loaded from memory/file)
- Follow 7-field schema: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Assign CLI execution IDs**: Every task MUST have `cli_execution_id` (format: `{session_id}-{task_id}`)
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded

View File

@@ -67,7 +67,7 @@ Score = 0
**1. Project Structure**:
```bash
ccw tool exec get_modules_by_depth '{}'
~/.claude/scripts/get_modules_by_depth.sh
```
**2. Content Search**:
@@ -100,7 +100,7 @@ CONTEXT: @**/*
# Specific patterns
CONTEXT: @CLAUDE.md @src/**/* @*.ts
# Cross-directory (requires --includeDirs)
# Cross-directory (requires --include-directories)
CONTEXT: @**/* @../shared/**/* @../types/**/*
```
@@ -134,7 +134,7 @@ RULES: $(cat {selected_template}) | {constraints}
```
analyze|plan → gemini (qwen fallback) + mode=analysis
execute (simple|medium) → gemini (qwen fallback) + mode=write
execute (complex) → codex + mode=write
execute (complex) → codex + mode=auto
discuss → multi (gemini + codex parallel)
```
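The routing above as a sketch (the function and return shape are illustrative; the tool/mode strings mirror the table):
```javascript
// Maps user intent + complexity to the documented CLI tool and mode.
function routeCli(intent, complexity) {
  if (intent === "analyze" || intent === "plan")
    return { tool: "gemini", fallback: "qwen", mode: "analysis" }
  if (intent === "execute")
    return complexity === "complex"
      ? { tool: "codex", mode: "auto" }
      : { tool: "gemini", fallback: "qwen", mode: "write" }
  if (intent === "discuss")
    return { tools: ["gemini", "codex"], parallel: true }
}
```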
@@ -144,40 +144,43 @@ discuss → multi (gemini + codex parallel)
- Codex: `gpt-5` (default), `gpt5-codex` (large context)
- **Position**: `-m` after prompt, before flags
### Command Templates (CCW Unified CLI)
### Command Templates
**Gemini/Qwen (Analysis)**:
```bash
ccw cli -p "
cd {dir} && gemini -p "
PURPOSE: {goal}
TASK: {task}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {output}
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
" --tool gemini --mode analysis --cd {dir}
" -m gemini-2.5-pro
# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
# Qwen fallback: Replace 'gemini' with 'qwen'
```
**Gemini/Qwen (Write)**:
```bash
ccw cli -p "..." --tool gemini --mode write --cd {dir}
cd {dir} && gemini -p "..." --approval-mode yolo
```
**Codex (Write)**:
**Codex (Auto)**:
```bash
ccw cli -p "..." --tool codex --mode write --cd {dir}
codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access
# Resume: Add 'resume --last' after prompt
codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
```
**Cross-Directory** (Gemini/Qwen):
```bash
ccw cli -p "CONTEXT: @**/* @../shared/**/*" --tool gemini --mode analysis --cd src/auth --includeDirs ../shared
cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared
```
**Directory Scope**:
- `@` only references current directory + subdirectories
- External dirs: MUST use `--includeDirs` + explicit CONTEXT reference
- External dirs: MUST use `--include-directories` + explicit CONTEXT reference
**Timeout**: Simple 20min | Medium 40min | Complex 60min (Codex ×1.5)

View File

@@ -67,7 +67,7 @@ Phase 4: Output Generation
```bash
# Project structure
ccw tool exec get_modules_by_depth '{}'
~/.claude/scripts/get_modules_by_depth.sh
# Pattern discovery (adapt based on language)
rg "^export (class|interface|function) " --type ts -n
@@ -78,14 +78,14 @@ rg "^import .* from " -n | head -30
### Gemini Semantic Analysis (deep-scan, dependency-map)
```bash
ccw cli -p "
cd {dir} && gemini -p "
PURPOSE: {from prompt}
TASK: {from prompt}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {from prompt}
RULES: {from prompt, if template specified} | analysis=READ-ONLY
" --tool gemini --mode analysis --cd {dir}
"
```
**Fallback Chain**: Gemini → Qwen → Codex → Bash-only
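A minimal sketch of this fallback chain, assuming a hypothetical `runCli` wrapper that shells out to each tool and throws on failure:
```javascript
// Try each CLI in documented order; degrade to bash-only analysis last.
function analyzeWithFallback(prompt, dir) {
  for (const tool of ["gemini", "qwen", "codex"]) {
    try {
      return runCli(tool, prompt, dir)
    } catch (err) { /* tool unavailable or failed - try the next one */ }
  }
  return bashOnlyAnalysis(dir)   // last resort: rg/find based scan
}
```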

View File

@@ -1,117 +1,140 @@
---
name: cli-lite-planning-agent
description: |
Generic planning agent for lite-plan and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans. Used by lite-plan workflow for Medium/High complexity tasks.
Core capabilities:
- Schema-driven output (plan-json-schema or fix-plan-json-schema)
- Task decomposition with dependency analysis
- CLI execution ID assignment for fork/merge strategies
- Multi-angle context integration (explorations or diagnoses)
- Task decomposition (1-10 tasks with IDs: T1, T2...)
- Dependency analysis (depends_on references)
- Flow control (parallel/sequential phases)
- Multi-angle exploration context integration
color: cyan
---
You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.
You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate planObject for downstream execution.
## Output Schema
**Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`
**planObject Structure**:
```javascript
{
summary: string, // 2-3 sentence overview
approach: string, // High-level strategy
tasks: [TaskObject], // 1-10 structured tasks
flow_control: { // Execution phases
execution_order: [{ phase, tasks, type }],
exit_conditions: { success, failure }
},
focus_paths: string[], // Affected files (aggregated)
estimated_time: string,
recommended_execution: "Agent" | "Codex",
complexity: "Low" | "Medium" | "High",
_metadata: { timestamp, source, planning_mode, exploration_angles, duration_seconds }
}
```
**TaskObject Structure**:
```javascript
{
id: string, // T1, T2, T3...
title: string, // Action verb + target
file: string, // Target file path
action: string, // Create|Update|Implement|Refactor|Add|Delete|Configure|Test|Fix
description: string, // What to implement (1-2 sentences)
modification_points: [{ // Precise changes (optional)
file: string,
target: string, // function:lineRange
change: string
}],
implementation: string[], // 2-7 actionable steps
reference: { // Pattern guidance (optional)
pattern: string,
files: string[],
examples: string
},
acceptance: string[], // 1-4 quantified criteria
depends_on: string[] // Task IDs: ["T1", "T2"]
}
```
## Input Context
```javascript
{
// Required
task_description: string, // Task or bug description
schema_path: string, // Schema reference path (plan-json-schema or fix-plan-json-schema)
session: { id, folder, artifacts },
// Context (one of these based on workflow)
explorationsContext: { [angle]: ExplorationResult } | null, // From lite-plan
diagnosesContext: { [angle]: DiagnosisResult } | null, // From lite-fix
contextAngles: string[], // Exploration or diagnosis angles
// Optional
task_description: string,
explorationsContext: { [angle]: ExplorationResult } | null,
explorationAngles: string[],
clarificationContext: { [question]: answer } | null,
complexity: "Low" | "Medium" | "High", // For lite-plan
severity: "Low" | "Medium" | "High" | "Critical", // For lite-fix
cli_config: { tool, template, timeout, fallback }
complexity: "Low" | "Medium" | "High",
cli_config: { tool, template, timeout, fallback },
session: { id, folder, artifacts }
}
```
## Schema-Driven Output
**CRITICAL**: Read the schema reference first to determine output structure:
- `plan-json-schema.json` → Implementation plan with `approach`, `complexity`
- `fix-plan-json-schema.json` → Fix plan with `root_cause`, `severity`, `risk_level`
```javascript
// Step 1: Always read schema first
const schema = Bash(`cat ${schema_path}`)
// Step 2: Generate plan conforming to schema
const planObject = generatePlanFromSchema(schema, context)
```
## Execution Flow
```
Phase 1: Schema & Context Loading
├─ Read schema reference (plan-json-schema or fix-plan-json-schema)
├─ Aggregate multi-angle context (explorations or diagnoses)
└─ Determine output structure from schema
Phase 2: CLI Execution
Phase 1: CLI Execution
├─ Aggregate multi-angle exploration findings
├─ Construct CLI command with planning template
├─ Execute Gemini (fallback: Qwen → degraded mode)
└─ Timeout: 60 minutes
Phase 3: Parsing & Enhancement
├─ Parse CLI output sections
Phase 2: Parsing & Enhancement
├─ Parse CLI output sections (Summary, Approach, Tasks, Flow Control)
├─ Validate and enhance task objects
└─ Infer missing fields from context
└─ Infer missing fields from exploration context
Phase 4: planObject Generation
├─ Build planObject conforming to schema
├─ Assign CLI execution IDs and strategies
├─ Generate flow_control from depends_on
└─ Return to orchestrator
Phase 3: planObject Generation
├─ Build planObject from parsed results
├─ Generate flow_control from depends_on if not provided
├─ Aggregate focus_paths from all tasks
└─ Return to orchestrator (lite-plan)
```
## CLI Command Template
```bash
ccw cli -p "
PURPOSE: Generate plan for {task_description}
cd {project_root} && {cli_tool} -p "
PURPOSE: Generate implementation plan for {complexity} task
TASK:
• Analyze task/bug description and context
• Break down into tasks following schema structure
• Identify dependencies and execution phases
• Analyze: {task_description}
• Break down into 1-10 tasks with: id, title, file, action, description, modification_points, implementation, reference, acceptance, depends_on
• Identify parallel vs sequential execution phases
MODE: analysis
CONTEXT: @**/* | Memory: {context_summary}
CONTEXT: @**/* | Memory: {exploration_summary}
EXPECTED:
## Summary
## Implementation Summary
[overview]
## High-Level Approach
[strategy]
## Task Breakdown
### T1: [Title] (or FIX1 for fix-plan)
**Scope**: [module/feature path]
### T1: [Title]
**File**: [path]
**Action**: [type]
**Description**: [what]
**Modification Points**: - [file]: [target] - [change]
**Implementation**: 1. [step]
**Acceptance/Verification**: - [quantified criterion]
**Reference**: - Pattern: [name] - Files: [paths] - Examples: [guidance]
**Acceptance**: - [quantified criterion]
**Depends On**: []
## Flow Control
**Execution Order**: - Phase parallel-1: [T1, T2] (independent)
**Exit Conditions**: - Success: [condition] - Failure: [condition]
## Time Estimate
**Total**: [time]
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
- Follow schema structure from {schema_path}
- Acceptance/verification must be quantified
- Dependencies use task IDs
- Acceptance must be quantified (counts, method names, metrics)
- Dependencies use task IDs (T1, T2)
- analysis=READ-ONLY
" --tool {cli_tool} --mode analysis --cd {project_root}
"
```
## Core Functions
@@ -256,51 +279,6 @@ function inferFile(task, ctx) {
}
```
### CLI Execution ID Assignment (MANDATORY)
```javascript
function assignCliExecutionIds(tasks, sessionId) {
const taskMap = new Map(tasks.map(t => [t.id, t]))
const childCount = new Map()
// Count children for each task
tasks.forEach(task => {
(task.depends_on || []).forEach(depId => {
childCount.set(depId, (childCount.get(depId) || 0) + 1)
})
})
tasks.forEach(task => {
task.cli_execution_id = `${sessionId}-${task.id}`
const deps = task.depends_on || []
if (deps.length === 0) {
task.cli_execution = { strategy: "new" }
} else if (deps.length === 1) {
const parent = taskMap.get(deps[0])
const parentChildCount = childCount.get(deps[0]) || 0
task.cli_execution = parentChildCount === 1
? { strategy: "resume", resume_from: parent.cli_execution_id }
: { strategy: "fork", resume_from: parent.cli_execution_id }
} else {
task.cli_execution = {
strategy: "merge_fork",
merge_from: deps.map(depId => taskMap.get(depId).cli_execution_id)
}
}
})
return tasks
}
```
**Strategy Rules**:
| depends_on | Parent Children | Strategy | CLI Command |
|------------|-----------------|----------|-------------|
| [] | - | `new` | `--id {cli_execution_id}` |
| [T1] | 1 | `resume` | `--resume {resume_from}` |
| [T1] | >1 | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| [T1,T2] | - | `merge_fork` | `--resume {ids.join(',')} --id {cli_execution_id}` |
### Flow Control Inference
```javascript
@@ -325,44 +303,21 @@ function inferFlowControl(tasks) {
### planObject Generation
```javascript
function generatePlanObject(parsed, enrichedContext, input, schemaType) {
function generatePlanObject(parsed, enrichedContext, input) {
const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext)
assignCliExecutionIds(tasks, input.session.id) // MANDATORY: Assign CLI execution IDs
const flow_control = parsed.flow_control?.execution_order?.length > 0 ? parsed.flow_control : inferFlowControl(tasks)
const focus_paths = [...new Set(tasks.flatMap(t => [t.file || t.scope, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
const focus_paths = [...new Set(tasks.flatMap(t => [t.file, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
// Base fields (common to both schemas)
const base = {
summary: parsed.summary || `Plan for: ${input.task_description.slice(0, 100)}`,
return {
summary: parsed.summary || `Implementation plan for: ${input.task_description.slice(0, 100)}`,
approach: parsed.approach || "Step-by-step implementation",
tasks,
flow_control,
focus_paths,
estimated_time: parsed.time_estimate || `${tasks.length * 30} minutes`,
recommended_execution: (input.complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
_metadata: {
timestamp: new Date().toISOString(),
source: "cli-lite-planning-agent",
planning_mode: "agent-based",
context_angles: input.contextAngles || [],
duration_seconds: Math.round((Date.now() - startTime) / 1000)
}
}
// Schema-specific fields
if (schemaType === 'fix-plan') {
return {
...base,
root_cause: parsed.root_cause || "Root cause from diagnosis",
strategy: parsed.strategy || "comprehensive_fix",
severity: input.severity || "Medium",
risk_level: parsed.risk_level || "medium"
}
} else {
return {
...base,
approach: parsed.approach || "Step-by-step implementation",
complexity: input.complexity || "Medium"
}
recommended_execution: input.complexity === "Low" ? "Agent" : "Codex",
complexity: input.complexity,
_metadata: { timestamp: new Date().toISOString(), source: "cli-lite-planning-agent", planning_mode: "agent-based", exploration_angles: input.explorationAngles || [], duration_seconds: Math.round((Date.now() - startTime) / 1000) }
}
}
```
@@ -428,12 +383,9 @@ function validateTask(task) {
## Key Reminders
**ALWAYS**:
- **Read schema first** to determine output structure
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
- Generate task IDs (T1, T2, T3...)
- Include depends_on (even if empty [])
- **Assign cli_execution_id** (`{sessionId}-{taskId}`)
- **Compute cli_execution strategy** based on depends_on
- Quantify acceptance/verification criteria
- Quantify acceptance criteria
- Generate flow_control from dependencies
- Handle CLI errors with fallback chain
@@ -442,5 +394,3 @@ function validateTask(task) {
- Use vague acceptance criteria
- Create circular dependencies
- Skip task validation
- **Skip CLI execution ID assignment**
- **Ignore schema structure**

View File

@@ -107,7 +107,7 @@ Phase 3: Task JSON Generation
**Template-Based Command Construction with Test Layer Awareness**:
```bash
ccw cli -p "
cd {project_root} && {cli_tool} -p "
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
TASK:
• Review {failed_tests.length} {test_type} test failures: [{test_names}]
@@ -134,7 +134,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
- Consider previous iteration failures
- Validate fix doesn't introduce new vulnerabilities
- analysis=READ-ONLY
" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
" {timeout_flag}
```
**Layer-Specific Guidance Injection**:
@@ -527,9 +527,9 @@ See: `.process/iteration-{iteration}-cli-output.txt`
1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
2. **Execute CLI**:
```bash
ccw cli -p "PURPOSE: Analyze integration test failure...
gemini -p "PURPOSE: Analyze integration test failure...
TASK: Examine component interactions, data flow, interface contracts...
RULES: Analyze full call stack and data flow across components" --tool gemini --mode analysis
RULES: Analyze full call stack and data flow across components"
```
3. **Parse Output**: Extract RCA, fix recommendation, and verification recommendation sections
4. **Generate Task JSON** (IMPL-fix-1.json):

View File

@@ -34,11 +34,10 @@ You are a code execution specialist focused on implementing high-quality, produc
- **context-package.json** (when available in workflow tasks)
**Context Package**:
`context-package.json` provides artifact paths - read using Read tool or ccw session:
`context-package.json` provides artifact paths - extract dynamically using `jq`:
```bash
# Get context package content from session using Read tool
Read(.workflow/active/${SESSION_ID}/.process/context-package.json)
# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
# Get role analysis paths from context package
jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
```
**Pre-Analysis: Smart Tech Stack Loading**:
@@ -122,9 +121,9 @@ When task JSON contains `flow_control.implementation_approach` array:
- If `command` field present, execute it; otherwise use agent capabilities
**CLI Command Execution (CLI Execute Mode)**:
When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session
When step contains `command` field with Codex CLI, execute via Bash tool. For Codex resume:
- First task (`depends_on: []`): `codex -C [path] --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
- Subsequent tasks (has `depends_on`): Add `resume --last` flag to maintain session context
**Test-Driven Development**:
- Write tests first (red → green → refactor)

View File

@@ -109,7 +109,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
3. **load_session_metadata**
- Action: Load session metadata
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
- Output: session_metadata
```
@@ -155,7 +155,7 @@ When called, you receive:
- **User Context**: Specific requirements, constraints, and expectations from user discussion
- **Output Location**: Directory path for generated analysis files
- **Role Hint** (optional): Suggested role or role selection guidance
- **context-package.json** (CCW Workflow): Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- **ASSIGNED_ROLE** (optional): Specific role assignment
- **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions

View File

@@ -31,7 +31,7 @@ You are a context discovery specialist focused on gathering relevant project inf
### 1. Reference Documentation (Project Standards)
**Tools**:
- `Read()` - Load CLAUDE.md, README.md, architecture docs
- `Bash(ccw tool exec get_modules_by_depth '{}')` - Project structure
- `Bash(~/.claude/scripts/get_modules_by_depth.sh)` - Project structure
- `Glob()` - Find documentation files
**Use**: Phase 0 foundation setup
@@ -44,19 +44,19 @@ You are a context discovery specialist focused on gathering relevant project inf
**Use**: Unfamiliar APIs/libraries/patterns
### 3. Existing Code Discovery
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="init", path=".")` - Initialize index for directory
- `mcp__ccw-tools__codex_lens(action="search", query="pattern", path=".")` - Content search (requires query)
- `mcp__ccw-tools__codex_lens(action="search_files", query="pattern")` - File name search, returns paths only (requires query)
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Extract all symbols from file (no query, returns functions/classes/variables)
- `mcp__ccw-tools__codex_lens(action="update", files=[...])` - Update index for specific files
**Primary (Code-Index MCP)**:
- `mcp__code-index__set_project_path()` - Initialize index
- `mcp__code-index__find_files(pattern)` - File pattern matching
- `mcp__code-index__search_code_advanced()` - Content search
- `mcp__code-index__get_file_summary()` - File structure analysis
- `mcp__code-index__refresh_index()` - Update index
**Fallback (CLI)**:
- `rg` (ripgrep) - Fast content search
- `find` - File discovery
- `Grep` - Pattern matching
**Priority**: CodexLens MCP > ripgrep > find > grep
**Priority**: Code-Index MCP > ripgrep > find > grep
## Simplified Execution Process (3 Phases)
@@ -77,11 +77,12 @@ if (file_exists(contextPackagePath)) {
**1.2 Foundation Setup**:
```javascript
// 1. Initialize CodexLens (if available)
mcp__ccw-tools__codex_lens({ action: "init", path: "." })
// 1. Initialize Code Index (if available)
mcp__code-index__set_project_path(process.cwd())
mcp__code-index__refresh_index()
// 2. Project Structure
bash(ccw tool exec get_modules_by_depth '{}')
bash(~/.claude/scripts/get_modules_by_depth.sh)
// 3. Load Documentation (if not in memory)
if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
@@ -211,18 +212,18 @@ mcp__exa__web_search_exa({
**Layer 1: File Pattern Discovery**
```javascript
// Primary: CodexLens MCP
const files = mcp__ccw-tools__codex_lens({ action: "search_files", query: "*{keyword}*" })
// Primary: Code-Index MCP
const files = mcp__code-index__find_files("*{keyword}*")
// Fallback: find . -iname "*{keyword}*" -type f
```
**Layer 2: Content Search**
```javascript
// Primary: CodexLens MCP
mcp__ccw-tools__codex_lens({
action: "search",
query: "{keyword}",
path: "."
// Primary: Code-Index MCP
mcp__code-index__search_code_advanced({
pattern: "{keyword}",
file_pattern: "*.ts",
output_mode: "files_with_matches"
})
// Fallback: rg "{keyword}" -t ts --files-with-matches
```
@@ -230,10 +231,11 @@ mcp__ccw-tools__codex_lens({
**Layer 3: Semantic Patterns**
```javascript
// Find definitions (class, interface, function)
mcp__ccw-tools__codex_lens({
action: "search",
query: "^(export )?(class|interface|type|function) .*{keyword}",
path: "."
mcp__code-index__search_code_advanced({
pattern: "^(export )?(class|interface|type|function) .*{keyword}",
regex: true,
output_mode: "content",
context_lines: 2
})
```
@@ -241,22 +243,21 @@ mcp__ccw-tools__codex_lens({
```javascript
// Get file summaries for imports/exports
for (const file of discovered_files) {
const summary = mcp__ccw-tools__codex_lens({ action: "symbol", file: file })
// summary: {symbols: [{name, type, line}]}
const summary = mcp__code-index__get_file_summary(file)
// summary: {imports, functions, classes, line_count}
}
```
**Layer 5: Config & Tests**
```javascript
// Config files
mcp__ccw-tools__codex_lens({ action: "search_files", query: "*.config.*" })
mcp__ccw-tools__codex_lens({ action: "search_files", query: "package.json" })
mcp__code-index__find_files("*.config.*")
mcp__code-index__find_files("package.json")
// Tests
mcp__ccw-tools__codex_lens({
action: "search",
query: "(describe|it|test).*{keyword}",
path: "."
mcp__code-index__search_code_advanced({
pattern: "(describe|it|test).*{keyword}",
file_pattern: "*.{test,spec}.*"
})
```
@@ -559,14 +560,14 @@ Output: .workflow/session/{session}/.process/context-package.json
- Expose sensitive data (credentials, keys)
- Exceed file limits (50 total)
- Include binaries/generated files
- Use ripgrep if CodexLens available
- Use ripgrep if code-index available
**ALWAYS**:
- Initialize CodexLens in Phase 0
- Initialize code-index in Phase 0
- Execute get_modules_by_depth.sh
- Load CLAUDE.md/README.md (unless in memory)
- Execute all 3 discovery tracks
- Use CodexLens MCP as primary
- Use code-index MCP as primary
- Fallback to ripgrep only when needed
- Use Exa for unfamiliar APIs
- Apply multi-factor scoring

View File

@@ -61,9 +61,9 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
**Step 2** (CLI execution):
- Agent substitutes [target_folders] into command
- Agent executes CLI command via CCW:
- Agent executes CLI command via Bash tool:
```bash
ccw cli -p "
bash(cd src/modules && gemini --approval-mode yolo -p "
PURPOSE: Generate module documentation
TASK: Create API.md and README.md for each module
MODE: write
@@ -71,7 +71,7 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
./src/modules/api|code|code:3|dirs:0
EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
" --tool gemini --mode write --cd src/modules
")
```
4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
{
"step": "analyze_module_structure",
"action": "Deep analysis of module structure and API",
"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
"command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
"output_to": "module_analysis",
"on_error": "fail"
}

View File

@@ -1,270 +0,0 @@
---
name: issue-plan-agent
description: |
Closed-loop issue planning agent combining ACE exploration and solution generation.
Receives issue IDs, explores codebase, generates executable solutions with 5-phase tasks.
Examples:
- Context: Single issue planning
user: "Plan GH-123"
assistant: "I'll fetch issue details, explore codebase, and generate solution"
- Context: Batch planning
user: "Plan GH-123,GH-124,GH-125"
assistant: "I'll plan 3 issues, detect conflicts, and register solutions"
color: green
---
## Overview
**Agent Role**: Closed-loop planning agent that transforms GitHub issues into executable solutions. Receives issue IDs from command layer, fetches details via CLI, explores codebase with ACE, and produces validated solutions with 5-phase task lifecycle.
**Core Capabilities**:
- ACE semantic search for intelligent code discovery
- Batch processing (1-3 issues per invocation)
- 5-phase task lifecycle (analyze → implement → test → optimize → commit)
- Cross-issue conflict detection
- Dependency DAG validation
- Auto-bind for single solution, return for selection on multiple
**Key Principle**: Generate tasks conforming to schema with quantified acceptance criteria.
---
## 1. Input & Execution
### 1.1 Input Context
```javascript
{
issue_ids: string[], // Issue IDs only (e.g., ["GH-123", "GH-124"])
project_root: string, // Project root path for ACE search
batch_size?: number, // Max issues per batch (default: 3)
}
```
**Note**: Agent receives IDs only. Fetch details via `ccw issue status <id> --json`.
### 1.2 Execution Flow
```
Phase 1: Issue Understanding (5%)
↓ Fetch details, extract requirements, determine complexity
Phase 2: ACE Exploration (30%)
↓ Semantic search, pattern discovery, dependency mapping
Phase 3: Solution Planning (50%)
↓ Task decomposition, 5-phase lifecycle, acceptance criteria
Phase 4: Validation & Output (15%)
↓ DAG validation, conflict detection, solution registration
```
#### Phase 1: Issue Understanding
**Step 1**: Fetch issue details via CLI
```bash
ccw issue status <issue-id> --json
```
**Step 2**: Analyze and classify
```javascript
function analyzeIssue(issue) {
return {
issue_id: issue.id,
requirements: extractRequirements(issue.description),
scope: inferScope(issue.title, issue.description),
complexity: determineComplexity(issue), // Low | Medium | High
lifecycle: issue.lifecycle_requirements // User preferences for test/commit
}
}
```
**Step 3**: Apply lifecycle requirements to tasks
- `lifecycle.test_strategy` → Configure `test.unit`, `test.commands`
- `lifecycle.commit_strategy` → Configure `commit.type`, `commit.scope`
- `lifecycle.regression_scope` → Configure `regression` array
**Complexity Rules**:
| Complexity | Files | Tasks |
|------------|-------|-------|
| Low | 1-2 | 1-3 |
| Medium | 3-5 | 3-6 |
| High | 6+ | 5-10 |
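A sketch of these thresholds, assuming the affected-file count is already known from exploration:
```javascript
// Maps file count to the complexity labels in the table above.
function determineComplexity(fileCount) {
  if (fileCount <= 2) return "Low"      // 1-3 tasks
  if (fileCount <= 5) return "Medium"   // 3-6 tasks
  return "High"                         // 5-10 tasks
}
```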
#### Phase 2: ACE Exploration
**Primary**: ACE semantic search
```javascript
mcp__ace-tool__search_context({
project_root_path: project_root,
query: `Find code related to: ${issue.title}. Keywords: ${extractKeywords(issue)}`
})
```
**Exploration Checklist**:
- [ ] Identify relevant files (direct matches)
- [ ] Find related patterns (similar implementations)
- [ ] Map integration points
- [ ] Discover dependencies
- [ ] Locate test patterns
**Fallback Chain**: ACE → smart_search → Grep → rg → Glob
| Tool | When to Use |
|------|-------------|
| `mcp__ace-tool__search_context` | Semantic search (primary) |
| `mcp__ccw-tools__smart_search` | Symbol/pattern search |
| `Grep` | Exact regex matching |
| `rg` / `grep` | CLI fallback |
| `Glob` | File path discovery |
#### Phase 3: Solution Planning
**Multi-Solution Generation**:
Generate multiple candidate solutions when:
- Issue complexity is HIGH
- Multiple valid implementation approaches exist
- Trade-offs between approaches (performance vs simplicity, etc.)
| Condition | Solutions |
|-----------|-----------|
| Low complexity, single approach | 1 solution, auto-bind |
| Medium complexity, clear path | 1-2 solutions |
| High complexity, multiple approaches | 2-3 solutions, user selection |
**Solution Evaluation** (for each candidate):
```javascript
{
analysis: {
risk: "low|medium|high", // Implementation risk
impact: "low|medium|high", // Scope of changes
complexity: "low|medium|high" // Technical complexity
},
score: 0.0-1.0 // Overall quality score (higher = recommended)
}
```
**Selection Flow**:
1. Generate all candidate solutions
2. Evaluate and score each
3. Single solution → auto-bind
4. Multiple solutions → return `pending_selection` for user choice
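A minimal sketch of this flow (`scoreSolution` and `writeSolution` are hypothetical helpers; the `ccw issue bind` call is quoted from Phase 4 below):
```javascript
// Score candidates, auto-bind a single winner, defer multiples to the user.
const scored = candidates
  .map(sol => ({ ...sol, score: scoreSolution(sol.analysis) }))
  .sort((a, b) => b.score - a.score)
if (scored.length === 1) {
  Bash(`ccw issue bind ${issueId} --solution ${writeSolution(scored[0])}`)
} else {
  return { pending_selection: [{ issue_id: issueId, solutions: scored }] }
}
```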
**Task Decomposition** following schema:
```javascript
function decomposeTasks(issue, exploration) {
return groups.map(group => ({
id: `T${taskId++}`, // Pattern: ^T[0-9]+$
title: group.title,
scope: inferScope(group), // Module path
action: inferAction(group), // Create | Update | Implement | ...
description: group.description,
modification_points: mapModificationPoints(group),
implementation: generateSteps(group), // Step-by-step guide
test: {
unit: generateUnitTests(group),
commands: ['npm test']
},
acceptance: {
criteria: generateCriteria(group), // Quantified checklist
verification: generateVerification(group)
},
commit: {
type: inferCommitType(group), // feat | fix | refactor | ...
scope: inferScope(group),
message_template: generateCommitMsg(group)
},
depends_on: inferDependencies(group, tasks),
executor: inferExecutor(group),
priority: calculatePriority(group) // 1-5 (1=highest)
}))
}
```
#### Phase 4: Validation & Output
**Validation**:
- DAG validation (no circular dependencies)
- Task validation (all 5 phases present)
- Conflict detection (cross-issue file modifications)
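A minimal cycle check for the DAG validation above, using Kahn's algorithm over the tasks' `depends_on` arrays:
```javascript
// Topologically peel zero-in-degree tasks; leftovers imply a cycle.
function hasCycle(tasks) {
  const ids = new Set(tasks.map(t => t.id))
  const deps = t => (t.depends_on || []).filter(d => ids.has(d))
  const inDeg = new Map(tasks.map(t => [t.id, deps(t).length]))
  const dependents = new Map(tasks.map(t => [t.id, []]))
  tasks.forEach(t => deps(t).forEach(d => dependents.get(d).push(t.id)))
  const queue = [...inDeg].filter(([, n]) => n === 0).map(([id]) => id)
  let visited = 0
  while (queue.length) {
    const id = queue.shift(); visited++
    for (const next of dependents.get(id)) {
      inDeg.set(next, inDeg.get(next) - 1)
      if (inDeg.get(next) === 0) queue.push(next)
    }
  }
  return visited < tasks.length   // unvisited tasks can only sit on a cycle
}
```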
**Solution Registration**:
```bash
# Write solution and register via CLI
ccw issue bind <issue-id> --solution /tmp/sol.json
```
---
## 2. Output Requirements
### 2.1 Generate Files (Primary)
**Solution file per issue**:
```
.workflow/issues/solutions/{issue-id}.jsonl
```
Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
### 2.2 Binding
| Scenario | Action |
|----------|--------|
| Single solution | `ccw issue bind <id> --solution <file>` (auto) |
| Multiple solutions | Register only, return for selection |
### 2.3 Return Summary
```json
{
"bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
"pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "SOL-001", "description": "...", "task_count": N }] }],
"conflicts": [{ "file": "...", "issues": [...] }]
}
```
---
## 3. Quality Standards
### 3.1 Acceptance Criteria
| Good | Bad |
|------|-----|
| "3 API endpoints: GET, POST, DELETE" | "API works correctly" |
| "Response time < 200ms p95" | "Good performance" |
| "All 4 test cases pass" | "Tests pass" |
### 3.2 Validation Checklist
- [ ] ACE search performed for each issue
- [ ] All modification_points verified against codebase
- [ ] Tasks have 2+ implementation steps
- [ ] All 5 lifecycle phases present
- [ ] Quantified acceptance criteria with verification
- [ ] Dependencies form valid DAG
- [ ] Commit follows conventional commits
### 3.3 Guidelines
**ALWAYS**:
1. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
2. Use ACE semantic search as PRIMARY exploration tool
3. Fetch issue details via `ccw issue status <id> --json`
4. Quantify acceptance.criteria with testable conditions
5. Validate DAG before output
6. Evaluate each solution with `analysis` and `score`
7. Single solution → auto-bind; Multiple → return `pending_selection`
8. For HIGH complexity: generate 2-3 candidate solutions
**NEVER**:
1. Execute implementation (return plan only)
2. Use vague criteria ("works correctly", "good performance")
3. Create circular dependencies
4. Generate more than 10 tasks per issue
5. Bind when multiple solutions exist
**OUTPUT**:
1. Register solutions via `ccw issue bind <id> --solution <file>`
2. Return JSON with `bound`, `pending_selection`, `conflicts`
3. Solutions written to `.workflow/issues/solutions/{issue-id}.jsonl`

View File

@@ -1,227 +0,0 @@
---
name: issue-queue-agent
description: |
Task ordering agent for queue formation with dependency analysis and conflict resolution.
Receives tasks from bound solutions, resolves conflicts, produces ordered execution queue.
Examples:
- Context: Single issue queue
user: "Order tasks for GH-123"
assistant: "I'll analyze dependencies and generate execution queue"
- Context: Multi-issue queue with conflicts
user: "Order tasks for GH-123, GH-124"
assistant: "I'll detect conflicts, resolve ordering, and assign groups"
color: orange
---
## Overview
**Agent Role**: Queue formation agent that transforms tasks from bound solutions into an ordered execution queue. Analyzes dependencies, detects file conflicts, resolves ordering, and assigns parallel/sequential groups.
**Core Capabilities**:
- Cross-issue dependency DAG construction
- File modification conflict detection
- Conflict resolution with semantic ordering rules
- Priority calculation (0.0-1.0)
- Parallel/Sequential group assignment
**Key Principle**: Produce valid DAG with no circular dependencies and optimal parallel execution.
---
## 1. Input & Execution
### 1.1 Input Context
```javascript
{
tasks: [{
key: string, // e.g., "GH-123:TASK-001"
issue_id: string, // e.g., "GH-123"
solution_id: string, // e.g., "SOL-001"
task_id: string, // e.g., "TASK-001"
type: string, // feature | bug | refactor | test | chore | docs
file_context: string[],
depends_on: string[] // composite keys, e.g., ["GH-123:TASK-001"]
}],
project_root?: string,
rebuild?: boolean
}
```
**Note**: Agent generates unique `item_id` (pattern: `T-{N}`) for queue output.
### 1.2 Execution Flow
```
Phase 1: Dependency Analysis (20%)
↓ Parse depends_on, build DAG, detect cycles
Phase 2: Conflict Detection (30%)
↓ Identify file conflicts across issues
Phase 3: Conflict Resolution (25%)
↓ Apply ordering rules, update DAG
Phase 4: Ordering & Grouping (25%)
↓ Topological sort, assign groups
```
---
## 2. Processing Logic
### 2.1 Dependency Graph
```javascript
function buildDependencyGraph(tasks) {
const graph = new Map()
const fileModifications = new Map()
for (const item of tasks) {
graph.set(item.key, { ...item, inDegree: 0, outEdges: [] })
for (const file of item.file_context || []) {
if (!fileModifications.has(file)) fileModifications.set(file, [])
fileModifications.get(file).push(item.key)
}
}
// Add dependency edges
for (const [key, node] of graph) {
for (const depKey of node.depends_on || []) {
if (graph.has(depKey)) {
graph.get(depKey).outEdges.push(key)
node.inDegree++
}
}
}
return { graph, fileModifications }
}
```
### 2.2 Conflict Detection
Conflict when multiple tasks modify same file:
```javascript
function detectConflicts(fileModifications, graph) {
return [...fileModifications.entries()]
.filter(([_, tasks]) => tasks.length > 1)
.map(([file, tasks]) => ({
type: 'file_conflict',
file,
tasks,
resolved: false
}))
}
```
### 2.3 Resolution Rules
| Priority | Rule | Example |
|----------|------|---------|
| 1 | Create before Update | T1:Create → T2:Update |
| 2 | Foundation before integration | config/ → src/ |
| 3 | Types before implementation | types/ → components/ |
| 4 | Core before tests | src/ → __tests__/ |
| 5 | Delete last | T1:Update → T2:Delete |
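A sketch of these rules as a pairwise comparator (rank values and path prefixes are illustrative; field names follow the task schema above):
```javascript
// Orders two conflicting tasks per the table: Create first, Delete last,
// foundation/types before implementation, tests after core.
const ACTION_RANK = { Create: 0, Configure: 1, Implement: 2, Update: 3, Delete: 9 }
const SCOPE_RANK = ["config/", "types/", "src/", "__tests__/"]
function conflictOrder(a, b) {
  const act = t => ACTION_RANK[t.action] ?? 2
  if (act(a) !== act(b)) return act(a) - act(b)
  const scope = t => {
    const i = SCOPE_RANK.findIndex(p => (t.file || "").startsWith(p))
    return i === -1 ? SCOPE_RANK.length : i   // unknown paths sort last
  }
  return scope(a) - scope(b)
}
```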
### 2.4 Semantic Priority
**Base Priority Mapping** (task.priority 1-5 → base score):
| task.priority | Base Score | Meaning |
|---------------|------------|---------|
| 1 | 0.8 | Highest |
| 2 | 0.65 | High |
| 3 | 0.5 | Medium |
| 4 | 0.35 | Low |
| 5 | 0.2 | Lowest |
**Action-based Boost** (applied to base score):
| Factor | Boost |
|--------|-------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Fix action | +0.05 |
| Foundation scope | +0.1 |
| Types scope | +0.05 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |
**Formula**: `semantic_priority = clamp(baseScore + sum(boosts), 0.0, 1.0)`
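The tables and formula above, transcribed as a sketch (substring matching on `scope` is an assumption):
```javascript
// Base score from task.priority, plus action/scope boosts, clamped to [0,1].
const BASE = { 1: 0.8, 2: 0.65, 3: 0.5, 4: 0.35, 5: 0.2 }
const BOOST = {
  Create: 0.2, Configure: 0.15, Implement: 0.1, Fix: 0.05,
  Refactor: -0.05, Test: -0.1, Delete: -0.15
}
function semanticPriority(task) {
  let score = BASE[task.priority] ?? 0.5
  score += BOOST[task.action] ?? 0
  if (task.scope?.includes("foundation")) score += 0.1   // Foundation scope
  if (task.scope?.includes("types")) score += 0.05       // Types scope
  return Math.min(1.0, Math.max(0.0, score))             // clamp
}
```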
### 2.5 Group Assignment
- **Parallel (P*)**: Tasks with no dependencies or conflicts between them
- **Sequential (S*)**: Tasks that must run in order due to dependencies or conflicts
---
## 3. Output Requirements
### 3.1 Generate Files (Primary)
**Queue files**:
```
.workflow/issues/queues/{queue-id}.json # Full queue with tasks, conflicts, groups
.workflow/issues/queues/index.json # Update with new queue entry
```
Queue ID format: `QUE-YYYYMMDD-HHMMSS` (UTC timestamp)
Schema: `cat .claude/workflows/cli-templates/schemas/queue-schema.json`
### 3.2 Return Summary
```json
{
"queue_id": "QUE-20251227-143000",
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["GH-123", "GH-124"]
}
```
---
## 4. Quality Standards
### 4.1 Validation Checklist
- [ ] No circular dependencies
- [ ] All conflicts resolved
- [ ] Dependencies ordered correctly
- [ ] Parallel groups have no conflicts
- [ ] Semantic priority calculated
### 4.2 Error Handling
| Scenario | Action |
|----------|--------|
| Circular dependency | Abort, report cycles |
| Resolution creates cycle | Flag for manual resolution |
| Missing task reference | Skip and warn |
| Empty task list | Return empty queue |
### 4.3 Guidelines
**ALWAYS**:
1. Build dependency graph before ordering
2. Detect cycles before and after resolution
3. Apply resolution rules consistently
4. Calculate semantic priority for all tasks
5. Include rationale for conflict resolutions
6. Validate ordering before output
**NEVER**:
1. Execute tasks (ordering only)
2. Ignore circular dependencies
3. Skip conflict detection
4. Output invalid DAG
5. Merge conflicting tasks in parallel group
**OUTPUT**:
1. Write `.workflow/issues/queues/{queue-id}.json`
2. Update `.workflow/issues/queues/index.json`
3. Return summary with `queue_id`, `total_tasks`, `execution_groups`, `conflicts_resolved`, `issues_queued`

View File

@@ -8,7 +8,7 @@ You are a documentation update coordinator for complex projects. Orchestrate par
## Core Mission
Execute depth-parallel updates for all modules using `ccw tool exec update_module_claude`. **Every module path must be processed**.
Execute depth-parallel updates for all modules using `~/.claude/scripts/update_module_claude.sh`. **Every module path must be processed**.
## Input Context
@@ -42,12 +42,12 @@ TodoWrite([
# 3. Launch parallel jobs (max 4)
# Depth 5 example (Layer 3 - use multi-layer):
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/analysis","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/development","tool":"gemini"}' &
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/analysis" "gemini" &
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/development" "gemini" &
# Depth 1 example (Layer 2 - use single-layer):
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/auth","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/api","tool":"gemini"}' &
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/auth" "gemini" &
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/api" "gemini" &
# ... up to 4 concurrent jobs
# 4. Wait for all depth jobs to complete

View File

@@ -36,10 +36,10 @@ You are a test context discovery specialist focused on gathering test coverage i
**Use**: Phase 1 source context loading
### 2. Test Coverage Discovery
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="search_files", query="*.test.*")` - Find test files
- `mcp__ccw-tools__codex_lens(action="search", query="pattern")` - Search test patterns
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Analyze test structure
**Primary (Code-Index MCP)**:
- `mcp__code-index__find_files(pattern)` - Find test files (*.test.*, *.spec.*)
- `mcp__code-index__search_code_advanced()` - Search test patterns
- `mcp__code-index__get_file_summary()` - Analyze test structure
**Fallback (CLI)**:
- `rg` (ripgrep) - Fast test pattern search
@@ -120,10 +120,9 @@ for (const summary_path of summaries) {
**2.1 Existing Test Discovery**:
```javascript
// Method 1: CodexLens MCP (preferred)
const test_files = mcp__ccw-tools__codex_lens({
action: "search_files",
query: "*.test.* OR *.spec.* OR test_*.py OR *_test.go"
// Method 1: Code-Index MCP (preferred)
const test_files = mcp__code-index__find_files({
patterns: ["*.test.*", "*.spec.*", "*test_*.py", "*_test.go"]
});
// Method 2: Fallback CLI

View File

@@ -83,18 +83,17 @@ When task JSON contains implementation_approach array:
- L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
- L2 (Integration): `tests/integration/`, `*.integration.test.*`
- L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
- **context-package.json** (CCW Workflow): Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- Identify test commands from project configuration
```bash
# Detect test framework and multi-layered commands
if [ -f "package.json" ]; then
# Extract layer-specific test commands using Read tool or jq
PKG_JSON=$(cat package.json)
LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
# Extract layer-specific test commands
LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
LINT_CMD="ruff check . || flake8 ."
UNIT_CMD="pytest tests/unit/"

View File

@@ -1,47 +0,0 @@
{
"version": "1.0.0",
"tools": {
"gemini": {
"enabled": true,
"isBuiltin": true,
"command": "gemini",
"description": "Google AI for code analysis"
},
"qwen": {
"enabled": true,
"isBuiltin": true,
"command": "qwen",
"description": "Alibaba AI assistant"
},
"codex": {
"enabled": true,
"isBuiltin": true,
"command": "codex",
"description": "OpenAI code generation"
},
"claude": {
"enabled": true,
"isBuiltin": true,
"command": "claude",
"description": "Anthropic AI assistant"
}
},
"customEndpoints": [],
"defaultTool": "gemini",
"settings": {
"promptFormat": "plain",
"smartContext": {
"enabled": false,
"maxFiles": 10
},
"nativeResume": true,
"recursiveQuery": true,
"cache": {
"injectionMode": "auto",
"defaultPrefix": "",
"defaultSuffix": ""
},
"codeIndexMcp": "ace"
},
"$schema": "./cli-tools.schema.json"
}

View File

@@ -191,7 +191,7 @@ target/
### Step 2: Workspace Analysis (MANDATORY FIRST)
```bash
# Analyze workspace structure
bash(ccw tool exec get_modules_by_depth '{"format":"json"}')
bash(~/.claude/scripts/get_modules_by_depth.sh json)
```
### Step 3: Technology Detection

View File

@@ -1,462 +0,0 @@
---
name: execute
description: Execute queue with codex using endpoint-driven task fetching (single task per codex instance)
argument-hint: "[--parallel <n>] [--executor codex|gemini]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---
# Issue Execute Command (/issue:execute)
## Overview
Execution orchestrator that coordinates codex instances. Each task is executed by an independent codex instance that fetches its task via CLI endpoint. **Codex does NOT read task files** - it calls `ccw issue next` to get task data dynamically.
**Core design:**
- Single task per codex instance (not loop mode)
- Endpoint-driven: `ccw issue next` → execute → `ccw issue complete`
- No file reading in codex
- Orchestrator manages parallelism
## Storage Structure (Queue History)
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── queues/ # Queue history directory
│ ├── index.json # Queue index (active + history)
│ └── {queue-id}.json # Individual queue files
└── solutions/
├── {issue-id}.jsonl # Solutions for issue
└── ...
```
## Usage
```bash
/issue:execute [FLAGS]
# Examples
/issue:execute # Execute all ready tasks
/issue:execute --parallel 3 # Execute up to 3 tasks in parallel
/issue:execute --executor codex # Force codex executor
# Flags
--parallel <n> Max parallel codex instances (default: 1)
--executor <type> Force executor: codex|gemini|agent
--dry-run Show what would execute without running
```
## Execution Process
```
Phase 1: Queue Loading
├─ Load queue.json
├─ Count pending/ready tasks
└─ Initialize TodoWrite tracking
Phase 2: Ready Task Detection
├─ Find tasks with satisfied dependencies
├─ Group by execution_group (parallel batches)
└─ Determine execution order
Phase 3: Codex Coordination
├─ For each ready task:
│ ├─ Launch independent codex instance
│ ├─ Codex calls: ccw issue next
│ ├─ Codex receives task data (NOT file)
│ ├─ Codex executes task
│ ├─ Codex calls: ccw issue complete <queue-id>
│ └─ Update TodoWrite
└─ Parallel execution based on --parallel flag
Phase 4: Completion
├─ Generate execution summary
├─ Update issue statuses in issues.jsonl
└─ Display results
```
## Implementation
### Phase 1: Queue Loading
```javascript
// Load active queue via CLI endpoint
const queueJson = Bash(`ccw issue status --json 2>/dev/null || echo '{}'`);
const queue = JSON.parse(queueJson);
if (!queue.id || queue.tasks?.length === 0) {
console.log('No active queue found. Run /issue:queue first.');
return;
}
// Count by status
const pending = queue.tasks.filter(q => q.status === 'pending');
const executing = queue.tasks.filter(q => q.status === 'executing');
const completed = queue.tasks.filter(q => q.status === 'completed');
console.log(`
## Execution Queue Status
- Pending: ${pending.length}
- Executing: ${executing.length}
- Completed: ${completed.length}
- Total: ${queue.tasks.length}
`);
if (pending.length === 0 && executing.length === 0) {
console.log('All tasks completed!');
return;
}
```
### Phase 2: Ready Task Detection
```javascript
// Find ready tasks (dependencies satisfied)
function getReadyTasks() {
const completedIds = new Set(
queue.tasks.filter(q => q.status === 'completed').map(q => q.item_id)
);
return queue.tasks.filter(item => {
if (item.status !== 'pending') return false;
return item.depends_on.every(depId => completedIds.has(depId));
});
}
const readyTasks = getReadyTasks();
if (readyTasks.length === 0) {
if (executing.length > 0) {
console.log('Tasks are currently executing. Wait for completion.');
} else {
console.log('No ready tasks. Check for blocked dependencies.');
}
return;
}
console.log(`Found ${readyTasks.length} ready tasks`);
// Sort by execution order
readyTasks.sort((a, b) => a.execution_order - b.execution_order);
// Initialize TodoWrite
TodoWrite({
todos: readyTasks.slice(0, parallelLimit).map(t => ({
content: `[${t.item_id}] ${t.issue_id}:${t.task_id}`,
status: 'pending',
activeForm: `Executing ${t.item_id}`
}))
});
```
### Phase 3: Codex Coordination (Single Task Mode - Full Lifecycle)
```javascript
// Execute tasks - single codex instance per task with full lifecycle
async function executeTask(queueItem) {
const codexPrompt = `
## Single Task Execution - CLOSED-LOOP LIFECYCLE
You are executing ONE task from the issue queue. Each task has 5 phases that MUST ALL complete successfully.
### Step 1: Fetch Task
Run this command to get your task:
\`\`\`bash
ccw issue next
\`\`\`
This returns JSON with full lifecycle definition:
- task.implementation: Implementation steps
- task.test: Test requirements and commands
- task.regression: Regression check commands
- task.acceptance: Acceptance criteria and verification
- task.commit: Commit specification
### Step 2: Execute Full Lifecycle
**Phase 1: IMPLEMENT**
1. Follow task.implementation steps in order
2. Modify files specified in modification_points
3. Use context.relevant_files for reference
4. Use context.patterns for code style
**Phase 2: TEST**
1. Run test commands from task.test.commands
2. Ensure all unit tests pass (task.test.unit)
3. Run integration tests if specified (task.test.integration)
4. Verify coverage meets task.test.coverage_target if specified
5. If tests fail → fix code and re-run, do NOT proceed until tests pass
**Phase 3: REGRESSION**
1. Run all commands in task.regression
2. Ensure no existing tests are broken
3. If regression fails → fix and re-run
**Phase 4: ACCEPTANCE**
1. Verify each criterion in task.acceptance.criteria
2. Execute verification steps in task.acceptance.verification
3. Complete any manual_checks if specified
4. All criteria MUST pass before proceeding
**Phase 5: COMMIT**
1. Stage all modified files
2. Use task.commit.message_template as commit message
3. Commit with: git commit -m "$(cat <<'EOF'\n<message>\nEOF\n)"
4. If commit_strategy is 'per-task', commit now
5. If commit_strategy is 'atomic' or 'squash', stage but don't commit
### Step 3: Report Completion
When ALL phases complete successfully:
\`\`\`bash
ccw issue complete <item_id> --result '{
"files_modified": ["path1", "path2"],
"tests_passed": true,
"regression_passed": true,
"acceptance_passed": true,
"committed": true,
"commit_hash": "<hash>",
"summary": "What was done"
}'
\`\`\`
If any phase fails and cannot be fixed:
\`\`\`bash
ccw issue fail <item_id> --reason "Phase X failed: <details>"
\`\`\`
### Rules
- NEVER skip any lifecycle phase
- Tests MUST pass before proceeding to acceptance
- Regression MUST pass before commit
- ALL acceptance criteria MUST be verified
- Report accurate lifecycle status in result
### Start Now
Begin by running: ccw issue next
`;
// Execute codex
const executor = queueItem.assigned_executor || flags.executor || 'codex';
if (executor === 'codex') {
Bash(
`ccw cli -p "${escapePrompt(codexPrompt)}" --tool codex --mode write --id exec-${queueItem.item_id}`,
timeout=3600000 // 1 hour timeout
);
} else if (executor === 'gemini') {
Bash(
`ccw cli -p "${escapePrompt(codexPrompt)}" --tool gemini --mode write --id exec-${queueItem.item_id}`,
timeout=1800000 // 30 min timeout
);
} else {
// Agent execution
Task(
subagent_type="code-developer",
run_in_background=false,
description=`Execute ${queueItem.item_id}`,
prompt=codexPrompt
);
}
}
// Execute with parallelism
const parallelLimit = flags.parallel || 1;
for (let i = 0; i < readyTasks.length; i += parallelLimit) {
const batch = readyTasks.slice(i, i + parallelLimit);
console.log(`\n### Executing Batch ${Math.floor(i / parallelLimit) + 1}`);
console.log(batch.map(t => `- ${t.item_id}: ${t.issue_id}:${t.task_id}`).join('\n'));
if (parallelLimit === 1) {
// Sequential execution
for (const task of batch) {
updateTodo(task.item_id, 'in_progress');
await executeTask(task);
updateTodo(task.item_id, 'completed');
}
} else {
// Parallel execution - launch all at once
const executions = batch.map(task => {
updateTodo(task.item_id, 'in_progress');
return executeTask(task);
});
await Promise.all(executions);
batch.forEach(task => updateTodo(task.item_id, 'completed'));
}
// Refresh ready tasks after batch
const newReady = getReadyTasks();
if (newReady.length > 0) {
console.log(`${newReady.length} more tasks now ready`);
}
}
```
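Both `escapePrompt` and `updateTodo` are referenced above but never defined in this command; a minimal sketch of what they might look like (hypothetical helpers, assuming the prompt is interpolated into a double-quoted shell string and that TodoWrite replaces the full todo list each call):

```javascript
// Hypothetical helpers - names and behavior are assumptions, not part of ccw
let todos = [];  // mirror of the list last passed to TodoWrite

function escapePrompt(prompt) {
  // Escape characters that are special inside a double-quoted shell string: \ " ` $
  return prompt.replace(/([\\"`$])/g, '\\$1');
}

function updateTodo(itemId, status) {
  // TodoWrite replaces the whole list, so rebuild it with the one change applied
  todos = todos.map(t => (t.content.includes(itemId) ? { ...t, status } : t));
  TodoWrite({ todos });
}
```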
### Codex Task Fetch Response
When codex calls `ccw issue next`, it receives:
```json
{
"item_id": "T-1",
"issue_id": "GH-123",
"solution_id": "SOL-001",
"task": {
"id": "T1",
"title": "Create auth middleware",
"scope": "src/middleware/",
"action": "Create",
"description": "Create JWT validation middleware",
"modification_points": [
{ "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
],
"implementation": [
"Create auth.ts file in src/middleware/",
"Implement JWT token validation using jsonwebtoken",
"Add error handling for invalid/expired tokens",
"Export middleware function"
],
"acceptance": [
"Middleware validates JWT tokens successfully",
"Returns 401 for invalid or missing tokens",
"Passes token payload to request context"
]
},
"context": {
"relevant_files": ["src/config/auth.ts", "src/types/auth.d.ts"],
"patterns": "Follow existing middleware pattern in src/middleware/logger.ts"
},
"execution_hints": {
"executor": "codex",
"estimated_minutes": 30
}
}
```
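A minimal sketch of how an executor might consume this payload (field names follow the example above; error handling omitted):

```javascript
// Fetch the task once, then work from the parsed JSON - never from files
const assignment = JSON.parse(Bash(`ccw issue next`));

if (assignment.status === 'empty') {
  // Nothing ready - all tasks done or blocked
  return;
}

const { item_id, task, context } = assignment;
console.log(`Executing ${item_id}: ${task.title}`);
console.log(`Reference files: ${context.relevant_files.join(', ')}`);
```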
### Phase 4: Completion Summary
```javascript
// Reload queue for final status via CLI
const finalQueueJson = Bash(`ccw issue status --json 2>/dev/null || echo '{}'`);
const finalQueue = JSON.parse(finalQueueJson);
// Use queue._metadata for summary (already calculated by CLI)
const summary = finalQueue._metadata || {
completed_count: 0,
failed_count: 0,
pending_count: 0,
total_tasks: 0
};
console.log(`
## Execution Complete
**Completed**: ${summary.completed_count}/${summary.total_tasks}
**Failed**: ${summary.failed_count}
**Pending**: ${summary.pending_count}
### Task Results
${(finalQueue.tasks || []).map(q => {
const icon = q.status === 'completed' ? '✓' :
q.status === 'failed' ? '✗' :
q.status === 'executing' ? '⟳' : '○';
return `${icon} ${q.item_id} [${q.issue_id}:${q.task_id}] - ${q.status}`;
}).join('\n')}
`);
// Issue status updates are handled by ccw issue complete/fail endpoints
// No need to manually update issues.jsonl here
if (summary.pending_count > 0) {
console.log(`
### Continue Execution
Run \`/issue:execute\` again to execute remaining tasks.
`);
}
```
## Dry Run Mode
```javascript
if (flags.dryRun) {
console.log(`
## Dry Run - Would Execute
${readyTasks.map((t, i) => `
${i + 1}. ${t.item_id}
Issue: ${t.issue_id}
Task: ${t.task_id}
Executor: ${t.assigned_executor}
Group: ${t.execution_group}
`).join('')}
No changes made. Remove --dry-run to execute.
`);
return;
}
```
## Error Handling
| Error | Resolution |
|-------|------------|
| Queue not found | Display message, suggest /issue:queue |
| No ready tasks | Check dependencies, show blocked tasks |
| Codex timeout | Mark as failed, allow retry |
| ccw issue next empty | All tasks done or blocked |
| Task execution failure | Marked via ccw issue fail, use `ccw issue retry` to reset |
## Troubleshooting
### Interrupted Tasks
If execution was interrupted (crashed/stopped), `ccw issue next` will automatically resume:
```bash
# Automatically returns the executing task for resumption
ccw issue next
```
Tasks in `executing` status are prioritized and returned first, so no manual reset is needed.
### Failed Tasks
If a task failed and you want to retry:
```bash
# Reset all failed tasks to pending
ccw issue retry
# Reset failed tasks for specific issue
ccw issue retry <issue-id>
```
## Endpoint Contract
### `ccw issue next`
- Returns next ready task as JSON
- Marks task as 'executing'
- Returns `{ status: 'empty' }` when no tasks
### `ccw issue complete <item-id>`
- Marks task as 'completed'
- Updates queue.json
- Checks if issue is fully complete
### `ccw issue fail <item-id>`
- Marks task as 'failed'
- Records failure reason
- Allows retry via /issue:execute
### `ccw issue retry [issue-id]`
- Resets failed tasks to 'pending'
- Allows re-execution via `ccw issue next`
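Taken together, these endpoints form a simple fetch/execute/report loop; a minimal sketch of a single-worker loop against this contract (illustrative only - `runLifecycle` is a hypothetical stand-in for the five-phase lifecycle, and real executors add timeouts and retries):

```javascript
// Drain the queue one task at a time using the endpoint contract
while (true) {
  const next = JSON.parse(Bash(`ccw issue next`));
  if (next.status === 'empty') break;  // all tasks done or blocked

  try {
    const result = runLifecycle(next);  // implement/test/regression/acceptance/commit
    Bash(`ccw issue complete ${next.item_id} --result '${JSON.stringify(result)}'`);
  } catch (err) {
    Bash(`ccw issue fail ${next.item_id} --reason "${err.message}"`);
  }
}
```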
## Related Commands
- `/issue:plan` - Plan issues with solutions
- `/issue:queue` - Form execution queue
- `ccw issue queue list` - View queue status
- `ccw issue retry` - Retry failed tasks


@@ -1,113 +0,0 @@
---
name: manage
description: Interactive issue management (CRUD) via ccw cli endpoints with menu-driven interface
argument-hint: "[issue-id] [--action list|view|edit|delete|bulk]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), AskUserQuestion(*), Task(*)
---
# Issue Manage Command (/issue:manage)
## Overview
Interactive menu-driven interface for issue management using `ccw issue` CLI endpoints:
- **List**: Browse and filter issues
- **View**: Detailed issue inspection
- **Edit**: Modify issue fields
- **Delete**: Remove issues
- **Bulk**: Batch operations on multiple issues
## CLI Endpoints Reference
```bash
# Core endpoints (ccw issue)
ccw issue list # List all issues
ccw issue list <id> --json # Get issue details
ccw issue status <id> # Detailed status
ccw issue init <id> --title "..." # Create issue
ccw issue task <id> --title "..." # Add task
ccw issue bind <id> <solution-id> # Bind solution
# Queue management
ccw issue queue # List current queue
ccw issue queue add <id> # Add to queue
ccw issue queue list # Queue history
ccw issue queue switch <queue-id> # Switch queue
ccw issue queue archive # Archive queue
ccw issue queue delete <queue-id> # Delete queue
ccw issue next # Get next task
ccw issue done <queue-id> # Mark completed
ccw issue complete <item-id> # (legacy alias for done)
```
## Usage
```bash
# Interactive mode (menu-driven)
/issue:manage
# Direct to specific issue
/issue:manage GH-123
# Direct action
/issue:manage --action list
/issue:manage GH-123 --action edit
```
## Implementation
This command delegates to the `issue-manage` skill for detailed implementation.
### Entry Point
```javascript
const issueId = parseIssueId(userInput);
const action = flags.action;
// Show main menu if no action specified
if (!action) {
await showMainMenu(issueId);
} else {
await executeAction(action, issueId);
}
```
### Main Menu Flow
1. **Dashboard**: Fetch issues summary via `ccw issue list --json`
2. **Menu**: Present action options via AskUserQuestion
3. **Route**: Execute selected action (List/View/Edit/Delete/Bulk)
4. **Loop**: Return to menu after each action
### Available Actions
| Action | Description | CLI Command |
|--------|-------------|-------------|
| List | Browse with filters | `ccw issue list --json` |
| View | Detail view | `ccw issue status <id> --json` |
| Edit | Modify fields | Update `issues.jsonl` |
| Delete | Remove issue | Clean up all related files |
| Bulk | Batch operations | Multi-select + batch update |
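The Edit and Delete actions rewrite `issues.jsonl` in place; a minimal sketch of a field update, assuming `jq` is available (hypothetical - the actual skill may rewrite the file differently):

```javascript
// Rewrite issues.jsonl with one field changed on the matching record
const issuesPath = '.workflow/issues/issues.jsonl';
Bash(`jq -c 'if .id == "GH-123" then .priority = 2 | .updated_at = (now | todate) else . end' \
  "${issuesPath}" > "${issuesPath}.tmp" && mv "${issuesPath}.tmp" "${issuesPath}"`);
```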
## Data Files
| File | Purpose |
|------|---------|
| `.workflow/issues/issues.jsonl` | Issue records |
| `.workflow/issues/solutions/<id>.jsonl` | Solutions per issue |
| `.workflow/issues/queue.json` | Execution queue |
## Error Handling
| Error | Resolution |
|-------|------------|
| No issues found | Suggest creating with /issue:new |
| Issue not found | Show available issues, ask for correction |
| Invalid selection | Show error, re-prompt |
| Write failure | Check permissions, show error |
## Related Commands
- `/issue:new` - Create structured issue
- `/issue:plan` - Plan solution for issue
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute queued tasks


@@ -1,451 +0,0 @@
---
name: new
description: Create structured issue from GitHub URL or text description, extracting key elements into issues.jsonl
argument-hint: "<github-url | text-description> [--priority 1-5] [--labels label1,label2]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), WebFetch(*), AskUserQuestion(*)
---
# Issue New Command (/issue:new)
## Overview
Creates a new structured issue from either:
1. **GitHub Issue URL** - Fetches and parses issue content via `gh` CLI
2. **Text Description** - Parses natural language into structured fields
Outputs a well-formed issue entry to `.workflow/issues/issues.jsonl`.
## Issue Structure (Closed-Loop)
```typescript
interface Issue {
id: string; // GH-123 or ISS-YYYYMMDD-HHMMSS
title: string; // Issue title (clear, concise)
status: 'registered'; // Initial status
priority: number; // 1 (critical) to 5 (low)
context: string; // Problem description
source: 'github' | 'text'; // Input source type
source_url?: string; // GitHub URL if applicable
labels?: string[]; // Categorization labels
// Structured extraction
problem_statement: string; // What is the problem?
expected_behavior?: string; // What should happen?
actual_behavior?: string; // What actually happens?
affected_components?: string[];// Files/modules affected
reproduction_steps?: string[]; // Steps to reproduce
// Closed-loop requirements (guide plan generation)
lifecycle_requirements: {
test_strategy: 'unit' | 'integration' | 'e2e' | 'manual' | 'auto';
regression_scope: 'affected' | 'related' | 'full'; // Which tests to run
acceptance_type: 'automated' | 'manual' | 'both'; // How to verify
commit_strategy: 'per-task' | 'squash' | 'atomic'; // Commit granularity
};
// Metadata
bound_solution_id: null;
solution_count: 0;
created_at: string;
updated_at: string;
}
```
## Lifecycle Requirements
The `lifecycle_requirements` field guides downstream commands (`/issue:plan`, `/issue:execute`):
| Field | Options | Purpose |
|-------|---------|---------|
| `test_strategy` | `unit`, `integration`, `e2e`, `manual`, `auto` | Which test types to generate |
| `regression_scope` | `affected`, `related`, `full` | Which tests to run for regression |
| `acceptance_type` | `automated`, `manual`, `both` | How to verify completion |
| `commit_strategy` | `per-task`, `squash`, `atomic` | Commit granularity |
> **Note**: Task structure (SolutionTask) is defined in `/issue:plan` - see `.claude/commands/issue/plan.md`
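For reference, a default `lifecycle_requirements` object (the recommended options from the table above) looks like this:

```json
{
  "test_strategy": "auto",
  "regression_scope": "affected",
  "acceptance_type": "automated",
  "commit_strategy": "per-task"
}
```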
## Usage
```bash
# From GitHub URL
/issue:new https://github.com/owner/repo/issues/123
# From text description
/issue:new "Login fails when password contains special characters. Expected: successful login. Actual: 500 error. Affects src/auth/*"
# With options
/issue:new <url-or-text> --priority 2 --labels "bug,auth"
```
## Implementation
### Phase 1: Input Detection
```javascript
const input = userInput.trim();
const flags = parseFlags(userInput); // --priority, --labels
// Detect input type
const isGitHubUrl = input.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
const isGitHubShort = input.match(/^#(\d+)$/); // #123 format
let issueData = {};
if (isGitHubUrl || isGitHubShort) {
// GitHub issue - fetch via gh CLI
issueData = await fetchGitHubIssue(input);
} else {
// Text description - parse structure
issueData = await parseTextDescription(input);
}
```
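`parseFlags` is assumed rather than defined here; a minimal sketch that covers only the `--name value` flags this command documents (`--priority`, `--labels`):

```javascript
// Hypothetical flag parser - handles "--name value" pairs, with optional quotes
function parseFlags(input) {
  const flags = {};
  const re = /--(\w+)\s+("[^"]*"|\S+)/g;
  let m;
  while ((m = re.exec(input)) !== null) {
    flags[m[1]] = m[2].replace(/^"|"$/g, '');  // strip surrounding quotes
  }
  return flags;
}
```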
### Phase 2: GitHub Issue Fetching
```javascript
async function fetchGitHubIssue(urlOrNumber) {
let issueRef;
if (urlOrNumber.startsWith('http')) {
// Validate the URL, then pass it to gh directly (gh issue view accepts full issue URLs)
const match = urlOrNumber.match(/github\.com\/([\w-]+)\/([\w-]+)\/issues\/(\d+)/);
if (!match) throw new Error('Invalid GitHub URL');
issueRef = urlOrNumber;
} else {
// #123 format - use current repo
issueRef = urlOrNumber.replace('#', '');
}
// Fetch via gh CLI
const result = Bash(`gh issue view ${issueRef} --json number,title,body,labels,state,url`);
const ghIssue = JSON.parse(result);
// Parse body for structure
const parsed = parseIssueBody(ghIssue.body);
return {
id: `GH-${ghIssue.number}`,
title: ghIssue.title,
source: 'github',
source_url: ghIssue.url,
labels: ghIssue.labels.map(l => l.name),
context: ghIssue.body,
...parsed
};
}
function parseIssueBody(body) {
// Extract structured sections from markdown body
const sections = {};
// Problem/Description
const problemMatch = body.match(/##?\s*(problem|description|issue)[:\s]*([\s\S]*?)(?=##|$)/i);
if (problemMatch) sections.problem_statement = problemMatch[2].trim();
// Expected behavior
const expectedMatch = body.match(/##?\s*(expected|should)[:\s]*([\s\S]*?)(?=##|$)/i);
if (expectedMatch) sections.expected_behavior = expectedMatch[2].trim();
// Actual behavior
const actualMatch = body.match(/##?\s*(actual|current)[:\s]*([\s\S]*?)(?=##|$)/i);
if (actualMatch) sections.actual_behavior = actualMatch[2].trim();
// Steps to reproduce
const stepsMatch = body.match(/##?\s*(steps|reproduce)[:\s]*([\s\S]*?)(?=##|$)/i);
if (stepsMatch) {
const stepsText = stepsMatch[2].trim();
sections.reproduction_steps = stepsText
.split('\n')
.filter(line => line.match(/^\s*[\d\-\*]/))
.map(line => line.replace(/^\s*[\d\.\-\*]\s*/, '').trim());
}
// Affected components (from file references)
const fileMatches = body.match(/`[^`]*\.(ts|js|tsx|jsx|py|go|rs)[^`]*`/g);
if (fileMatches) {
sections.affected_components = [...new Set(fileMatches.map(f => f.replace(/`/g, '')))];
}
// Fallback: use entire body as problem statement
if (!sections.problem_statement) {
sections.problem_statement = body.substring(0, 500);
}
return sections;
}
```
### Phase 3: Text Description Parsing
```javascript
async function parseTextDescription(text) {
// Generate unique ID
const id = `ISS-${new Date().toISOString().replace(/[-:T]/g, '').slice(0, 14)}`;
// Extract structured elements using patterns
const result = {
id,
source: 'text',
title: '',
problem_statement: '',
expected_behavior: null,
actual_behavior: null,
affected_components: [],
reproduction_steps: []
};
// Pattern: "Title. Description. Expected: X. Actual: Y. Affects: files"
const sentences = text.split(/\.(?=\s|$)/);
// First sentence as title
result.title = sentences[0]?.trim() || 'Untitled Issue';
// Look for keywords
for (const sentence of sentences) {
const s = sentence.trim();
if (s.match(/^expected:?\s*/i)) {
result.expected_behavior = s.replace(/^expected:?\s*/i, '');
} else if (s.match(/^actual:?\s*/i)) {
result.actual_behavior = s.replace(/^actual:?\s*/i, '');
} else if (s.match(/^affects?:?\s*/i)) {
const components = s.replace(/^affects?:?\s*/i, '').split(/[,\s]+/);
result.affected_components = components.filter(c => c.includes('/') || c.includes('.'));
} else if (s.match(/^steps?:?\s*/i)) {
result.reproduction_steps = s.replace(/^steps?:?\s*/i, '').split(/[,;]/);
} else if (!result.problem_statement && s.length > 10) {
result.problem_statement = s;
}
}
// Fallback problem statement
if (!result.problem_statement) {
result.problem_statement = text.substring(0, 300);
}
return result;
}
```
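Applied to the rate-limiting example from the Usage section, the parser would yield roughly the following (illustrative output, traced from the code above):

```javascript
// parseTextDescription("API rate limiting not working. Expected: 429 after 100 requests.
//                       Actual: No limit. Affects src/middleware/rate-limit.ts")
// =>
// {
//   id: 'ISS-20251227-142530',                       // timestamp-derived
//   title: 'API rate limiting not working',          // first sentence
//   problem_statement: 'API rate limiting not working',
//   expected_behavior: '429 after 100 requests',
//   actual_behavior: 'No limit',
//   affected_components: ['src/middleware/rate-limit.ts'],
//   reproduction_steps: []
// }
```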
### Phase 4: Lifecycle Configuration
```javascript
// Ask for lifecycle requirements (or use smart defaults)
const lifecycleAnswer = AskUserQuestion({
questions: [
{
question: 'Test strategy for this issue?',
header: 'Test',
multiSelect: false,
options: [
{ label: 'auto', description: 'Auto-detect based on affected files (Recommended)' },
{ label: 'unit', description: 'Unit tests only' },
{ label: 'integration', description: 'Integration tests' },
{ label: 'e2e', description: 'End-to-end tests' },
{ label: 'manual', description: 'Manual testing only' }
]
},
{
question: 'Regression scope?',
header: 'Regression',
multiSelect: false,
options: [
{ label: 'affected', description: 'Only affected module tests (Recommended)' },
{ label: 'related', description: 'Affected + dependent modules' },
{ label: 'full', description: 'Full test suite' }
]
},
{
question: 'Commit strategy?',
header: 'Commit',
multiSelect: false,
options: [
{ label: 'per-task', description: 'One commit per task (Recommended)' },
{ label: 'atomic', description: 'Single commit for entire issue' },
{ label: 'squash', description: 'Squash at the end' }
]
}
]
});
const lifecycle = {
test_strategy: lifecycleAnswer.test || 'auto',
regression_scope: lifecycleAnswer.regression || 'affected',
acceptance_type: 'automated',
commit_strategy: lifecycleAnswer.commit || 'per-task'
};
issueData.lifecycle_requirements = lifecycle;
```
### Phase 5: User Confirmation
```javascript
// Show parsed data and ask for confirmation
console.log(`
## Parsed Issue
**ID**: ${issueData.id}
**Title**: ${issueData.title}
**Source**: ${issueData.source}${issueData.source_url ? ` (${issueData.source_url})` : ''}
### Problem Statement
${issueData.problem_statement}
${issueData.expected_behavior ? `### Expected Behavior\n${issueData.expected_behavior}\n` : ''}
${issueData.actual_behavior ? `### Actual Behavior\n${issueData.actual_behavior}\n` : ''}
${issueData.affected_components?.length ? `### Affected Components\n${issueData.affected_components.map(c => `- ${c}`).join('\n')}\n` : ''}
${issueData.reproduction_steps?.length ? `### Reproduction Steps\n${issueData.reproduction_steps.map((s, i) => `${i+1}. ${s}`).join('\n')}\n` : ''}
### Lifecycle Configuration
- **Test Strategy**: ${lifecycle.test_strategy}
- **Regression Scope**: ${lifecycle.regression_scope}
- **Commit Strategy**: ${lifecycle.commit_strategy}
`);
// Ask user to confirm or edit
const answer = AskUserQuestion({
questions: [{
question: 'Create this issue?',
header: 'Confirm',
multiSelect: false,
options: [
{ label: 'Create', description: 'Save issue to issues.jsonl' },
{ label: 'Edit Title', description: 'Modify the issue title' },
{ label: 'Edit Priority', description: 'Change priority (1-5)' },
{ label: 'Cancel', description: 'Discard and exit' }
]
}]
});
if (answer.includes('Cancel')) {
console.log('Issue creation cancelled.');
return;
}
if (answer.includes('Edit Title')) {
const titleAnswer = AskUserQuestion({
questions: [{
question: 'Enter new title:',
header: 'Title',
multiSelect: false,
options: [
{ label: issueData.title.substring(0, 40), description: 'Keep current' }
]
}]
});
// Handle custom input via "Other"
if (titleAnswer.customText) {
issueData.title = titleAnswer.customText;
}
}
```
### Phase 6: Write to JSONL
```javascript
// Construct final issue object
const priority = flags.priority ? parseInt(flags.priority, 10) : 3;
const labels = flags.labels ? flags.labels.split(',').map(l => l.trim()) : [];
const newIssue = {
id: issueData.id,
title: issueData.title,
status: 'registered',
priority,
context: issueData.problem_statement,
source: issueData.source,
source_url: issueData.source_url || null,
labels: [...(issueData.labels || []), ...labels],
// Structured fields
problem_statement: issueData.problem_statement,
expected_behavior: issueData.expected_behavior || null,
actual_behavior: issueData.actual_behavior || null,
affected_components: issueData.affected_components || [],
reproduction_steps: issueData.reproduction_steps || [],
// Closed-loop lifecycle requirements
lifecycle_requirements: issueData.lifecycle_requirements || {
test_strategy: 'auto',
regression_scope: 'affected',
acceptance_type: 'automated',
commit_strategy: 'per-task'
},
// Metadata
bound_solution_id: null,
solution_count: 0,
created_at: new Date().toISOString(),
updated_at: new Date().toISOString()
};
// Ensure directory exists
Bash('mkdir -p .workflow/issues');
// Append to issues.jsonl (quoted heredoc avoids shell-quoting issues in the JSON)
const issuesPath = '.workflow/issues/issues.jsonl';
Bash(`cat >> "${issuesPath}" <<'EOF'
${JSON.stringify(newIssue)}
EOF`);
console.log(`
## Issue Created
**ID**: ${newIssue.id}
**Title**: ${newIssue.title}
**Priority**: ${newIssue.priority}
**Labels**: ${newIssue.labels.join(', ') || 'none'}
**Source**: ${newIssue.source}
### Next Steps
1. Plan solution: \`/issue:plan ${newIssue.id}\`
2. View details: \`ccw issue status ${newIssue.id}\`
3. Manage issues: \`/issue:manage\`
`);
```
## Examples
### GitHub Issue
```bash
/issue:new https://github.com/myorg/myrepo/issues/42 --priority 2
# Output:
## Issue Created
**ID**: GH-42
**Title**: Fix memory leak in WebSocket handler
**Priority**: 2
**Labels**: bug, performance
**Source**: github (https://github.com/myorg/myrepo/issues/42)
```
### Text Description
```bash
/issue:new "API rate limiting not working. Expected: 429 after 100 requests. Actual: No limit. Affects src/middleware/rate-limit.ts"
# Output:
## Issue Created
**ID**: ISS-20251227-142530
**Title**: API rate limiting not working
**Priority**: 3
**Labels**: none
**Source**: text
```
## Error Handling
| Error | Resolution |
|-------|------------|
| Invalid GitHub URL | Show format hint, ask for correction |
| gh CLI not available | Fall back to WebFetch for public issues |
| Empty description | Prompt user for required fields |
| Duplicate issue ID | Auto-increment or suggest merge |
| Parse failure | Show raw input, ask for manual structuring |
## Related Commands
- `/issue:plan` - Plan solution for issue
- `/issue:manage` - Interactive issue management
- `ccw issue list` - List all issues
- `ccw issue status <id>` - View issue details


@@ -1,304 +0,0 @@
---
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "--all-pending <issue-id>[,<issue-id>,...] [--batch-size 3] "
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---
# Issue Plan Command (/issue:plan)
## Overview
Unified planning command using **issue-plan-agent** that combines exploration and planning into a single closed-loop workflow.
## Output Requirements
**Generate Files:**
1. `.workflow/issues/solutions/{issue-id}.jsonl` - Solution with tasks for each issue
**Return Summary:**
```json
{
"bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
"pending_selection": [{ "issue_id": "...", "solutions": [...] }],
"conflicts": [{ "file": "...", "issues": [...] }]
}
```
**Completion Criteria:**
- [ ] Solution file generated for each issue
- [ ] Single solution → auto-bound via `ccw issue bind`
- [ ] Multiple solutions → returned for user selection
- [ ] Tasks conform to schema: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
- [ ] Each task has quantified `acceptance.criteria`
## Core Capabilities
- **Closed-loop agent**: issue-plan-agent combines explore + plan
- Batch processing: 1 agent processes 1-3 issues
- ACE semantic search integrated into planning
- Solution with executable tasks and delivery criteria
- Automatic solution registration and binding
## Storage Structure (Flat JSONL)
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── queue.json # Execution queue
└── solutions/
├── {issue-id}.jsonl # Solutions for issue (one per line)
└── ...
```
## Usage
```bash
/issue:plan <issue-id>[,<issue-id>,...] [FLAGS]
# Examples
/issue:plan GH-123 # Single issue
/issue:plan GH-123,GH-124,GH-125 # Batch (up to 3)
/issue:plan --all-pending # All pending issues
# Flags
--batch-size <n> Max issues per agent batch (default: 3)
```
## Execution Process
```
Phase 1: Issue Loading
├─ Parse input (single, comma-separated, or --all-pending)
├─ Fetch issue metadata (ID, title, tags)
├─ Validate issues exist (create if needed)
└─ Group by similarity (shared tags or title keywords, max 3 per batch)
Phase 2: Unified Explore + Plan (issue-plan-agent)
├─ Launch issue-plan-agent per batch
├─ Agent performs:
│ ├─ ACE semantic search for each issue
│ ├─ Codebase exploration (files, patterns, dependencies)
│ ├─ Solution generation with task breakdown
│ └─ Conflict detection across issues
└─ Output: solution JSON per issue
Phase 3: Solution Registration & Binding
├─ Append solutions to solutions/{issue-id}.jsonl
├─ Single solution per issue → auto-bind
├─ Multiple candidates → AskUserQuestion to select
└─ Update issues.jsonl with bound_solution_id
Phase 4: Summary
├─ Display bound solutions
├─ Show task counts per issue
└─ Display next steps (/issue:queue)
```
## Implementation
### Phase 1: Issue Loading (ID + Title + Tags)
```javascript
const batchSize = flags.batchSize || 3;
let issues = []; // {id, title, tags}
if (flags.allPending) {
// Get pending issues with metadata via CLI (JSON output)
const result = Bash(`ccw issue list --status pending,registered --json`).trim();
const parsed = result ? JSON.parse(result) : [];
issues = parsed.map(i => ({ id: i.id, title: i.title || '', tags: i.tags || [] }));
if (issues.length === 0) {
console.log('No pending issues found.');
return;
}
console.log(`Found ${issues.length} pending issues`);
} else {
// Parse comma-separated issue IDs, fetch metadata
const ids = userInput.includes(',')
? userInput.split(',').map(s => s.trim())
: [userInput.trim()];
for (const id of ids) {
Bash(`ccw issue init ${id} --title "Issue ${id}" 2>/dev/null || true`);
const info = Bash(`ccw issue status ${id} --json`).trim();
const parsed = info ? JSON.parse(info) : {};
issues.push({ id, title: parsed.title || '', tags: parsed.tags || [] });
}
}
// Intelligent grouping by similarity (tags → title keywords)
function groupBySimilarity(issues, maxSize) {
const batches = [];
const used = new Set();
for (const issue of issues) {
if (used.has(issue.id)) continue;
const batch = [issue];
used.add(issue.id);
const issueTags = new Set(issue.tags);
const issueWords = new Set(issue.title.toLowerCase().split(/\s+/));
// Find similar issues
for (const other of issues) {
if (used.has(other.id) || batch.length >= maxSize) continue;
// Similarity: shared tags or shared title keywords
const sharedTags = other.tags.filter(t => issueTags.has(t)).length;
const otherWords = other.title.toLowerCase().split(/\s+/);
const sharedWords = otherWords.filter(w => issueWords.has(w) && w.length > 3).length;
if (sharedTags > 0 || sharedWords >= 2) {
batch.push(other);
used.add(other.id);
}
}
batches.push(batch);
}
return batches;
}
const batches = groupBySimilarity(issues, batchSize);
console.log(`Processing ${issues.length} issues in ${batches.length} batch(es) (grouped by similarity)`);
TodoWrite({
todos: batches.map((_, i) => ({
content: `Plan batch ${i+1}`,
status: 'pending',
activeForm: `Planning batch ${i+1}`
}))
});
```
### Phase 2: Unified Explore + Plan (issue-plan-agent)
```javascript
Bash(`mkdir -p .workflow/issues/solutions`);
const pendingSelections = []; // Collect multi-solution issues for user selection
for (const [batchIndex, batch] of batches.entries()) {
updateTodo(`Plan batch ${batchIndex + 1}`, 'in_progress');
// Build issue list with metadata for agent context
const issueList = batch.map(i => `- ${i.id}: ${i.title}${i.tags.length ? ` [${i.tags.join(', ')}]` : ''}`).join('\n');
// Build minimal prompt - agent handles exploration, planning, and binding
const issuePrompt = `
## Plan Issues
**Issues** (grouped by similarity):
${issueList}
**Project Root**: ${process.cwd()}
### Steps
1. Fetch: \`ccw issue status <id> --json\`
2. Explore (ACE) → Plan solution
3. Register & bind: \`ccw issue bind <id> --solution <file>\`
### Generate Files
\`.workflow/issues/solutions/{issue-id}.jsonl\` - Solution with tasks (schema: cat .claude/workflows/cli-templates/schemas/solution-schema.json)
### Binding Rules
- **Single solution**: Auto-bind via \`ccw issue bind <id> --solution <file>\`
- **Multiple solutions**: Register only, return for user selection
### Return Summary
\`\`\`json
{
"bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
"pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "description": "...", "task_count": N }] }],
"conflicts": [{ "file": "...", "issues": [...] }]
}
\`\`\`
`;
// Launch issue-plan-agent - agent writes solutions directly
const batchIds = batch.map(i => i.id);
const result = Task(
subagent_type="issue-plan-agent",
run_in_background=false,
description=`Explore & plan ${batch.length} issues: ${batchIds.join(', ')}`,
prompt=issuePrompt
);
// Parse summary from agent
const summary = JSON.parse(result);
// Display auto-bound solutions
for (const item of summary.bound || []) {
console.log(`${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
}
// Collect pending selections for Phase 3
pendingSelections.push(...(summary.pending_selection || []));
// Show conflicts
if (summary.conflicts?.length > 0) {
console.log(`⚠ Conflicts: ${summary.conflicts.map(c => c.file).join(', ')}`);
}
updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
}
```
### Phase 3: Multi-Solution Selection
```javascript
// Only handle issues where agent generated multiple solutions
if (pendingSelections.length > 0) {
const answer = AskUserQuestion({
questions: pendingSelections.map(({ issue_id, solutions }) => ({
question: `Select solution for ${issue_id}:`,
header: issue_id,
multiSelect: false,
options: solutions.map(s => ({
label: `${s.id} (${s.task_count} tasks)`,
description: s.description
}))
}))
});
// Bind user-selected solutions
for (const { issue_id } of pendingSelections) {
const selectedId = extractSelectedSolutionId(answer, issue_id);
if (selectedId) {
Bash(`ccw issue bind ${issue_id} ${selectedId}`);
console.log(`${issue_id}: ${selectedId} bound`);
}
}
}
```
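`extractSelectedSolutionId` is assumed above; a minimal sketch, assuming AskUserQuestion returns the selected label keyed by the question header and that labels follow the `SOL-xxx (N tasks)` format used when building the options:

```javascript
// Hypothetical helper - recover the solution ID from the selected option label
function extractSelectedSolutionId(answer, issueId) {
  const label = answer[issueId];         // e.g. "SOL-002 (4 tasks)"
  if (!label) return null;
  const match = label.match(/^(\S+)/);   // the ID is the first token
  return match ? match[1] : null;
}
```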
### Phase 4: Summary
```javascript
// Count planned issues via CLI
const plannedIds = Bash(`ccw issue list --status planned --ids`).trim();
const plannedCount = plannedIds ? plannedIds.split('\n').length : 0;
console.log(`
## Done: ${issues.length} issues → ${plannedCount} planned
Next: \`/issue:queue\` → \`/issue:execute\`
`);
```
## Error Handling
| Error | Resolution |
|-------|------------|
| Issue not found | Auto-create in issues.jsonl |
| ACE search fails | Agent falls back to ripgrep |
| No solutions generated | Display error, suggest manual planning |
| User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order |
## Related Commands
- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue list` - List all issues
- `ccw issue status` - View issue and solution details


@@ -1,294 +0,0 @@
---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent
argument-hint: "[--rebuild] [--issue <id>]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---
# Issue Queue Command (/issue:queue)
## Overview
Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves conflicts, and creates an ordered execution queue.
## Output Requirements
**Generate Files:**
1. `.workflow/issues/queues/{queue-id}.json` - Full queue with tasks, conflicts, groups
2. `.workflow/issues/queues/index.json` - Update with new queue entry
**Return Summary:**
```json
{
"queue_id": "QUE-20251227-143000",
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["GH-123", "GH-124"]
}
```
**Completion Criteria:**
- [ ] Queue JSON generated with valid DAG (no cycles)
- [ ] All file conflicts resolved with rationale
- [ ] Semantic priority calculated for all tasks
- [ ] Execution groups assigned (parallel P* / sequential S*)
- [ ] Issue statuses updated to `queued` via `ccw issue update`
## Core Capabilities
- **Agent-driven**: issue-queue-agent handles all ordering logic
- Dependency DAG construction and cycle detection
- File conflict detection and resolution
- Semantic priority calculation (0.0-1.0)
- Parallel/Sequential group assignment
## Storage Structure (Queue History)
```
.workflow/issues/
├── issues.jsonl # All issues (one per line)
├── queues/ # Queue history directory
│ ├── index.json # Queue index (active + history)
│ ├── {queue-id}.json # Individual queue files
│ └── ...
└── solutions/
├── {issue-id}.jsonl # Solutions for issue
└── ...
```
### Queue Index Schema
```json
{
"active_queue_id": "QUE-20251227-143000",
"queues": [
{
"id": "QUE-20251227-143000",
"status": "active",
"issue_ids": ["GH-123", "GH-124"],
"total_tasks": 8,
"completed_tasks": 3,
"created_at": "2025-12-27T14:30:00Z"
},
{
"id": "QUE-20251226-100000",
"status": "completed",
"issue_ids": ["GH-120"],
"total_tasks": 5,
"completed_tasks": 5,
"created_at": "2025-12-26T10:00:00Z",
"completed_at": "2025-12-26T12:30:00Z"
}
]
}
```
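A minimal sketch of resolving the active queue from this index (assuming `jq` is available and the queue file exposes a `tasks` array as in the status output):

```javascript
// Resolve the active queue file from the index
const indexPath = '.workflow/issues/queues/index.json';
const activeId = Bash(`jq -r '.active_queue_id' "${indexPath}"`).trim();
const queue = JSON.parse(Bash(`cat ".workflow/issues/queues/${activeId}.json"`));
console.log(`Active queue ${activeId}: ${queue.tasks?.length ?? 0} tasks`);
```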
## Usage
```bash
/issue:queue [FLAGS]
# Examples
/issue:queue # Form NEW queue from all bound solutions
/issue:queue --issue GH-123 # Form queue for specific issue only
/issue:queue --append GH-124 # Append to active queue
/issue:queue --list # List all queues (history)
/issue:queue --switch QUE-xxx # Switch active queue
/issue:queue --archive # Archive completed active queue
# Flags
--issue <id> Form queue for specific issue only
--append <id> Append issue to active queue (don't create new)
# CLI subcommands (ccw issue queue ...)
ccw issue queue list List all queues with status
ccw issue queue switch <queue-id> Switch active queue
ccw issue queue archive Archive current queue
ccw issue queue delete <queue-id> Delete queue from history
```
## Execution Process
```
Phase 1: Solution Loading
├─ Load issues.jsonl
├─ Filter issues with bound_solution_id
├─ Read solutions/{issue-id}.jsonl for each issue
├─ Find bound solution by ID
└─ Extract tasks from bound solutions
Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
├─ Launch issue-queue-agent with all tasks
├─ Agent performs:
│ ├─ Build dependency DAG from depends_on
│ ├─ Detect circular dependencies
│ ├─ Identify file modification conflicts
│ ├─ Resolve conflicts using ordering rules
│ ├─ Calculate semantic priority (0.0-1.0)
│ └─ Assign execution groups (parallel/sequential)
└─ Output: queue JSON with ordered tasks
Phase 5: Queue Output
├─ Write queue.json
├─ Update issue statuses in issues.jsonl
└─ Display queue summary
```
## Implementation
### Phase 1: Solution Loading
```javascript
// Load issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}" 2>/dev/null || echo ''`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
// Filter issues with bound solutions
const plannedIssues = allIssues.filter(i =>
i.status === 'planned' && i.bound_solution_id
);
if (plannedIssues.length === 0) {
console.log('No issues with bound solutions found.');
console.log('Run /issue:plan first to create and bind solutions.');
return;
}
// Load all tasks from bound solutions
const allTasks = [];
for (const issue of plannedIssues) {
const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
.split('\n')
.filter(line => line.trim())
.map(line => JSON.parse(line));
// Find bound solution
const boundSol = solutions.find(s => s.id === issue.bound_solution_id);
if (!boundSol) {
console.log(`⚠ Bound solution ${issue.bound_solution_id} not found for ${issue.id}`);
continue;
}
for (const task of boundSol.tasks || []) {
allTasks.push({
issue_id: issue.id,
solution_id: issue.bound_solution_id,
task,
exploration_context: boundSol.exploration_context
});
}
}
console.log(`Loaded ${allTasks.length} tasks from ${plannedIssues.length} issues`);
```
### Phase 2-4: Agent-Driven Queue Formation
```javascript
// Build minimal prompt - agent reads schema and handles ordering
const agentPrompt = `
## Order Tasks
**Tasks**: ${allTasks.length} from ${plannedIssues.length} issues
**Project Root**: ${process.cwd()}
### Input
\`\`\`json
${JSON.stringify(allTasks.map(t => ({
key: `${t.issue_id}:${t.task.id}`,
type: t.task.type,
file_context: t.task.file_context,
depends_on: t.task.depends_on
})), null, 2)}
\`\`\`
### Steps
1. Parse tasks: Extract task keys, types, file contexts, dependencies
2. Build DAG: Construct dependency graph from depends_on references
3. Detect cycles: Verify no circular dependencies exist (abort if found)
4. Detect conflicts: Identify file modification conflicts across issues
5. Resolve conflicts: Apply ordering rules (Create→Update→Delete, config→src→tests)
6. Calculate priority: Compute semantic priority (0.0-1.0) for each task
7. Assign groups: Assign parallel (P*) or sequential (S*) execution groups
8. Generate queue: Write queue JSON with ordered tasks
9. Update index: Update queues/index.json with new queue entry
### Rules
- **DAG Validity**: Output must be valid DAG with no circular dependencies
- **Conflict Resolution**: All file conflicts must be resolved with rationale
- **Ordering Priority**:
1. Create before Update (files must exist before modification)
2. Foundation before integration (config/ → src/)
3. Types before implementation (types/ → components/)
4. Core before tests (src/ → __tests__/)
5. Delete last (preserve dependencies until no longer needed)
- **Parallel Safety**: Tasks in same parallel group must have no file conflicts
- **Queue ID Format**: \`QUE-YYYYMMDD-HHMMSS\` (UTC timestamp)
### Generate Files
1. \`.workflow/issues/queues/\${queueId}.json\` - Full queue (schema: cat .claude/workflows/cli-templates/schemas/queue-schema.json)
2. \`.workflow/issues/queues/index.json\` - Update with new entry
### Return Summary
\`\`\`json
{
"queue_id": "QUE-YYYYMMDD-HHMMSS",
"total_tasks": N,
"execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
"conflicts_resolved": N,
"issues_queued": ["GH-123"]
}
\`\`\`
`;
const result = Task(
subagent_type="issue-queue-agent",
run_in_background=false,
description=`Order ${allTasks.length} tasks`,
prompt=agentPrompt
);
const summary = JSON.parse(result);
```
### Phase 5: Summary & Status Update
```javascript
// Agent already generated queue files, use summary
console.log(`
## Queue Formed: ${summary.queue_id}
**Tasks**: ${summary.total_tasks}
**Issues**: ${summary.issues_queued.join(', ')}
**Groups**: ${summary.execution_groups.map(g => `${g.id}(${g.count})`).join(', ')}
**Conflicts Resolved**: ${summary.conflicts_resolved}
Next: \`/issue:execute\`
`);
// Update issue statuses via CLI
for (const issueId of summary.issues_queued) {
Bash(`ccw issue update ${issueId} --status queued`);
}
```
## Error Handling
| Error | Resolution |
|-------|------------|
| No bound solutions | Display message, suggest /issue:plan |
| Circular dependency | List cycles, abort queue formation |
| Unresolved conflicts | Agent resolves using ordering rules |
| Invalid task reference | Skip and warn |
## Related Commands
- `/issue:plan` - Plan issues and bind solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue queue list` - View current queue


@@ -1,383 +0,0 @@
---
name: compact
description: Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool
argument-hint: "[optional: session description]"
allowed-tools: mcp__ccw-tools__core_memory(*), Read(*)
examples:
- /memory:compact
- /memory:compact "completed core-memory module"
---
# Memory Compact Command (/memory:compact)
## 1. Overview
The `memory:compact` command **compresses current session working memory** into structured text optimized for **session recovery**, extracts critical information, and saves it to persistent storage via MCP `core_memory` tool.
**Core Philosophy**:
- **Session Recovery First**: Capture everything needed to resume work seamlessly
- **Minimize Re-exploration**: Include file paths, decisions, and state to avoid redundant analysis
- **Preserve Train of Thought**: Keep notes and hypotheses for complex debugging
- **Actionable State**: Record last action result and known issues
## 2. Parameters
- `"session description"` (Optional): Session description to supplement objective
- Example: "completed core-memory module"
- Example: "debugging JWT refresh - suspected memory leak"
## 3. Structured Output Format
```markdown
## Session ID
[WFS-ID if workflow session active, otherwise (none)]
## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]
## Objective
[High-level goal - the "North Star" of this session]
## Execution Plan
[CRITICAL: Embed the LATEST plan in its COMPLETE and DETAILED form]
### Source: [workflow | todo | user-stated | inferred]
<details>
<summary>Full Execution Plan (Click to expand)</summary>
[PRESERVE COMPLETE PLAN VERBATIM - DO NOT SUMMARIZE]
- ALL phases, tasks, subtasks
- ALL file paths (absolute)
- ALL dependencies and prerequisites
- ALL acceptance criteria
- ALL status markers ([x] done, [ ] pending)
- ALL notes and context
Example:
## Phase 1: Setup
- [x] Initialize project structure
- Created D:\Claude_dms3\src\core\index.ts
- Added dependencies: lodash, zod
- [ ] Configure TypeScript
- Update tsconfig.json for strict mode
## Phase 2: Implementation
- [ ] Implement core API
- Target: D:\Claude_dms3\src\api\handler.ts
- Dependencies: Phase 1 complete
- Acceptance: All tests pass
</details>
## Working Files (Modified)
[Absolute paths to actively modified files]
- D:\Claude_dms3\src\file1.ts (role: main implementation)
- D:\Claude_dms3\tests\file1.test.ts (role: unit tests)
## Reference Files (Read-Only)
[Absolute paths to context files - NOT modified but essential for understanding]
- D:\Claude_dms3\.claude\CLAUDE.md (role: project instructions)
- D:\Claude_dms3\src\types\index.ts (role: type definitions)
- D:\Claude_dms3\package.json (role: dependencies)
## Last Action
[Last significant action and its result/status]
## Decisions
- [Decision]: [Reasoning]
- [Decision]: [Reasoning]
## Constraints
- [User-specified limitation or preference]
## Dependencies
- [Added/changed packages or environment requirements]
## Known Issues
- [Deferred bug or edge case]
## Changes Made
- [Completed modification]
## Pending
- [Next step] or (none)
## Notes
[Unstructured thoughts, hypotheses, debugging trails]
```
## 4. Field Definitions
| Field | Purpose | Recovery Value |
|-------|---------|----------------|
| **Session ID** | Workflow session identifier (WFS-*) | Links memory to specific stateful task execution |
| **Project Root** | Absolute path to project directory | Enables correct path resolution in new sessions |
| **Objective** | Ultimate goal of the session | Prevents losing track of the broader feature |
| **Execution Plan** | Complete plan from any source (verbatim) | Preserves full planning context, avoids re-planning |
| **Working Files** | Actively modified files (absolute paths) | Immediately identifies where work was happening |
| **Reference Files** | Read-only context files (absolute paths) | Eliminates re-exploration for critical context |
| **Last Action** | Final tool output/status | Immediate state awareness (success/failure) |
| **Decisions** | Architectural choices + reasoning | Prevents re-litigating settled decisions |
| **Constraints** | User-imposed limitations | Maintains personalized coding style |
| **Dependencies** | Package/environment changes | Prevents missing dependency errors |
| **Known Issues** | Deferred bugs/edge cases | Ensures issues aren't forgotten |
| **Changes Made** | Completed modifications | Clear record of what was done |
| **Pending** | Next steps | Immediate action items |
| **Notes** | Hypotheses, debugging trails | Preserves "train of thought" |
## 5. Execution Flow
### Step 1: Analyze Current Session
Extract the following from conversation history:
```javascript
const sessionAnalysis = {
sessionId: "", // WFS-* if workflow session active, null otherwise
projectRoot: "", // Absolute path: D:\Claude_dms3
objective: "", // High-level goal (1-2 sentences)
executionPlan: {
source: "workflow" | "todo" | "user-stated" | "inferred",
content: "" // Full plan content - ALWAYS preserve COMPLETE and DETAILED form
},
workingFiles: [], // {absolutePath, role} - modified files
referenceFiles: [], // {absolutePath, role} - read-only context files
lastAction: "", // Last significant action + result
decisions: [], // {decision, reasoning}
constraints: [], // User-specified limitations
dependencies: [], // Added/changed packages
knownIssues: [], // Deferred bugs
changesMade: [], // Completed modifications
pending: [], // Next steps
notes: "" // Unstructured thoughts
}
```
### Step 2: Generate Structured Text
```javascript
// Helper: Generate execution plan section
const generateExecutionPlan = (plan) => {
const sourceLabels = {
'workflow': 'workflow (IMPL_PLAN.md)',
'todo': 'todo (TodoWrite)',
'user-stated': 'user-stated',
'inferred': 'inferred'
};
// CRITICAL: Preserve complete plan content verbatim - DO NOT summarize
return `### Source: ${sourceLabels[plan.source] || plan.source}
<details>
<summary>Full Execution Plan (Click to expand)</summary>
${plan.content}
</details>`;
};
const structuredText = `## Session ID
${sessionAnalysis.sessionId || '(none)'}
## Project Root
${sessionAnalysis.projectRoot}
## Objective
${sessionAnalysis.objective}
## Execution Plan
${generateExecutionPlan(sessionAnalysis.executionPlan)}
## Working Files (Modified)
${sessionAnalysis.workingFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}
## Reference Files (Read-Only)
${sessionAnalysis.referenceFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}
## Last Action
${sessionAnalysis.lastAction}
## Decisions
${sessionAnalysis.decisions.map(d => `- ${d.decision}: ${d.reasoning}`).join('\n') || '(none)'}
## Constraints
${sessionAnalysis.constraints.map(c => `- ${c}`).join('\n') || '(none)'}
## Dependencies
${sessionAnalysis.dependencies.map(d => `- ${d}`).join('\n') || '(none)'}
## Known Issues
${sessionAnalysis.knownIssues.map(i => `- ${i}`).join('\n') || '(none)'}
## Changes Made
${sessionAnalysis.changesMade.map(c => `- ${c}`).join('\n') || '(none)'}
## Pending
${sessionAnalysis.pending.length > 0
? sessionAnalysis.pending.map(p => `- ${p}`).join('\n')
: '(none)'}
## Notes
${sessionAnalysis.notes || '(none)'}`
```
### Step 3: Import to Core Memory via MCP
Use the MCP `core_memory` tool to save the structured text:
```javascript
mcp__ccw-tools__core_memory({
operation: "import",
text: structuredText
})
```
Or via CLI (pipe structured text to import):
```bash
# Write structured text to temp file, then import
echo "$structuredText" | ccw core-memory import
# Or from a file
ccw core-memory import --file /path/to/session-memory.md
```
**Response Format**:
```json
{
"operation": "import",
"id": "CMEM-YYYYMMDD-HHMMSS",
"message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```
### Step 4: Report Recovery ID
After successful import, **clearly display the Recovery ID** to the user:
```
╔════════════════════════════════════════════════════════════════════════════╗
║ ✓ Session Memory Saved ║
║ ║
║ Recovery ID: CMEM-YYYYMMDD-HHMMSS ║
║ ║
║ To restore: "Please import memory <ID>" ║
║ (MCP: core_memory export | CLI: ccw core-memory export --id <ID>) ║
╚════════════════════════════════════════════════════════════════════════════╝
```
## 6. Quality Checklist
Before generating:
- [ ] Session ID captured if workflow session active (WFS-*)
- [ ] Project Root is absolute path (e.g., D:\Claude_dms3)
- [ ] Objective clearly states the "North Star" goal
- [ ] Execution Plan: COMPLETE plan preserved VERBATIM (no summarization)
- [ ] Plan Source: Clearly identified (workflow | todo | user-stated | inferred)
- [ ] Plan Details: ALL phases, tasks, file paths, dependencies, status markers included
- [ ] All file paths are ABSOLUTE (not relative)
- [ ] Working Files: 3-8 modified files with roles
- [ ] Reference Files: Key context files (CLAUDE.md, types, configs)
- [ ] Last Action captures final state (success/failure)
- [ ] Decisions include reasoning, not just choices
- [ ] Known Issues records deferred bugs so they aren't silently forgotten
- [ ] Notes preserve debugging hypotheses if any
## 7. Path Resolution Rules
### Project Root Detection
1. Check current working directory from environment
2. Look for project markers: `.git/`, `package.json`, `.claude/`
3. Use the topmost directory containing these markers
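A minimal sketch of this detection, assuming Node's `fs`/`path` modules (walk upward from the working directory and keep the topmost directory containing a marker):

```javascript
const fs = require('fs');
const path = require('path');

// Walk upward and remember the LAST (topmost) directory that has a marker
function findProjectRoot(startDir) {
  const markers = ['.git', 'package.json', '.claude'];
  let dir = path.resolve(startDir);
  let topmost = null;
  while (true) {
    if (markers.some(m => fs.existsSync(path.join(dir, m)))) topmost = dir;
    const parent = path.dirname(dir);
    if (parent === dir) break;  // filesystem root reached
    dir = parent;
  }
  return topmost || path.resolve(startDir);
}
```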
### Absolute Path Conversion
```javascript
// Convert relative to absolute
const toAbsolutePath = (relativePath, projectRoot) => {
if (path.isAbsolute(relativePath)) return relativePath;
return path.join(projectRoot, relativePath);
};
// Example: "src/api/auth.ts" → "D:\Claude_dms3\src\api\auth.ts"
```
### Reference File Categories
| Category | Examples | Priority |
|----------|----------|----------|
| Project Config | `.claude/CLAUDE.md`, `package.json`, `tsconfig.json` | High |
| Type Definitions | `src/types/*.ts`, `*.d.ts` | High |
| Related Modules | Parent/sibling modules with shared interfaces | Medium |
| Test Files | Corresponding test files for modified code | Medium |
| Documentation | `README.md`, `ARCHITECTURE.md` | Low |
## 8. Plan Detection (Priority Order)
### Priority 1: Workflow Session (IMPL_PLAN.md)
```javascript
// Check for active workflow session
const manifest = await mcp__ccw-tools__session_manager({
operation: "list",
location: "active"
});
if (manifest.sessions?.length > 0) {
const session = manifest.sessions[0];
const plan = await mcp__ccw-tools__session_manager({
operation: "read",
session_id: session.id,
content_type: "plan"
});
sessionAnalysis.sessionId = session.id;
sessionAnalysis.executionPlan.source = "workflow";
sessionAnalysis.executionPlan.content = plan.content;
}
```
### Priority 2: TodoWrite (Current Session Todos)
```javascript
// Extract from conversation - look for TodoWrite tool calls
// Preserve COMPLETE todo list with all details
const todos = extractTodosFromConversation();
if (todos.length > 0) {
sessionAnalysis.executionPlan.source = "todo";
// Format todos with full context - preserve status markers
sessionAnalysis.executionPlan.content = todos.map(t =>
`- [${t.status === 'completed' ? 'x' : t.status === 'in_progress' ? '>' : ' '}] ${t.content}`
).join('\n');
}
```
### Priority 3: User-Stated Plan
```javascript
// Look for explicit plan statements in user messages:
// - "Here's my plan: 1. ... 2. ... 3. ..."
// - "I want to: first..., then..., finally..."
// - Numbered or bulleted lists describing steps
const userPlan = extractUserStatedPlan();
if (userPlan) {
  sessionAnalysis.executionPlan.source = "user-stated";
  sessionAnalysis.executionPlan.content = userPlan;
}
```
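`extractUserStatedPlan` could be as simple as pattern-matching for the cues listed above; a rough sketch, with the regexes as assumptions:
```javascript
// Hypothetical helper: detect an explicit plan in user messages via simple cues
const extractUserStatedPlan = (userMessages = []) => {
  const planCue = /(here'?s my plan|i want to:|^\s*(\d+[.)]|[-*])\s+)/im;
  for (const msg of [...userMessages].reverse()) {
    if (!planCue.test(msg)) continue;
    // Keep only numbered/bulleted lines so the plan is preserved verbatim
    const steps = msg.split('\n').filter(l => /^\s*(\d+[.)]|[-*])\s+/.test(l));
    if (steps.length >= 2) return steps.join('\n');
  }
  return null;
};
```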
### Priority 4: Inferred Plan
```javascript
// If no explicit plan, infer from:
// - Task description and breakdown discussion
// - Sequence of actions taken
// - Outstanding work mentioned
const inferredPlan = inferPlanFromDiscussion();
if (inferredPlan) {
sessionAnalysis.executionPlan.source = "inferred";
sessionAnalysis.executionPlan.content = inferredPlan;
}
```
## 9. Notes
- **Timing**: Execute at task completion or before context switch
- **Frequency**: Once per independent task or milestone
- **Recovery**: New session can immediately continue with full context
- **Knowledge Graph**: Entity relationships auto-extracted for visualization
- **Absolute Paths**: Critical for cross-session recovery on different machines

View File

@@ -101,10 +101,10 @@ src/ (depth 1) → SINGLE STRATEGY
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Get module structure with classification
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
// OR with path parameter
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
@@ -200,7 +200,7 @@ for (let layer of [3, 2, 1]) {
let strategy = module.depth >= 3 ? "full" : "single";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"${strategy}","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -263,7 +263,7 @@ MODULES:
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ccw tool exec generate_module_docs
EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
- Accepts strategy parameter: full | single
- Accepts folder type detection: code | navigation
- Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -273,7 +273,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"{{strategy}}","sourcePath":".","projectName":"{{project_name}}","tool":"${tool}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
run_in_background: false
})
exit_code=$?
@@ -322,7 +322,7 @@ let project_root = get_project_root();
report("Generating project README.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-readme","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -335,7 +335,7 @@ for (let tool of tool_order) {
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-architecture","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -350,7 +350,7 @@ if (bash_result.stdout.includes("API_FOUND")) {
report("Generating HTTP API documentation...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"http-api","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {

View File

@@ -51,7 +51,7 @@ Orchestrates context-aware documentation generation/update for changed modules u
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Detect changed modules
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -123,7 +123,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
return async () => {
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -207,21 +207,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_1}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_2}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_3}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module

View File

@@ -85,10 +85,10 @@ bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","pro
```bash
# 1. Run folder analysis
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}')
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh)
# 2. Get top-level directories (first 2 path levels)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}' | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
# 3. Find existing docs (if directory exists)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
@@ -235,12 +235,12 @@ api_id=$((group_count + 3))
| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|------|-------------|-----------|----------|---------------|------------|
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |
| **CLI** | true | implementation_approach | write | --approval-mode yolo | Execute CLI commands, validate output |
**Command Patterns**:
- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`
- Gemini/Qwen: `cd dir && gemini -p "..."`
- CLI Mode: `cd dir && gemini --approval-mode yolo -p "..."`
- Codex: `codex -C dir --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
@@ -331,7 +331,7 @@ api_id=$((group_count + 3))
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"depends_on": [1],
"output": "generated_docs"
}
@@ -363,7 +363,7 @@ api_id=$((group_count + 3))
},
{
"step": "analyze_project",
"command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
"command": "bash(gemini \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
"output_to": "project_outline"
}
],
@@ -403,7 +403,7 @@ api_id=$((group_count + 3))
"pre_analysis": [
{"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
{"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
{"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
{"step": "analyze_architecture", "command": "bash(gemini \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\")", "output_to": "arch_examples_outline"}
],
"implementation_approach": [
{
@@ -440,7 +440,7 @@ api_id=$((group_count + 3))
"pre_analysis": [
{"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
{"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
{"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
{"step": "analyze_api", "command": "bash(gemini \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\")", "output_to": "api_outline"}
],
"implementation_approach": [
{
@@ -601,7 +601,7 @@ api_id=$((group_count + 3))
| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|------|---------------|----------|---------------|------------|
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |
| **CLI (--cli-execute)** | implementation_approach | write | --approval-mode yolo | Executes CLI commands, validates output |
**Execution Flow**:
- **Phase 2**: Unified analysis once, results in `.process/`

View File

@@ -5,7 +5,7 @@ argument-hint: "[--tool gemini|qwen] \"task context description\""
allowed-tools: Task(*), Bash(*)
examples:
- /memory:load "在当前前端基础上开发用户认证功能"
- /memory:load --tool qwen "重构支付模块API"
- /memory:load --tool qwen -p "重构支付模块API"
---
# Memory Load Command (/memory:load)
@@ -39,7 +39,7 @@ The command fully delegates to **universal-executor agent**, which autonomously:
1. **Analyzes Project Structure**: Executes `get_modules_by_depth.sh` to understand architecture
2. **Loads Documentation**: Reads CLAUDE.md, README.md and other key docs
3. **Extracts Keywords**: Derives core keywords from task description
4. **Discovers Files**: Uses CodexLens MCP or rg/find to locate relevant files
4. **Discovers Files**: Uses MCP code-index or rg/find to locate relevant files
5. **CLI Deep Analysis**: Executes Gemini/Qwen CLI for deep context analysis
6. **Generates Content Package**: Returns structured JSON core content package
@@ -109,7 +109,7 @@ Task(
1. **Project Structure**
\`\`\`bash
bash(ccw tool exec get_modules_by_depth '{}')
bash(~/.claude/scripts/get_modules_by_depth.sh)
\`\`\`
2. **Core Documentation**
@@ -136,7 +136,7 @@ Task(
Execute Gemini/Qwen CLI for deep analysis (saves main thread tokens):
\`\`\`bash
ccw cli -p "
cd . && ${tool} -p "
PURPOSE: Extract project core context for task: ${task_description}
TASK: Analyze project architecture, tech stack, key patterns, relevant files
MODE: analysis
@@ -147,7 +147,7 @@ RULES:
- Identify key architecture patterns and technical constraints
- Extract integration points and development standards
- Output concise, structured format
" --tool ${tool} --mode analysis
"
\`\`\`
### Step 4: Generate Core Content Package
@@ -212,7 +212,7 @@ Before returning:
### Example 2: Using Qwen Tool
```bash
/memory:load --tool qwen "重构支付模块API"
/memory:load --tool qwen -p "重构支付模块API"
```
Agent uses Qwen CLI for analysis, returns same structured package.

View File

@@ -1,310 +0,0 @@
---
name: tech-research-rules
description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Tech Stack Rules Generator
## Overview
**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.
**Output Structure**:
```
.claude/rules/tech/{tech-stack}/
├── core.md # paths: **/*.{ext} - Core principles
├── patterns.md # paths: src/**/*.{ext} - Implementation patterns
├── testing.md # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md # paths: *.config.* - Configuration rules
├── api.md # paths: **/api/**/* - API rules (backend only)
├── components.md # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json # Generation metadata
```
**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`
---
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization
2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
3. **Template-Driven**: Agent reads templates before generating content
4. **Agent Produces Files**: Agent writes all rule files directly
5. **No Manual Loading**: Rules auto-activate when Claude works with matching files
---
## 3-Phase Execution
### Phase 1: Prepare Context & Detect Tech Stack
**Goal**: Detect input mode, extract tech stack info, determine file extensions
**Input Mode Detection**:
```bash
input="$1"
if [[ "$input" == WFS-* ]]; then
MODE="session"
SESSION_ID="$input"
# Read workflow-session.json to extract tech stack
else
MODE="direct"
TECH_STACK_NAME="$input"
fi
```
**Tech Stack Analysis**:
```javascript
// Decompose composite tech stacks
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]
const TECH_EXTENSIONS = {
  "typescript": "{ts,tsx}",
  "javascript": "{js,jsx}",
  "python": "py",
  "rust": "rs",
  "go": "go",
  "java": "java",
  "csharp": "cs",
  "ruby": "rb",
  "php": "php"
};
const FRAMEWORK_TYPE = {
  "react": "frontend",
  "vue": "frontend",
  "angular": "frontend",
  "nextjs": "fullstack",
  "nuxt": "fullstack",
  "fastapi": "backend",
  "express": "backend",
  "django": "backend",
  "rails": "backend"
};
```
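A sketch of the decomposition step, built only from the two maps above (precedence between framework types is a judgment call, not prescribed by this command):
```javascript
// Sketch: decompose a composite stack name using TECH_EXTENSIONS and FRAMEWORK_TYPE
const decomposeTechStack = (name) => {
  const components = name.toLowerCase().split('-').filter(Boolean);
  const primaryLang = components.find(c => c in TECH_EXTENSIONS) || components[0];
  // First framework hit wins here; a real implementation may want fullstack to dominate
  const frameworkType = components.map(c => FRAMEWORK_TYPE[c]).find(Boolean) || 'library';
  return {
    components,
    primaryLang,
    fileExt: TECH_EXTENSIONS[primaryLang] || '*',
    frameworkType
  };
};
// decomposeTechStack("typescript-react-nextjs")
// → { components: ["typescript","react","nextjs"], primaryLang: "typescript",
//     fileExt: "{ts,tsx}", frameworkType: "frontend" }
```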
**Check Existing Rules**:
```bash
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Skip Decision**:
- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
- If `--regenerate` → Delete existing and regenerate
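In the pseudo-JS style used elsewhere in this command, the decision reads roughly as:
```javascript
// Sketch of the skip decision, using the variables computed above
if (existing_count > 0 && !regenerate_flag) {
  SKIP_GENERATION = true   // reuse existing rules
} else if (regenerate_flag) {
  bash(rm -rf "${rules_dir}")  // delete before regenerating
  SKIP_GENERATION = false
} else {
  SKIP_GENERATION = false  // first-time generation
}
```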
**Output Variables**:
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean
**TodoWrite**: Mark phase 1 completed
---
### Phase 2: Agent Produces Path-Conditional Rules
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Delegate to agent for Exa research and rule file generation
**Template Files**:
```
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt # Agent instructions
├── rule-core.txt # Core principles template
├── rule-patterns.txt # Implementation patterns template
├── rule-testing.txt # Testing rules template
├── rule-config.txt # Configuration rules template
├── rule-api.txt # API rules template (backend)
└── rule-components.txt # Component rules template (frontend)
```
**Agent Task**:
```javascript
Task({
subagent_type: "general-purpose",
description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
prompt: `
You are generating path-conditional rules for Claude Code.
## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/
## Instructions
Read the agent prompt template for detailed instructions:
$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)
## Execution Steps
1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion
## Variable Substitutions
Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
```
**Completion Criteria**:
- 4-6 rule files written with proper `paths` frontmatter
- metadata.json written
- Agent reports files created
**TodoWrite**: Mark phase 2 completed
---
### Phase 3: Verify & Report
**Goal**: Verify generated files and provide usage summary
**Steps**:
1. **Verify Files**:
```bash
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
```
2. **Validate Frontmatter**:
```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```
3. **Read Metadata**:
```javascript
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
```
4. **Generate Summary Report**:
```
Tech Stack Rules Generated
Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/
Files Created:
├── core.md → paths: **/*.{ext}
├── patterns.md → paths: src/**/*.{ext}
├── testing.md → paths: **/*.{test,spec}.{ext}
├── config.md → paths: *.config.*
├── api.md → paths: **/api/**/* (if backend)
└── components.md → paths: **/components/**/* (if frontend)
Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required
Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```
**TodoWrite**: Mark phase 3 completed
---
## Path Pattern Reference
| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |
| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |
---
## Parameters
```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
```
**Arguments**:
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules
---
## Examples
### Single Language
```bash
/memory:tech-research "typescript"
```
**Output**: `.claude/rules/tech/typescript/` with 4 rule files
### Frontend Stack
```bash
/memory:tech-research "typescript-react"
```
**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)
### Backend Stack
```bash
/memory:tech-research "python-fastapi"
```
**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)
### From Session
```bash
/memory:tech-research WFS-user-auth-20251104
```
**Workflow**: Extract tech stack from session → Generate rules
---
## Comparison: Rules vs SKILL
| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |
**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup

View File

@@ -0,0 +1,477 @@
---
name: tech-research
description: "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Tech Stack Research SKILL Generator
## Overview
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates ALL work to agent. Agent produces files directly.
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
**Execution Paths**:
- **Full Path**: All 3 phases (no existing SKILL OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing SKILL found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated
**Agent Responsibility**:
- Agent does ALL the work: context reading, Exa research, content synthesis, file writing
- Orchestrator only provides context paths and waits for completion
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Context Path Delegation**: Pass session directory or tech stack name to agent, let agent do discovery
3. **Agent Produces Files**: Agent directly writes all module files, orchestrator does NOT parse agent output
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
5. **No User Prompts**: Never ask user questions or wait for input between phases
6. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
7. **Lightweight Index**: Phase 3 only generates SKILL.md index by reading existing files
---
## 3-Phase Execution
### Phase 1: Prepare Context Paths
**Goal**: Detect input mode, prepare context paths for agent, check existing SKILL
**Input Mode Detection**:
```bash
# Get input parameter
input="$1"
# Detect mode
if [[ "$input" == WFS-* ]]; then
MODE="session"
SESSION_ID="$input"
CONTEXT_PATH=".workflow/${SESSION_ID}"
else
MODE="direct"
TECH_STACK_NAME="$input"
CONTEXT_PATH="$input" # Pass tech stack name as context
fi
```
**Check Existing SKILL**:
```bash
# For session mode, peek at session to get tech stack name
if [[ "$MODE" == "session" ]]; then
bash(test -f ".workflow/${SESSION_ID}/workflow-session.json")
Read(.workflow/${SESSION_ID}/workflow-session.json)
# Extract tech_stack_name (minimal extraction)
fi
# Normalize and check
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
bash(test -d ".claude/skills/${normalized_name}" && echo "exists" || echo "not_exists")
bash(find ".claude/skills/${normalized_name}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Skip Decision**:
```javascript
if (existing_files > 0 && !regenerate_flag) {
  SKIP_GENERATION = true
  message = "Tech stack SKILL already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
  bash(rm -rf ".claude/skills/${normalized_name}")
  SKIP_GENERATION = false
  message = "Regenerating tech stack SKILL from scratch."
} else {
  SKIP_GENERATION = false
  message = "No existing SKILL found, generating new tech stack documentation."
}
```
**Output Variables**:
- `MODE`: `session` or `direct`
- `SESSION_ID`: Session ID (if session mode)
- `CONTEXT_PATH`: Path to session directory OR tech stack name
- `TECH_STACK_NAME`: Extracted or provided tech stack name
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress
---
### Phase 2: Agent Produces All Files
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Delegate EVERYTHING to agent - context reading, Exa research, content synthesis, and file writing
**Agent Task Specification**:
```
Task(
subagent_type: "general-purpose",
description: "Generate tech stack SKILL: {CONTEXT_PATH}",
prompt: "
Generate a complete tech stack SKILL package with Exa research.
**Context Provided**:
- Mode: {MODE}
- Context Path: {CONTEXT_PATH}
**Templates Available**:
- Module Format: ~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt
- SKILL Index: ~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt
**Your Responsibilities**:
1. **Extract Tech Stack Information**:
IF MODE == 'session':
- Read `.workflow/active/{session_id}/workflow-session.json`
- Read `.workflow/active/{session_id}/.process/context-package.json`
- Extract tech_stack: {language, frameworks, libraries}
- Build tech stack name: \"{language}-{framework1}-{framework2}\"
- Example: \"typescript-react-nextjs\"
IF MODE == 'direct':
- Tech stack name = CONTEXT_PATH
- Parse composite: split by '-' delimiter
- Example: \"typescript-react-nextjs\" → [\"typescript\", \"react\", \"nextjs\"]
2. **Execute Exa Research** (4-6 parallel queries):
Base Queries (always execute):
- mcp__exa__get_code_context_exa(query: \"{tech} core principles best practices 2025\", tokensNum: 8000)
- mcp__exa__get_code_context_exa(query: \"{tech} common patterns architecture examples\", tokensNum: 7000)
- mcp__exa__web_search_exa(query: \"{tech} configuration setup tooling 2025\", numResults: 5)
- mcp__exa__get_code_context_exa(query: \"{tech} testing strategies\", tokensNum: 5000)
Component Queries (if composite):
- For each additional component:
mcp__exa__get_code_context_exa(query: \"{main_tech} {component} integration\", tokensNum: 5000)
3. **Read Module Format Template**:
Read template for structure guidance:
```bash
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt)
```
4. **Synthesize Content into 6 Modules**:
Follow template structure from tech-module-format.txt:
- **principles.md** - Core concepts, philosophies (~3K tokens)
- **patterns.md** - Implementation patterns with code examples (~5K tokens)
- **practices.md** - Best practices, anti-patterns, pitfalls (~4K tokens)
- **testing.md** - Testing strategies, frameworks (~3K tokens)
- **config.md** - Setup, configuration, tooling (~3K tokens)
- **frameworks.md** - Framework integration (only if composite, ~4K tokens)
Each module follows template format:
- Frontmatter (YAML)
- Main sections with clear headings
- Code examples from Exa research
- Best practices sections
- References to Exa sources
5. **Write Files Directly**:
```javascript
// Create directory
bash(mkdir -p \".claude/skills/{tech_stack_name}\")
// Write each module file using Write tool
Write({ file_path: \".claude/skills/{tech_stack_name}/principles.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/patterns.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/practices.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/testing.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/config.md\", content: ... })
// Write frameworks.md only if composite
// Write metadata.json
Write({
file_path: \".claude/skills/{tech_stack_name}/metadata.json\",
content: JSON.stringify({
tech_stack_name,
components,
is_composite,
generated_at: timestamp,
source: \"exa-research\",
research_summary: { total_queries, total_sources }
})
})
```
6. **Report Completion**:
Provide summary:
- Tech stack name
- Files created (count)
- Exa queries executed
- Sources consulted
**CRITICAL**:
- MUST read external template files before generating content (step 3 for modules, step 4 for index)
- You have FULL autonomy - read files, execute Exa, synthesize content, write files
- Do NOT return JSON or structured data - produce actual .md files
- Handle errors gracefully (Exa failures, missing files, template read failures)
- If tech stack cannot be determined, ask orchestrator to clarify
"
)
```
**Completion Criteria**:
- Agent task executed successfully
- 5-6 modular files written to `.claude/skills/{tech_stack_name}/`
- metadata.json written
- Agent reports completion
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
---
### Phase 3: Generate SKILL.md Index
**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
**Goal**: Read generated module files and create SKILL.md index with loading recommendations
**Steps**:
1. **Verify Generated Files**:
```bash
bash(find ".claude/skills/${TECH_STACK_NAME}" -name "*.md" -type f | sort)
```
2. **Read metadata.json**:
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/metadata.json)
// Extract: tech_stack_name, components, is_composite, research_summary
```
3. **Read Module Headers** (optional, first 20 lines):
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/principles.md, limit: 20)
// Repeat for other modules
```
4. **Read SKILL Index Template**:
```javascript
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt)
```
5. **Generate SKILL.md Index**:
Follow template from tech-skill-index.txt with variable substitutions:
- `{TECH_STACK_NAME}`: From metadata.json
- `{MAIN_TECH}`: Primary technology
- `{ISO_TIMESTAMP}`: Current timestamp
- `{QUERY_COUNT}`: From research_summary
- `{SOURCE_COUNT}`: From research_summary
- Conditional sections for composite tech stacks
Template provides structure for:
- Frontmatter with metadata
- Overview and tech stack description
- Module organization (Core/Practical/Config sections)
- Loading recommendations (Quick/Implementation/Complete)
- Usage guidelines and auto-trigger keywords
- Research metadata and version history
6. **Write SKILL.md**:
```javascript
Write({
file_path: `.claude/skills/${TECH_STACK_NAME}/SKILL.md`,
content: generatedIndexMarkdown
})
```
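The step-5 substitution can be a single regex pass; this sketch assumes the template marks variables as `{NAME}`:
```javascript
// Replace {VAR} placeholders in the index template with metadata values
const renderTemplate = (template, vars) =>
  template.replace(/\{([A-Z_]+)\}/g, (match, key) =>
    key in vars ? String(vars[key]) : match); // leave unknown placeholders intact

// Example:
// renderTemplate("Generated {QUERY_COUNT} queries at {ISO_TIMESTAMP}",
//                { QUERY_COUNT: 6, ISO_TIMESTAMP: new Date().toISOString() })
```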
**Completion Criteria**:
- SKILL.md index written
- All module files verified
- Loading recommendations included
**TodoWrite**: Mark phase 3 completed
**Final Report**:
```
Tech Stack SKILL Package Complete
Tech Stack: {TECH_STACK_NAME}
Location: .claude/skills/{TECH_STACK_NAME}/
Files: SKILL.md + 5-6 modules + metadata.json
Exa Research: {queries} queries, {sources} sources
Usage: Skill(command: "{TECH_STACK_NAME}")
```
---
## Implementation Details
### TodoWrite Patterns
**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "in_progress", "activeForm": "Preparing context paths"},
{"content": "Agent produces all module files", "status": "pending", "activeForm": "Agent producing files"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```
**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "in_progress", ...},
{"content": "Generate SKILL.md index", "status": "pending", ...}
]})
// After Phase 2
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
// After Phase 3
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```
**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...}, // Skipped
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```
### Execution Flow
**Full Path**:
```
User → TodoWrite Init → Phase 1 (prepare) → Phase 2 (agent writes files) → Phase 3 (write index) → Report
```
**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```
### Error Handling
**Phase 1 Errors**:
- Invalid session ID: Report error, verify session exists
- Missing context-package: Warn, fall back to direct mode
- No tech stack detected: Ask user to specify tech stack name
**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if fails again
- Exa API failures: Agent handles internally with retries
- Incomplete results: Warn user, proceed with partial data if minimum sections available
**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration
---
## Parameters
```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate] [--tool <gemini|qwen>]
```
**Arguments**:
- **session-id | tech-stack-name**: Input source (auto-detected by WFS- prefix)
- Session mode: `WFS-user-auth-v2` - Extract tech stack from workflow
- Direct mode: `"typescript"`, `"typescript-react-nextjs"` - User specifies
- **--regenerate**: Force regenerate existing SKILL (deletes and recreates)
- **--tool**: Reserved for future CLI integration (default: gemini)
---
## Examples
**Generated File Structure** (for all examples):
```
.claude/skills/{tech-stack}/
├── SKILL.md # Index (Phase 3)
├── principles.md # Agent (Phase 2)
├── patterns.md # Agent
├── practices.md # Agent
├── testing.md # Agent
├── config.md # Agent
├── frameworks.md # Agent (if composite)
└── metadata.json # Agent
```
### Direct Mode - Single Stack
```bash
/memory:tech-research "typescript"
```
**Workflow**:
1. Phase 1: Detects direct mode, checks existing SKILL
2. Phase 2: Agent executes 4 Exa queries, writes 5 modules
3. Phase 3: Generates SKILL.md index
### Direct Mode - Composite Stack
```bash
/memory:tech-research "typescript-react-nextjs"
```
**Workflow**:
1. Phase 1: Decomposes into ["typescript", "react", "nextjs"]
2. Phase 2: Agent executes 6 Exa queries (4 base + 2 components), writes 6 modules (adds frameworks.md)
3. Phase 3: Generates SKILL.md index with framework integration
### Session Mode - Extract from Workflow
```bash
/memory:tech-research WFS-user-auth-20251104
```
**Workflow**:
1. Phase 1: Reads session, extracts tech stack: `python-fastapi-sqlalchemy`
2. Phase 2: Agent researches Python + FastAPI + SQLAlchemy, writes 6 modules
3. Phase 3: Generates SKILL.md index
### Regenerate Existing
```bash
/memory:tech-research "react" --regenerate
```
**Workflow**:
1. Phase 1: Deletes existing SKILL due to --regenerate
2. Phase 2: Agent executes fresh Exa research (latest 2025 practices)
3. Phase 3: Generates updated SKILL.md
### Skip Path - Fast Update
```bash
/memory:tech-research "python"
```
**Scenario**: SKILL already exists with 7 files
**Workflow**:
1. Phase 1: Detects existing SKILL, sets SKIP_GENERATION = true
2. Phase 2: **SKIPPED**
3. Phase 3: Updates SKILL.md index only (5-10x faster)

View File

@@ -99,10 +99,10 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
// Get module structure
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
// OR with --path
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
@@ -185,7 +185,7 @@ for (let layer of [3, 2, 1]) {
let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"${strategy}","path":".","tool":"${tool}"}'`,
command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -244,7 +244,7 @@ MODULES:
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ccw tool exec update_module_claude
EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
- Accepts strategy parameter: multi-layer | single-layer
- Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -252,7 +252,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"{{strategy}}","path":".","tool":"${tool}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}"`,
run_in_background: false
})
exit_code=$?

View File

@@ -41,7 +41,7 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a
```javascript
// Detect changed modules
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -102,7 +102,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
return async () => {
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"${tool}"}'`,
command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -184,21 +184,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_1}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_2}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_3}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module

View File

@@ -187,7 +187,7 @@ Objectives:
3. Use Gemini for aggregation (optional):
Command pattern:
ccw cli -p "
cd .workflow/.archives/{session_id} && gemini -p "
PURPOSE: Extract lessons and conflicts from workflow session
TASK:
• Analyze IMPL_PLAN and lessons from manifest
@@ -198,7 +198,7 @@ Objectives:
CONTEXT: @IMPL_PLAN.md @workflow-session.json
EXPECTED: Structured lessons and conflicts in JSON format
RULES: Template reference from skill-aggregation.txt
" --tool gemini --mode analysis --cd .workflow/.archives/{session_id}
"
3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):
@@ -334,7 +334,7 @@ Objectives:
- Sort sessions by date
2. Use Gemini for final aggregation:
ccw cli -p "
gemini -p "
PURPOSE: Aggregate lessons and conflicts from all workflow sessions
TASK:
• Group successes by functional domain
@@ -345,7 +345,7 @@ Objectives:
CONTEXT: [Provide aggregated JSON data]
EXPECTED: Final aggregated structure for SKILL documents
RULES: Template reference from skill-aggregation.txt
" --tool gemini --mode analysis
"
3. Read templates for formatting (same 4 templates as single mode)

View File

@@ -81,7 +81,6 @@ ELSE:
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Generate API designer analysis addressing topic framework
## Framework Integration Required
@@ -137,7 +136,6 @@ Task(subagent_type="conceptual-planning-agent",
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Update existing API designer analysis
## Current Analysis Context

View File

@@ -128,7 +128,6 @@ for (let i = 0; i < allQuestions.length; i += BATCH_SIZE) {
```javascript
Task(
subagent_type="context-search-agent",
run_in_background=false,
description="Gather project context for brainstorm",
prompt=`
Execute context-search-agent in BRAINSTORM MODE (Phase 1-2 only).

View File

@@ -81,7 +81,6 @@ ELSE:
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Generate system architect analysis addressing topic framework
## Framework Integration Required
@@ -137,7 +136,6 @@ Task(subagent_type="conceptual-planning-agent",
# For existing analysis updates
IF update_mode = "incremental":
Task(subagent_type="conceptual-planning-agent",
run_in_background=false,
prompt="Update existing system architect analysis
## Current Analysis Context

View File

@@ -1,516 +0,0 @@
---
name: clean
description: Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution
argument-hint: "[--dry-run] [\"focus area\"]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Glob(*), Bash(*), Write(*)
---
# Clean Command (/workflow:clean)
## Overview
Intelligent cleanup command that explores the codebase to identify the development mainline, discovers artifacts that have drifted from it, and safely removes stale sessions, abandoned documents, and dead code.
**Core capabilities:**
- Mainline detection: Identify active development branches and core modules
- Drift analysis: Find sessions, documents, and code that deviate from mainline
- Intelligent discovery: cli-explore-agent based artifact scanning
- Safe execution: Confirmation-based cleanup with dry-run preview
## Usage
```bash
/workflow:clean # Full intelligent cleanup (explore → analyze → confirm → execute)
/workflow:clean --dry-run # Explore and analyze only, no execution
/workflow:clean "auth module" # Focus cleanup on specific area
```
## Execution Process
```
Phase 1: Mainline Detection
├─ Analyze git history for development trends
├─ Identify core modules (high commit frequency)
├─ Map active vs stale branches
└─ Build mainline profile
Phase 2: Drift Discovery (cli-explore-agent)
├─ Scan workflow sessions for orphaned artifacts
├─ Identify documents drifted from mainline
├─ Detect dead code and unused exports
└─ Generate cleanup manifest
Phase 3: Confirmation
├─ Display cleanup summary by category
├─ Show impact analysis (files, size, risk)
└─ AskUserQuestion: Select categories to clean
Phase 4: Execution (unless --dry-run)
├─ Execute cleanup by category
├─ Update manifests and indexes
└─ Report results
```
## Implementation
### Phase 1: Mainline Detection
**Session Setup**:
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `clean-${dateStr}`
const sessionFolder = `.workflow/.clean/${sessionId}`
Bash(`mkdir -p ${sessionFolder}`)
```
**Step 1.1: Git History Analysis**
```bash
# Get commit frequency by directory (last 30 days)
bash(git log --since="30 days ago" --name-only --pretty=format: | grep -v "^$" | cut -d/ -f1-2 | sort | uniq -c | sort -rn | head -20)
# Get recent active branches
bash(git for-each-ref --sort=-committerdate refs/heads/ --format='%(refname:short) %(committerdate:relative)' | head -10)
# Get files with most recent changes
bash(git log --since="7 days ago" --name-only --pretty=format: | grep -v "^$" | sort | uniq -c | sort -rn | head -30)
```
**Step 1.2: Build Mainline Profile**
```javascript
const mainlineProfile = {
  coreModules: [],    // High-frequency directories
  activeFiles: [],    // Recently modified files
  activeBranches: [], // Branches with recent commits
  staleThreshold: {
    sessions: 7,      // Days
    branches: 30,
    documents: 14
  },
  timestamp: getUtc8ISOString()
}
// Parse git log output to identify core modules
// Modules with >5 commits in last 30 days = core
// Modules with 0 commits in last 30 days = potentially stale
Write(`${sessionFolder}/mainline-profile.json`, JSON.stringify(mainlineProfile, null, 2))
```
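The parsing mentioned in the comments above might look like this, assuming the `uniq -c`-style output from Step 1.1 (lines like `"  12 src/core"`):
```javascript
// Sketch: turn commit-frequency counts into core/stale module lists
const parseCommitFrequency = (gitLogOutput) => {
  const counts = gitLogOutput.trim().split('\n').map(line => {
    const [, count, module] = line.trim().match(/^(\d+)\s+(.+)$/) || [];
    return { module, commits: Number(count) };
  }).filter(e => e.module);
  return {
    coreModules: counts.filter(e => e.commits > 5).map(e => e.module),
    quietModules: counts.filter(e => e.commits <= 1).map(e => e.module)
  };
};
```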
---
### Phase 2: Drift Discovery
**Launch cli-explore-agent for intelligent artifact scanning**:
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description="Discover stale artifacts",
prompt=`
## Task Objective
Discover artifacts that have drifted from the development mainline. Identify stale sessions, abandoned documents, and dead code for cleanup.
## Context
- **Session Folder**: ${sessionFolder}
- **Mainline Profile**: ${sessionFolder}/mainline-profile.json
- **Focus Area**: ${focusArea || "entire project"}
## Discovery Categories
### Category 1: Stale Workflow Sessions
Scan and analyze workflow session directories:
**Locations to scan**:
- .workflow/active/WFS-* (active sessions)
- .workflow/archives/WFS-* (archived sessions)
- .workflow/.lite-plan/* (lite-plan sessions)
- .workflow/.debug/DBG-* (debug sessions)
**Staleness criteria**:
- Active sessions: No modification >7 days + no related git commits
- Archives: >30 days old + no feature references in project.json
- Lite-plan: >7 days old + plan.json not executed
- Debug: >3 days old + issue not in recent commits
**Analysis steps**:
1. List all session directories with modification times
2. Cross-reference with git log (are session topics in recent commits?)
3. Check manifest.json for orphan entries
4. Identify sessions with .archiving marker (interrupted)
### Category 2: Drifted Documents
Scan documentation that no longer aligns with code:
**Locations to scan**:
- .claude/rules/tech/* (generated tech rules)
- .workflow/.scratchpad/* (temporary notes)
- **/CLAUDE.md (module documentation)
- **/README.md (outdated descriptions)
**Drift criteria**:
- Tech rules: Referenced files no longer exist
- Scratchpad: Any file (always temporary)
- Module docs: Describe functions/classes that were removed
- READMEs: Reference deleted directories
**Analysis steps**:
1. Parse document content for file/function references
2. Verify referenced entities still exist in codebase
3. Flag documents with >30% broken references
### Category 3: Dead Code
Identify code that is no longer used:
**Scan patterns**:
- Unused exports (exported but never imported)
- Orphan files (not imported anywhere)
- Commented-out code blocks (>10 lines)
- TODO/FIXME comments >90 days old
**Analysis steps**:
1. Build import graph using rg/grep
2. Identify exports with no importers
3. Find files not in import graph
4. Scan for large comment blocks
## Output Format
Write to: ${sessionFolder}/cleanup-manifest.json
\`\`\`json
{
"generated_at": "ISO timestamp",
"mainline_summary": {
"core_modules": ["src/core", "src/api"],
"active_branches": ["main", "feature/auth"],
"health_score": 0.85
},
"discoveries": {
"stale_sessions": [
{
"path": ".workflow/active/WFS-old-feature",
"type": "active",
"age_days": 15,
"reason": "No related commits in 15 days",
"size_kb": 1024,
"risk": "low"
}
],
"drifted_documents": [
{
"path": ".claude/rules/tech/deprecated-lib",
"type": "tech_rules",
"broken_references": 5,
"total_references": 6,
"drift_percentage": 83,
"reason": "Referenced library removed",
"risk": "low"
}
],
"dead_code": [
{
"path": "src/utils/legacy.ts",
"type": "orphan_file",
"reason": "Not imported by any file",
"last_modified": "2025-10-01",
"risk": "medium"
}
]
},
"summary": {
"total_items": 12,
"total_size_mb": 45.2,
"by_category": {
"stale_sessions": 5,
"drifted_documents": 4,
"dead_code": 3
},
"by_risk": {
"low": 8,
"medium": 3,
"high": 1
}
}
}
\`\`\`
## Execution Commands
\`\`\`bash
# Session directories
find .workflow -type d \( -name "WFS-*" -o -name "DBG-*" \) 2>/dev/null
# Check modification times (Linux/Mac)
stat -c "%Y %n" .workflow/active/WFS-* 2>/dev/null
# Check modification times (Windows PowerShell via bash)
powershell -Command "Get-ChildItem '.workflow/active/WFS-*' | ForEach-Object { Write-Output \"$($_.LastWriteTime) $($_.FullName)\" }"
# Find orphan exports (TypeScript)
rg "export (const|function|class|interface|type)" --type ts -l
# Find imports
rg "import.*from" --type ts
# Find large comment blocks
rg "^\\s*/\\*" -A 10 --type ts
# Find old TODOs
rg "TODO|FIXME" --type ts -n
\`\`\`
## Success Criteria
- [ ] All session directories scanned with age calculation
- [ ] Documents cross-referenced with existing code
- [ ] Dead code detection via import graph analysis
- [ ] cleanup-manifest.json written with complete data
- [ ] Each item has risk level and cleanup reason
`
)
```
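For Category 3, the import-graph idea reduces to a set difference; a rough sketch with deliberately simplified path resolution (the agent's actual analysis is left to its discretion):
```javascript
// Sketch: files that no other file imports, from file list + rg import output
// Assumes rg lines like "src/app.ts:import { x } from './utils/helpers'"
const findOrphanFiles = (allTsFiles, rgImportLines) => {
  const imported = new Set();
  for (const line of rgImportLines) {
    const m = line.match(/from\s+['"](.+?)['"]/);
    if (m) imported.add(m[1].replace(/^\.\//, '').replace(/\.(ts|tsx)$/, ''));
  }
  // A file is an orphan candidate when no import path resolves to it
  return allTsFiles.filter(f =>
    ![...imported].some(p => f.replace(/\.(ts|tsx)$/, '').endsWith(p)));
};
```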
---
### Phase 3: Confirmation
**Step 3.1: Display Summary**
```javascript
const manifest = JSON.parse(Read(`${sessionFolder}/cleanup-manifest.json`))
console.log(`
## Cleanup Discovery Report
**Mainline Health**: ${Math.round(manifest.mainline_summary.health_score * 100)}%
**Core Modules**: ${manifest.mainline_summary.core_modules.join(', ')}
### Summary
| Category | Count | Size | Risk |
|----------|-------|------|------|
| Stale Sessions | ${manifest.summary.by_category.stale_sessions} | - | ${getRiskSummary('sessions')} |
| Drifted Documents | ${manifest.summary.by_category.drifted_documents} | - | ${getRiskSummary('documents')} |
| Dead Code | ${manifest.summary.by_category.dead_code} | - | ${getRiskSummary('code')} |
**Total**: ${manifest.summary.total_items} items, ~${manifest.summary.total_size_mb} MB
### Stale Sessions
${manifest.discoveries.stale_sessions.map(s =>
`- ${s.path} (${s.age_days}d, ${s.risk}): ${s.reason}`
).join('\n')}
### Drifted Documents
${manifest.discoveries.drifted_documents.map(d =>
`- ${d.path} (${d.drift_percentage}% broken, ${d.risk}): ${d.reason}`
).join('\n')}
### Dead Code
${manifest.discoveries.dead_code.map(c =>
`- ${c.path} (${c.type}, ${c.risk}): ${c.reason}`
).join('\n')}
`)
```
**Step 3.2: Dry-Run Exit**
```javascript
if (flags.includes('--dry-run')) {
console.log(`
---
**Dry-run mode**: No changes made.
Manifest saved to: ${sessionFolder}/cleanup-manifest.json
To execute cleanup: /workflow:clean
`)
return
}
```
**Step 3.3: User Confirmation**
```javascript
AskUserQuestion({
  questions: [
    {
      question: "Which categories to clean?",
      header: "Categories",
      multiSelect: true,
      options: [
        {
          label: "Sessions",
          description: `${manifest.summary.by_category.stale_sessions} stale workflow sessions`
        },
        {
          label: "Documents",
          description: `${manifest.summary.by_category.drifted_documents} drifted documents`
        },
        {
          label: "Dead Code",
          description: `${manifest.summary.by_category.dead_code} unused code files`
        }
      ]
    },
    {
      question: "Risk level to include?",
      header: "Risk",
      multiSelect: false,
      options: [
        { label: "Low only", description: "Safest - only obviously stale items" },
        { label: "Low + Medium", description: "Recommended - includes likely unused items" },
        { label: "All", description: "Aggressive - includes high-risk items" }
      ]
    }
  ]
})
```
---
### Phase 4: Execution
**Step 4.1: Filter Items by Selection**
```javascript
const selectedCategories = userSelection.categories // ['Sessions', 'Documents', ...]
const riskLevel = userSelection.risk // 'Low only', 'Low + Medium', 'All'
const riskFilter = {
  'Low only': ['low'],
  'Low + Medium': ['low', 'medium'],
  'All': ['low', 'medium', 'high']
}[riskLevel]
const itemsToClean = []
if (selectedCategories.includes('Sessions')) {
  itemsToClean.push(...manifest.discoveries.stale_sessions.filter(s => riskFilter.includes(s.risk)))
}
if (selectedCategories.includes('Documents')) {
  itemsToClean.push(...manifest.discoveries.drifted_documents.filter(d => riskFilter.includes(d.risk)))
}
if (selectedCategories.includes('Dead Code')) {
  itemsToClean.push(...manifest.discoveries.dead_code.filter(c => riskFilter.includes(c.risk)))
}
TodoWrite({
  todos: itemsToClean.map(item => ({
    content: `Clean: ${item.path}`,
    status: "pending",
    activeForm: `Cleaning ${item.path}`
  }))
})
```
**Step 4.2: Execute Cleanup**
```javascript
const results = { deleted: [], failed: [], skipped: [] }
for (const item of itemsToClean) {
  TodoWrite({ todos: [...] }) // Mark current as in_progress
  try {
    // Every category currently resolves to path deletion;
    // dead exports are removed together with their containing files
    Bash({ command: `rm -rf "${item.path}"` })
    results.deleted.push(item.path)
    TodoWrite({ todos: [...] }) // Mark as completed
  } catch (error) {
    results.failed.push({ path: item.path, error: error.message })
  }
}
```
**Step 4.3: Update Manifests**
```javascript
// Update archives manifest if sessions were deleted
if (selectedCategories.includes('Sessions')) {
  const archiveManifestPath = '.workflow/archives/manifest.json'
  if (fileExists(archiveManifestPath)) {
    const archiveManifest = JSON.parse(Read(archiveManifestPath))
    const deletedSessionIds = results.deleted
      .filter(p => p.includes('WFS-'))
      .map(p => p.split('/').pop())
    const updatedManifest = archiveManifest.filter(entry =>
      !deletedSessionIds.includes(entry.session_id)
    )
    Write(archiveManifestPath, JSON.stringify(updatedManifest, null, 2))
  }
}
// Update project.json if features referenced deleted sessions
const projectPath = '.workflow/project.json'
if (fileExists(projectPath)) {
  const project = JSON.parse(Read(projectPath))
  const deletedPaths = new Set(results.deleted)
  project.features = project.features.filter(f =>
    !deletedPaths.has(f.traceability?.archive_path)
  )
  project.statistics.total_features = project.features.length
  project.statistics.last_updated = getUtc8ISOString()
  Write(projectPath, JSON.stringify(project, null, 2))
}
```
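`fileExists` and `getUtc8ISOString` are used above without being defined in this command. Minimal sketches, consistent with how sibling commands define them:
```javascript
// Assumed helpers (sketches)
const fs = require('fs')
const fileExists = (path) => fs.existsSync(path)
// UTC+8 timestamp rendered as an ISO string (same convention as the debug command)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
```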
**Step 4.4: Report Results**
```javascript
console.log(`
## Cleanup Complete
**Deleted**: ${results.deleted.length} items
**Failed**: ${results.failed.length} items
**Skipped**: ${results.skipped.length} items
### Deleted Items
${results.deleted.map(p => `- ${p}`).join('\n')}
${results.failed.length > 0 ? `
### Failed Items
${results.failed.map(f => `- ${f.path}: ${f.error}`).join('\n')}
` : ''}
Cleanup manifest archived to: ${sessionFolder}/cleanup-manifest.json
`)
```
---
## Session Folder Structure
```
.workflow/.clean/{YYYY-MM-DD}/
├── mainline-profile.json # Git history analysis
└── cleanup-manifest.json # Discovery results
```
## Risk Level Definitions
| Risk | Description | Examples |
|------|-------------|----------|
| **Low** | Safe to delete, no dependencies | Empty sessions, scratchpad files, 100% broken docs |
| **Medium** | Likely unused, verify before delete | Orphan files, old archives, partially broken docs |
| **High** | May have hidden dependencies | Files with some imports, recent modifications |
## Error Handling
| Situation | Action |
|-----------|--------|
| No git repository | Skip mainline detection, use file timestamps only |
| Session in use (.archiving) | Skip with warning |
| Permission denied | Report error, continue with others |
| Manifest parse error | Regenerate from filesystem scan |
| Empty discovery | Report "codebase is clean" |
## Related Commands
- `/workflow:session:complete` - Properly archive active sessions
- `/memory:compact` - Save session memory before cleanup
- `/workflow:status` - View current workflow state

View File

@@ -1,321 +0,0 @@
---
name: debug
description: Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved
argument-hint: "\"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---
# Workflow Debug Command (/workflow:debug)
## Overview
Evidence-based interactive debugging command. Systematically identifies root causes through hypothesis-driven logging and iterative verification.
**Core workflow**: Explore → Add Logging → Reproduce → Analyze Log → Fix → Verify
## Usage
```bash
/workflow:debug <BUG_DESCRIPTION>
# Arguments
<bug-description> Bug description, error message, or stack trace (required)
```
## Execution Process
```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + debug.log has content → Analyze mode
└─ NOT_FOUND or empty log → Explore mode
Explore Mode:
├─ Locate error source in codebase
├─ Generate testable hypotheses (dynamic count)
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction
Analyze Mode:
├─ Parse debug.log, validate each hypothesis
└─ Decision:
├─ Confirmed → Fix root cause
├─ Inconclusive → Add more logging, iterate
└─ All rejected → Generate new hypotheses
Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```
## Implementation
### Session Setup & Mode Detection
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`
// Auto-detect mode
const fs = require('fs')
const sessionExists = fs.existsSync(sessionFolder)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0
const mode = logHasContent ? 'analyze' : 'explore'
if (!sessionExists) {
bash(`mkdir -p ${sessionFolder}`)
}
```
---
### Explore Mode
**Step 1.1: Locate Error Source**
```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords(bug_description)
// e.g., ['Stack Length', '未找到', 'registered 0']
// Search codebase for error locations
for (const keyword of keywords) {
Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
}
// Identify affected files and functions
const affectedLocations = [...] // from search results
```
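`extractErrorKeywords` is left abstract above. One plausible heuristic (a sketch, not the canonical implementation) pulls quoted fragments and identifier-like tokens from the description:
```javascript
// Heuristic sketch: quoted strings are usually literal error text;
// identifier-like tokens catch function names, config keys, etc.
function extractErrorKeywords(description) {
  const keywords = new Set()
  for (const m of description.matchAll(/"([^"]+)"|'([^']+)'/g)) {
    keywords.add(m[1] || m[2])
  }
  for (const m of description.matchAll(/\b[A-Za-z_][A-Za-z0-9_]{3,}\b/g)) {
    keywords.add(m[0])
  }
  return [...keywords].slice(0, 5) // keep the grep scope focused
}
```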
**Step 1.2: Generate Hypotheses (Dynamic)**
```javascript
// Hypothesis categories based on error pattern
const HYPOTHESIS_PATTERNS = {
"not found|missing|undefined|未找到": "data_mismatch",
"0|empty|zero|registered 0": "logic_error",
"timeout|connection|sync": "integration_issue",
"type|format|parse": "type_mismatch"
}
// Generate hypotheses based on actual issue (NOT fixed count)
function generateHypotheses(bugDescription, affectedLocations) {
const hypotheses = []
// Analyze bug and create targeted hypotheses
// Each hypothesis has:
// - id: H1, H2, ... (dynamic count)
// - description: What might be wrong
// - testable_condition: What to log
// - logging_point: Where to add instrumentation
return hypotheses // Could be 1, 3, 5, or more
}
const hypotheses = generateHypotheses(bug_description, affectedLocations)
```
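For illustration, a single generated hypothesis might look like this (all values hypothetical):
```javascript
const exampleHypothesis = {
  id: "H1",
  description: "Registry is read before registration completes (init-order bug)",
  testable_condition: "Registry size at read time vs. after startup",
  logging_point: "src/registry/loader.ts:registerAll" // assumed location
}
```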
**Step 1.3: Add NDJSON Instrumentation**
For each hypothesis, add logging at the relevant location:
**Python template**:
```python
# region debug [H{n}]
try:
import json, time
_dbg = {
"sid": "{sessionId}",
"hid": "H{n}",
"loc": "{file}:{line}",
"msg": "{testable_condition}",
"data": {
# Capture relevant values here
},
"ts": int(time.time() * 1000)
}
with open(r"{debugLogPath}", "a", encoding="utf-8") as _f:
_f.write(json.dumps(_dbg, ensure_ascii=False) + "\n")
except: pass
# endregion
```
**JavaScript/TypeScript template**:
```javascript
// region debug [H{n}]
try {
require('fs').appendFileSync("{debugLogPath}", JSON.stringify({
sid: "{sessionId}",
hid: "H{n}",
loc: "{file}:{line}",
msg: "{testable_condition}",
data: { /* Capture relevant values */ },
ts: Date.now()
}) + "\n");
} catch(_) {}
// endregion
```
**Output to user**:
```
## Hypotheses Generated
Based on error "{bug_description}", generated {n} hypotheses:
{hypotheses.map(h => `
### ${h.id}: ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
`).join('')}
**Debug log**: ${debugLogPath}
**Next**: Run reproduction steps, then come back for analysis.
```
---
### Analyze Mode
```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
.filter(l => l.trim())
.map(l => JSON.parse(l))
// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')
// Validate each hypothesis
for (const [hid, logs] of Object.entries(byHypothesis)) {
const hypothesis = hypotheses.find(h => h.id === hid)
const latestLog = logs[logs.length - 1]
// Check if evidence confirms or rejects hypothesis
const verdict = evaluateEvidence(hypothesis, latestLog.data)
// Returns: 'confirmed' | 'rejected' | 'inconclusive'
}
```
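`groupBy` and `evaluateEvidence` are assumed helpers. Minimal sketches — `evaluateEvidence` here is keyed to the `found`/`count` fields from the NDJSON example and would need tailoring per hypothesis:
```javascript
function groupBy(entries, key) {
  return entries.reduce((acc, e) => {
    (acc[e[key]] = acc[e[key]] || []).push(e)
    return acc
  }, {})
}

function evaluateEvidence(hypothesis, data) {
  if (data == null) return 'inconclusive'
  if (data.found === false || data.count === 0) return 'confirmed' // predicted failure observed
  if (data.found === true) return 'rejected' // predicted failure did not occur
  return 'inconclusive'
}
```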
**Output**:
```
## Evidence Analysis
Analyzed ${entries.length} log entries.
${results.map(r => `
### ${r.id}: ${r.description}
- **Status**: ${r.verdict}
- **Evidence**: ${JSON.stringify(r.evidence)}
- **Reason**: ${r.reason}
`).join('')}
${confirmedHypothesis ? `
## Root Cause Identified
**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}
Ready to fix.
` : `
## Need More Evidence
Add more logging or refine hypotheses.
`}
```
---
### Fix & Cleanup
```javascript
// Apply fix based on confirmed hypothesis
// ... Edit affected files
// After user verifies fix works:
// Remove debug instrumentation (search for region markers)
const instrumentedFiles = Grep({
pattern: "# region debug|// region debug",
output_mode: "files_with_matches"
})
for (const file of instrumentedFiles) {
// Remove content between region markers
removeDebugRegions(file)
}
console.log(`
## Debug Complete
- Root cause: ${confirmedHypothesis.description}
- Fix applied to: ${modifiedFiles.join(', ')}
- Debug instrumentation removed
`)
```
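`removeDebugRegions` is assumed above. A minimal sketch that drops everything between matching region markers:
```javascript
const fs = require('fs')
function removeDebugRegions(file) {
  const lines = fs.readFileSync(file, 'utf-8').split('\n')
  const kept = []
  let inRegion = false
  for (const line of lines) {
    if (/(#|\/\/) region debug/.test(line)) { inRegion = true; continue }
    if (/(#|\/\/) endregion/.test(line)) { inRegion = false; continue }
    if (!inRegion) kept.push(line)
  }
  fs.writeFileSync(file, kept.join('\n'))
}
```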
---
## Debug Log Format (NDJSON)
Each line is a JSON object:
```json
{"sid":"DBG-xxx-2025-12-18","hid":"H1","loc":"file.py:func:42","msg":"Check dict keys","data":{"keys":["a","b"],"target":"c","found":false},"ts":1734567890123}
```
| Field | Description |
|-------|-------------|
| `sid` | Session ID |
| `hid` | Hypothesis ID (H1, H2, ...) |
| `loc` | Code location |
| `msg` | What's being tested |
| `data` | Captured values |
| `ts` | Timestamp (ms) |
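Because each line is standalone JSON, the log can be inspected with a few lines of Node (sketch) without re-entering Analyze mode:
```javascript
// Count captured entries per hypothesis
const fs = require('fs')
const entries = fs.readFileSync('debug.log', 'utf-8')
  .split('\n')
  .filter(l => l.trim())
  .map(l => JSON.parse(l))
const counts = {}
for (const e of entries) counts[e.hid] = (counts[e.hid] || 0) + 1
console.log(counts) // e.g. { H1: 3, H2: 1 }
```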
## Session Folder
```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log # NDJSON log (main artifact)
└── resolution.md # Summary after fix (optional)
```
## Iteration Flow
```
First Call (/workflow:debug "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Generate hypotheses, add logging
└─ Await user reproduction
After Reproduction (/workflow:debug "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, evaluate hypotheses
└─ Decision:
├─ Confirmed → Fix → User verify
│ ├─ Fixed → Cleanup → Done
│ └─ Not fixed → Add logging → Iterate
├─ Inconclusive → Add logging → Iterate
└─ All rejected → New hypotheses → Iterate
Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```
## Error Handling
| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Generate new hypotheses with broader scope |
| Fix doesn't work | Iterate with more granular logging |
| >5 iterations | Escalate to `/workflow:lite-fix` with evidence |

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -56,7 +56,6 @@ Phase 2: Planning Document Validation
└─ Validate .task/ contains IMPL-*.json files
Phase 3: TodoWrite Generation
├─ Update session status to "active" (Step 0)
├─ Parse TODO_LIST.md for task statuses
├─ Generate TodoWrite for entire workflow
└─ Prepare session context paths
@@ -68,9 +67,7 @@ Phase 4: Execution Strategy & Task Execution
├─ Get next in_progress task from TodoWrite
├─ Lazy load task JSON
├─ Launch agent with task context
├─ Mark task completed (update IMPL-*.json status)
│ # Quick fix: Update task status for ccw dashboard
│ # TS=$(date -Iseconds) && jq --arg ts "$TS" '.status="completed" | .status_history=(.status_history // [])+[{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
├─ Mark task completed
└─ Advance to next task
Phase 5: Completion
@@ -81,7 +78,6 @@ Phase 5: Completion
Resume Mode (--resume-session):
├─ Skip Phase 1 & Phase 2
└─ Entry Point: Phase 3 (TodoWrite Generation)
├─ Update session status to "active" (if not already)
└─ Continue: Phase 4 → Phase 5
```
@@ -181,16 +177,6 @@ bash(cat .workflow/active/${sessionId}/workflow-session.json)
### Phase 3: TodoWrite Generation
**Applies to**: Both normal and resume modes (resume mode entry point)
**Step 0: Update Session Status to Active**
Before generating TodoWrite, update session status from "planning" to "active":
```bash
# Update session status (idempotent - safe to run if already active)
jq '.status = "active" | .execution_started_at = (.execution_started_at // (now | todate))' \
.workflow/active/${sessionId}/workflow-session.json > tmp.json && \
mv tmp.json .workflow/active/${sessionId}/workflow-session.json
```
This ensures the dashboard shows the session as "ACTIVE" during execution.
**Process**:
1. **Create TodoWrite List**: Generate task list from TODO_LIST.md (not from task JSONs)
- Parse TODO_LIST.md to extract all tasks with current statuses
@@ -392,7 +378,6 @@ TodoWrite({
```bash
Task(subagent_type="{meta.agent}",
run_in_background=false,
prompt="Execute task: {task.title}
{[FLOW_CONTROL]}
@@ -410,6 +395,7 @@ Task(subagent_type="{meta.agent}",
1. Read complete task JSON: {session.task_json_path}
2. Load context package: {session.context_package_path}
Follow complete execution guidelines in @.claude/agents/{meta.agent}.md
**Session Paths**:
- Workflow Dir: {session.workflow_dir}

View File

@@ -86,14 +86,13 @@ bash(cp .workflow/project.json .workflow/project.json.backup)
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description="Deep project analysis",
prompt=`
Analyze project for workflow initialization and generate .workflow/project.json.
## MANDATORY FIRST STEPS
1. Execute: cat ~/.claude/workflows/cli-templates/schemas/project-json-schema.json (get schema reference)
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)
2. Execute: ~/.claude/scripts/get_modules_by_depth.sh (get project structure)
## Task
Generate complete project.json with:

View File

@@ -258,33 +258,6 @@ TodoWrite({
### Step 3: Launch Execution
**Executor Resolution** (task-level executor takes precedence over the global setting):
```javascript
// Get the task's executor: prefer executorAssignments, fall back to the global executionMethod
function getTaskExecutor(task) {
const assignments = executionContext?.executorAssignments || {}
if (assignments[task.id]) {
return assignments[task.id].executor // 'gemini' | 'codex' | 'agent'
}
// Fallback: map from the global executionMethod
const method = executionContext?.executionMethod || 'Auto'
if (method === 'Agent') return 'agent'
if (method === 'Codex') return 'codex'
// Auto: decide by complexity
return planObject.complexity === 'Low' ? 'agent' : 'codex'
}
// Group tasks by executor
function groupTasksByExecutor(tasks) {
const groups = { gemini: [], codex: [], agent: [] }
tasks.forEach(task => {
const executor = getTaskExecutor(task)
groups[executor].push(task)
})
return groups
}
```
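For reference, a hypothetical invocation over a two-task plan (names and values illustrative only):
```javascript
// T1 is explicitly assigned to gemini; T2 falls back to Auto + Low complexity → agent
executionContext = {
  executorAssignments: { T1: { executor: 'gemini', reason: 'analysis-only task' } },
  executionMethod: 'Auto'
}
planObject = { complexity: 'Low' }
groupTasksByExecutor([{ id: 'T1' }, { id: 'T2' }])
// → { gemini: [{ id: 'T1' }], codex: [], agent: [{ id: 'T2' }] }
```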
**Execution Flow**: Parallel batches concurrently → Sequential batches in order
```javascript
const parallel = executionCalls.filter(c => c.executionType === "parallel")
@@ -310,9 +283,8 @@ for (const call of sequential) {
**Option A: Agent Execution**
When to use:
- `getTaskExecutor(task) === "agent"`
- `executionMethod = "Agent"` (global fallback)
- `executionMethod = "Auto" AND complexity = "Low"` (global fallback)
- `executionMethod = "Agent"`
- `executionMethod = "Auto" AND complexity = "Low"`
**Task Formatting Principle**: Each task is a self-contained checklist. The agent only needs to know what THIS task requires, not its position or relation to other tasks.
@@ -361,7 +333,6 @@ Complete each task according to its "Done when" checklist.
Task(
subagent_type="code-developer",
run_in_background=false,
description=batch.taskSummary,
prompt=formatBatchPrompt({
tasks: batch.tasks,
@@ -428,9 +399,8 @@ function extractRelatedFiles(tasks) {
**Option B: CLI Execution (Codex)**
When to use:
- `getTaskExecutor(task) === "codex"`
- `executionMethod = "Codex"` (global fallback)
- `executionMethod = "Auto" AND complexity = "Medium/High"` (global fallback)
- `executionMethod = "Codex"`
- `executionMethod = "Auto" AND complexity = "Medium" or "High"`
**Task Formatting Principle**: Same as Agent - each task is a self-contained checklist. No task numbering or position awareness.
@@ -503,10 +473,10 @@ Detailed plan: ${executionContext.session.artifacts.plan}`)
return prompt
}
ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write
codex --full-auto exec "${buildCLIPrompt(batch)}" --skip-git-repo-check -s danger-full-access
```
**Execution with fixed IDs** (predictable ID pattern):
**Execution with tracking**:
```javascript
// Launch CLI in foreground (NOT background)
// Timeout based on complexity: Low=40min, Medium=60min, High=100min
@@ -516,57 +486,15 @@ const timeoutByComplexity = {
"High": 6000000 // 100 minutes
}
// Generate fixed execution ID: ${sessionId}-${groupId}
// This enables predictable ID lookup without relying on resume context chains
const sessionId = executionContext?.session?.id || 'standalone'
const fixedExecutionId = `${sessionId}-${batch.groupId}` // e.g., "implement-auth-2025-12-13-P1"
// Check if resuming from previous failed execution
const previousCliId = batch.resumeFromCliId || null
// Build command with fixed ID (and optional resume for continuation)
const cli_command = previousCliId
? `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId} --resume ${previousCliId}`
: `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId}`
bash_result = Bash(
command=cli_command,
timeout=timeoutByComplexity[planObject.complexity] || 3600000
)
// Execution ID is now predictable: ${fixedExecutionId}
// Can also extract from output: "ID: implement-auth-2025-12-13-P1"
const cliExecutionId = fixedExecutionId
// Update TodoWrite when execution completes
```
**Resume on Failure** (with fixed ID):
```javascript
// If execution failed or timed out, offer resume option
if (bash_result.status === 'failed' || bash_result.status === 'timeout') {
console.log(`
⚠️ Execution incomplete. Resume available:
Fixed ID: ${fixedExecutionId}
Lookup: ccw cli detail ${fixedExecutionId}
Resume: ccw cli -p "Continue tasks" --resume ${fixedExecutionId} --tool codex --mode write --id ${fixedExecutionId}-retry
`)
// Store for potential retry in same session
batch.resumeFromCliId = fixedExecutionId
}
```
**Result Collection**: After completion, analyze output and collect result following `executionResult` structure (include `cliExecutionId` for resume capability)
**Option C: CLI Execution (Gemini)**
When to use: `getTaskExecutor(task) === "gemini"` (analysis-type tasks)
```bash
# Use the same formatBatchPrompt as Option B; switch tool and mode
ccw cli -p "${formatBatchPrompt(batch)}" --tool gemini --mode analysis --id ${sessionId}-${batch.groupId}
```
**Result Collection**: After completion, analyze output and collect result following `executionResult` structure
### Step 4: Progress Tracking
@@ -613,30 +541,15 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-q
# - Report findings directly
# Method 2: Gemini Review (recommended)
ccw cli -p "[Shared Prompt Template with artifacts]" --tool gemini --mode analysis
gemini -p "[Shared Prompt Template with artifacts]"
# CONTEXT includes: @**/* @${plan.json} [@${exploration.json}]
# Method 3: Qwen Review (alternative)
ccw cli -p "[Shared Prompt Template with artifacts]" --tool qwen --mode analysis
qwen -p "[Shared Prompt Template with artifacts]"
# Same prompt as Gemini, different execution engine
# Method 4: Codex Review (autonomous)
ccw cli -p "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode write
```
**Multi-Round Review with Fixed IDs**:
```javascript
// Generate fixed review ID
const reviewId = `${sessionId}-review`
// First review pass with fixed ID
const reviewResult = Bash(`ccw cli -p "[Review prompt]" --tool gemini --mode analysis --id ${reviewId}`)
// If issues found, continue review dialog with fixed ID chain
if (hasUnresolvedIssues(reviewResult)) {
// Resume with follow-up questions
Bash(`ccw cli -p "Clarify the security concerns you mentioned" --resume ${reviewId} --tool gemini --mode analysis --id ${reviewId}-followup`)
}
codex --full-auto exec "[Verify plan acceptance criteria at ${plan.json}]" --skip-git-repo-check -s danger-full-access
```
**Implementation Note**: Replace `[Shared Prompt Template with artifacts]` placeholder with actual template content, substituting:
@@ -710,10 +623,8 @@ console.log(`✓ Development index: [${category}] ${entry.title}`)
| Empty file | File exists but no content | Error: "File is empty: {path}. Provide task description." |
| Invalid Enhanced Task JSON | JSON missing required fields | Warning: "Missing required fields. Treating as plain text." |
| Malformed JSON | JSON parsing fails | Treat as plain text (expected for non-JSON files) |
| Execution failure | Agent/Codex crashes | Display error, use fixed ID `${sessionId}-${groupId}` for resume: `ccw cli -p "Continue" --resume <fixed-id> --id <fixed-id>-retry` |
| Execution timeout | CLI exceeded timeout | Use fixed ID for resume with extended timeout |
| Execution failure | Agent/Codex crashes | Display error, save partial progress, suggest retry |
| Codex unavailable | Codex not installed | Show installation instructions, offer Agent execution |
| Fixed ID not found | Custom ID lookup failed | Check `ccw cli history`, verify date directories |
## Data Structures
@@ -735,15 +646,10 @@ Passed from lite-plan via global variable:
explorationAngles: string[], // List of exploration angles
explorationManifest: {...} | null, // Exploration manifest
clarificationContext: {...} | null,
executionMethod: "Agent" | "Codex" | "Auto", // 全局默认
executionMethod: "Agent" | "Codex" | "Auto",
codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
originalUserInput: string,
// Task-level executor assignment (takes precedence over executionMethod)
executorAssignments: {
[taskId]: { executor: "gemini" | "codex" | "agent", reason: string }
},
// Session artifacts location (saved by lite-plan)
session: {
id: string, // Session identifier: {taskSlug}-{shortTimestamp}
@@ -773,20 +679,8 @@ Collected after each execution call completes:
tasksSummary: string, // Brief description of tasks handled
completionSummary: string, // What was completed
keyOutputs: string, // Files created/modified, key changes
notes: string, // Important context for next execution
fixedCliId: string | null // Fixed CLI execution ID (e.g., "implement-auth-2025-12-13-P1")
notes: string // Important context for next execution
}
```
Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.
**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.
**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
```bash
# Lookup previous execution
ccw cli detail ${fixedCliId}
# Resume with new fixed ID for retry
ccw cli -p "Continue from where we left off" --resume ${fixedCliId} --tool codex --mode write --id ${fixedCliId}-retry
```

View File

@@ -164,7 +164,6 @@ Launching ${selectedAngles.length} parallel diagnoses...
const diagnosisTasks = selectedAngles.map((angle, index) =>
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Diagnose: ${angle}`,
prompt=`
## Task Objective
@@ -178,7 +177,7 @@ Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase fro
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
1. Run: ~/.claude/scripts/get_modules_by_depth.sh (project structure)
2. Run: rg -l "{error_keyword_from_bug}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/diagnosis-json-schema.json (get output schema reference)
@@ -401,7 +400,6 @@ Write(`${sessionFolder}/fix-plan.json`, JSON.stringify(fixPlan, null, 2))
```javascript
Task(
subagent_type="cli-lite-planning-agent",
run_in_background=false,
description="Generate detailed fix plan",
prompt=`
Generate fix plan and write fix-plan.json.

View File

@@ -15,7 +15,7 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
- Intelligent task analysis with automatic exploration detection
- Dynamic code exploration (cli-explore-agent) when codebase understanding needed
- Interactive clarification after exploration to gather missing information
- Adaptive planning: Low complexity → Direct Claude; Medium/High → cli-lite-planning-agent
- Adaptive planning strategy (direct Claude vs cli-lite-planning-agent) based on complexity
- Two-step confirmation: plan display → multi-dimensional input collection
- Execution dispatch with complete context handoff to lite-execute
@@ -38,7 +38,7 @@ Phase 1: Task Analysis & Exploration
├─ Parse input (description or .md file)
├─ intelligent complexity assessment (Low/Medium/High)
├─ Exploration decision (auto-detect or --explore flag)
├─ Context protection: If file reading ≥50k chars → force cli-explore-agent
├─ ⚠️ Context protection: If file reading ≥50k chars → force cli-explore-agent
└─ Decision:
├─ needsExploration=true → Launch parallel cli-explore-agents (1-4 based on complexity)
└─ needsExploration=false → Skip to Phase 2/3
@@ -140,17 +140,11 @@ function selectAngles(taskDescription, count) {
const selectedAngles = selectAngles(task_description, complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1))
// Planning strategy determination
const planningStrategy = complexity === 'Low'
? 'Direct Claude Planning'
: 'cli-lite-planning-agent'
console.log(`
## Exploration Plan
Task Complexity: ${complexity}
Selected Angles: ${selectedAngles.join(', ')}
Planning Strategy: ${planningStrategy}
Launching ${selectedAngles.length} parallel explorations...
`)
@@ -158,16 +152,11 @@ Launching ${selectedAngles.length} parallel explorations...
**Launch Parallel Explorations** - Orchestrator assigns angle to each agent:
**⚠️ CRITICAL - NO BACKGROUND EXECUTION**:
- **MUST NOT use `run_in_background: true`** - exploration results are REQUIRED before planning
```javascript
// Launch agents with pre-assigned angles
const explorationTasks = selectedAngles.map((angle, index) =>
Task(
subagent_type="cli-explore-agent",
run_in_background=false, // ⚠️ MANDATORY: Must wait for results
description=`Explore: ${angle}`,
prompt=`
## Task Objective
@@ -181,7 +170,7 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
1. Run: ~/.claude/scripts/get_modules_by_depth.sh (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
@@ -315,11 +304,14 @@ explorations.forEach(exp => {
}
})
// Intelligent deduplication: analyze allClarifications by intent
// - Identify questions with similar intent across different angles
// - Merge similar questions: combine options, consolidate context
// - Produce dedupedClarifications with unique intents only
const dedupedClarifications = intelligentMerge(allClarifications)
// Deduplicate exact same questions only
const seen = new Set()
const dedupedClarifications = allClarifications.filter(c => {
const key = c.question.toLowerCase()
if (seen.has(key)) return false
seen.add(key)
return true
})
// Multi-round clarification: batch questions (max 4 per round)
if (dedupedClarifications.length > 0) {
@@ -359,34 +351,12 @@ if (dedupedClarifications.length > 0) {
**IMPORTANT**: Phase 3 is **planning only** - NO code execution. All execution happens in Phase 5 via lite-execute.
**Executor Assignment** (assigned by Claude after plan generation):
```javascript
// Assignment rules (highest priority first):
// 1. Explicit user instruction: "analyze with gemini..." → gemini, "implement with codex..." → codex
// 2. Default → agent
const executorAssignments = {} // { taskId: { executor: 'gemini'|'codex'|'agent', reason: string } }
plan.tasks.forEach(task => {
// Claude semantically applies the rules above to assign an executor per task
executorAssignments[task.id] = { executor: '...', reason: '...' }
})
```
**Low Complexity** - Direct planning by Claude:
```javascript
// Step 1: Read schema
const schema = Bash(`cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`)
// Step 2: ⚠️ MANDATORY - Read and review ALL exploration files
const manifest = JSON.parse(Read(`${sessionFolder}/explorations-manifest.json`))
manifest.explorations.forEach(exp => {
const explorationData = Read(exp.path)
console.log(`\n### Exploration: ${exp.angle}\n${explorationData}`)
})
// Step 3: Generate plan following schema (Claude directly, no agent)
// ⚠️ Plan MUST incorporate insights from exploration files read in Step 2
// Step 2: Generate plan following schema (Claude directly, no agent)
const plan = {
summary: "...",
approach: "...",
@@ -397,10 +367,10 @@ const plan = {
_metadata: { timestamp: getUtc8ISOString(), source: "direct-planning", planning_mode: "direct" }
}
// Step 4: Write plan to session folder
// Step 3: Write plan to session folder
Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))
// Step 5: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
// Step 4: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
```
**Medium/High Complexity** - Invoke cli-lite-planning-agent:
@@ -408,7 +378,6 @@ Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))
```javascript
Task(
subagent_type="cli-lite-planning-agent",
run_in_background=false,
description="Generate detailed implementation plan",
prompt=`
Generate implementation plan and write plan.json.
@@ -438,9 +407,23 @@ ${JSON.stringify(clarificationContext) || "None"}
${complexity}
## Requirements
Generate plan.json following the schema obtained above. Key constraints:
- tasks: 2-7 structured tasks (**group by feature/module, NOT by file**)
- _metadata.exploration_angles: ${JSON.stringify(manifest.explorations.map(e => e.angle))}
Generate plan.json with:
- summary: 2-3 sentence overview
- approach: High-level implementation strategy (incorporating insights from all exploration angles)
- tasks: 2-7 structured tasks (**IMPORTANT: group by feature/module, NOT by file**)
- **Task Granularity Principle**: Each task = one complete feature unit or module
- title: action verb + target module/feature (e.g., "Implement auth token refresh")
- scope: module path (src/auth/) or feature name, prefer module-level over single file
- action, description
- modification_points: ALL files to modify for this feature (group related changes)
- implementation (3-7 steps covering all modification_points)
- reference (pattern, files, examples)
- acceptance (2-4 criteria for the entire feature)
- depends_on: task IDs this task depends on (use sparingly, only for true dependencies)
- estimated_time, recommended_execution, complexity
- _metadata:
- timestamp, source, planning_mode
- exploration_angles: ${JSON.stringify(manifest.explorations.map(e => e.angle))}
## Task Grouping Rules
1. **Group by feature**: All changes for one feature = one task (even if 3-5 files)
@@ -452,10 +435,10 @@ Generate plan.json following the schema obtained above. Key constraints:
7. **Prefer parallel**: Most tasks should be independent (no depends_on)
## Execution
1. Read schema file (cat command above)
1. Read ALL exploration files for comprehensive context
2. Execute CLI planning using Gemini (Qwen fallback)
3. Read ALL exploration files for comprehensive context
4. Synthesize findings and generate plan following schema
3. Synthesize findings from multiple exploration angles
4. Parse output and structure plan
5. Write JSON: Write('${sessionFolder}/plan.json', jsonContent)
6. Return brief completion summary
`
@@ -552,13 +535,9 @@ executionContext = {
explorationAngles: manifest.explorations.map(e => e.angle),
explorationManifest: manifest,
clarificationContext: clarificationContext || null,
executionMethod: userSelection.execution_method, // global default, overridable by executorAssignments
executionMethod: userSelection.execution_method,
codeReviewTool: userSelection.code_review_tool,
originalUserInput: task_description,
// Task-level executor assignment (takes precedence over the global executionMethod)
executorAssignments: executorAssignments, // { taskId: { executor, reason } }
session: {
id: sessionId,
folder: sessionFolder,

View File

@@ -61,7 +61,7 @@ Phase 2: Context Gathering
├─ Tasks attached: Analyze structure → Identify integration → Generate package
└─ Output: contextPath + conflict_risk
Phase 3: Conflict Resolution
Phase 3: Conflict Resolution (conditional)
└─ Decision (conflict_risk check):
├─ conflict_risk ≥ medium → Execute /workflow:tools:conflict-resolution
│ ├─ Tasks attached: Detect conflicts → Present to user → Apply strategies
@@ -168,7 +168,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st
---
### Phase 3: Conflict Resolution
### Phase 3: Conflict Resolution (Optional - auto-triggered by conflict risk)
**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
@@ -185,10 +185,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Parse Output**:
- Extract: Execution status (success/skipped/failed)
- Verify: conflict-resolution.json file path (if executed)
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
**Validation**:
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)
- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 3.5
@@ -497,7 +497,7 @@ Return summary to user
- Parse context path from Phase 2 output, store in memory
- **Extract conflict_risk from context-package.json**: Determine Phase 3 execution
- **If conflict_risk ≥ medium**: Launch Phase 3 conflict-resolution with sessionId and contextPath
- Wait for Phase 3 to finish executing (if executed), verify conflict-resolution.json created
- Wait for Phase 3 to finish executing (if executed), verify CONFLICT_RESOLUTION.md created
- **If conflict_risk is none/low**: Skip Phase 3, proceed directly to Phase 4
- **Build Phase 4 command**: `/workflow:tools:task-generate-agent --session [sessionId]`
- Pass session ID to Phase 4 command

View File

@@ -391,7 +391,6 @@ done
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Execute ${dimension} review analysis via Deep Scan`,
prompt=`
## Task Objective
@@ -477,7 +476,6 @@ Task(
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
prompt=`
## Task Objective

View File

@@ -401,7 +401,6 @@ git log --since="${sessionCreatedAt}" --name-only --pretty=format: | sort -u
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Execute ${dimension} review analysis via Deep Scan`,
prompt=`
## Task Objective
@@ -488,7 +487,6 @@ Task(
```javascript
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
prompt=`
## Task Objective

View File

@@ -112,15 +112,11 @@ After bash validation, the model takes control to:
1. **Load Context**: Read completed task summaries and changed files
```bash
# Load implementation summaries (iterate through .summaries/ directory)
for summary in .workflow/active/${sessionId}/.summaries/*.md; do
cat "$summary"
done
# Load implementation summaries
cat .workflow/active/${sessionId}/.summaries/IMPL-*.md
# Load test results (if available)
for test_summary in .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md; do
cat "$test_summary"
done
cat .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
# Get changed files
git log --since="$(cat .workflow/active/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
@@ -136,53 +132,51 @@ After bash validation, the model takes control to:
```
- Use Gemini for security analysis:
```bash
ccw cli -p "
cd .workflow/active/${sessionId} && gemini -p "
PURPOSE: Security audit of completed implementation
TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
EXPECTED: Security findings report with severity levels
RULES: Focus on OWASP Top 10, authentication, authorization, data validation, injection risks
" --tool gemini --mode write --cd .workflow/active/${sessionId}
" --approval-mode yolo
```
**Architecture Review** (`--type=architecture`):
- Use Qwen for architecture analysis:
```bash
ccw cli -p "
cd .workflow/active/${sessionId} && qwen -p "
PURPOSE: Architecture compliance review
TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
EXPECTED: Architecture assessment with recommendations
RULES: Check for patterns, separation of concerns, modularity, scalability
" --tool qwen --mode write --cd .workflow/active/${sessionId}
" --approval-mode yolo
```
**Quality Review** (`--type=quality`):
- Use Gemini for code quality:
```bash
ccw cli -p "
cd .workflow/active/${sessionId} && gemini -p "
PURPOSE: Code quality and best practices review
TASK: Assess code readability, maintainability, adherence to best practices
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
EXPECTED: Quality assessment with improvement suggestions
RULES: Check for code smells, duplication, complexity, naming conventions
" --tool gemini --mode write --cd .workflow/active/${sessionId}
" --approval-mode yolo
```
**Action Items Review** (`--type=action-items`):
- Verify all requirements and acceptance criteria met:
```bash
# Load task requirements and acceptance criteria
for task_file in .workflow/active/${sessionId}/.task/*.json; do
cat "$task_file" | jq -r '
"Task: " + .id + "\n" +
"Requirements: " + (.context.requirements | join(", ")) + "\n" +
"Acceptance: " + (.context.acceptance | join(", "))
'
done
find .workflow/active/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
"Task: " + .id + "\n" +
"Requirements: " + (.context.requirements | join(", ")) + "\n" +
"Acceptance: " + (.context.acceptance | join(", "))
' {} \;
# Check implementation summaries against requirements
ccw cli -p "
cd .workflow/active/${sessionId} && gemini -p "
PURPOSE: Verify all requirements and acceptance criteria are met
TASK: Cross-check implementation summaries against original requirements
CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -196,7 +190,7 @@ After bash validation, the model takes control to:
- Verify all acceptance criteria are met
- Flag any incomplete or missing action items
- Assess deployment readiness
" --tool gemini --mode write --cd .workflow/active/${sessionId}
" --approval-mode yolo
```

View File

@@ -8,146 +8,493 @@ examples:
# Complete Workflow Session (/workflow:session:complete)
Mark the currently active workflow session as complete, archive it, and update manifests.
## Pre-defined Commands
## Overview
Mark the currently active workflow session as complete, analyze it for lessons learned, move it to the archive directory, and remove the active flag marker.
## Usage
```bash
# Phase 1: Find active session
SESSION_PATH=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d | head -1)
SESSION_ID=$(basename "$SESSION_PATH")
# Phase 3: Move to archive
mkdir -p .workflow/archives/
mv .workflow/active/$SESSION_ID .workflow/archives/$SESSION_ID
# Cleanup marker
rm -f .workflow/archives/$SESSION_ID/.archiving
/workflow:session:complete # Complete current active session
/workflow:session:complete --detailed # Show detailed completion summary
```
## Key Files to Read
## Implementation Flow
**For manifest.json generation**, read ONLY these files:
### Phase 1: Pre-Archival Preparation (Transactional Setup)
| File | Extract |
|------|---------|
| `$SESSION_PATH/workflow-session.json` | session_id, description, started_at, status |
| `$SESSION_PATH/IMPL_PLAN.md` | title (first # heading), description (first paragraph) |
| `$SESSION_PATH/.tasks/*.json` | count files |
| `$SESSION_PATH/.summaries/*.md` | count files |
| `$SESSION_PATH/.review/dimensions/*.json` | count + findings summary (optional) |
## Execution Flow
### Phase 1: Find Session (2 commands)
**Purpose**: Find active session, create archiving marker to prevent concurrent operations. Session remains in active location for agent processing.
#### Step 1.1: Find Active Session and Get Name
```bash
# 1. Find and extract session
SESSION_PATH=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d | head -1)
SESSION_ID=$(basename "$SESSION_PATH")
# Find active session directory
bash(find .workflow/active/ -name "WFS-*" -type d | head -1)
# 2. Check/create archiving marker
test -f "$SESSION_PATH/.archiving" && echo "RESUMING" || touch "$SESSION_PATH/.archiving"
# Extract session name from directory path
bash(basename .workflow/active/WFS-session-name)
```
**Output**: Session name `WFS-session-name`
#### Step 1.2: Check for Existing Archiving Marker (Resume Detection)
```bash
# Check if session is already being archived
bash(test -f .workflow/active/WFS-session-name/.archiving && echo "RESUMING" || echo "NEW")
```
**Output**: `SESSION_ID` = e.g., `WFS-auth-feature`
**If RESUMING**:
- Previous archival attempt was interrupted
- Skip to Phase 2 to resume agent analysis
### Phase 2: Generate Manifest Entry (Read-only)
**If NEW**:
- Continue to Step 1.3
Read the key files above, then build this structure:
#### Step 1.3: Create Archiving Marker
```bash
# Mark session as "archiving in progress"
bash(touch .workflow/active/WFS-session-name/.archiving)
```
**Purpose**:
- Prevents concurrent operations on this session
- Enables recovery if archival fails
- Session remains in `.workflow/active/` for agent analysis
```json
{
"session_id": "<from workflow-session.json>",
"description": "<from workflow-session.json>",
"archived_at": "<current ISO timestamp>",
"archive_path": ".workflow/archives/<SESSION_ID>",
"metrics": {
"duration_hours": "<(completed_at - started_at) / 3600000>",
"tasks_completed": "<count .tasks/*.json>",
"summaries_generated": "<count .summaries/*.md>",
"review_metrics": {
"dimensions_analyzed": "<count .review/dimensions/*.json>",
"total_findings": "<sum from dimension JSONs>"
}
**Result**: Session still at `.workflow/active/WFS-session-name/` with `.archiving` marker
### Phase 2: Agent Analysis (In-Place Processing)
**Purpose**: Agent analyzes session WHILE STILL IN ACTIVE LOCATION. Generates metadata but does NOT move files or update manifest.
#### Agent Invocation
Invoke `universal-executor` agent to analyze session and prepare archive metadata.
**Agent Task**:
```
Task(
subagent_type="universal-executor",
description="Analyze session for archival",
prompt=`
Analyze workflow session for archival preparation. Session is STILL in active location.
## Context
- Session: .workflow/active/WFS-session-name/
- Status: Marked as archiving (.archiving marker present)
- Location: Active sessions directory (NOT archived yet)
## Tasks
1. **Extract session data** from workflow-session.json
- session_id, description/topic, started_at, completed_at, status
- If status != "completed", update it with timestamp
2. **Count files**: tasks (.task/*.json) and summaries (.summaries/*.md)
3. **Extract review data** (if .review/ exists):
- Count dimension results: .review/dimensions/*.json
- Count deep-dive results: .review/iterations/*.json
- Extract findings summary from dimension JSONs (total, critical, high, medium, low)
- Check fix results if .review/fixes/ exists (fixed_count, failed_count)
- Build review_metrics: {dimensions_analyzed, total_findings, severity_distribution, fix_success_rate}
4. **Generate lessons**: Use gemini with ~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt
- Return: {successes, challenges, watch_patterns}
- If review data exists, include review-specific lessons (common issue patterns, effective fixes)
5. **Build archive entry**:
- Calculate: duration_hours, success_rate, tags (3-5 keywords)
- Construct complete JSON with session_id, description, archived_at, metrics, tags, lessons
- Include archive_path: ".workflow/archives/WFS-session-name" (future location)
- If review data exists, include review_metrics in metrics object
6. **Extract feature metadata** (for Phase 4):
- Parse IMPL_PLAN.md for title (first # heading)
- Extract description (first paragraph, max 200 chars)
- Generate feature tags (3-5 keywords from content)
7. **Return result**: Complete metadata package for atomic commit
{
"status": "success",
"session_id": "WFS-session-name",
"archive_entry": {
"session_id": "...",
"description": "...",
"archived_at": "...",
"archive_path": ".workflow/archives/WFS-session-name",
"metrics": {
"duration_hours": 2.5,
"tasks_completed": 5,
"summaries_generated": 3,
"review_metrics": { // Optional, only if .review/ exists
"dimensions_analyzed": 4,
"total_findings": 15,
"severity_distribution": {"critical": 1, "high": 3, "medium": 8, "low": 3},
"fix_success_rate": 0.87 // Optional, only if .review/fixes/ exists
}
},
"tags": [...],
"lessons": {...}
},
"feature_metadata": {
"title": "...",
"description": "...",
"tags": [...]
}
}
## Important Constraints
- DO NOT move or delete any files
- DO NOT update manifest.json yet
- Session remains in .workflow/active/ during analysis
- Return complete metadata package for orchestrator to commit atomically
## Error Handling
- On failure: return {"status": "error", "task": "...", "message": "..."}
- Do NOT modify any files on error
`
)
```
**Expected Output**:
- Agent returns complete metadata package
- Session remains in `.workflow/active/` with `.archiving` marker
- No files moved or manifests updated yet
### Phase 3: Atomic Commit (Transactional File Operations)
**Purpose**: Atomically commit all changes. Only execute if Phase 2 succeeds.
#### Step 3.1: Create Archive Directory
```bash
bash(mkdir -p .workflow/archives/)
```
#### Step 3.2: Move Session to Archive
```bash
bash(mv .workflow/active/WFS-session-name .workflow/archives/WFS-session-name)
```
**Result**: Session now at `.workflow/archives/WFS-session-name/`
#### Step 3.3: Update Manifest
```bash
# Read current manifest (or create empty array if not exists)
bash(test -f .workflow/archives/manifest.json && cat .workflow/archives/manifest.json || echo "[]")
```
**JSON Update Logic**:
```javascript
// Read agent result from Phase 2
const agentResult = JSON.parse(agentOutput);
const archiveEntry = agentResult.archive_entry;
// Read existing manifest
let manifest = [];
try {
const manifestContent = Read('.workflow/archives/manifest.json');
manifest = JSON.parse(manifestContent);
} catch {
manifest = []; // Initialize if not exists
}
// Append new entry
manifest.push(archiveEntry);
// Write back
Write('.workflow/archives/manifest.json', JSON.stringify(manifest, null, 2));
```
#### Step 3.4: Remove Archiving Marker
```bash
bash(rm .workflow/archives/WFS-session-name/.archiving)
```
**Result**: Clean archived session without temporary markers
**Output Confirmation**:
```
✓ Session "${sessionId}" archived successfully
Location: .workflow/archives/WFS-session-name/
Lessons: ${archiveEntry.lessons.successes.length} successes, ${archiveEntry.lessons.challenges.length} challenges
Manifest: Updated with ${manifest.length} total sessions
${reviewMetrics ? `Review: ${reviewMetrics.total_findings} findings across ${reviewMetrics.dimensions_analyzed} dimensions, ${Math.round(reviewMetrics.fix_success_rate * 100)}% fixed` : ''}
```
### Phase 4: Update Project Feature Registry
**Purpose**: Record completed session as a project feature in `.workflow/project.json`.
**Execution**: Uses feature metadata from Phase 2 agent result to update project registry.
#### Step 4.1: Check Project State Exists
```bash
bash(test -f .workflow/project.json && echo "EXISTS" || echo "SKIP")
```
**If SKIP**: Output warning and skip Phase 4
```
WARNING: No project.json found. Run /workflow:session:start to initialize.
```
#### Step 4.2: Extract Feature Information from Agent Result
**Data Processing** (Uses Phase 2 agent output):
```javascript
// Extract feature metadata from agent result
const agentResult = JSON.parse(agentOutput);
const featureMeta = agentResult.feature_metadata;
// Data already prepared by agent:
const title = featureMeta.title;
const description = featureMeta.description;
const tags = featureMeta.tags;
// Create feature ID (lowercase slug)
const featureId = title.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 50);
```
#### Step 4.3: Update project.json
```bash
# Read current project state
bash(cat .workflow/project.json)
```
**JSON Update Logic**:
```javascript
// Read existing project.json (created by /workflow:init)
// Note: overview field is managed by /workflow:init, not modified here
const projectMeta = JSON.parse(Read('.workflow/project.json'));
const currentTimestamp = new Date().toISOString();
const currentDate = currentTimestamp.split('T')[0]; // YYYY-MM-DD
// Extract tags from IMPL_PLAN.md (simple keyword extraction)
const tags = extractTags(planContent); // e.g., ["auth", "security"]
// Build feature object with complete metadata
const newFeature = {
id: featureId,
title: title,
description: description,
status: "completed",
tags: tags,
timeline: {
created_at: currentTimestamp,
implemented_at: currentDate,
updated_at: currentTimestamp
},
"tags": ["<3-5 keywords from IMPL_PLAN.md>"],
"lessons": {
"successes": ["<key wins>"],
"challenges": ["<difficulties>"],
"watch_patterns": ["<patterns to monitor>"]
traceability: {
session_id: sessionId,
archive_path: archivePath, // e.g., ".workflow/archives/WFS-auth-system"
commit_hash: getLatestCommitHash() || "" // Optional: git rev-parse HEAD
},
docs: [], // Placeholder for future doc links
relations: [] // Placeholder for feature dependencies
};
// Add new feature to array
projectMeta.features.push(newFeature);
// Update statistics
projectMeta.statistics.total_features = projectMeta.features.length;
projectMeta.statistics.total_sessions += 1;
projectMeta.statistics.last_updated = currentTimestamp;
// Write back
Write('.workflow/project.json', JSON.stringify(projectMeta, null, 2));
```
**Helper Functions**:
```javascript
// Extract tags from IMPL_PLAN.md content
function extractTags(planContent) {
const tags = [];
// Look for common keywords
const keywords = {
'auth': /authentication|login|oauth|jwt/i,
'security': /security|encrypt|hash|token/i,
'api': /api|endpoint|rest|graphql/i,
'ui': /component|page|interface|frontend/i,
'database': /database|schema|migration|sql/i,
'test': /test|testing|spec|coverage/i
};
for (const [tag, pattern] of Object.entries(keywords)) {
if (pattern.test(planContent)) {
tags.push(tag);
}
}
return tags.slice(0, 5); // Max 5 tags
}
// Get latest git commit hash (optional)
function getLatestCommitHash() {
try {
const result = Bash({
command: "git rev-parse --short HEAD 2>/dev/null",
description: "Get latest commit hash"
});
return result.trim();
} catch {
return "";
}
}
```
**Lessons Generation**: Use gemini with `~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt`
#### Step 4.4: Output Confirmation
### Phase 3: Atomic Commit (4 commands)
```bash
# 1. Create archive directory
mkdir -p .workflow/archives/
# 2. Move session
mv .workflow/active/$SESSION_ID .workflow/archives/$SESSION_ID
# 3. Update manifest.json (Read → Append → Write)
# Read: .workflow/archives/manifest.json (or [])
# Append: archive_entry from Phase 2
# Write: updated JSON
# 4. Remove marker
rm -f .workflow/archives/$SESSION_ID/.archiving
```
✓ Feature "${title}" added to project registry
ID: ${featureId}
Session: ${sessionId}
Location: .workflow/project.json
```
**Output**:
```
✓ Session "$SESSION_ID" archived successfully
Location: .workflow/archives/$SESSION_ID/
Manifest: Updated with N total sessions
```
**Error Handling**:
- If project.json malformed: Output error, skip update
- If feature_metadata missing from agent result: Skip Phase 4
- If extraction fails: Use minimal defaults
### Phase 4: Update project.json (Optional)
**Skip if**: `.workflow/project.json` doesn't exist
```bash
# Check
test -f .workflow/project.json || echo "SKIP"
```
**If exists**, add feature entry:
```json
{
"id": "<slugified title>",
"title": "<from IMPL_PLAN.md>",
"status": "completed",
"tags": ["<from Phase 2>"],
"timeline": { "implemented_at": "<date>" },
"traceability": { "session_id": "<SESSION_ID>", "archive_path": "<path>" }
}
```
**Output**:
```
✓ Feature added to project registry
```
**Phase 4 Total Commands**: 1 bash read + JSON manipulation
## Error Recovery
| Phase | Symptom | Recovery |
|-------|---------|----------|
| 1 | No active session | `No active session found` |
| 2 | Analysis fails | Remove marker: `rm $SESSION_PATH/.archiving`, retry |
| 3 | Move fails | Session safe in active/, fix issue, retry |
| 3 | Manifest fails | Session in archives/, manually add entry, remove marker |
### If Agent Fails (Phase 2)
## Quick Reference
**Symptoms**:
- Agent returns `{"status": "error", ...}`
- Agent crashes or times out
- Analysis incomplete
**Recovery Steps**:
```bash
# Session still in .workflow/active/WFS-session-name
# Remove archiving marker
bash(rm .workflow/active/WFS-session-name/.archiving)
```
Phase 1: find session → create .archiving marker
Phase 2: read key files → build manifest entry (no writes)
Phase 3: mkdir → mv → update manifest.json → rm marker
Phase 4: update project.json features array (optional)
**User Notification**:
```
ERROR: Session archival failed during analysis phase
Reason: [error message from agent]
Session remains active in: .workflow/active/WFS-session-name
Recovery:
1. Fix any issues identified in error message
2. Retry: /workflow:session:complete
Session state: SAFE (no changes committed)
```
### If Move Fails (Phase 3)
**Symptoms**:
- `mv` command fails
- Permission denied
- Disk full
**Recovery Steps**:
```bash
# Archiving marker still present
# Session still in .workflow/active/ (move failed)
# No manifest updated yet
```
**User Notification**:
```
ERROR: Session archival failed during move operation
Reason: [mv error message]
Session remains in: .workflow/active/WFS-session-name
Recovery:
1. Fix filesystem issues (permissions, disk space)
2. Retry: /workflow:session:complete
- System will detect .archiving marker
- Will resume from Phase 2 (agent analysis)
Session state: SAFE (analysis complete, ready to retry move)
```
### If Manifest Update Fails (Phase 3)
**Symptoms**:
- JSON parsing error
- Write permission denied
- Session moved but manifest not updated
**Recovery Steps**:
```bash
# Session moved to .workflow/archives/WFS-session-name
# Manifest NOT updated
# Archiving marker still present in archived location
```
**User Notification**:
```
ERROR: Session archived but manifest update failed
Reason: [error message]
Session location: .workflow/archives/WFS-session-name
Recovery:
1. Fix manifest.json issues (syntax, permissions)
2. Manual manifest update:
- Add archive entry from agent output
- Remove .archiving marker: rm .workflow/archives/WFS-session-name/.archiving
Session state: PARTIALLY COMPLETE (session archived, manifest needs update)
```
## Workflow Execution Strategy
### Transactional Four-Phase Approach
**Phase 1: Pre-Archival Preparation** (Marker creation)
- Find active session and extract name
- Check for existing `.archiving` marker (resume detection)
- Create `.archiving` marker if new
- **No data processing** - just state tracking
- **Total**: 2-3 bash commands (find + marker check/create)
**Phase 2: Agent Analysis** (Read-only data processing)
- Extract all session data from active location
- Count tasks and summaries
- Extract review data if .review/ exists (dimension results, findings, fix results)
- Generate lessons learned analysis (including review-specific lessons if applicable)
- Extract feature metadata from IMPL_PLAN.md
- Build complete archive + feature metadata package (with review_metrics if applicable)
- **No file modifications** - pure analysis
- **Total**: 1 agent invocation
**Phase 3: Atomic Commit** (Transactional file operations)
- Create archive directory
- Move session to archive location
- Update manifest.json with archive entry
- Remove `.archiving` marker
- **All-or-nothing**: Either all succeed or session remains in safe state
- **Total**: 4 bash commands + JSON manipulation
**Phase 4: Project Registry Update** (Optional feature tracking)
- Check project.json exists
- Use feature metadata from Phase 2 agent result
- Build feature object with complete traceability
- Update project statistics
- **Independent**: Can fail without affecting archival
- **Total**: 1 bash read + JSON manipulation
### Transactional Guarantees
**State Consistency**:
- Session NEVER in inconsistent state
- `.archiving` marker enables safe resume
- Agent failure leaves session in recoverable state
- Move/manifest operations grouped in Phase 3
**Failure Isolation**:
- Phase 1 failure: No changes made
- Phase 2 failure: Session still active, can retry
- Phase 3 failure: Clear error state, manual recovery documented
- Phase 4 failure: Does not affect archival success
**Resume Capability**:
- Detect interrupted archival via `.archiving` marker
- Resume from Phase 2 (skip marker creation)
- Idempotent operations (safe to retry)

View File

@@ -164,10 +164,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]
**Parse Output**:
- Extract: Execution status (success/skipped/failed)
- Verify: conflict-resolution.json file path (if executed)
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
**Validation**:
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)
- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 5
@@ -402,7 +402,7 @@ TDD Workflow Orchestrator
│ ├─ Phase 4.1: Detect conflicts with CLI
│ ├─ Phase 4.2: Present conflicts to user
│ └─ Phase 4.3: Apply resolution strategies
│ └─ Returns: conflict-resolution.json ← COLLAPSED
│ └─ Returns: CONFLICT_RESOLUTION.md ← COLLAPSED
│ ELSE:
│ └─ Skip to Phase 5

View File

@@ -77,32 +77,18 @@ find .workflow/active/ -name "WFS-*" -type d | head -1 | sed 's/.*\///'
```bash
# Load all task JSONs
for task_file in .workflow/active/{sessionId}/.task/*.json; do
cat "$task_file"
done
find .workflow/active/{sessionId}/.task/ -name '*.json'
# Extract task IDs
for task_file in .workflow/active/{sessionId}/.task/*.json; do
cat "$task_file" | jq -r '.id'
done
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
# Check dependencies - read tasks and filter for IMPL/REFACTOR
for task_file in .workflow/active/{sessionId}/.task/IMPL-*.json; do
cat "$task_file" | jq -r '.context.depends_on[]?'
done
for task_file in .workflow/active/{sessionId}/.task/REFACTOR-*.json; do
cat "$task_file" | jq -r '.context.depends_on[]?'
done
# Check dependencies
find .workflow/active/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
find .workflow/active/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
# Check meta fields
for task_file in .workflow/active/{sessionId}/.task/*.json; do
cat "$task_file" | jq -r '.meta.tdd_phase'
done
for task_file in .workflow/active/{sessionId}/.task/*.json; do
cat "$task_file" | jq -r '.meta.agent'
done
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
```
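If a single pass over both task types is preferred, the two dependency queries can be combined; this sketch is equivalent to the two separate `find` calls above:
```bash
find .workflow/active/{sessionId}/.task/ \
  \( -name 'IMPL-*.json' -o -name 'REFACTOR-*.json' \) \
  -exec jq -r '.context.depends_on[]?' {} \;
```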
**Validation**:
@@ -141,7 +127,7 @@ done
**Gemini analysis for comprehensive TDD compliance report**
```bash
ccw cli -p "
cd project-root && gemini -p "
PURPOSE: Generate TDD compliance report
TASK: Analyze TDD workflow execution and generate quality report
CONTEXT: @{.workflow/active/{sessionId}/.task/*.json,.workflow/active/{sessionId}/.summaries/*,.workflow/active/{sessionId}/.process/tdd-cycle-report.md}
@@ -153,7 +139,7 @@ EXPECTED:
- Red-Green-Refactor cycle validation
- Best practices adherence assessment
RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
" --tool gemini --mode analysis --cd project-root > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
" > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
```
**Output**: TDD_COMPLIANCE_REPORT.md
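A lightweight post-check can confirm the report was actually written before the session proceeds (illustrative):
```bash
test -s .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md \
  && echo "compliance report generated" \
  || echo "WARN: report missing or empty"
```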

View File

@@ -221,7 +221,6 @@ return "conservative";
```javascript
Task(
subagent_type="cli-planning-agent",
run_in_background=false,
description=`Analyze test failures (iteration ${N}) - ${strategy} strategy`,
prompt=`
## Task Objective
@@ -272,7 +271,6 @@ Task(
```javascript
Task(
subagent_type="test-fix-agent",
run_in_background=false,
description=`Execute ${task.meta.type}: ${task.title}`,
prompt=`
## Task Objective

View File

@@ -108,7 +108,7 @@ Phase 4: Apply Modifications
**Agent Delegation**:
```javascript
Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
Task(subagent_type="cli-execution-agent", prompt=`
## Context
- Session: {session_id}
- Risk: {conflict_risk}
@@ -124,9 +124,6 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
## Analysis Steps
### 0. Load Output Schema (MANDATORY)
Execute: cat ~/.claude/workflows/cli-templates/schemas/conflict-resolution-schema.json
### 1. Load Context
- Read existing files from conflict_detection.existing_files
- Load plan from .workflow/active/{session_id}/.process/context-package.json
@@ -136,7 +133,7 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)
Primary (Gemini):
ccw cli -p "
cd {project_root} && gemini -p "
PURPOSE: Detect conflicts between plan and codebase, using exploration insights
TASK:
• **Review pre-identified conflict_indicators from exploration results**
@@ -155,7 +152,7 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
- ModuleOverlap conflicts with overlap_analysis
- Targeted clarification questions
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
" --tool gemini --mode analysis --cd {project_root}
"
Fallback: Qwen (same prompt) → Claude (manual analysis)
@@ -172,16 +169,125 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
### 4. Return Structured Conflict Data
⚠️ Output to conflict-resolution.json (generated in Phase 4)
⚠️ DO NOT generate CONFLICT_RESOLUTION.md file
**Schema Reference**: Execute \`cat ~/.claude/workflows/cli-templates/schemas/conflict-resolution-schema.json\` to get full schema
Return JSON format for programmatic processing:
Return JSON following the schema above. Key requirements:
\`\`\`json
{
"conflicts": [
{
"id": "CON-001",
"brief": "一行中文冲突摘要",
"severity": "Critical|High|Medium",
"category": "Architecture|API|Data|Dependency|ModuleOverlap",
"affected_files": [
".workflow/active/{session}/.brainstorm/guidance-specification.md",
".workflow/active/{session}/.brainstorm/system-architect/analysis.md"
],
"description": "详细描述冲突 - 什么不兼容",
"impact": {
"scope": "影响的模块/组件",
"compatibility": "Yes|No|Partial",
"migration_required": true|false,
"estimated_effort": "人天估计"
},
"overlap_analysis": {
"// NOTE": "仅当 category=ModuleOverlap 时需要此字段",
"new_module": {
"name": "新模块名称",
"scenarios": ["场景1", "场景2", "场景3"],
"responsibilities": "职责描述"
},
"existing_modules": [
{
"file": "src/existing/module.ts",
"name": "现有模块名称",
"scenarios": ["场景A", "场景B"],
"overlap_scenarios": ["重叠场景1", "重叠场景2"],
"responsibilities": "现有模块职责"
}
]
},
"strategies": [
{
"name": "策略名称(中文)",
"approach": "实现方法简述",
"complexity": "Low|Medium|High",
"risk": "Low|Medium|High",
"effort": "时间估计",
"pros": ["优点1", "优点2"],
"cons": ["缺点1", "缺点2"],
"clarification_needed": [
"// NOTE: 仅当需要用户进一步澄清时需要此字段(尤其是 ModuleOverlap",
"新模块的核心职责边界是什么?",
"如何与现有模块 X 协作?",
"哪些场景应该由新模块处理?"
],
"modifications": [
{
"file": ".workflow/active/{session}/.brainstorm/guidance-specification.md",
"section": "## 2. System Architect Decisions",
"change_type": "update",
"old_content": "原始内容片段(用于定位)",
"new_content": "修改后的内容",
"rationale": "为什么这样改"
},
{
"file": ".workflow/active/{session}/.brainstorm/system-architect/analysis.md",
"section": "## Design Decisions",
"change_type": "update",
"old_content": "原始内容片段",
"new_content": "修改后的内容",
"rationale": "修改理由"
}
]
},
{
"name": "策略2名称",
"approach": "...",
"complexity": "Medium",
"risk": "Low",
"effort": "1-2天",
"pros": ["优点"],
"cons": ["缺点"],
"modifications": [...]
}
],
"recommended": 0,
"modification_suggestions": [
"建议1具体的修改方向或注意事项",
"建议2可能需要考虑的边界情况",
"建议3相关的最佳实践或模式"
]
}
],
"summary": {
"total": 2,
"critical": 1,
"high": 1,
"medium": 0
}
}
\`\`\`
⚠️ CRITICAL Requirements for modifications field:
- old_content: Must be exact text from target file (20-100 chars for unique match)
- new_content: Complete replacement text (maintains formatting)
- change_type: "update" (replace), "add" (insert), "remove" (delete)
- file: Full path relative to project root
- section: Markdown heading for context (helps locate position)
- Minimum 2 strategies per conflict, max 4
- All text in Chinese for user-facing fields (brief, name, pros, cons, modification_suggestions)
- modifications.old_content: 20-100 chars for unique Edit tool matching
- modifications.new_content: preserves markdown formatting
- modification_suggestions: 2-5 actionable suggestions for custom handling
- All text in Chinese for user-facing fields (brief, name, pros, cons)
- modification_suggestions: 2-5 actionable suggestions for custom handling (Chinese)
Quality Standards:
- Each strategy must have actionable modifications
- old_content must be precise enough for Edit tool matching
- new_content preserves markdown formatting and structure
- Recommended strategy (index) based on lowest complexity + risk
- modification_suggestions must be specific, actionable, and context-aware
- Each suggestion should address a specific aspect (compatibility, migration, testing, etc.)
`)
```
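Because `old_content` must match exactly once, a pre-flight uniqueness check helps before invoking the Edit tool. A sketch for single-line snippets, assuming one modification object has been saved to `mod.json` (a hypothetical helper file):
```bash
file=$(jq -r '.file' mod.json)
old=$(jq -r '.old_content' mod.json)
count=$(grep -oF "$old" "$file" | wc -l)   # fixed-string match, count occurrences
[ "$count" -eq 1 ] || echo "ambiguous match ($count occurrences) - lengthen old_content"
```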
@@ -206,85 +312,143 @@ Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
8. Return execution log path
```
### Phase 3: User Interaction Loop
### Phase 3: Iterative User Interaction with Clarification Loop
```javascript
FOR each conflict:
round = 0, clarified = false, userClarifications = []
**Execution Flow**:
```
FOR each conflict (processed one at a time, no count limit):
clarified = false
round = 0
userClarifications = []
WHILE (!clarified && round++ < 10):
// 1. Display conflict info (text output for context)
displayConflictSummary(conflict) // id, brief, severity, overlap_analysis if ModuleOverlap
WHILE (!clarified && round < 10):
round++
// 2. Strategy selection via AskUserQuestion
AskUserQuestion({
questions: [{
question: formatStrategiesForDisplay(conflict.strategies),
header: "策略选择",
multiSelect: false,
options: [
...conflict.strategies.map((s, i) => ({
label: `${s.name}${i === conflict.recommended ? ' (recommended)' : ''}`,
description: `${s.complexity} complexity | ${s.risk} risk${s.clarification_needed?.length ? ' | ⚠️ needs clarification' : ''}`
})),
{ label: "Custom modification", description: `Suggestions: ${conflict.modification_suggestions?.slice(0,2).join('; ')}` }
]
}]
})
// 1. Display conflict (include all key fields)
- category, id, brief, severity, description
- IF ModuleOverlap: show overlap_analysis
* new_module: {name, scenarios, responsibilities}
* existing_modules[]: {file, name, scenarios, overlap_scenarios, responsibilities}
// 3. Handle selection
if (userChoice === "自定义修改") {
customConflicts.push({ id, brief, category, suggestions, overlap_analysis })
break
}
// 2. Display strategies (2-4 strategies + custom option)
- FOR each strategy: {name, approach, complexity, risk, effort, pros, cons}
* IF clarification_needed: show the list of open clarification questions
- Custom option: {suggestions: modification_suggestions[]}
selectedStrategy = findStrategyByName(userChoice)
// 3. User selects strategy
userChoice = readInput()
// 4. Clarification (if needed) - batched max 4 per call
if (selectedStrategy.clarification_needed?.length > 0) {
for (batch of chunk(selectedStrategy.clarification_needed, 4)) {
AskUserQuestion({
questions: batch.map((q, i) => ({
question: q, header: `Clarification ${i+1}`, multiSelect: false,
options: [{ label: "Explain in detail", description: "Provide an answer" }]
}))
})
userClarifications.push(...collectAnswers(batch))
}
IF userChoice == "自定义":
customConflicts.push({id, brief, category, suggestions, overlap_analysis})
clarified = true
BREAK
// 5. Agent re-analysis
reanalysisResult = Task({
subagent_type: "cli-execution-agent",
run_in_background: false,
prompt: `Conflict: ${conflict.id}, Strategy: ${selectedStrategy.name}
User Clarifications: ${JSON.stringify(userClarifications)}
Output: { uniqueness_confirmed, rationale, updated_strategy, remaining_questions }`
selectedStrategy = strategies[userChoice]
// 4. Clarification loop
IF selectedStrategy.clarification_needed.length > 0:
// Collect clarification answers
FOR each question:
answer = readInput()
userClarifications.push({question, answer})
// Agent re-analysis
reanalysisResult = Task(cli-execution-agent, prompt={
Conflict info: {id, brief, category, strategy}
User clarifications: userClarifications[]
Scenario analysis: overlap_analysis (if ModuleOverlap)
Output: {
uniqueness_confirmed: bool,
rationale: string,
updated_strategy: {name, approach, complexity, risk, effort, modifications[]},
remaining_questions: [] (if ambiguity remains)
}
})
if (reanalysisResult.uniqueness_confirmed) {
selectedStrategy = { ...reanalysisResult.updated_strategy, clarifications: userClarifications }
IF reanalysisResult.uniqueness_confirmed:
selectedStrategy = updated_strategy
selectedStrategy.clarifications = userClarifications
clarified = true
} else {
selectedStrategy.clarification_needed = reanalysisResult.remaining_questions
}
} else {
ELSE:
// Update clarification questions and continue to the next round
selectedStrategy.clarification_needed = remaining_questions
ELSE:
clarified = true
}
if (clarified) resolvedConflicts.push({ conflict, strategy: selectedStrategy })
resolvedConflicts.push({conflict, strategy: selectedStrategy})
END WHILE
END FOR
// Build output
selectedStrategies = resolvedConflicts.map(r => ({
conflict_id: r.conflict.id, strategy: r.strategy, clarifications: r.strategy.clarifications || []
conflict_id, strategy, clarifications[]
}))
```
**Key Points**:
- AskUserQuestion: max 4 questions/call, batch if more
- Strategy options: 2-4 strategies + "Custom modification"
- Clarification loop: max 10 rounds; agent determines uniqueness_confirmed
- Custom conflicts: record overlap_analysis for later manual handling
**Key Data Structures**:
```javascript
// Custom conflict tracking
customConflicts[] = {
id, brief, category,
suggestions: modification_suggestions[],
overlap_analysis: { new_module{}, existing_modules[] } // ModuleOverlap only
}
// Agent re-analysis prompt output
{
uniqueness_confirmed: bool,
rationale: string,
updated_strategy: {
name, approach, complexity, risk, effort,
modifications: [{file, section, change_type, old_content, new_content, rationale}]
},
remaining_questions: string[]
}
```
**Text Output Example** (showing key fields):
```markdown
============================================================
Conflict 1/3 - Round 1
============================================================
[ModuleOverlap] CON-001: New user-auth service overlaps existing module functionality
Severity: High | Description: planned UserAuthService overlaps AuthManager scenarios
--- Scenario Overlap Analysis ---
New module: UserAuthService | Scenarios: login, token validation, permissions, MFA
Existing module: AuthManager (src/auth/AuthManager.ts) | Overlap: login, token validation
--- Resolution Strategies ---
1) Merge (Low complexity | Low risk | 2-3 days)
⚠️ Needs clarification: can AuthManager take on MFA?
2) Split boundaries (Medium complexity | Medium risk | 4-5 days)
⚠️ Needs clarification: where is the basic/advanced auth boundary? Who owns token validation?
3) Custom modification
Suggestions: assess extensibility; separate via strategy pattern; define interface boundaries
Select (1-3): > 2
--- Clarification Q&A (Round 1) ---
Q: Where is the basic/advanced auth boundary?
A: Basic = password login + token validation; advanced = MFA + OAuth + SSO
Q: Who owns token validation?
A: AuthManager, exclusively
🔄 Re-analyzing...
✅ Uniqueness confirmed | Rationale: clear boundaries - AuthManager (basic + token), UserAuthService (MFA + OAuth + SSO)
============================================================
Conflict 2/3 - Round 1 [next conflict]
============================================================
```
**Loop Characteristics**: one conflict at a time | up to 10 rounds | dynamic question generation | agent re-analysis confirms uniqueness | ModuleOverlap scenario-boundary clarification
### Phase 4: Apply Modifications
@@ -303,30 +467,14 @@ selectedStrategies.forEach(item => {
console.log(`\nApplying ${modifications.length} modifications...`);
// 2. Apply each modification using Edit tool (with fallback to context-package.json)
// 2. Apply each modification using Edit tool
const appliedModifications = [];
const failedModifications = [];
const fallbackConstraints = []; // For files that don't exist
modifications.forEach((mod, idx) => {
try {
console.log(`[${idx + 1}/${modifications.length}] Modifying ${mod.file}...`);
// Check if target file exists (brainstorm files may not exist in lite workflow)
if (!file_exists(mod.file)) {
console.log(` ⚠️ File does not exist; recording as constraint in context-package.json`);
fallbackConstraints.push({
source: "conflict-resolution",
conflict_id: mod.conflict_id,
target_file: mod.file,
section: mod.section,
change_type: mod.change_type,
content: mod.new_content,
rationale: mod.rationale
});
return; // Skip to next modification
}
if (mod.change_type === "update") {
Edit({
file_path: mod.file,
@@ -354,45 +502,14 @@ modifications.forEach((mod, idx) => {
}
});
// 2b. Generate conflict-resolution.json output file
const resolutionOutput = {
session_id: sessionId,
resolved_at: new Date().toISOString(),
summary: {
total_conflicts: conflicts.length,
resolved_with_strategy: selectedStrategies.length,
custom_handling: customConflicts.length,
fallback_constraints: fallbackConstraints.length
},
resolved_conflicts: selectedStrategies.map(s => ({
conflict_id: s.conflict_id,
strategy_name: s.strategy.name,
strategy_approach: s.strategy.approach,
clarifications: s.clarifications || [],
modifications_applied: s.strategy.modifications?.filter(m =>
appliedModifications.some(am => am.conflict_id === s.conflict_id)
) || []
})),
custom_conflicts: customConflicts.map(c => ({
id: c.id,
brief: c.brief,
category: c.category,
suggestions: c.suggestions,
overlap_analysis: c.overlap_analysis || null
})),
planning_constraints: fallbackConstraints, // Constraints for files that don't exist
failed_modifications: failedModifications
};
const resolutionPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
console.log(`\n📄 Conflict resolution results saved: ${resolutionPath}`);
// 3. Update context-package.json with resolution details (reference to JSON file)
// 3. Update context-package.json with resolution details
const contextPackage = JSON.parse(Read(contextPath));
contextPackage.conflict_detection.conflict_risk = "resolved";
contextPackage.conflict_detection.resolution_file = resolutionPath; // Reference to detailed JSON
contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => s.conflict_id);
contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => ({
conflict_id: s.conflict_id,
strategy_name: s.strategy.name,
clarifications: s.clarifications
}));
contextPackage.conflict_detection.custom_conflicts = customConflicts.map(c => c.id);
contextPackage.conflict_detection.resolved_at = new Date().toISOString();
Write(contextPath, JSON.stringify(contextPackage, null, 2));
@@ -465,50 +582,12 @@ return {
✓ Agent log saved to .workflow/active/{session_id}/.chat/
```
## Output Format
### Primary Output: conflict-resolution.json
**Path**: `.workflow/active/{session_id}/.process/conflict-resolution.json`
**Schema**:
```json
{
"session_id": "WFS-xxx",
"resolved_at": "ISO timestamp",
"summary": {
"total_conflicts": 3,
"resolved_with_strategy": 2,
"custom_handling": 1,
"fallback_constraints": 0
},
"resolved_conflicts": [
{
"conflict_id": "CON-001",
"strategy_name": "策略名称",
"strategy_approach": "实现方法",
"clarifications": [],
"modifications_applied": []
}
],
"custom_conflicts": [
{
"id": "CON-002",
"brief": "冲突摘要",
"category": "ModuleOverlap",
"suggestions": ["建议1", "建议2"],
"overlap_analysis": null
}
],
"planning_constraints": [],
"failed_modifications": []
}
```
### Secondary: Agent JSON Response (stdout)
## Output Format: Agent JSON Response
**Focus**: Structured conflict data with actionable modifications for programmatic processing.
**Format**: JSON to stdout (NO file generation)
**Structure**: Defined in Phase 2, Step 4 (agent prompt)
### Key Requirements
@@ -556,12 +635,11 @@ If Edit tool fails mid-application:
- Requires: `conflict_risk ≥ medium`
**Output**:
- Generated file:
- `.workflow/active/{session_id}/.process/conflict-resolution.json` (primary output)
- Modified files (if exist):
- Modified files:
- `.workflow/active/{session_id}/.brainstorm/guidance-specification.md`
- `.workflow/active/{session_id}/.brainstorm/{role}/analysis.md`
- `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved, resolution_file reference)
- `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved)
- NO report file generation
**User Interaction**:
- **Iterative conflict processing**: One conflict at a time, not in batches
@@ -589,7 +667,7 @@ If Edit tool fails mid-application:
✓ guidance-specification.md updated with resolved conflicts
✓ Role analyses (*.md) updated with resolved conflicts
✓ context-package.json marked as "resolved" with clarification records
conflict-resolution.json generated with full resolution details
No CONFLICT_RESOLUTION.md file generated
✓ Modification summary includes:
- Total conflicts
- Resolved with strategy (count)

View File

@@ -15,6 +15,7 @@ allowed-tools: Task(*), Read(*), Glob(*)
Orchestrator command that invokes `context-search-agent` to gather comprehensive project context for implementation planning. Generates standardized `context-package.json` with codebase analysis, dependencies, and conflict detection.
**Agent**: `context-search-agent` (`.claude/agents/context-search-agent.md`)
## Core Philosophy
@@ -120,7 +121,6 @@ const sessionFolder = `.workflow/active/${session_id}/.process`;
const explorationTasks = selectedAngles.map((angle, index) =>
Task(
subagent_type="cli-explore-agent",
run_in_background=false,
description=`Explore: ${angle}`,
prompt=`
## Task Objective
@@ -135,7 +135,7 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro
## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
1. Run: ~/.claude/scripts/get_modules_by_depth.sh (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
@@ -215,7 +215,6 @@ Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationM
```javascript
Task(
subagent_type="context-search-agent",
run_in_background=false,
description="Gather comprehensive context for plan",
prompt=`
## Execution Mode
@@ -238,7 +237,7 @@ Execute complete context-search-agent workflow for implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**: Read and parse `.workflow/project.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components. If file doesn't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
3. **Foundation**: Initialize CodexLens, get project structure, load docs
3. **Foundation**: Initialize code-index, get project structure, load docs
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
### Phase 2: Multi-Source Context Discovery
@@ -430,5 +429,6 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {
- **Detection-first**: Always check for existing package before invoking agent
- **Project.json integration**: Agent reads `.workflow/project.json` as primary source for project context, avoiding redundant analysis
- **Agent autonomy**: Agent handles all discovery logic per `.claude/agents/context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator; all logic lives in the agent
- **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call

View File

@@ -28,12 +28,6 @@ Input Parsing:
├─ Parse flags: --session
└─ Validation: session_id REQUIRED
Phase 0: User Configuration (Interactive)
├─ Question 1: Supplementary materials/guidelines?
├─ Question 2: Execution method preference (Agent/CLI/Hybrid)
├─ Question 3: CLI tool preference (if CLI selected)
└─ Store: userConfig for agent prompt
Phase 1: Context Preparation & Module Detection (Command)
├─ Assemble session paths (metadata, context package, output dirs)
├─ Provide metadata (session_id, execution_mode, mcp_capabilities)
@@ -63,82 +57,6 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
## Document Generation Lifecycle
### Phase 0: User Configuration (Interactive)
**Purpose**: Collect user preferences before task generation to ensure generated tasks match execution expectations.
**User Questions**:
```javascript
AskUserQuestion({
questions: [
{
question: "Do you have supplementary materials or guidelines to include?",
header: "Materials",
multiSelect: false,
options: [
{ label: "No additional materials", description: "Use existing context only" },
{ label: "Provide file paths", description: "I'll specify paths to include" },
{ label: "Provide inline content", description: "I'll paste content directly" }
]
},
{
question: "Select execution method for generated tasks:",
header: "Execution",
multiSelect: false,
options: [
{ label: "Agent (Recommended)", description: "Claude agent executes tasks directly" },
{ label: "Hybrid", description: "Agent orchestrates, calls CLI for complex steps" },
{ label: "CLI Only", description: "All execution via CLI tools (codex/gemini/qwen)" }
]
},
{
question: "If using CLI, which tool do you prefer?",
header: "CLI Tool",
multiSelect: false,
options: [
{ label: "Codex (Recommended)", description: "Best for implementation tasks" },
{ label: "Gemini", description: "Best for analysis and large context" },
{ label: "Qwen", description: "Alternative analysis tool" },
{ label: "Auto", description: "Let agent decide per-task" }
]
}
]
})
```
**Handle Materials Response**:
```javascript
if (userConfig.materials === "Provide file paths") {
// Follow-up question for file paths
const pathsResponse = AskUserQuestion({
questions: [{
question: "Enter file paths to include (comma-separated or one per line):",
header: "Paths",
multiSelect: false,
options: [
{ label: "Enter paths", description: "Provide paths in text input" }
]
}]
})
userConfig.supplementaryPaths = parseUserPaths(pathsResponse)
}
```
**Build userConfig**:
```javascript
const userConfig = {
supplementaryMaterials: {
type: "none|paths|inline",
content: [...], // Parsed paths or inline content
},
executionMethod: "agent|hybrid|cli",
preferredCliTool: "codex|gemini|qwen|auto",
enableResume: true // Always enable resume for CLI executions
}
```
**Pass to Agent**: Include `userConfig` in agent prompt for Phase 2A/2B.
### Phase 1: Context Preparation & Module Detection (Command Responsibility)
**Command prepares session paths, metadata, and detects module structure.**
@@ -171,14 +89,6 @@ const userConfig = {
3. **Auto Module Detection** (determines single vs parallel mode):
```javascript
function autoDetectModules(contextPackage, projectRoot) {
// === Complexity Gate: Only parallelize for High complexity ===
const complexity = contextPackage.metadata?.complexity || 'Medium';
if (complexity !== 'High') {
// Force single agent mode for Low/Medium complexity
// This maximizes agent context reuse for related tasks
return [{ name: 'main', prefix: '', paths: ['.'] }];
}
// Priority 1: Explicit frontend/backend separation
if (exists('src/frontend') && exists('src/backend')) {
return [
@@ -202,9 +112,8 @@ const userConfig = {
```
**Decision Logic**:
- `complexity !== 'High'` → Force Phase 2A (Single Agent, maximize context reuse)
- `modules.length == 1` → Phase 2A (Single Agent, original flow)
- `modules.length >= 2 && complexity == 'High'` → Phase 2B + Phase 3 (N+1 Parallel)
- `modules.length >= 2` → Phase 2B + Phase 3 (N+1 Parallel)
**Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description, not by flags.
@@ -218,7 +127,6 @@ const userConfig = {
```javascript
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description="Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
prompt=`
## TASK OBJECTIVE
@@ -242,21 +150,10 @@ Output:
Session ID: {session-id}
MCP Capabilities: {exa_code, exa_web, code_index}
## USER CONFIGURATION (from Phase 0)
Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## CLI TOOL SELECTION
Based on userConfig.executionMethod:
- "agent": No command field in implementation_approach steps
- "hybrid": Add command field to complex steps only (agent handles simple steps)
- "cli": Add command field to ALL implementation_approach steps
CLI Resume Support (MANDATORY for all CLI commands):
- Use --resume parameter to continue from previous task execution
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
Determine CLI tool usage per-step based on user's task description:
- If user specifies "use Codex/Gemini/Qwen for X" → Add command field to relevant steps
- Default: Agent execution (no command field) unless user explicitly requests CLI
## EXPLORATION CONTEXT (from context-package.exploration_results)
- Load exploration_results from context-package.json
@@ -266,13 +163,6 @@ CLI Resume Support (MANDATORY for all CLI commands):
- Use aggregated_insights.all_integration_points for precise modification locations
- Use conflict_indicators for risk-aware task sequencing
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints as task constraints (for brainstorm-less workflows)
- Reference resolved_conflicts for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## EXPECTED DELIVERABLES
1. Task JSON Files (.task/IMPL-*.json)
- 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
@@ -280,7 +170,6 @@ CLI Resume Support (MANDATORY for all CLI commands):
- Artifacts integration from context package
- **focus_paths enhanced with exploration critical_files**
- Flow control with pre_analysis steps (include exploration integration_points analysis)
- **CLI Execution IDs and strategies (MANDATORY)**
2. Implementation Plan (IMPL_PLAN.md)
- Context analysis and artifact references
@@ -292,27 +181,6 @@ CLI Resume Support (MANDATORY for all CLI commands):
- Links to task JSONs and summaries
- Matches task JSON hierarchy
## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **cli_execution_id**: Unique ID for CLI execution (format: `{session_id}-{task_id}`)
- **cli_execution**: Strategy object based on depends_on:
- No deps → `{ "strategy": "new" }`
- 1 dep (single child) → `{ "strategy": "resume", "resume_from": "parent-cli-id" }`
- 1 dep (multiple children) → `{ "strategy": "fork", "resume_from": "parent-cli-id" }`
- N deps → `{ "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }`
**CLI Execution Strategy Rules**:
1. **new**: Task has no dependencies - starts fresh CLI conversation
2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation
**Execution Command Patterns**:
- new: `ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]`
- resume: `ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write`
- fork: `ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write`
- merge_fork: `ccw cli -p "[prompt]" --resume [merge_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write`
## QUALITY STANDARDS
Hard Constraints:
- Task count <= 18 (hard limit - request re-scope if exceeded)
@@ -335,9 +203,7 @@ Hard Constraints:
**Condition**: `modules.length >= 2` (multi-module detected)
**Purpose**: Launch N action-planning-agents simultaneously, one per module, for parallel task JSON generation.
**Note**: Phase 2B agents generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md are generated by Phase 3 Coordinator.
**Purpose**: Launch N action-planning-agents simultaneously, one per module, for parallel task generation.
**Parallel Agent Invocation**:
```javascript
@@ -345,123 +211,27 @@ Hard Constraints:
const planningTasks = modules.map(module =>
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description=`Generate ${module.name} module task JSONs`,
description=`Plan ${module.name} module`,
prompt=`
## TASK OBJECTIVE
Generate task JSON files for ${module.name} module within workflow session
IMPORTANT: This is PLANNING ONLY - generate task JSONs, NOT implementing code.
IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md are generated by the Phase 3 Coordinator.
CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
## MODULE SCOPE
## SCOPE
- Module: ${module.name} (${module.type})
- Focus Paths: ${module.paths.join(', ')}
- Task ID Prefix: IMPL-${module.prefix}
- Task Limit: ≤9 tasks (hard limit for this module)
- Other Modules: ${otherModules.join(', ')} (reference only, do NOT generate tasks for them)
- Task Limit: ≤9 tasks
- Other Modules: ${otherModules.join(', ')}
- Cross-module deps format: "CROSS::{module}::{pattern}"
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Context Package: .workflow/active/{session-id}/.process/context-package.json
Output:
- Task Dir: .workflow/active/{session-id}/.task/
## CONTEXT METADATA
Session ID: {session-id}
MCP Capabilities: {exa_code, exa_web, code_index}
## USER CONFIGURATION (from Phase 0)
Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
Supplementary Materials: ${userConfig.supplementaryMaterials}
## CLI TOOL SELECTION
Based on userConfig.executionMethod:
- "agent": No command field in implementation_approach steps
- "hybrid": Add command field to complex steps only (agent handles simple steps)
- "cli": Add command field to ALL implementation_approach steps
CLI Resume Support (MANDATORY for all CLI commands):
- Use --resume parameter to continue from previous task execution
- Read previous task's cliExecutionId from session state
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
## EXPLORATION CONTEXT (from context-package.exploration_results)
- Load exploration_results from context-package.json
- Filter for ${module.name} module: Use aggregated_insights.critical_files matching ${module.paths.join(', ')}
- Apply module-relevant constraints from aggregated_insights.constraints
- Reference aggregated_insights.all_patterns applicable to ${module.name}
- Use aggregated_insights.all_integration_points for precise modification locations within module scope
- Use conflict_indicators for risk-aware task sequencing
## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
- Apply planning_constraints relevant to ${module.name} as task constraints
- Reference resolved_conflicts affecting ${module.name} for implementation approach alignment
- Handle custom_conflicts with explicit task notes
## CROSS-MODULE DEPENDENCIES
- For dependencies ON other modules: Use placeholder depends_on: ["CROSS::{module}::{pattern}"]
- Example: depends_on: ["CROSS::B::api-endpoint"] (this module depends on B's api-endpoint task)
- Phase 3 Coordinator resolves to actual task IDs
- For dependencies FROM other modules: Document in task context as "provides_for" annotation
## EXPECTED DELIVERABLES
Task JSON Files (.task/IMPL-${module.prefix}*.json):
- 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
- Task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
- Quantified requirements with explicit counts
- Artifacts integration from context package (filtered for ${module.name})
- **focus_paths enhanced with exploration critical_files (module-scoped)**
- Flow control with pre_analysis steps (include exploration integration_points analysis)
- **CLI Execution IDs and strategies (MANDATORY)**
- Focus ONLY on ${module.name} module scope
## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **cli_execution_id**: Unique ID for CLI execution (format: `{session_id}-IMPL-${module.prefix}{seq}`)
- **cli_execution**: Strategy object based on depends_on:
- No deps → `{ "strategy": "new" }`
- 1 dep (single child) → `{ "strategy": "resume", "resume_from": "parent-cli-id" }`
- 1 dep (multiple children) → `{ "strategy": "fork", "resume_from": "parent-cli-id" }`
- N deps → `{ "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }`
- Cross-module dep → `{ "strategy": "cross_module_fork", "resume_from": "CROSS::{module}::{pattern}" }`
**CLI Execution Strategy Rules**:
1. **new**: Task has no dependencies - starts fresh CLI conversation
2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation
5. **cross_module_fork**: Task depends on task from another module - Phase 3 resolves placeholder
**Execution Command Patterns**:
- new: `ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]`
- resume: `ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write`
- fork: `ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write`
- merge_fork: `ccw cli -p "[prompt]" --resume [merge_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write`
- cross_module_fork: (Phase 3 resolves placeholder, then uses fork pattern)
## QUALITY STANDARDS
Hard Constraints:
- Task count <= 9 for this module (hard limit - coordinate with Phase 3 if exceeded)
- All requirements quantified (explicit counts and enumerated lists)
- Acceptance criteria measurable (include verification commands)
- Artifact references mapped from context package (module-scoped filter)
- Focus paths use absolute paths or clear relative paths from project root
- Cross-module dependencies use CROSS:: placeholder format
## SUCCESS CRITERIA
- Task JSONs saved to .task/ with IMPL-${module.prefix}* naming
- All task JSONs include cli_execution_id and cli_execution strategy
- Cross-module dependencies use CROSS:: placeholder format consistently
- Focus paths scoped to ${module.paths.join(', ')} only
- Return: task count, task IDs, dependency summary (internal + cross-module)
## INSTRUCTIONS
- Generate tasks ONLY for ${module.name} module
- Use task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
- For cross-module dependencies, use: depends_on: ["CROSS::B::api-endpoint"]
- Maximum 9 tasks per module
`
)
);
@@ -485,79 +255,37 @@ await Promise.all(planningTasks);
- Prefix: A, B, C... (assigned by detection order)
- Sequence: 1, 2, 3... (per-module increment)
### Phase 3: Integration (+1 Coordinator Agent, Multi-Module Only)
### Phase 3: Integration (+1 Coordinator, Multi-Module Only)
**Condition**: Only executed when `modules.length >= 2`
**Purpose**: Collect all module tasks, resolve cross-module dependencies, generate unified IMPL_PLAN.md and TODO_LIST.md documents.
**Purpose**: Collect all module tasks, resolve cross-module dependencies, generate unified documents.
**Coordinator Agent Invocation**:
**Integration Logic**:
```javascript
// Wait for all Phase 2B agents to complete
const moduleResults = await Promise.all(planningTasks);
// 1. Collect all module task JSONs
const allTasks = glob('.task/IMPL-*.json').map(loadJson);
// Launch +1 Coordinator Agent
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description="Integrate module tasks and generate unified documents",
prompt=`
## TASK OBJECTIVE
Integrate all module task JSONs, resolve cross-module dependencies, and generate unified IMPL_PLAN.md and TODO_LIST.md
IMPORTANT: This is INTEGRATION ONLY - consolidate existing task JSONs, NOT creating new tasks.
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Context Package: .workflow/active/{session-id}/.process/context-package.json
- Task JSONs: .workflow/active/{session-id}/.task/IMPL-*.json (from Phase 2B)
Output:
- Updated Task JSONs: .workflow/active/{session-id}/.task/IMPL-*.json (resolved dependencies)
- IMPL_PLAN: .workflow/active/{session-id}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/{session-id}/TODO_LIST.md
## CONTEXT METADATA
Session ID: {session-id}
Modules: ${modules.map(m => m.name + '(' + m.prefix + ')').join(', ')}
Module Count: ${modules.length}
## INTEGRATION STEPS
1. Collect all .task/IMPL-*.json, group by module prefix
2. Resolve CROSS:: dependencies → actual task IDs, update task JSONs
3. Generate IMPL_PLAN.md (multi-module format per agent specification)
4. Generate TODO_LIST.md (hierarchical format per agent specification)
## CROSS-MODULE DEPENDENCY RESOLUTION
- Pattern: CROSS::{module}::{pattern} → IMPL-{module}* matching title/context
- Example: CROSS::B::api-endpoint → IMPL-B1 (if B1 title contains "api-endpoint")
- Log unresolved as warnings
## EXPECTED DELIVERABLES
1. Updated Task JSONs with resolved dependency IDs
2. IMPL_PLAN.md - multi-module format with cross-dependency section
3. TODO_LIST.md - hierarchical by module with cross-dependency section
## SUCCESS CRITERIA
- No CROSS:: placeholders remaining in task JSONs
- IMPL_PLAN.md and TODO_LIST.md generated with multi-module structure
- Return: task count, per-module breakdown, resolved dependency count
`
)
```
**Dependency Resolution Algorithm**:
```javascript
function resolveCrossModuleDependency(placeholder, allTasks) {
const [, targetModule, pattern] = placeholder.match(/CROSS::(\w+)::(.+)/);
const candidates = allTasks.filter(t =>
t.id.startsWith(`IMPL-${targetModule}`) &&
(t.title.toLowerCase().includes(pattern.toLowerCase()) ||
t.context?.description?.toLowerCase().includes(pattern.toLowerCase()))
);
return candidates.length > 0
? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id
: placeholder; // Keep for manual resolution
// 2. Resolve cross-module dependencies
for (const task of allTasks) {
if (task.depends_on) {
task.depends_on = task.depends_on.map(dep => {
if (dep.startsWith('CROSS::')) {
// CROSS::B::api-endpoint → find matching IMPL-B* task
const [, targetModule, pattern] = dep.match(/CROSS::(\w+)::(.+)/);
return findTaskByModuleAndPattern(allTasks, targetModule, pattern);
}
return dep;
});
}
}
// 3. Generate unified IMPL_PLAN.md (grouped by module)
generateIMPL_PLAN(allTasks, modules);
// 4. Generate TODO_LIST.md (hierarchical structure)
generateTODO_LIST(allTasks, modules);
```
**Note**: IMPL_PLAN.md and TODO_LIST.md structure definitions are in `action-planning-agent.md`.
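After integration, a simple scan can verify that no placeholders survived (a sketch using ripgrep):
```bash
# Any remaining CROSS:: placeholder means manual resolution is needed
if rg -l 'CROSS::' .workflow/active/{session-id}/.task/; then
  echo "WARN: unresolved cross-module dependencies in the files above"
else
  echo "all cross-module dependencies resolved"
fi
```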

View File

@@ -113,7 +113,7 @@ Phase 2: Agent Execution (Document Generation)
// Existing test patterns and coverage analysis
},
"mcp_capabilities": {
"codex_lens": true,
"code_index": true,
"exa_code": true,
"exa_web": true
}
@@ -152,14 +152,9 @@ Phase 2: Agent Execution (Document Generation)
roleAnalysisPaths.forEach(path => Read(path));
```
5. **Load Conflict Resolution** (from conflict-resolution.json, if exists)
5. **Load Conflict Resolution** (from context-package.json, if exists)
```javascript
// Check for new conflict-resolution.json format
if (contextPackage.conflict_detection?.resolution_file) {
Read(contextPackage.conflict_detection.resolution_file) // .process/conflict-resolution.json
}
// Fallback: legacy brainstorm_artifacts path
else if (contextPackage.brainstorm_artifacts?.conflict_resolution?.exists) {
if (contextPackage.brainstorm_artifacts.conflict_resolution?.exists) {
Read(contextPackage.brainstorm_artifacts.conflict_resolution.path)
}
```
@@ -194,7 +189,6 @@ const templatePath = hasCliExecuteFlag
```javascript
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description="Generate TDD task JSON and implementation plan",
prompt=`
## Execution Context
@@ -229,7 +223,7 @@ If conflict_risk was medium/high, modifications have been applied to:
- **guidance-specification.md**: Design decisions updated to resolve conflicts
- **Role analyses (*.md)**: Recommendations adjusted for compatibility
- **context-package.json**: Marked as "resolved" with conflict IDs
- Conflict resolution results stored in conflict-resolution.json
- NO separate CONFLICT_RESOLUTION.md file (conflicts resolved in-place)
### MCP Analysis Results (Optional)
**Code Structure**: {mcp_code_index_results}
@@ -239,6 +233,15 @@ If conflict_risk was medium/high, modifications have been applied to:
**Agent Configuration Reference**: All TDD task generation rules, quantification requirements, Red-Green-Refactor cycle structure, quality standards, and execution details are defined in action-planning-agent.
Refer to: @.claude/agents/action-planning-agent.md for:
- TDD Task Decomposition Standards
- Red-Green-Refactor Cycle Requirements
- Quantification Requirements (MANDATORY)
- 5-Field Task JSON Schema
- IMPL_PLAN.md Structure (TDD variant)
- TODO_LIST.md Format
- TDD Execution Flow & Quality Validation
### TDD-Specific Requirements Summary
#### Task Structure Philosophy
@@ -330,7 +333,7 @@ Generate all three documents and report completion status:
- TDD cycles configured: N cycles with quantified test cases
- Artifacts integrated: synthesis-spec, guidance-specification, N role analyses
- Test context integrated: existing patterns and coverage
- MCP enhancements: CodexLens, exa-research
- MCP enhancements: code-index, exa-research
- Session ready for TDD execution: /workflow:execute
`
)
@@ -370,12 +373,10 @@ const agentContext = {
.flatMap(role => role.files)
.map(file => Read(file.path)),
// Load conflict resolution if exists (prefer new JSON format)
conflict_resolution: context_package.conflict_detection?.resolution_file
? Read(context_package.conflict_detection.resolution_file) // .process/conflict-resolution.json
: (brainstorm_artifacts?.conflict_resolution?.exists
? Read(brainstorm_artifacts.conflict_resolution.path)
: null),
// Load conflict resolution if exists (from context package)
conflict_resolution: brainstorm_artifacts.conflict_resolution?.exists
? Read(brainstorm_artifacts.conflict_resolution.path)
: null,
// Optional MCP enhancements
mcp_analysis: executeMcpDiscovery()
@@ -407,7 +408,7 @@ This section provides quick reference for TDD task JSON structure. For complete
│ ├── IMPL-3.2.json # Complex feature subtask (if needed)
│ └── ...
└── .process/
├── conflict-resolution.json # Conflict resolution results (if conflict_risk ≥ medium)
├── CONFLICT_RESOLUTION.md # Conflict resolution strategies (if conflict_risk ≥ medium)
├── test-context-package.json # Test coverage analysis
├── context-package.json # Input from context-gather
├── context_package_path # Path to smart context package

View File

@@ -76,7 +76,6 @@ Phase 3: Output Validation (Command)
```javascript
Task(
subagent_type="cli-execution-agent",
run_in_background=false,
description="Analyze test coverage gaps and generate test strategy",
prompt=`
## TASK OBJECTIVE
@@ -90,7 +89,7 @@ Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.t
## EXECUTION STEPS
1. Execute Gemini analysis:
ccw cli -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --tool gemini --mode write --cd .workflow/active/{test_session_id}/.process
cd .workflow/active/{test_session_id}/.process && gemini -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --approval-mode yolo
2. Generate TEST_ANALYSIS_RESULTS.md:
Synthesize gemini-test-analysis.md into standardized format for task generation

View File

@@ -14,7 +14,7 @@ allowed-tools: Task(*), Read(*), Glob(*)
Orchestrator command that invokes `test-context-search-agent` to gather comprehensive test coverage context for test generation workflows. Generates standardized `test-context-package.json` with coverage analysis, framework detection, and source implementation context.
**Agent**: `test-context-search-agent` (`.claude/agents/test-context-search-agent.md`)
## Core Philosophy
@@ -86,9 +86,9 @@ if (file_exists(testContextPath)) {
```javascript
Task(
subagent_type="test-context-search-agent",
run_in_background=false,
description="Gather test coverage context",
prompt=`
You are executing as test-context-search-agent (.claude/agents/test-context-search-agent.md).
## Execution Mode
**PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
@@ -228,7 +228,7 @@ Refer to `test-context-search-agent.md` Phase 3.2 for complete `test-context-pac
## Notes
- **Detection-first**: Always check for existing test-context-package before invoking agent
- **Agent autonomy**: Agent handles all coverage analysis logic per `.claude/agents/test-context-search-agent.md`
- **No redundancy**: This command is a thin orchestrator; all logic lives in the agent
- **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, Go testing, etc.
- **Coverage focus**: Primary goal is identifying implementation files without tests

View File

@@ -94,7 +94,6 @@ Phase 2: Test Document Generation (Agent)
```javascript
Task(
subagent_type="action-planning-agent",
run_in_background=false,
description="Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
prompt=`
## TASK OBJECTIVE
@@ -107,6 +106,8 @@ CRITICAL:
- Follow the progressive loading strategy defined in your agent specification (load context incrementally from memory-first approach)
## AGENT CONFIGURATION REFERENCE
All test task generation rules, schemas, and quality standards are defined in your agent specification:
@.claude/agents/action-planning-agent.md
Refer to your specification for:
- Test Task JSON Schema (6-field structure with test-specific metadata)

View File

@@ -320,7 +320,7 @@ Read({base_path}/prototypes/{target}-style-{style_id}-layout-{layout_id}.html)
### Step 1: Run Preview Generation Script
```bash
bash(ccw tool exec ui_generate_preview '{"prototypesDir":"{base_path}/prototypes"}')
bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
```
**Script generates**:
@@ -432,7 +432,7 @@ bash(test -f {base_path}/prototypes/compare.html && echo "exists")
bash(mkdir -p {base_path}/prototypes)
# Run preview script
bash(ccw tool exec ui_generate_preview '{"prototypesDir":"{base_path}/prototypes"}')
bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
```
## Output Structure
@@ -467,7 +467,7 @@ ERROR: Agent assembly failed
→ Check inputs exist, validate JSON structure
ERROR: Script permission denied
Verify ccw tool is available: ccw tool list
chmod +x ~/.claude/scripts/ui-generate-preview.sh
```
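Before retrying, it may help to confirm the tool is actually registered (illustrative; tool name taken from the invocation above):
```bash
ccw tool list | grep -q ui_generate_preview \
  && echo "tool available" \
  || echo "ui_generate_preview not registered"
```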
### Recovery Strategies

View File

@@ -106,7 +106,7 @@ echo " Output: $base_path"
# 3. Discover files using script
discovery_file="${intermediates_dir}/discovered-files.json"
ccw tool exec discover_design_files '{"sourceDir":"'"$source"'","outputPath":"'"$discovery_file"'"}'
~/.claude/scripts/discover-design-files.sh "$source" "$discovery_file"
echo " Output: $discovery_file"
```
@@ -161,7 +161,6 @@ echo "[Phase 1] Starting parallel agent analysis (3 agents)"
```javascript
Task(subagent_type="ui-design-agent",
run_in_background=false,
prompt="[STYLE_TOKENS_EXTRACTION]
Extract visual design tokens from code files using code import extraction pattern.
@@ -181,14 +180,14 @@ Task(subagent_type="ui-design-agent",
- Pattern: rg → Extract values → Compare → If different → Read full context with comments → Record conflict
- Alternative (if many files): Execute CLI analysis for comprehensive report:
\`\`\`bash
ccw cli -p \"
cd ${source} && gemini -p \"
PURPOSE: Detect color token conflicts across all CSS/SCSS/JS files
TASK: • Scan all files for color definitions • Identify conflicting values • Extract semantic comments
MODE: analysis
CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
EXPECTED: JSON report listing conflicts with file:line, values, semantic context
RULES: Focus on core tokens | Report ALL variants | analysis=READ-ONLY
\" --tool gemini --mode analysis --cd ${source}
\"
\`\`\`
**Step 1: Load file list**
@@ -277,7 +276,6 @@ Task(subagent_type="ui-design-agent",
```javascript
Task(subagent_type="ui-design-agent",
run_in_background=false,
prompt="[ANIMATION_TOKEN_GENERATION_TASK]
Extract animation tokens from code files using code import extraction pattern.
@@ -297,14 +295,14 @@ Task(subagent_type="ui-design-agent",
- Pattern: rg → Identify animation types → Map framework usage → Prioritize extraction targets
- Alternative (if complex framework mix): Execute CLI analysis for comprehensive report:
\`\`\`bash
ccw cli -p \"
cd ${source} && gemini -p \"
PURPOSE: Detect animation frameworks and patterns
TASK: • Identify frameworks • Map animation patterns • Categorize by complexity
MODE: analysis
CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
EXPECTED: JSON report listing frameworks, animation types, file locations
RULES: Focus on framework consistency | Map all animations | analysis=READ-ONLY
\" --tool gemini --mode analysis --cd ${source}
\"
\`\`\`
**Step 1: Load file list**
@@ -357,7 +355,6 @@ Task(subagent_type="ui-design-agent",
```javascript
Task(subagent_type="ui-design-agent",
run_in_background=false,
prompt="[LAYOUT_TEMPLATE_GENERATION_TASK]
Extract layout patterns from code files using code import extraction pattern.
@@ -377,14 +374,14 @@ Task(subagent_type="ui-design-agent",
- Pattern: rg → Count occurrences → Classify by frequency → Prioritize universal components
- Alternative (if large codebase): Execute CLI analysis for comprehensive categorization:
\`\`\`bash
ccw cli -p \"
cd ${source} && gemini -p \"
PURPOSE: Classify components as universal vs specialized
TASK: • Identify UI components • Classify reusability • Map layout systems
MODE: analysis
CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts @**/*.html
EXPECTED: JSON report categorizing components, layout patterns, naming conventions
RULES: Focus on component reusability | Identify layout systems | analysis=READ-ONLY
\" --tool gemini --mode analysis --cd ${source}
\"
\`\`\`
**Step 1: Load file list**

View File

@@ -0,0 +1,35 @@
#!/bin/bash
# Classify folders by type for documentation generation
# Usage: get_modules_by_depth.sh | classify-folders.sh
# Output: folder_path|folder_type|code:N|dirs:N
while IFS='|' read -r depth_info path_info files_info types_info claude_info; do
# Extract folder path from format "path:./src/modules"
folder_path=$(echo "$path_info" | cut -d':' -f2-)
# Skip if path extraction failed
[[ -z "$folder_path" || ! -d "$folder_path" ]] && continue
# Count code files (maxdepth 1)
code_files=$(find "$folder_path" -maxdepth 1 -type f \
\( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \
-o -name "*.py" -o -name "*.go" -o -name "*.java" -o -name "*.rs" \
-o -name "*.c" -o -name "*.cpp" -o -name "*.cs" \) \
2>/dev/null | wc -l)
# Count subdirectories
subfolders=$(find "$folder_path" -maxdepth 1 -type d \
-not -path "$folder_path" 2>/dev/null | wc -l)
# Determine folder type
if [[ $code_files -gt 0 ]]; then
folder_type="code" # API.md + README.md
elif [[ $subfolders -gt 0 ]]; then
folder_type="navigation" # README.md only
else
folder_type="skip" # Empty or no relevant content
fi
# Output classification result
echo "${folder_path}|${folder_type}|code:${code_files}|dirs:${subfolders}"
done
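Example run (paths and counts are illustrative; the input format is whatever `get_modules_by_depth.sh` emits):
```bash
$ get_modules_by_depth.sh | classify-folders.sh
./src/modules|code|code:12|dirs:3
./docs|navigation|code:0|dirs:5
```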

View File

@@ -0,0 +1,225 @@
#!/bin/bash
# Convert design-tokens.json to tokens.css with Google Fonts import and global font rules
# Usage: cat design-tokens.json | ./convert_tokens_to_css.sh > tokens.css
# Or: ./convert_tokens_to_css.sh < design-tokens.json > tokens.css
# Read JSON from stdin
json_input=$(cat)
# Extract metadata for header comment
style_name=$(echo "$json_input" | jq -r '.meta.name // "Unknown Style"' 2>/dev/null || echo "Design Tokens")
# Generate header
cat <<EOF
/* ========================================
Design Tokens: ${style_name}
Auto-generated from design-tokens.json
======================================== */
EOF
# ========================================
# Google Fonts Import Generation
# ========================================
# Extract font families and generate Google Fonts import URL
fonts=$(echo "$json_input" | jq -r '
.typography.font_family | to_entries[] | .value
' 2>/dev/null | sed "s/'//g" | cut -d',' -f1 | sort -u)
# Build Google Fonts URL
google_fonts_url="https://fonts.googleapis.com/css2?"
font_params=""
while IFS= read -r font; do
# Skip system fonts and empty lines
if [[ -z "$font" ]] || [[ "$font" =~ ^(system-ui|sans-serif|serif|monospace|cursive|fantasy)$ ]]; then
continue
fi
# Special handling for common web fonts with weights
case "$font" in
"Comic Neue")
font_params+="family=Comic+Neue:wght@300;400;700&"
;;
"Patrick Hand"|"Caveat"|"Dancing Script"|"Architects Daughter"|"Indie Flower"|"Shadows Into Light"|"Permanent Marker")
# URL-encode font name and add common weights
encoded_font=$(echo "$font" | sed 's/ /+/g')
font_params+="family=${encoded_font}:wght@400;700&"
;;
"Segoe Print"|"Bradley Hand"|"Chilanka")
# These are system fonts, skip
;;
*)
# Generic font: add with default weights
encoded_font=$(echo "$font" | sed 's/ /+/g')
font_params+="family=${encoded_font}:wght@400;500;600;700&"
;;
esac
done <<< "$fonts"
# Generate @import if we have fonts
if [[ -n "$font_params" ]]; then
# Remove trailing &
font_params="${font_params%&}"
echo "/* Import Web Fonts */"
echo "@import url('${google_fonts_url}${font_params}&display=swap');"
echo ""
fi
# ========================================
# CSS Custom Properties Generation
# ========================================
echo ":root {"
# Colors - Brand
echo " /* Colors - Brand */"
echo "$json_input" | jq -r '
.colors.brand | to_entries[] |
" --color-brand-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Colors - Surface
echo " /* Colors - Surface */"
echo "$json_input" | jq -r '
.colors.surface | to_entries[] |
" --color-surface-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Colors - Semantic
echo " /* Colors - Semantic */"
echo "$json_input" | jq -r '
.colors.semantic | to_entries[] |
" --color-semantic-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Colors - Text
echo " /* Colors - Text */"
echo "$json_input" | jq -r '
.colors.text | to_entries[] |
" --color-text-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Colors - Border
echo " /* Colors - Border */"
echo "$json_input" | jq -r '
.colors.border | to_entries[] |
" --color-border-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Typography - Font Family
echo " /* Typography - Font Family */"
echo "$json_input" | jq -r '
.typography.font_family | to_entries[] |
" --font-family-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Typography - Font Size
echo " /* Typography - Font Size */"
echo "$json_input" | jq -r '
.typography.font_size | to_entries[] |
" --font-size-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Typography - Font Weight
echo " /* Typography - Font Weight */"
echo "$json_input" | jq -r '
.typography.font_weight | to_entries[] |
" --font-weight-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Typography - Line Height
echo " /* Typography - Line Height */"
echo "$json_input" | jq -r '
.typography.line_height | to_entries[] |
" --line-height-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Typography - Letter Spacing
echo " /* Typography - Letter Spacing */"
echo "$json_input" | jq -r '
.typography.letter_spacing | to_entries[] |
" --letter-spacing-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Spacing
echo " /* Spacing */"
echo "$json_input" | jq -r '
.spacing | to_entries[] |
" --spacing-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Border Radius
echo " /* Border Radius */"
echo "$json_input" | jq -r '
.border_radius | to_entries[] |
" --border-radius-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Shadows
echo " /* Shadows */"
echo "$json_input" | jq -r '
.shadows | to_entries[] |
" --shadow-\(.key): \(.value);"
' 2>/dev/null
echo ""
# Breakpoints
echo " /* Breakpoints */"
echo "$json_input" | jq -r '
.breakpoints | to_entries[] |
" --breakpoint-\(.key): \(.value);"
' 2>/dev/null
echo "}"
echo ""
# ========================================
# Global Font Application
# ========================================
echo "/* ========================================"
echo " Global Font Application"
echo " ======================================== */"
echo ""
echo "body {"
echo " font-family: var(--font-family-body);"
echo " font-size: var(--font-size-base);"
echo " line-height: var(--line-height-normal);"
echo " color: var(--color-text-primary);"
echo " background-color: var(--color-surface-background);"
echo "}"
echo ""
echo "h1, h2, h3, h4, h5, h6, legend {"
echo " font-family: var(--font-family-heading);"
echo "}"
echo ""
echo "/* Reset default margins for better control */"
echo "* {"
echo " margin: 0;"
echo " padding: 0;"
echo " box-sizing: border-box;"
echo "}"
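A minimal end-to-end sketch with a hand-written token file; field names follow the jq paths above, and groups the file omits are simply skipped thanks to the `2>/dev/null` guards:

```bash
cat > design-tokens.json <<'EOF'
{
  "meta": { "name": "Demo Style" },
  "colors": { "brand": { "primary": "#3b82f6" } },
  "typography": { "font_family": { "body": "'Inter', sans-serif" } }
}
EOF
./convert_tokens_to_css.sh < design-tokens.json > tokens.css
grep -- --color-brand-primary tokens.css
# → --color-brand-primary: #3b82f6;
```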

View File

@@ -0,0 +1,157 @@
#!/bin/bash
# Detect modules affected by git changes or recent modifications
# Usage: detect_changed_modules.sh [format]
# format: list|grouped|paths (default: paths)
#
# Features:
# - Respects .gitignore patterns (current directory or git root)
# - Detects git changes (staged, unstaged, or last commit)
# - Falls back to recently modified files (last 24 hours)
# Build exclusion filters from .gitignore
build_exclusion_filters() {
local filters=""
# Common system/cache directories to exclude
local system_excludes=(
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
"coverage" ".nyc_output" "logs" "tmp" "temp"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Find and parse .gitignore (current dir first, then git root)
local gitignore_file=""
# Check current directory first
if [ -f ".gitignore" ]; then
gitignore_file=".gitignore"
else
# Try to find git root and check for .gitignore there
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
gitignore_file="$git_root/.gitignore"
fi
fi
# Parse .gitignore if found
if [ -n "$gitignore_file" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Skip wildcard patterns (too complex for simple find)
[[ "$line" =~ \* ]] && continue
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < "$gitignore_file"
fi
echo "$filters"
}
detect_changed_modules() {
local format="${1:-paths}"
local changed_files=""
local affected_dirs=""
local exclusion_filters=$(build_exclusion_filters)
# Step 1: Try to get git changes (staged + unstaged)
if git rev-parse --git-dir > /dev/null 2>&1; then
changed_files=$(git diff --name-only HEAD 2>/dev/null; git diff --name-only --cached 2>/dev/null)
# If no changes in working directory, check last commit
if [ -z "$changed_files" ]; then
changed_files=$(git diff --name-only HEAD~1 HEAD 2>/dev/null)
fi
fi
# Step 2: If no git changes, find recently modified source files (last 24 hours)
# Apply exclusion filters from .gitignore
if [ -z "$changed_files" ]; then
changed_files=$(eval "find . -type f \( \
-name '*.md' -o \
-name '*.js' -o -name '*.ts' -o -name '*.jsx' -o -name '*.tsx' -o \
-name '*.py' -o -name '*.go' -o -name '*.rs' -o \
-name '*.java' -o -name '*.cpp' -o -name '*.c' -o -name '*.h' -o \
-name '*.sh' -o -name '*.ps1' -o \
-name '*.json' -o -name '*.yaml' -o -name '*.yml' \
\) $exclusion_filters -mtime -1 2>/dev/null")
fi
# Step 3: Extract unique parent directories
if [ -n "$changed_files" ]; then
affected_dirs=$(echo "$changed_files" | \
sed 's|/[^/]*$||' | \
grep -v '^\.$' | \
sort -u)
# Add current directory if files are in root
if echo "$changed_files" | grep -q '^[^/]*$'; then
affected_dirs=$(echo -e ".\n$affected_dirs" | sort -u)
fi
fi
# Step 4: Output in requested format
case "$format" in
"list")
if [ -n "$affected_dirs" ]; then
echo "$affected_dirs" | while read dir; do
if [ -d "$dir" ]; then
local file_count=$(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l)
local depth=$(echo "$dir" | tr -cd '/' | wc -c)
if [ "$dir" = "." ]; then depth=0; fi
local types=$(find "$dir" -maxdepth 1 -type f -name "*.*" 2>/dev/null | \
grep -E '\.[^/]*$' | sed 's/.*\.//' | sort -u | tr '\n' ',' | sed 's/,$//')
local has_claude="no"
[ -f "$dir/CLAUDE.md" ] && has_claude="yes"
echo "depth:$depth|path:$dir|files:$file_count|types:[$types]|has_claude:$has_claude|status:changed"
fi
done
fi
;;
"grouped")
if [ -n "$affected_dirs" ]; then
echo "📊 Affected modules by changes:"
# Group by depth
echo "$affected_dirs" | while read dir; do
if [ -d "$dir" ]; then
local depth=$(echo "$dir" | tr -cd '/' | wc -c)
if [ "$dir" = "." ]; then depth=0; fi
local claude_indicator=""
[ -f "$dir/CLAUDE.md" ] && claude_indicator=" [✓]"
echo "$depth:$dir$claude_indicator"
fi
done | sort -n | awk -F: '
{
if ($1 != prev_depth) {
if (prev_depth != "") print ""
print " 📁 Depth " $1 ":"
prev_depth = $1
}
print " - " $2 " (changed)"
}'
else
echo "📊 No recent changes detected"
fi
;;
"paths"|*)
echo "$affected_dirs"
;;
esac
}
# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
detect_changed_modules "$@"
fi
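Typical invocations (format defaults to `paths`); when neither git changes nor files modified in the last 24 hours exist, `paths` and `list` print nothing and `grouped` reports that no changes were detected:

```bash
./detect_changed_modules.sh            # bare directory paths, one per line
./detect_changed_modules.sh list       # records like depth:2|path:src/auth|files:4|types:[ts]|has_claude:no|status:changed
./detect_changed_modules.sh grouped    # human-readable summary grouped by depth
```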

View File

@@ -0,0 +1,83 @@
#!/usr/bin/env bash
# discover-design-files.sh - Discover design-related files and output JSON
# Usage: discover-design-files.sh <source_dir> <output_json>
set -euo pipefail
source_dir="${1:-.}"
output_json="${2:-discovered-files.json}"
# Function to find and format files as JSON array
find_files() {
local pattern="$1"
local files
files=$(eval "find \"$source_dir\" -type f $pattern \
! -path \"*/node_modules/*\" \
! -path \"*/dist/*\" \
! -path \"*/.git/*\" \
! -path \"*/build/*\" \
! -path \"*/coverage/*\" \
2>/dev/null | sort || true")
local count
if [ -z "$files" ]; then
count=0
else
count=$(echo "$files" | grep -c . || echo 0)
fi
local json_files=""
if [ "$count" -gt 0 ]; then
json_files=$(echo "$files" | awk '{printf "\"%s\"%s\n", $0, (NR<'$count'?",":"")}' | tr '\n' ' ')
fi
echo "$count|$json_files"
}
# Discover CSS/SCSS files
css_result=$(find_files '\( -name "*.css" -o -name "*.scss" \)')
css_count=${css_result%%|*}
css_files=${css_result#*|}
# Discover JS/TS files (all framework files)
js_result=$(find_files '\( -name "*.js" -o -name "*.ts" -o -name "*.jsx" -o -name "*.tsx" -o -name "*.mjs" -o -name "*.cjs" -o -name "*.vue" -o -name "*.svelte" \)')
js_count=${js_result%%|*}
js_files=${js_result#*|}
# Discover HTML files
html_result=$(find_files '-name "*.html"')
html_count=${html_result%%|*}
html_files=${html_result#*|}
# Calculate total
total_count=$((css_count + js_count + html_count))
# Generate JSON
cat > "$output_json" << EOF
{
"discovery_time": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"source_directory": "$(cd "$source_dir" && pwd)",
"file_types": {
"css": {
"count": $css_count,
"files": [${css_files}]
},
"js": {
"count": $js_count,
"files": [${js_files}]
},
"html": {
"count": $html_count,
"files": [${html_files}]
}
},
"total_files": $total_count
}
EOF
# Ensure file is fully written and synchronized to disk
# This prevents race conditions when the file is immediately read by another process
sync "$output_json" 2>/dev/null || sync # Sync specific file, fallback to full sync
sleep 0.1 # Additional safety: 100ms delay for filesystem metadata update
echo "Discovered: CSS=$css_count, JS=$js_count, HTML=$html_count (Total: $total_count)" >&2
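A sample run against a source tree, then reading counts back out of the generated JSON:

```bash
./discover-design-files.sh ./src design-files.json
# stderr: Discovered: CSS=4, JS=31, HTML=2 (Total: 37)   (numbers depend on the tree)
jq '.file_types.css.count, .total_files' design-files.json
```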

View File

@@ -0,0 +1,243 @@
/**
* Animation & Transition Extraction Script
*
* Extracts CSS animations, transitions, and transform patterns from a live web page.
* This script runs in the browser context via Chrome DevTools Protocol.
*
* @returns {Object} Structured animation data
*/
(() => {
const extractionTimestamp = new Date().toISOString();
const currentUrl = window.location.href;
/**
* Parse transition shorthand or individual properties
*/
function parseTransition(element, computedStyle) {
const transition = computedStyle.transition || computedStyle.webkitTransition;
if (!transition || transition === 'none' || transition === 'all 0s ease 0s') {
return null;
}
// Parse shorthand: "property duration easing delay"
const transitions = [];
const parts = transition.split(/,\s*/);
parts.forEach(part => {
const match = part.match(/^(\S+)\s+([\d.]+m?s)\s+(\S+)(?:\s+([\d.]+m?s))?/);
if (match) {
transitions.push({
property: match[1],
duration: match[2],
easing: match[3],
delay: match[4] || '0s'
});
}
});
return transitions.length > 0 ? transitions : null;
}
/**
* Extract animation name and properties
*/
function parseAnimation(element, computedStyle) {
const animationName = computedStyle.animationName || computedStyle.webkitAnimationName;
if (!animationName || animationName === 'none') {
return null;
}
return {
name: animationName,
duration: computedStyle.animationDuration || computedStyle.webkitAnimationDuration,
easing: computedStyle.animationTimingFunction || computedStyle.webkitAnimationTimingFunction,
delay: computedStyle.animationDelay || computedStyle.webkitAnimationDelay || '0s',
iterationCount: computedStyle.animationIterationCount || computedStyle.webkitAnimationIterationCount || '1',
direction: computedStyle.animationDirection || computedStyle.webkitAnimationDirection || 'normal',
fillMode: computedStyle.animationFillMode || computedStyle.webkitAnimationFillMode || 'none'
};
}
/**
* Extract transform value
*/
function parseTransform(computedStyle) {
const transform = computedStyle.transform || computedStyle.webkitTransform;
if (!transform || transform === 'none') {
return null;
}
return transform;
}
/**
* Get element selector (simplified for readability)
*/
function getSelector(element) {
if (element.id) {
return `#${element.id}`;
}
if (element.className && typeof element.className === 'string') {
const classes = element.className.trim().split(/\s+/).slice(0, 2).join('.');
if (classes) {
return `.${classes}`;
}
}
return element.tagName.toLowerCase();
}
/**
* Extract all stylesheets and find @keyframes rules
*/
function extractKeyframes() {
const keyframes = {};
try {
// Iterate through all stylesheets
Array.from(document.styleSheets).forEach(sheet => {
try {
// Skip external stylesheets due to CORS
if (sheet.href && !sheet.href.startsWith(window.location.origin)) {
return;
}
Array.from(sheet.cssRules || sheet.rules || []).forEach(rule => {
// Check for @keyframes rules
if (rule.type === CSSRule.KEYFRAMES_RULE || rule.type === CSSRule.WEBKIT_KEYFRAMES_RULE) {
const name = rule.name;
const frames = {};
Array.from(rule.cssRules || []).forEach(keyframe => {
const key = keyframe.keyText; // e.g., "0%", "50%", "100%"
frames[key] = keyframe.style.cssText;
});
keyframes[name] = frames;
}
});
} catch (e) {
// Skip stylesheets that can't be accessed (CORS)
console.warn('Cannot access stylesheet:', sheet.href, e.message);
}
});
} catch (e) {
console.error('Error extracting keyframes:', e);
}
return keyframes;
}
/**
* Scan visible elements for animations and transitions
*/
function scanElements() {
const elements = document.querySelectorAll('*');
const transitionData = [];
const animationData = [];
const transformData = [];
const uniqueTransitions = new Set();
const uniqueAnimations = new Set();
const uniqueEasings = new Set();
const uniqueDurations = new Set();
elements.forEach(element => {
// Skip invisible elements
const rect = element.getBoundingClientRect();
if (rect.width === 0 && rect.height === 0) {
return;
}
const computedStyle = window.getComputedStyle(element);
// Extract transitions
const transitions = parseTransition(element, computedStyle);
if (transitions) {
const selector = getSelector(element);
transitions.forEach(t => {
const key = `${t.property}-${t.duration}-${t.easing}`;
if (!uniqueTransitions.has(key)) {
uniqueTransitions.add(key);
transitionData.push({
selector,
...t
});
uniqueEasings.add(t.easing);
uniqueDurations.add(t.duration);
}
});
}
// Extract animations
const animation = parseAnimation(element, computedStyle);
if (animation) {
const selector = getSelector(element);
const key = `${animation.name}-${animation.duration}`;
if (!uniqueAnimations.has(key)) {
uniqueAnimations.add(key);
animationData.push({
selector,
...animation
});
uniqueEasings.add(animation.easing);
uniqueDurations.add(animation.duration);
}
}
// Extract transforms (on hover/active, we only get current state)
const transform = parseTransform(computedStyle);
if (transform) {
const selector = getSelector(element);
transformData.push({
selector,
transform
});
}
});
return {
transitions: transitionData,
animations: animationData,
transforms: transformData,
uniqueEasings: Array.from(uniqueEasings),
uniqueDurations: Array.from(uniqueDurations)
};
}
/**
* Main extraction function
*/
function extractAnimations() {
const elementData = scanElements();
const keyframes = extractKeyframes();
return {
metadata: {
timestamp: extractionTimestamp,
url: currentUrl,
method: 'chrome-devtools',
version: '1.0.0'
},
transitions: elementData.transitions,
animations: elementData.animations,
transforms: elementData.transforms,
keyframes: keyframes,
summary: {
total_transitions: elementData.transitions.length,
total_animations: elementData.animations.length,
total_transforms: elementData.transforms.length,
total_keyframes: Object.keys(keyframes).length,
unique_easings: elementData.uniqueEasings,
unique_durations: elementData.uniqueDurations
}
};
}
// Execute extraction
return extractAnimations();
})();
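The IIFE is intended to be evaluated in the page (DevTools console or a CDP `evaluate` call) and returns a plain object. Assuming that return value has been serialized to `animations.json`, the summary is easy to slice with jq:

```bash
# Assumes the IIFE's return value was saved as animations.json
jq '.summary | {total_transitions, total_animations, unique_easings}' animations.json
```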

View File

@@ -0,0 +1,118 @@
/**
* Extract Computed Styles from DOM
*
* This script extracts real CSS computed styles from a webpage's DOM
* to provide accurate design tokens for UI replication.
*
* Usage: Execute this function via Chrome DevTools evaluate_script
*/
(() => {
/**
* Extract unique values from a set and sort them
*/
const uniqueSorted = (set) => {
return Array.from(set)
.filter(v => v && v !== 'none' && v !== '0px' && v !== 'rgba(0, 0, 0, 0)')
.sort();
};
/**
* Parse rgb/rgba to OKLCH format (placeholder - returns original for now)
*/
const toOKLCH = (color) => {
// TODO: Implement actual RGB to OKLCH conversion
// For now, return the original color with a note
return `${color} /* TODO: Convert to OKLCH */`;
};
/**
* Extract only key styles from an element
*/
const extractKeyStyles = (element) => {
const s = window.getComputedStyle(element);
return {
color: s.color,
bg: s.backgroundColor,
borderRadius: s.borderRadius,
boxShadow: s.boxShadow,
fontSize: s.fontSize,
fontWeight: s.fontWeight,
padding: s.padding,
margin: s.margin
};
};
/**
* Main extraction function - extract all critical design tokens
*/
const extractDesignTokens = () => {
// Include all key UI elements
const selectors = [
'button', '.btn', '[role="button"]',
'input', 'textarea', 'select',
'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
'.card', 'article', 'section',
'a', 'p', 'nav', 'header', 'footer'
];
// Collect all design tokens
const tokens = {
colors: new Set(),
borderRadii: new Set(),
shadows: new Set(),
fontSizes: new Set(),
fontWeights: new Set(),
spacing: new Set()
};
// Extract from all elements
selectors.forEach(selector => {
try {
const elements = document.querySelectorAll(selector);
elements.forEach(element => {
const s = extractKeyStyles(element);
// Collect all tokens (no limits)
if (s.color && s.color !== 'rgba(0, 0, 0, 0)') tokens.colors.add(s.color);
if (s.bg && s.bg !== 'rgba(0, 0, 0, 0)') tokens.colors.add(s.bg);
if (s.borderRadius && s.borderRadius !== '0px') tokens.borderRadii.add(s.borderRadius);
if (s.boxShadow && s.boxShadow !== 'none') tokens.shadows.add(s.boxShadow);
if (s.fontSize) tokens.fontSizes.add(s.fontSize);
if (s.fontWeight) tokens.fontWeights.add(s.fontWeight);
// Extract all spacing values
[s.padding, s.margin].forEach(val => {
if (val && val !== '0px') {
val.split(' ').forEach(v => {
if (v && v !== '0px') tokens.spacing.add(v);
});
}
});
});
} catch (e) {
console.warn(`Error: ${selector}`, e);
}
});
// Return all tokens (no element details to save context)
return {
metadata: {
extractedAt: new Date().toISOString(),
url: window.location.href,
method: 'computed-styles'
},
tokens: {
colors: uniqueSorted(tokens.colors),
borderRadii: uniqueSorted(tokens.borderRadii), // ALL radius values
shadows: uniqueSorted(tokens.shadows), // ALL shadows
fontSizes: uniqueSorted(tokens.fontSizes),
fontWeights: uniqueSorted(tokens.fontWeights),
spacing: uniqueSorted(tokens.spacing)
}
};
};
// Execute and return results
return extractDesignTokens();
})();
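As with the animation extractor, the return value is a plain object; assuming it was saved as `computed-styles.json`:

```bash
jq '.tokens.colors | length' computed-styles.json   # number of distinct colors captured
jq -r '.tokens.fontSizes[]' computed-styles.json    # one font size per line
```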

View File

@@ -0,0 +1,411 @@
/**
* Extract Layout Structure from DOM - Enhanced Version
*
* Extracts real layout information from DOM to provide accurate
* structural data for UI replication.
*
* Features:
* - Framework detection (Nuxt.js, Next.js, React, Vue, Angular)
* - Multi-strategy container detection (strict → relaxed → class-based → framework-specific)
* - Intelligent main-content detection with support for common class names
* - Supports modern SPA frameworks
* - Detects non-semantic main containers (.main, .content, etc.)
* - Progressive exploration: Auto-discovers missing selectors when standard patterns fail
* - Suggests new class names to add to script based on actual page structure
*
* Progressive Exploration:
* When fewer than 3 main containers are found, the script automatically:
* 1. Analyzes all large visible containers (≥500×300px)
* 2. Extracts class name patterns (main/content/wrapper/container/page/etc.)
* 3. Suggests new selectors to add to the script
* 4. Returns exploration data in result.exploration
*
* Usage: Execute via Chrome DevTools evaluate_script
* Version: 2.2.0
*/
(() => {
/**
* Get element's bounding box relative to viewport
*/
const getBounds = (element) => {
const rect = element.getBoundingClientRect();
return {
x: Math.round(rect.x),
y: Math.round(rect.y),
width: Math.round(rect.width),
height: Math.round(rect.height)
};
};
/**
* Extract layout properties from an element
*/
const extractLayoutProps = (element) => {
const s = window.getComputedStyle(element);
return {
// Core layout
display: s.display,
position: s.position,
// Flexbox
flexDirection: s.flexDirection,
justifyContent: s.justifyContent,
alignItems: s.alignItems,
flexWrap: s.flexWrap,
gap: s.gap,
// Grid
gridTemplateColumns: s.gridTemplateColumns,
gridTemplateRows: s.gridTemplateRows,
gridAutoFlow: s.gridAutoFlow,
// Dimensions
width: s.width,
height: s.height,
maxWidth: s.maxWidth,
minWidth: s.minWidth,
// Spacing
padding: s.padding,
margin: s.margin
};
};
/**
* Identify layout pattern for an element
*/
const identifyPattern = (props) => {
const { display, flexDirection, gridTemplateColumns } = props;
if (display === 'flex' || display === 'inline-flex') {
if (flexDirection === 'column') return 'flex-column';
if (flexDirection === 'row') return 'flex-row';
return 'flex';
}
if (display === 'grid') {
const cols = gridTemplateColumns;
if (cols && cols !== 'none') {
const colCount = cols.split(' ').length;
return `grid-${colCount}col`;
}
return 'grid';
}
if (display === 'block') return 'block';
return display;
};
/**
* Detect frontend framework
*/
const detectFramework = () => {
if (document.querySelector('#__nuxt')) return { name: 'Nuxt.js', version: 'unknown' };
if (document.querySelector('#__next')) return { name: 'Next.js', version: 'unknown' };
if (document.querySelector('[data-reactroot]')) return { name: 'React', version: 'unknown' };
if (document.querySelector('[ng-version]')) return { name: 'Angular', version: 'unknown' };
if (window.Vue) return { name: 'Vue.js', version: window.Vue.version || 'unknown' };
return { name: 'Unknown', version: 'unknown' };
};
/**
* Build layout tree recursively
*/
const buildLayoutTree = (element, depth = 0, maxDepth = 3) => {
if (depth > maxDepth) return null;
const props = extractLayoutProps(element);
const bounds = getBounds(element);
const pattern = identifyPattern(props);
// Get semantic role
const tagName = element.tagName.toLowerCase();
const classes = Array.from(element.classList).slice(0, 3); // Max 3 classes
const role = element.getAttribute('role');
// Build node
const node = {
tag: tagName,
classes: classes,
role: role,
pattern: pattern,
bounds: bounds,
layout: {
display: props.display,
position: props.position
}
};
// Add flex/grid specific properties
if (props.display === 'flex' || props.display === 'inline-flex') {
node.layout.flexDirection = props.flexDirection;
node.layout.justifyContent = props.justifyContent;
node.layout.alignItems = props.alignItems;
node.layout.gap = props.gap;
}
if (props.display === 'grid') {
node.layout.gridTemplateColumns = props.gridTemplateColumns;
node.layout.gridTemplateRows = props.gridTemplateRows;
node.layout.gap = props.gap;
}
// Process children for container elements
if (props.display === 'flex' || props.display === 'grid' || props.display === 'block') {
const children = Array.from(element.children);
if (children.length > 0 && children.length < 50) { // Limit to 50 children
node.children = children
.map(child => buildLayoutTree(child, depth + 1, maxDepth))
.filter(child => child !== null);
}
}
return node;
};
/**
* Find main layout containers with multi-strategy approach
*/
const findMainContainers = () => {
const containers = [];
const found = new Set();
// Strategy 1: Strict selectors (body direct children)
const strictSelectors = [
'body > header',
'body > nav',
'body > main',
'body > footer'
];
// Strategy 2: Relaxed selectors (any level)
const relaxedSelectors = [
'header',
'nav',
'main',
'footer',
'[role="banner"]',
'[role="navigation"]',
'[role="main"]',
'[role="contentinfo"]'
];
// Strategy 3: Common class-based main content selectors
const commonClassSelectors = [
'.main',
'.content',
'.main-content',
'.page-content',
'.container.main',
'.wrapper > .main',
'div[class*="main-wrapper"]',
'div[class*="content-wrapper"]'
];
// Strategy 4: Framework-specific selectors
const frameworkSelectors = [
'#__nuxt header', '#__nuxt .main', '#__nuxt main', '#__nuxt footer',
'#__next header', '#__next .main', '#__next main', '#__next footer',
'#app header', '#app .main', '#app main', '#app footer',
'[data-app] header', '[data-app] .main', '[data-app] main', '[data-app] footer'
];
// Try all strategies
const allSelectors = [...strictSelectors, ...relaxedSelectors, ...commonClassSelectors, ...frameworkSelectors];
allSelectors.forEach(selector => {
try {
const elements = document.querySelectorAll(selector);
elements.forEach(element => {
// Avoid duplicates and invisible elements
if (!found.has(element) && element.offsetParent !== null) {
found.add(element);
const tree = buildLayoutTree(element, 0, 3);
if (tree && tree.bounds.width > 0 && tree.bounds.height > 0) {
containers.push(tree);
}
}
});
} catch (e) {
console.warn(`Selector failed: ${selector}`, e);
}
});
// Fallback: If no containers found, use body's direct children
if (containers.length === 0) {
Array.from(document.body.children).forEach(child => {
if (child.offsetParent !== null && !found.has(child)) {
const tree = buildLayoutTree(child, 0, 2);
if (tree && tree.bounds.width > 100 && tree.bounds.height > 100) {
containers.push(tree);
}
}
});
}
return containers;
};
/**
* Progressive exploration: Discover main containers when standard selectors fail
* Analyzes large visible containers and suggests class name patterns
*/
const exploreMainContainers = () => {
const candidates = [];
const minWidth = 500;
const minHeight = 300;
// Find all large visible divs
const allDivs = document.querySelectorAll('div');
allDivs.forEach(div => {
const rect = div.getBoundingClientRect();
const style = window.getComputedStyle(div);
// Filter: large size, visible, not header/footer
if (rect.width >= minWidth &&
rect.height >= minHeight &&
div.offsetParent !== null &&
!div.closest('header') &&
!div.closest('footer')) {
const classes = Array.from(div.classList);
const area = rect.width * rect.height;
candidates.push({
element: div,
classes: classes,
area: area,
bounds: {
width: Math.round(rect.width),
height: Math.round(rect.height)
},
display: style.display,
depth: getElementDepth(div)
});
}
});
// Sort by area (largest first) and take top candidates
candidates.sort((a, b) => b.area - a.area);
// Extract unique class patterns from top candidates
const classPatterns = new Set();
candidates.slice(0, 20).forEach(c => {
c.classes.forEach(cls => {
// Identify potential main content class patterns
if (cls.match(/main|content|container|wrapper|page|body|layout|app/i)) {
classPatterns.add(cls);
}
});
});
return {
candidates: candidates.slice(0, 10).map(c => ({
classes: c.classes,
bounds: c.bounds,
display: c.display,
depth: c.depth
})),
suggestedSelectors: Array.from(classPatterns).map(cls => `.${cls}`)
};
};
/**
* Get element depth in DOM tree
*/
const getElementDepth = (element) => {
let depth = 0;
let current = element;
while (current.parentElement) {
depth++;
current = current.parentElement;
}
return depth;
};
/**
* Analyze layout patterns
*/
const analyzePatterns = (containers) => {
const patterns = {
flexColumn: 0,
flexRow: 0,
grid: 0,
sticky: 0,
fixed: 0
};
const analyze = (node) => {
if (!node) return;
if (node.pattern === 'flex-column') patterns.flexColumn++;
if (node.pattern === 'flex-row') patterns.flexRow++;
if (node.pattern && node.pattern.startsWith('grid')) patterns.grid++;
if (node.layout.position === 'sticky') patterns.sticky++;
if (node.layout.position === 'fixed') patterns.fixed++;
if (node.children) {
node.children.forEach(analyze);
}
};
containers.forEach(analyze);
return patterns;
};
/**
* Main extraction function with progressive exploration
*/
const extractLayout = () => {
const framework = detectFramework();
const containers = findMainContainers();
const patterns = analyzePatterns(containers);
// Progressive exploration: if too few containers found, explore and suggest
let exploration = null;
const minExpectedContainers = 3; // At least header, main, footer
if (containers.length < minExpectedContainers) {
exploration = exploreMainContainers();
// Add warning message
exploration.warning = `Only ${containers.length} containers found. Consider adding these selectors to the script:`;
exploration.recommendation = exploration.suggestedSelectors.join(', ');
}
const result = {
metadata: {
extractedAt: new Date().toISOString(),
url: window.location.href,
framework: framework,
method: 'layout-structure-enhanced',
version: '2.2.0'
},
statistics: {
totalContainers: containers.length,
patterns: patterns
},
structure: containers
};
// Add exploration results if triggered
if (exploration) {
result.exploration = {
triggered: true,
reason: 'Insufficient containers found with standard selectors',
discoveredCandidates: exploration.candidates,
suggestedSelectors: exploration.suggestedSelectors,
warning: exploration.warning,
recommendation: exploration.recommendation
};
}
return result;
};
// Execute and return results
return extractLayout();
})();
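If the result was saved as `layout.json`, the exploration suggestions (present only when fewer than three containers were found) can be pulled out directly; the trailing `?` makes the query a no-op when exploration never triggered:

```bash
jq -r '.exploration.suggestedSelectors[]?' layout.json
jq '.statistics.patterns' layout.json   # flex/grid/sticky/fixed tallies
```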

View File

@@ -0,0 +1,713 @@
#!/bin/bash
# Generate documentation for modules and projects with multiple strategies
# Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]
# strategy: full|single|project-readme|project-architecture|http-api
# source_path: Path to the source module directory (or project root for project-level docs)
# project_name: Project name for output path (e.g., "myproject")
# tool: gemini|qwen|codex (default: gemini)
# model: Model name (optional, uses tool defaults)
#
# Default Models:
# gemini: gemini-2.5-flash
# qwen: coder-model
# codex: gpt5-codex
#
# Module-Level Strategies:
# full: Full documentation generation
# - Read: All files in current and subdirectories (@**/*)
# - Generate: API.md + README.md for each directory containing code files
# - Use: Deep directories (Layer 3), comprehensive documentation
#
# single: Single-layer documentation
# - Read: Current directory code + child API.md/README.md files
# - Generate: API.md + README.md only in current directory
# - Use: Upper layers (Layer 1-2), incremental updates
#
# Project-Level Strategies:
# project-readme: Project overview documentation
# - Read: All module API.md and README.md files
# - Generate: README.md (project root)
# - Use: After all module docs are generated
#
# project-architecture: System design documentation
# - Read: All module docs + project README
# - Generate: ARCHITECTURE.md + EXAMPLES.md
# - Use: After project README is generated
#
# http-api: HTTP API documentation
# - Read: API route files + existing docs
# - Generate: api/README.md
# - Use: For projects with HTTP APIs
#
# Output Structure:
# Module docs: .workflow/docs/{project_name}/{source_path}/API.md
# Module docs: .workflow/docs/{project_name}/{source_path}/README.md
# Project docs: .workflow/docs/{project_name}/README.md
# Project docs: .workflow/docs/{project_name}/ARCHITECTURE.md
# Project docs: .workflow/docs/{project_name}/EXAMPLES.md
# API docs: .workflow/docs/{project_name}/api/README.md
#
# Features:
# - Path mirroring: source structure → docs structure
# - Template-driven generation
# - Respects .gitignore patterns
# - Detects code vs navigation folders
# - Tool fallback support
# Build exclusion filters from .gitignore
build_exclusion_filters() {
local filters=""
# Common system/cache directories to exclude
local system_excludes=(
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
"coverage" ".nyc_output" "logs" "tmp" "temp" ".workflow"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Find and parse .gitignore (current dir first, then git root)
local gitignore_file=""
# Check current directory first
if [ -f ".gitignore" ]; then
gitignore_file=".gitignore"
else
# Try to find git root and check for .gitignore there
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
gitignore_file="$git_root/.gitignore"
fi
fi
# Parse .gitignore if found
if [ -n "$gitignore_file" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Skip wildcard patterns (too complex for simple find)
[[ "$line" =~ \* ]] && continue
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < "$gitignore_file"
fi
echo "$filters"
}
# Detect folder type (code vs navigation)
detect_folder_type() {
local target_path="$1"
local exclusion_filters="$2"
# Count code files (primary indicators)
local code_count=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
if [ $code_count -gt 0 ]; then
echo "code"
else
echo "navigation"
fi
}
# Scan directory structure and generate structured information
scan_directory_structure() {
local target_path="$1"
local strategy="$2"
if [ ! -d "$target_path" ]; then
echo "Directory not found: $target_path"
return 1
fi
local exclusion_filters=$(build_exclusion_filters)
local structure_info=""
# Get basic directory info
local dir_name=$(basename "$target_path")
local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
local folder_type=$(detect_folder_type "$target_path" "$exclusion_filters")
structure_info+="Directory: $dir_name\n"
structure_info+="Total files: $total_files\n"
structure_info+="Total directories: $total_dirs\n"
structure_info+="Folder type: $folder_type\n\n"
if [ "$strategy" = "full" ]; then
# For full: show all subdirectories with file counts
structure_info+="Subdirectories with files:\n"
while IFS= read -r dir; do
if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
local rel_path=${dir#$target_path/}
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
if [ $file_count -gt 0 ]; then
local subdir_type=$(detect_folder_type "$dir" "$exclusion_filters")
structure_info+=" - $rel_path/ ($file_count files, type: $subdir_type)\n"
fi
fi
done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
else
# For single: show direct children only
structure_info+="Direct subdirectories:\n"
while IFS= read -r dir; do
if [ -n "$dir" ]; then
local dir_name=$(basename "$dir")
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
local has_api=$([ -f "$dir/API.md" ] && echo " [has API.md]" || echo "")
local has_readme=$([ -f "$dir/README.md" ] && echo " [has README.md]" || echo "")
structure_info+=" - $dir_name/ ($file_count files)$has_api$has_readme\n"
fi
done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
fi
# Show main file types in current directory
structure_info+="\nCurrent directory files:\n"
local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)
structure_info+=" - Code files: $code_files\n"
structure_info+=" - Config files: $config_files\n"
structure_info+=" - Documentation: $doc_files\n"
printf "%b" "$structure_info"
}
# Calculate output path based on source path and project name
calculate_output_path() {
local source_path="$1"
local project_name="$2"
local project_root="$3"
# Get absolute path of source (normalize to Unix-style path)
local abs_source=$(cd "$source_path" && pwd)
# Normalize project root to same format
local norm_project_root=$(cd "$project_root" && pwd)
# Calculate relative path from project root
local rel_path="${abs_source#$norm_project_root}"
# Remove leading slash if present
rel_path="${rel_path#/}"
# If source is project root, use project name directly
if [ "$abs_source" = "$norm_project_root" ] || [ -z "$rel_path" ]; then
echo "$norm_project_root/.workflow/docs/$project_name"
else
echo "$norm_project_root/.workflow/docs/$project_name/$rel_path"
fi
}
generate_module_docs() {
local strategy="$1"
local source_path="$2"
local project_name="$3"
local tool="${4:-gemini}"
local model="$5"
# Validate parameters
if [ -z "$strategy" ] || [ -z "$source_path" ] || [ -z "$project_name" ]; then
echo "❌ Error: Strategy, source path, and project name are required"
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
echo "Module strategies: full, single"
echo "Project strategies: project-readme, project-architecture, http-api"
return 1
fi
# Validate strategy
local valid_strategies=("full" "single" "project-readme" "project-architecture" "http-api")
local strategy_valid=false
for valid_strategy in "${valid_strategies[@]}"; do
if [ "$strategy" = "$valid_strategy" ]; then
strategy_valid=true
break
fi
done
if [ "$strategy_valid" = false ]; then
echo "❌ Error: Invalid strategy '$strategy'"
echo "Valid module strategies: full, single"
echo "Valid project strategies: project-readme, project-architecture, http-api"
return 1
fi
if [ ! -d "$source_path" ]; then
echo "❌ Error: Source directory '$source_path' does not exist"
return 1
fi
# Set default models if not specified
if [ -z "$model" ]; then
case "$tool" in
gemini)
model="gemini-2.5-flash"
;;
qwen)
model="coder-model"
;;
codex)
model="gpt5-codex"
;;
*)
model=""
;;
esac
fi
# Build exclusion filters
local exclusion_filters=$(build_exclusion_filters)
# Get project root
local project_root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)
# Determine if this is a project-level strategy
local is_project_level=false
if [[ "$strategy" =~ ^project- ]] || [ "$strategy" = "http-api" ]; then
is_project_level=true
fi
# Calculate output path
local output_path
if [ "$is_project_level" = true ]; then
# Project-level docs go to project root
if [ "$strategy" = "http-api" ]; then
output_path="$project_root/.workflow/docs/$project_name/api"
else
output_path="$project_root/.workflow/docs/$project_name"
fi
else
output_path=$(calculate_output_path "$source_path" "$project_name" "$project_root")
fi
# Create output directory
mkdir -p "$output_path"
# Detect folder type (only for module-level strategies)
local folder_type=""
if [ "$is_project_level" = false ]; then
folder_type=$(detect_folder_type "$source_path" "$exclusion_filters")
fi
# Load templates based on strategy
local api_template=""
local readme_template=""
local template_content=""
if [ "$is_project_level" = true ]; then
# Project-level templates
case "$strategy" in
project-readme)
local proj_readme_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-readme.txt"
if [ -f "$proj_readme_path" ]; then
template_content=$(cat "$proj_readme_path")
echo " 📋 Loaded Project README template: $(wc -l < "$proj_readme_path") lines"
fi
;;
project-architecture)
local arch_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-architecture.txt"
local examples_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-examples.txt"
if [ -f "$arch_path" ]; then
template_content=$(cat "$arch_path")
echo " 📋 Loaded Architecture template: $(wc -l < "$arch_path") lines"
fi
if [ -f "$examples_path" ]; then
template_content="$template_content
EXAMPLES TEMPLATE:
$(cat "$examples_path")"
echo " 📋 Loaded Examples template: $(wc -l < "$examples_path") lines"
fi
;;
http-api)
local api_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
if [ -f "$api_path" ]; then
template_content=$(cat "$api_path")
echo " 📋 Loaded HTTP API template: $(wc -l < "$api_path") lines"
fi
;;
esac
else
# Module-level templates
local api_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
local readme_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/module-readme.txt"
local nav_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/folder-navigation.txt"
if [ "$folder_type" = "code" ]; then
if [ -f "$api_template_path" ]; then
api_template=$(cat "$api_template_path")
echo " 📋 Loaded API template: $(wc -l < "$api_template_path") lines"
fi
if [ -f "$readme_template_path" ]; then
readme_template=$(cat "$readme_template_path")
echo " 📋 Loaded README template: $(wc -l < "$readme_template_path") lines"
fi
else
# Navigation folder uses navigation template
if [ -f "$nav_template_path" ]; then
readme_template=$(cat "$nav_template_path")
echo " 📋 Loaded Navigation template: $(wc -l < "$nav_template_path") lines"
fi
fi
fi
# Scan directory structure (only for module-level strategies)
local structure_info=""
if [ "$is_project_level" = false ]; then
echo " 🔍 Scanning directory structure..."
structure_info=$(scan_directory_structure "$source_path" "$strategy")
fi
# Prepare logging info
local module_name=$(basename "$source_path")
echo "⚡ Generating docs: $source_path → $output_path"
echo " Strategy: $strategy | Tool: $tool | Model: $model | Type: $folder_type"
echo " Output: $output_path"
# Build strategy-specific prompt
local final_prompt=""
# Project-level strategies
if [ "$strategy" = "project-readme" ]; then
final_prompt="PURPOSE: Generate comprehensive project overview documentation
PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)
Read: @.workflow/docs/$project_name/**/*.md
Context: All module documentation files from the project
Generate ONE documentation file in current directory:
- README.md - Project root documentation
Template:
$template_content
Instructions:
- Create README.md in CURRENT DIRECTORY
- Synthesize information from all module docs
- Include project overview, getting started, and navigation
- Create clear module navigation with links
- Follow template structure exactly"
elif [ "$strategy" = "project-architecture" ]; then
final_prompt="PURPOSE: Generate system design and usage examples documentation
PROJECT: $project_name
OUTPUT: Current directory (files will be moved to final location)
Read: @.workflow/docs/$project_name/**/*.md
Context: All project documentation including module docs and project README
Generate TWO documentation files in current directory:
1. ARCHITECTURE.md - System architecture and design patterns
2. EXAMPLES.md - End-to-end usage examples
Template:
$template_content
Instructions:
- Create both ARCHITECTURE.md and EXAMPLES.md in CURRENT DIRECTORY
- Synthesize architectural patterns from module documentation
- Document system structure, module relationships, and design decisions
- Provide practical code examples and usage scenarios
- Follow template structure for both files"
elif [ "$strategy" = "http-api" ]; then
final_prompt="PURPOSE: Generate HTTP API reference documentation
PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)
Read: @**/*.{ts,js,py,go,rs} @.workflow/docs/$project_name/**/*.md
Context: API route files and existing documentation
Generate ONE documentation file in current directory:
- README.md - HTTP API documentation (in api/ subdirectory)
Template:
$template_content
Instructions:
- Create README.md in CURRENT DIRECTORY
- Document all HTTP endpoints (routes, methods, parameters, responses)
- Include authentication requirements and error codes
- Provide request/response examples
- Follow template structure (Part B: HTTP API documentation)"
# Module-level strategies
elif [ "$strategy" = "full" ]; then
# Full strategy: read all files, generate for each directory
if [ "$folder_type" = "code" ]; then
final_prompt="PURPOSE: Generate comprehensive API and module documentation
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)
Read: @**/*
Generate TWO documentation files in current directory:
1. API.md - Code API documentation (functions, classes, interfaces)
Template:
$api_template
2. README.md - Module overview documentation
Template:
$readme_template
Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- If subdirectories contain code files, generate their docs too (recursive)
- Work bottom-up: deepest directories first
- Follow template structure exactly
- Use structure analysis for context"
else
# Navigation folder - README only
final_prompt="PURPOSE: Generate navigation documentation for folder structure
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)
Read: @**/*
Generate ONE documentation file in current directory:
- README.md - Navigation and folder overview
Template:
$readme_template
Instructions:
- Create README.md in CURRENT DIRECTORY
- Focus on folder structure and navigation
- Link to subdirectory documentation
- Use structure analysis for context"
fi
else
# Single strategy: read current + child docs only
if [ "$folder_type" = "code" ]; then
final_prompt="PURPOSE: Generate API and module documentation for current directory
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)
Read: @*/API.md @*/README.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.go @*.rs @*.md @*.json @*.yaml @*.yml
Generate TWO documentation files in current directory:
1. API.md - Code API documentation
Template:
$api_template
2. README.md - Module overview
Template:
$readme_template
Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- Reference child documentation, do not duplicate
- Follow template structure
- Use structure analysis for current directory context"
else
# Navigation folder - README only
final_prompt="PURPOSE: Generate navigation documentation
Directory Structure Analysis:
$structure_info
SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)
Read: @*/API.md @*/README.md @*.md
Generate ONE documentation file in current directory:
- README.md - Navigation and overview
Template:
$readme_template
Instructions:
- Create README.md in CURRENT DIRECTORY
- Link to child documentation
- Use structure analysis for navigation context"
fi
fi
# Execute documentation generation
local start_time=$(date +%s)
echo " 🔄 Starting documentation generation..."
if cd "$source_path" 2>/dev/null; then
local tool_result=0
# Store current output path for CLI context
export DOC_OUTPUT_PATH="$output_path"
# Record git HEAD before CLI execution (to detect unwanted auto-commits)
local git_head_before=""
if git rev-parse --git-dir >/dev/null 2>&1; then
git_head_before=$(git rev-parse HEAD 2>/dev/null)
fi
# Execute with selected tool
case "$tool" in
qwen)
if [ "$model" = "coder-model" ]; then
qwen -p "$final_prompt" --yolo 2>&1
else
qwen -p "$final_prompt" -m "$model" --yolo 2>&1
fi
tool_result=$?
;;
codex)
codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
tool_result=$?
;;
gemini)
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
*)
echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
esac
# Move generated files to output directory
local docs_created=0
local moved_files=""
if [ $tool_result -eq 0 ]; then
if [ "$is_project_level" = true ]; then
# Project-level documentation files
case "$strategy" in
project-readme)
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="README.md "
}
fi
;;
project-architecture)
if [ -f "ARCHITECTURE.md" ]; then
mv "ARCHITECTURE.md" "$output_path/ARCHITECTURE.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="ARCHITECTURE.md "
}
fi
if [ -f "EXAMPLES.md" ]; then
mv "EXAMPLES.md" "$output_path/EXAMPLES.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="EXAMPLES.md "
}
fi
;;
http-api)
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="api/README.md "
}
fi
;;
esac
else
# Module-level documentation files
# Check and move API.md if it exists
if [ "$folder_type" = "code" ] && [ -f "API.md" ]; then
mv "API.md" "$output_path/API.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="API.md "
}
fi
# Check and move README.md if it exists
if [ -f "README.md" ]; then
mv "README.md" "$output_path/README.md" 2>/dev/null && {
docs_created=$((docs_created + 1))
moved_files+="README.md "
}
fi
fi
fi
# Check if CLI tool auto-committed (and revert if needed)
if [ -n "$git_head_before" ]; then
local git_head_after=$(git rev-parse HEAD 2>/dev/null)
if [ "$git_head_before" != "$git_head_after" ]; then
echo " ⚠️ Detected unwanted auto-commit by CLI tool, reverting..."
git reset --soft "$git_head_before" 2>/dev/null
echo " ✅ Auto-commit reverted (files remain staged)"
fi
fi
if [ $docs_created -gt 0 ]; then
local end_time=$(date +%s)
local duration=$((end_time - start_time))
echo " ✅ Generated $docs_created doc(s) in ${duration}s: $moved_files"
cd - > /dev/null
return 0
else
echo " ❌ Documentation generation failed for $source_path"
cd - > /dev/null
return 1
fi
else
echo " ❌ Cannot access directory: $source_path"
return 1
fi
}
# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
# Show help if no arguments or help requested
if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
echo ""
echo "Module-Level Strategies:"
echo " full - Generate docs for all subdirectories with code"
echo " single - Generate docs only for current directory"
echo ""
echo "Project-Level Strategies:"
echo " project-readme - Generate project root README.md"
echo " project-architecture - Generate ARCHITECTURE.md + EXAMPLES.md"
echo " http-api - Generate HTTP API documentation (api/README.md)"
echo ""
echo "Tools: gemini (default), qwen, codex"
echo "Models: Use tool defaults if not specified"
echo ""
echo "Module Examples:"
echo " ./generate_module_docs.sh full ./src/auth myproject"
echo " ./generate_module_docs.sh single ./components myproject gemini"
echo ""
echo "Project Examples:"
echo " ./generate_module_docs.sh project-readme . myproject"
echo " ./generate_module_docs.sh project-architecture . myproject qwen"
echo " ./generate_module_docs.sh http-api . myproject"
exit 0
fi
generate_module_docs "$@"
fi
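A plausible bottom-up documentation pass, mirroring the strategy ordering described in the header (project name and paths are placeholders):

```bash
./generate_module_docs.sh full ./src/auth myproject gemini   # deep module docs first
./generate_module_docs.sh single ./src myproject             # roll up one layer
./generate_module_docs.sh project-readme . myproject         # project overview
./generate_module_docs.sh project-architecture . myproject   # ARCHITECTURE.md + EXAMPLES.md
./generate_module_docs.sh http-api . myproject               # api/README.md, if routes exist
```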

View File

@@ -0,0 +1,166 @@
#!/bin/bash
# Get modules organized by directory depth (deepest first)
# Usage: get_modules_by_depth.sh [format]
# format: list|grouped|json (default: list)
# Parse .gitignore patterns and build exclusion filters
build_exclusion_filters() {
local filters=""
# Always exclude these system/cache directories and common web dev packages
local system_excludes=(
# Version control and IDE
".git" ".gitignore" ".gitmodules" ".gitattributes"
".svn" ".hg" ".bzr"
".history" ".vscode" ".idea" ".vs" ".vscode-test"
".sublime-text" ".atom"
# Python
"__pycache__" ".pytest_cache" ".mypy_cache" ".tox"
".coverage" "htmlcov" ".nox" ".venv" "venv" "env"
".egg-info" "*.egg-info" ".eggs" ".wheel"
"site-packages" ".python-version" ".pyc"
# Node.js/JavaScript
"node_modules" ".npm" ".yarn" ".pnpm" "yarn-error.log"
".nyc_output" "coverage" ".next" ".nuxt"
".cache" ".parcel-cache" ".vite" "dist" "build"
".turbo" ".vercel" ".netlify"
# Package managers
".pnpm-store" "pnpm-lock.yaml" "yarn.lock" "package-lock.json"
".bundle" "vendor/bundle" "Gemfile.lock"
".gradle" "gradle" "gradlew" "gradlew.bat"
".mvn" "target" ".m2"
# Build/compile outputs
"dist" "build" "out" "output" "_site" "public"
".output" ".generated" "generated" "gen"
"bin" "obj" "Debug" "Release"
# Testing
".pytest_cache" ".coverage" "htmlcov" "test-results"
".nyc_output" "junit.xml" "test_results"
"cypress/screenshots" "cypress/videos"
"playwright-report" ".playwright"
# Logs and temp files
"logs" "*.log" "log" "tmp" "temp" ".tmp" ".temp"
".env" ".env.local" ".env.*.local"
".DS_Store" "Thumbs.db" "*.tmp" "*.swp" "*.swo"
# Documentation build outputs
"_book" "_site" "docs/_build" "site" "gh-pages"
".docusaurus" ".vuepress" ".gitbook"
# Database files
"*.sqlite" "*.sqlite3" "*.db" "data.db"
# OS and editor files
".DS_Store" "Thumbs.db" "desktop.ini"
"*.stackdump" "*.core"
# Cloud and deployment
".serverless" ".terraform" "terraform.tfstate"
".aws" ".azure" ".gcp"
# Mobile development
".gradle" "build" ".expo" ".metro"
"android/app/build" "ios/build" "DerivedData"
# Game development
"Library" "Temp" "ProjectSettings"
"Logs" "MemoryCaptures" "UserSettings"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Parse .gitignore if it exists
if [ -f ".gitignore" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < .gitignore
fi
echo "$filters"
}
get_modules_by_depth() {
local format="${1:-list}"
local exclusion_filters=$(build_exclusion_filters)
local max_depth=$(eval "find . -type d $exclusion_filters 2>/dev/null" | awk -F/ '{print NF-1}' | sort -n | tail -1)
case "$format" in
"grouped")
echo "📊 Modules by depth (deepest first):"
for depth in $(seq $max_depth -1 0); do
local dirs=$(eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
while read dir; do
if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
local claude_indicator=""
[ -f "$dir/CLAUDE.md" ] && claude_indicator=" [✓]"
echo "$dir$claude_indicator"
fi
done)
if [ -n "$dirs" ]; then
echo " 📁 Depth $depth:"
echo "$dirs" | sed 's/^/ - /'
fi
done
;;
"json")
echo "{"
echo " \"max_depth\": $max_depth,"
echo " \"modules\": {"
local first_group=true
for depth in $(seq $max_depth -1 0); do
local dirs=$(eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
while read dir; do
if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
local has_claude="false"
[ -f "$dir/CLAUDE.md" ] && has_claude="true"
echo "{\"path\":\"$dir\",\"has_claude\":$has_claude}"
fi
done | tr '\n' ',')
if [ -n "$dirs" ]; then
dirs=${dirs%,} # Remove trailing comma
# Emit the separator before every group after the first, so the last
# group never leaves a dangling comma (which would break the JSON)
if [ "$first_group" = true ]; then
first_group=false
else
echo ","
fi
echo -n " \"$depth\": [$dirs]"
fi
done
echo ""
echo " }"
echo "}"
;;
"list"|*)
# Simple list format (deepest first)
for depth in $(seq $max_depth -1 0); do
eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
while read dir; do
if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
local file_count=$(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l)
local types=$(find "$dir" -maxdepth 1 -type f -name "*.*" 2>/dev/null | \
grep -E '\.[^/]*$' | sed 's/.*\.//' | sort -u | tr '\n' ',' | sed 's/,$//')
local has_claude="no"
[ -f "$dir/CLAUDE.md" ] && has_claude="yes"
echo "depth:$depth|path:$dir|files:$file_count|types:[$types]|has_claude:$has_claude"
fi
done
done
;;
esac
}
# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
get_modules_by_depth "$@"
fi
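Sample invocations; `list` (the default) emits one record per directory that directly contains files, deepest first:

```bash
./get_modules_by_depth.sh grouped    # human-readable, [✓] marks directories with CLAUDE.md
./get_modules_by_depth.sh json       # {"max_depth":N,"modules":{"N":[{"path":...}]}}
./get_modules_by_depth.sh | head -3  # e.g. depth:3|path:./src/app/api|files:2|types:[ts]|has_claude:no
```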

View File

@@ -0,0 +1,391 @@
#!/bin/bash
#
# UI Generate Preview v2.0 - Template-Based Preview Generation
# Purpose: Generate compare.html and index.html using template substitution
# Template: ~/.claude/workflows/_template-compare-matrix.html
#
# Usage: ui-generate-preview.sh <prototypes_dir> [--template <path>]
#
set -e
# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Default template path
TEMPLATE_PATH="$HOME/.claude/workflows/_template-compare-matrix.html"
# Parse arguments
prototypes_dir="${1:-.}"
shift || true
while [[ $# -gt 0 ]]; do
case $1 in
--template)
TEMPLATE_PATH="$2"
shift 2
;;
*)
echo -e "${RED}Unknown option: $1${NC}"
exit 1
;;
esac
done
if [[ ! -d "$prototypes_dir" ]]; then
echo -e "${RED}Error: Directory not found: $prototypes_dir${NC}"
exit 1
fi
cd "$prototypes_dir" || exit 1
echo -e "${GREEN}📊 Auto-detecting matrix dimensions...${NC}"
# Auto-detect styles, layouts, targets from file patterns
# Pattern: {target}-style-{s}-layout-{l}.html
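# e.g. (illustrative): dashboard-style-2-layout-3.html → target=dashboard, style=2, layout=3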
styles=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
sed 's/.*-style-\([0-9]\+\)-.*/\1/' | sort -un)
layouts=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
sed 's/.*-layout-\([0-9]\+\)\.html/\1/' | sort -un)
targets=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
sed 's/\.\///; s/-style-.*//' | sort -u)
# grep -c . counts non-empty lines, so an empty match list yields 0 (wc -l would report 1)
S=$(echo "$styles" | grep -c .)
L=$(echo "$layouts" | grep -c .)
T=$(echo "$targets" | grep -c .)
echo -e " Detected: ${GREEN}${S}${NC} styles × ${GREEN}${L}${NC} layouts × ${GREEN}${T}${NC} targets"
if [[ $S -eq 0 ]] || [[ $L -eq 0 ]] || [[ $T -eq 0 ]]; then
echo -e "${RED}Error: No prototype files found matching pattern {target}-style-{s}-layout-{l}.html${NC}"
exit 1
fi
# ============================================================================
# Generate compare.html from template
# ============================================================================
echo -e "${YELLOW}🎨 Generating compare.html from template...${NC}"
if [[ ! -f "$TEMPLATE_PATH" ]]; then
echo -e "${RED}Error: Template not found: $TEMPLATE_PATH${NC}"
exit 1
fi
# Build pages/targets JSON array
PAGES_JSON="["
first=true
for target in $targets; do
if [[ "$first" == true ]]; then
first=false
else
PAGES_JSON+=", "
fi
PAGES_JSON+="\"$target\""
done
PAGES_JSON+="]"
# Generate metadata
RUN_ID="run-$(date +%Y%m%d-%H%M%S)"
SESSION_ID="standalone"
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +"%Y-%m-%d")
# Replace placeholders in template
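# e.g. (illustrative): {{style_variants}} → 3, {{pages_json}} → ["dashboard", "login"]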
cat "$TEMPLATE_PATH" | \
sed "s|{{run_id}}|${RUN_ID}|g" | \
sed "s|{{session_id}}|${SESSION_ID}|g" | \
sed "s|{{timestamp}}|${TIMESTAMP}|g" | \
sed "s|{{style_variants}}|${S}|g" | \
sed "s|{{layout_variants}}|${L}|g" | \
sed "s|{{pages_json}}|${PAGES_JSON}|g" \
> compare.html
echo -e "${GREEN} ✓ Generated compare.html from template${NC}"
# ============================================================================
# Generate index.html
# ============================================================================
echo -e "${YELLOW}📋 Generating index.html...${NC}"
cat > index.html << 'EOF'
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>UI Prototypes Index</title>
<style>
* { margin: 0; padding: 0; box-sizing: border-box; }
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
max-width: 1200px;
margin: 0 auto;
padding: 40px 20px;
background: #f5f5f5;
}
h1 { margin-bottom: 10px; color: #333; }
.subtitle { color: #666; margin-bottom: 30px; }
.cta {
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
color: white;
padding: 20px;
border-radius: 8px;
margin-bottom: 30px;
box-shadow: 0 4px 6px rgba(0,0,0,0.1);
}
.cta h2 { margin-bottom: 10px; }
.cta a {
display: inline-block;
background: white;
color: #667eea;
padding: 10px 20px;
border-radius: 6px;
text-decoration: none;
font-weight: 600;
margin-top: 10px;
}
.cta a:hover { background: #f8f9fa; }
.style-section {
background: white;
padding: 20px;
border-radius: 8px;
margin-bottom: 20px;
box-shadow: 0 2px 4px rgba(0,0,0,0.1);
}
.style-section h2 {
color: #495057;
margin-bottom: 15px;
padding-bottom: 10px;
border-bottom: 2px solid #e9ecef;
}
.target-group {
margin-bottom: 20px;
}
.target-group h3 {
color: #6c757d;
font-size: 16px;
margin-bottom: 10px;
}
.link-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
gap: 10px;
}
.prototype-link {
padding: 12px 16px;
background: #f8f9fa;
border: 1px solid #dee2e6;
border-radius: 6px;
text-decoration: none;
color: #495057;
display: flex;
justify-content: space-between;
align-items: center;
transition: all 0.2s;
}
.prototype-link:hover {
background: #e9ecef;
border-color: #667eea;
transform: translateX(2px);
}
.prototype-link .label { font-weight: 500; }
.prototype-link .icon { color: #667eea; }
</style>
</head>
<body>
<h1>🎨 UI Prototypes Index</h1>
<p class="subtitle">Generated __S__×__L__×__T__ = __TOTAL__ prototypes</p>
<div class="cta">
<h2>📊 Interactive Comparison</h2>
<p>View all styles and layouts side-by-side in an interactive matrix</p>
<a href="compare.html">Open Matrix View →</a>
</div>
<h2>📂 All Prototypes</h2>
__CONTENT__
</body>
</html>
EOF
# Build content HTML
CONTENT=""
for style in $styles; do
CONTENT+="<div class='style-section'>"$'\n'
CONTENT+="<h2>Style ${style}</h2>"$'\n'
for target in $targets; do
target_capitalized="$(echo ${target:0:1} | tr '[:lower:]' '[:upper:]')${target:1}"
CONTENT+="<div class='target-group'>"$'\n'
CONTENT+="<h3>${target_capitalized}</h3>"$'\n'
CONTENT+="<div class='link-grid'>"$'\n'
for layout in $layouts; do
html_file="${target}-style-${style}-layout-${layout}.html"
if [[ -f "$html_file" ]]; then
CONTENT+="<a href='${html_file}' class='prototype-link' target='_blank'>"$'\n'
CONTENT+="<span class='label'>Layout ${layout}</span>"$'\n'
CONTENT+="<span class='icon'>↗</span>"$'\n'
CONTENT+="</a>"$'\n'
fi
done
CONTENT+="</div></div>"$'\n'
done
CONTENT+="</div>"$'\n'
done
# Calculate total
TOTAL_PROTOTYPES=$((S * L * T))
# Replace placeholders (using a temp file for complex replacement)
{
echo "$CONTENT" > /tmp/content_tmp.txt
sed "s|__S__|${S}|g" index.html | \
sed "s|__L__|${L}|g" | \
sed "s|__T__|${T}|g" | \
sed "s|__TOTAL__|${TOTAL_PROTOTYPES}|g" | \
sed -e "/__CONTENT__/r /tmp/content_tmp.txt" -e "/__CONTENT__/d" > /tmp/index_tmp.html
mv /tmp/index_tmp.html index.html
rm -f /tmp/content_tmp.txt
}
echo -e "${GREEN} ✓ Generated index.html${NC}"
# ============================================================================
# Generate PREVIEW.md
# ============================================================================
echo -e "${YELLOW}📝 Generating PREVIEW.md...${NC}"
cat > PREVIEW.md << EOF
# UI Prototypes Preview Guide
Generated: $(date +"%Y-%m-%d %H:%M:%S")
## 📊 Matrix Dimensions
- **Styles**: ${S}
- **Layouts**: ${L}
- **Targets**: ${T}
- **Total Prototypes**: $((S*L*T))
## 🌐 How to View
### Option 1: Interactive Matrix (Recommended)
Open \`compare.html\` in your browser to see all prototypes in an interactive matrix view.
**Features**:
- Side-by-side comparison of all styles and layouts
- Switch between targets using the dropdown
- Adjust grid columns for better viewing
- Direct links to full-page views
- Selection system with export to JSON
- Fullscreen mode for detailed inspection
### Option 2: Simple Index
Open \`index.html\` for a simple list of all prototypes with direct links.
### Option 3: Direct File Access
Each prototype can be opened directly:
- Pattern: \`{target}-style-{s}-layout-{l}.html\`
- Example: \`dashboard-style-1-layout-1.html\`
## 📁 File Structure
\`\`\`
prototypes/
├── compare.html # Interactive matrix view
├── index.html # Simple navigation index
├── PREVIEW.md # This file
EOF
for style in $styles; do
for target in $targets; do
for layout in $layouts; do
echo "├── ${target}-style-${style}-layout-${layout}.html" >> PREVIEW.md
echo "├── ${target}-style-${style}-layout-${layout}.css" >> PREVIEW.md
done
done
done
cat >> PREVIEW.md << 'EOF2'
```
## 🎨 Style Variants
EOF2
for style in $styles; do
cat >> PREVIEW.md << EOF3
### Style ${style}
EOF3
style_guide="../style-extraction/style-${style}/style-guide.md"
if [[ -f "$style_guide" ]]; then
head -n 10 "$style_guide" | tail -n +2 >> PREVIEW.md 2>/dev/null || echo "Design philosophy and tokens" >> PREVIEW.md
else
echo "Design system ${style}" >> PREVIEW.md
fi
echo "" >> PREVIEW.md
done
cat >> PREVIEW.md << 'EOF4'
## 🎯 Targets
EOF4
for target in $targets; do
target_capitalized="$(echo ${target:0:1} | tr '[:lower:]' '[:upper:]')${target:1}"
echo "- **${target_capitalized}**: ${L} layouts × ${S} styles = $((L*S)) variations" >> PREVIEW.md
done
cat >> PREVIEW.md << 'EOF5'
## 💡 Tips
1. **Comparison**: Use compare.html to see how different styles affect the same layout
2. **Navigation**: Use index.html for quick access to specific prototypes
3. **Selection**: Mark favorites in compare.html using star icons
4. **Export**: Download selection JSON for implementation planning
5. **Inspection**: Open browser DevTools to inspect HTML structure and CSS
6. **Sharing**: All files are standalone - can be shared or deployed directly
## 📝 Next Steps
1. Review prototypes in compare.html
2. Select preferred style × layout combinations
3. Export selections as JSON
4. Provide feedback for refinement
5. Use selected designs for implementation
---
Generated by /workflow:ui-design:generate-v2 (Style-Centric Architecture)
EOF5
echo -e "${GREEN} ✓ Generated PREVIEW.md${NC}"
# ============================================================================
# Completion Summary
# ============================================================================
echo ""
echo -e "${GREEN}✅ Preview generation complete!${NC}"
echo -e " Files created: compare.html, index.html, PREVIEW.md"
echo -e " Matrix: ${S} styles × ${L} layouts × ${T} targets = $((S*L*T)) prototypes"
echo ""
echo -e "${YELLOW}🌐 Next Steps:${NC}"
echo -e " 1. Open compare.html for interactive matrix view"
echo -e " 2. Open index.html for simple navigation"
echo -e " 3. Read PREVIEW.md for detailed usage guide"
echo ""

View File

@@ -0,0 +1,811 @@
#!/bin/bash
# UI Prototype Instantiation Script with Preview Generation (v3.0 - Auto-detect)
# Purpose: Generate S × L × P final prototypes from templates + interactive preview files
# Usage:
# Simple: ui-instantiate-prototypes.sh <prototypes_dir>
# Full: ui-instantiate-prototypes.sh <base_path> <pages> <style_variants> <layout_variants> [options]
# Use safer error handling
set -o pipefail
# ============================================================================
# Helper Functions
# ============================================================================
log_info() {
echo "$1"
}
log_success() {
echo "✅ $1"
}
log_error() {
echo "❌ $1"
}
log_warning() {
echo "⚠️ $1"
}
# Auto-detect pages from templates directory
auto_detect_pages() {
local templates_dir="$1/_templates"
if [ ! -d "$templates_dir" ]; then
log_error "Templates directory not found: $templates_dir"
return 1
fi
# Find unique page names from template files (e.g., login-layout-1.html -> login)
local pages=$(find "$templates_dir" -name "*-layout-*.html" -type f | \
sed 's|.*/||' | \
sed 's|-layout-[0-9]*\.html||' | \
sort -u | \
tr '\n' ',' | \
sed 's/,$//')
echo "$pages"
}
# Auto-detect style variants count
auto_detect_style_variants() {
local base_path="$1"
local style_dir="$base_path/../style-extraction"
if [ ! -d "$style_dir" ]; then
log_warning "Style consolidation directory not found: $style_dir"
echo "3" # Default
return
fi
# Count style-* directories
local count=$(find "$style_dir" -maxdepth 1 -type d -name "style-*" | wc -l)
if [ "$count" -eq 0 ]; then
echo "3" # Default
else
echo "$count"
fi
}
# Auto-detect layout variants count
auto_detect_layout_variants() {
local templates_dir="$1/_templates"
if [ ! -d "$templates_dir" ]; then
echo "3" # Default
return
fi
# Find the first page and count its layouts
local first_page=$(find "$templates_dir" -name "*-layout-1.html" -type f | head -1 | sed 's|.*/||' | sed 's|-layout-1\.html||')
if [ -z "$first_page" ]; then
echo "3" # Default
return
fi
# Count layout files for this page
local count=$(find "$templates_dir" -name "${first_page}-layout-*.html" -type f | wc -l)
if [ "$count" -eq 0 ]; then
echo "3" # Default
else
echo "$count"
fi
}
# ============================================================================
# Parse Arguments
# ============================================================================
show_usage() {
cat <<'EOF'
Usage:
Simple (auto-detect): ui-instantiate-prototypes.sh <prototypes_dir> [options]
Full: ui-instantiate-prototypes.sh <base_path> <pages> <style_variants> <layout_variants> [options]
Simple Mode (Recommended):
prototypes_dir Path to prototypes directory (auto-detects everything)
Full Mode:
base_path Base path to prototypes directory
pages Comma-separated list of pages/components
style_variants Number of style variants (1-5)
layout_variants Number of layout variants (1-5)
Options:
--run-id <id> Run ID (default: auto-generated)
--session-id <id> Session ID (default: standalone)
--mode <page|component> Exploration mode (default: page)
--template <path> Path to compare.html template (default: ~/.claude/workflows/_template-compare-matrix.html)
--no-preview Skip preview file generation
--help Show this help message
Examples:
# Simple usage (auto-detect everything)
ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes
# With options
ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes --session-id WFS-auth
# Full manual mode
ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes "login,dashboard" 3 3 --session-id WFS-auth
EOF
}
# Default values
BASE_PATH=""
PAGES=""
STYLE_VARIANTS=""
LAYOUT_VARIANTS=""
RUN_ID="run-$(date +%Y%m%d-%H%M%S)"
SESSION_ID="standalone"
MODE="page"
TEMPLATE_PATH="$HOME/.claude/workflows/_template-compare-matrix.html"
GENERATE_PREVIEW=true
AUTO_DETECT=false
# Parse arguments
if [ $# -lt 1 ]; then
log_error "Missing required arguments"
show_usage
exit 1
fi
# Check if using simple mode (only 1 positional arg before options)
if [ $# -eq 1 ] || [[ "$2" == --* ]]; then
# Simple mode - auto-detect
AUTO_DETECT=true
BASE_PATH="$1"
shift 1
else
# Full mode - manual parameters
if [ $# -lt 4 ]; then
log_error "Full mode requires 4 positional arguments"
show_usage
exit 1
fi
BASE_PATH="$1"
PAGES="$2"
STYLE_VARIANTS="$3"
LAYOUT_VARIANTS="$4"
shift 4
fi
# Parse optional arguments
while [[ $# -gt 0 ]]; do
case $1 in
--run-id)
RUN_ID="$2"
shift 2
;;
--session-id)
SESSION_ID="$2"
shift 2
;;
--mode)
MODE="$2"
shift 2
;;
--template)
TEMPLATE_PATH="$2"
shift 2
;;
--no-preview)
GENERATE_PREVIEW=false
shift
;;
--help)
show_usage
exit 0
;;
*)
log_error "Unknown option: $1"
show_usage
exit 1
;;
esac
done
# ============================================================================
# Auto-detection (if enabled)
# ============================================================================
if [ "$AUTO_DETECT" = true ]; then
log_info "🔍 Auto-detecting configuration from directory..."
# Detect pages
PAGES=$(auto_detect_pages "$BASE_PATH")
if [ -z "$PAGES" ]; then
log_error "Could not auto-detect pages from templates"
exit 1
fi
log_info " Pages: $PAGES"
# Detect style variants
STYLE_VARIANTS=$(auto_detect_style_variants "$BASE_PATH")
log_info " Style variants: $STYLE_VARIANTS"
# Detect layout variants
LAYOUT_VARIANTS=$(auto_detect_layout_variants "$BASE_PATH")
log_info " Layout variants: $LAYOUT_VARIANTS"
echo ""
fi
# ============================================================================
# Validation
# ============================================================================
# Validate base path
if [ ! -d "$BASE_PATH" ]; then
log_error "Base path not found: $BASE_PATH"
exit 1
fi
# Validate style and layout variants
if [ "$STYLE_VARIANTS" -lt 1 ] || [ "$STYLE_VARIANTS" -gt 5 ]; then
log_error "Style variants must be between 1 and 5 (got: $STYLE_VARIANTS)"
exit 1
fi
if [ "$LAYOUT_VARIANTS" -lt 1 ] || [ "$LAYOUT_VARIANTS" -gt 5 ]; then
log_error "Layout variants must be between 1 and 5 (got: $LAYOUT_VARIANTS)"
exit 1
fi
# Validate STYLE_VARIANTS against actual style directories
if [ "$STYLE_VARIANTS" -gt 0 ]; then
style_dir="$BASE_PATH/../style-extraction"
if [ ! -d "$style_dir" ]; then
log_error "Style consolidation directory not found: $style_dir"
log_info "Run /workflow:ui-design:consolidate first"
exit 1
fi
actual_styles=$(find "$style_dir" -maxdepth 1 -type d -name "style-*" 2>/dev/null | wc -l)
if [ "$actual_styles" -eq 0 ]; then
log_error "No style directories found in: $style_dir"
log_info "Run /workflow:ui-design:consolidate first to generate style design systems"
exit 1
fi
if [ "$STYLE_VARIANTS" -gt "$actual_styles" ]; then
log_warning "Requested $STYLE_VARIANTS style variants, but only found $actual_styles directories"
log_info "Available style directories:"
find "$style_dir" -maxdepth 1 -type d -name "style-*" 2>/dev/null | sed 's|.*/||' | sort
log_info "Auto-correcting to $actual_styles style variants"
STYLE_VARIANTS=$actual_styles
fi
fi
# Parse pages into array
IFS=',' read -ra PAGE_ARRAY <<< "$PAGES"
if [ ${#PAGE_ARRAY[@]} -eq 0 ]; then
log_error "No pages found"
exit 1
fi
# ============================================================================
# Header Output
# ============================================================================
echo "========================================="
echo "UI Prototype Instantiation & Preview"
if [ "$AUTO_DETECT" = true ]; then
echo "(Auto-detected configuration)"
fi
echo "========================================="
echo "Base Path: $BASE_PATH"
echo "Mode: $MODE"
echo "Pages/Components: $PAGES"
echo "Style Variants: $STYLE_VARIANTS"
echo "Layout Variants: $LAYOUT_VARIANTS"
echo "Run ID: $RUN_ID"
echo "Session ID: $SESSION_ID"
echo "========================================="
echo ""
# Change to base path
cd "$BASE_PATH" || exit 1
# ============================================================================
# Phase 1: Instantiate Prototypes
# ============================================================================
log_info "🚀 Phase 1: Instantiating prototypes from templates..."
echo ""
total_generated=0
total_failed=0
for page in "${PAGE_ARRAY[@]}"; do
# Trim whitespace
page=$(echo "$page" | xargs)
log_info "Processing page/component: $page"
for s in $(seq 1 "$STYLE_VARIANTS"); do
for l in $(seq 1 "$LAYOUT_VARIANTS"); do
# Define file paths
TEMPLATE_HTML="_templates/${page}-layout-${l}.html"
STRUCTURAL_CSS="_templates/${page}-layout-${l}.css"
TOKEN_CSS="../style-extraction/style-${s}/tokens.css"
OUTPUT_HTML="${page}-style-${s}-layout-${l}.html"
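# e.g. (illustrative): page=login, s=2, l=1 reads _templates/login-layout-1.html
# plus ../style-extraction/style-2/tokens.css → login-style-2-layout-1.html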
# Copy template and replace placeholders
if [ -f "$TEMPLATE_HTML" ]; then
cp "$TEMPLATE_HTML" "$OUTPUT_HTML" || {
log_error "Failed to copy template: $TEMPLATE_HTML"
((total_failed++))
continue
}
# Replace CSS placeholders (GNU sed in-place syntax; BSD/macOS sed would need -i '')
sed -i "s|{{STRUCTURAL_CSS}}|${STRUCTURAL_CSS}|g" "$OUTPUT_HTML" || true
sed -i "s|{{TOKEN_CSS}}|${TOKEN_CSS}|g" "$OUTPUT_HTML" || true
log_success "Created: $OUTPUT_HTML"
((total_generated++))
# Create implementation notes (simplified)
NOTES_FILE="${page}-style-${s}-layout-${l}-notes.md"
# Generate notes with simple heredoc
cat > "$NOTES_FILE" <<NOTESEOF
# Implementation Notes: ${page}-style-${s}-layout-${l}
## Generation Details
- **Template**: ${TEMPLATE_HTML}
- **Structural CSS**: ${STRUCTURAL_CSS}
- **Style Tokens**: ${TOKEN_CSS}
- **Layout Strategy**: Layout ${l}
- **Style Variant**: Style ${s}
- **Mode**: ${MODE}
## Template Reuse
This prototype was generated from a shared layout template to ensure consistency
across all style variants. The HTML structure is identical for all ${page}-layout-${l}
prototypes, with only the design tokens (colors, fonts, spacing) varying.
## Design System Reference
Refer to \`../style-extraction/style-${s}/style-guide.md\` for:
- Design philosophy
- Token usage guidelines
- Component patterns
- Accessibility requirements
## Customization
To modify this prototype:
1. Edit the layout template: \`${TEMPLATE_HTML}\` (affects all styles)
2. Edit the structural CSS: \`${STRUCTURAL_CSS}\` (affects all styles)
3. Edit design tokens: \`${TOKEN_CSS}\` (affects only this style variant)
## Run Information
- **Run ID**: ${RUN_ID}
- **Session ID**: ${SESSION_ID}
- **Generated**: $(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +%Y-%m-%d)
NOTESEOF
else
log_error "Template not found: $TEMPLATE_HTML"
((total_failed++))
fi
done
done
done
echo ""
log_success "Phase 1 complete: Generated ${total_generated} prototypes"
if [ $total_failed -gt 0 ]; then
log_warning "Failed: ${total_failed} prototypes"
fi
echo ""
# ============================================================================
# Phase 2: Generate Preview Files (if enabled)
# ============================================================================
if [ "$GENERATE_PREVIEW" = false ]; then
log_info "⏭️ Skipping preview generation (--no-preview flag)"
exit 0
fi
log_info "🎨 Phase 2: Generating preview files..."
echo ""
# ============================================================================
# 2a. Generate compare.html from template
# ============================================================================
if [ ! -f "$TEMPLATE_PATH" ]; then
log_warning "Template not found: $TEMPLATE_PATH"
log_info " Skipping compare.html generation"
else
log_info "📄 Generating compare.html from template..."
# Convert page array to JSON format
PAGES_JSON="["
for i in "${!PAGE_ARRAY[@]}"; do
page=$(echo "${PAGE_ARRAY[$i]}" | xargs)
PAGES_JSON+="\"$page\""
if [ $i -lt $((${#PAGE_ARRAY[@]} - 1)) ]; then
PAGES_JSON+=", "
fi
done
PAGES_JSON+="]"
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +%Y-%m-%d)
# Read template and replace placeholders
cat "$TEMPLATE_PATH" | \
sed "s|{{run_id}}|${RUN_ID}|g" | \
sed "s|{{session_id}}|${SESSION_ID}|g" | \
sed "s|{{timestamp}}|${TIMESTAMP}|g" | \
sed "s|{{style_variants}}|${STYLE_VARIANTS}|g" | \
sed "s|{{layout_variants}}|${LAYOUT_VARIANTS}|g" | \
sed "s|{{pages_json}}|${PAGES_JSON}|g" \
> compare.html
log_success "Generated: compare.html"
fi
# ============================================================================
# 2b. Generate index.html
# ============================================================================
log_info "📄 Generating index.html..."
# Calculate total prototypes
TOTAL_PROTOTYPES=$((STYLE_VARIANTS * LAYOUT_VARIANTS * ${#PAGE_ARRAY[@]}))
# Generate index.html with simple heredoc
cat > index.html <<'INDEXEOF'
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>UI Prototypes - __MODE__ Mode - __RUN_ID__</title>
<style>
body {
font-family: system-ui, -apple-system, sans-serif;
max-width: 900px;
margin: 2rem auto;
padding: 0 2rem;
background: #f9fafb;
}
.header {
background: white;
padding: 2rem;
border-radius: 0.75rem;
box-shadow: 0 1px 3px rgba(0,0,0,0.1);
margin-bottom: 2rem;
}
h1 {
color: #2563eb;
margin-bottom: 0.5rem;
font-size: 2rem;
}
.meta {
color: #6b7280;
font-size: 0.875rem;
margin-top: 0.5rem;
}
.info {
background: #f3f4f6;
padding: 1.5rem;
border-radius: 0.5rem;
margin: 1.5rem 0;
border-left: 4px solid #2563eb;
}
.cta {
display: inline-block;
background: #2563eb;
color: white;
padding: 1rem 2rem;
border-radius: 0.5rem;
text-decoration: none;
font-weight: 600;
margin: 1rem 0;
transition: background 0.2s;
}
.cta:hover {
background: #1d4ed8;
}
.stats {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
gap: 1rem;
margin: 1.5rem 0;
}
.stat {
background: white;
border: 1px solid #e5e7eb;
padding: 1.5rem;
border-radius: 0.5rem;
text-align: center;
box-shadow: 0 1px 2px rgba(0,0,0,0.05);
}
.stat-value {
font-size: 2.5rem;
font-weight: bold;
color: #2563eb;
margin-bottom: 0.25rem;
}
.stat-label {
color: #6b7280;
font-size: 0.875rem;
}
.section {
background: white;
padding: 2rem;
border-radius: 0.75rem;
margin-bottom: 2rem;
box-shadow: 0 1px 3px rgba(0,0,0,0.1);
}
h2 {
color: #1f2937;
margin-bottom: 1rem;
font-size: 1.5rem;
}
ul {
line-height: 1.8;
color: #374151;
}
.pages-list {
list-style: none;
padding: 0;
}
.pages-list li {
background: #f9fafb;
padding: 0.75rem 1rem;
margin: 0.5rem 0;
border-radius: 0.375rem;
border-left: 3px solid #2563eb;
}
.badge {
display: inline-block;
background: #dbeafe;
color: #1e40af;
padding: 0.25rem 0.75rem;
border-radius: 0.25rem;
font-size: 0.75rem;
font-weight: 600;
margin-left: 0.5rem;
}
</style>
</head>
<body>
<div class="header">
<h1>🎨 UI Prototype __MODE__ Mode</h1>
<div class="meta">
<strong>Run ID:</strong> __RUN_ID__ |
<strong>Session:</strong> __SESSION_ID__ |
<strong>Generated:</strong> __TIMESTAMP__
</div>
</div>
<div class="info">
<p><strong>Matrix Configuration:</strong> __STYLE_VARIANTS__ styles × __LAYOUT_VARIANTS__ layouts × __PAGE_COUNT__ __MODE__s</p>
<p><strong>Total Prototypes:</strong> __TOTAL_PROTOTYPES__ interactive HTML files</p>
</div>
<a href="compare.html" class="cta">🔍 Open Interactive Matrix Comparison →</a>
<div class="stats">
<div class="stat">
<div class="stat-value">__STYLE_VARIANTS__</div>
<div class="stat-label">Style Variants</div>
</div>
<div class="stat">
<div class="stat-value">__LAYOUT_VARIANTS__</div>
<div class="stat-label">Layout Options</div>
</div>
<div class="stat">
<div class="stat-value">__PAGE_COUNT__</div>
<div class="stat-label">__MODE__s</div>
</div>
<div class="stat">
<div class="stat-value">__TOTAL_PROTOTYPES__</div>
<div class="stat-label">Total Prototypes</div>
</div>
</div>
<div class="section">
<h2>🌟 Features</h2>
<ul>
<li><strong>Interactive Matrix View:</strong> __STYLE_VARIANTS__×__LAYOUT_VARIANTS__ grid with synchronized scrolling</li>
<li><strong>Flexible Zoom:</strong> 25%, 50%, 75%, 100% viewport scaling</li>
<li><strong>Fullscreen Mode:</strong> Detailed view for individual prototypes</li>
<li><strong>Selection System:</strong> Mark favorites with export to JSON</li>
<li><strong>__MODE__ Switcher:</strong> Compare different __MODE__s side-by-side</li>
<li><strong>Persistent State:</strong> Selections saved in localStorage</li>
</ul>
</div>
<div class="section">
<h2>📄 Generated __MODE__s</h2>
<ul class="pages-list">
__PAGES_LIST__
</ul>
</div>
<div class="section">
<h2>📚 Next Steps</h2>
<ol>
<li>Open <code>compare.html</code> to explore all variants in matrix view</li>
<li>Use zoom and sync scroll controls to compare details</li>
<li>Select your preferred style×layout combinations</li>
<li>Export selections as JSON for implementation planning</li>
<li>Review implementation notes in <code>*-notes.md</code> files</li>
</ol>
</div>
</body>
</html>
INDEXEOF
# Build pages list HTML
PAGES_LIST_HTML=""
for page in "${PAGE_ARRAY[@]}"; do
page=$(echo "$page" | xargs)
VARIANT_COUNT=$((STYLE_VARIANTS * LAYOUT_VARIANTS))
PAGES_LIST_HTML+=" <li>\n"
PAGES_LIST_HTML+=" <strong>${page}</strong>\n"
PAGES_LIST_HTML+=" <span class=\"badge\">${STYLE_VARIANTS}×${LAYOUT_VARIANTS} = ${VARIANT_COUNT} variants</span>\n"
PAGES_LIST_HTML+=" </li>\n"
done
# Replace all placeholders in index.html
MODE_UPPER=$(echo "$MODE" | awk '{print toupper(substr($0,1,1)) tolower(substr($0,2))}')
sed -i "s|__RUN_ID__|${RUN_ID}|g" index.html
sed -i "s|__SESSION_ID__|${SESSION_ID}|g" index.html
sed -i "s|__TIMESTAMP__|${TIMESTAMP}|g" index.html
sed -i "s|__MODE__|${MODE_UPPER}|g" index.html
sed -i "s|__STYLE_VARIANTS__|${STYLE_VARIANTS}|g" index.html
sed -i "s|__LAYOUT_VARIANTS__|${LAYOUT_VARIANTS}|g" index.html
sed -i "s|__PAGE_COUNT__|${#PAGE_ARRAY[@]}|g" index.html
sed -i "s|__TOTAL_PROTOTYPES__|${TOTAL_PROTOTYPES}|g" index.html
sed -i "s|__PAGES_LIST__|${PAGES_LIST_HTML}|g" index.html
log_success "Generated: index.html"
# ============================================================================
# 2c. Generate PREVIEW.md
# ============================================================================
log_info "📄 Generating PREVIEW.md..."
cat > PREVIEW.md <<PREVIEWEOF
# UI Prototype Preview Guide
## Quick Start
1. Open \`index.html\` for overview and navigation
2. Open \`compare.html\` for interactive matrix comparison
3. Use browser developer tools to inspect responsive behavior
## Configuration
- **Exploration Mode:** ${MODE_UPPER}
- **Run ID:** ${RUN_ID}
- **Session ID:** ${SESSION_ID}
- **Style Variants:** ${STYLE_VARIANTS}
- **Layout Options:** ${LAYOUT_VARIANTS}
- **${MODE_UPPER}s:** ${PAGES}
- **Total Prototypes:** ${TOTAL_PROTOTYPES}
- **Generated:** ${TIMESTAMP}
## File Naming Convention
\`\`\`
{${MODE}}-style-{s}-layout-{l}.html
\`\`\`
**Example:** \`dashboard-style-1-layout-2.html\`
- ${MODE_UPPER}: dashboard
- Style: Design system 1
- Layout: Layout variant 2
## Interactive Features (compare.html)
### Matrix View
- **Grid Layout:** ${STYLE_VARIANTS}×${LAYOUT_VARIANTS} table with all prototypes visible
- **Synchronized Scroll:** All iframes scroll together (toggle with button)
- **Zoom Controls:** Adjust viewport scale (25%, 50%, 75%, 100%)
- **${MODE_UPPER} Selector:** Switch between different ${MODE}s instantly
### Prototype Actions
- **⭐ Selection:** Click star icon to mark favorites
- **⛶ Fullscreen:** View prototype in fullscreen overlay
- **↗ New Tab:** Open prototype in dedicated browser tab
### Selection Export
1. Select preferred prototypes using star icons
2. Click "Export Selection" button
3. Downloads JSON file: \`selection-${RUN_ID}.json\`
4. Use exported file for implementation planning
## Design System References
Each prototype references a specific style design system:
PREVIEWEOF
# Add style references
for s in $(seq 1 "$STYLE_VARIANTS"); do
cat >> PREVIEW.md <<STYLEEOF
### Style ${s}
- **Tokens:** \`../style-extraction/style-${s}/design-tokens.json\`
- **CSS Variables:** \`../style-extraction/style-${s}/tokens.css\`
- **Style Guide:** \`../style-extraction/style-${s}/style-guide.md\`
STYLEEOF
done
cat >> PREVIEW.md <<'FOOTEREOF'
## Responsive Testing
All prototypes are mobile-first responsive. Test at these breakpoints:
- **Mobile:** 375px - 767px
- **Tablet:** 768px - 1023px
- **Desktop:** 1024px+
Use browser DevTools responsive mode for testing.
## Accessibility Features
- Semantic HTML5 structure
- ARIA attributes for screen readers
- Keyboard navigation support
- Proper heading hierarchy
- Focus indicators
## Next Steps
1. **Review:** Open `compare.html` and explore all variants
2. **Select:** Mark preferred prototypes using star icons
3. **Export:** Download selection JSON for implementation
4. **Implement:** Use `/workflow:ui-design:update` to integrate selected designs
5. **Plan:** Run `/workflow:plan` to generate implementation tasks
---
**Generated by:** `ui-instantiate-prototypes.sh`
**Version:** 3.0 (auto-detect mode)
FOOTEREOF
log_success "Generated: PREVIEW.md"
# ============================================================================
# Completion Summary
# ============================================================================
echo ""
echo "========================================="
echo "✅ Generation Complete!"
echo "========================================="
echo ""
echo "📊 Summary:"
echo " Prototypes: ${total_generated} generated"
if [ $total_failed -gt 0 ]; then
echo " Failed: ${total_failed}"
fi
echo " Preview Files: compare.html, index.html, PREVIEW.md"
echo " Matrix: ${STYLE_VARIANTS}×${LAYOUT_VARIANTS} (${#PAGE_ARRAY[@]} ${MODE}s)"
echo " Total Files: ${TOTAL_PROTOTYPES} prototypes + preview files"
echo ""
echo "🌐 Next Steps:"
echo " 1. Open: ${BASE_PATH}/index.html"
echo " 2. Explore: ${BASE_PATH}/compare.html"
echo " 3. Review: ${BASE_PATH}/PREVIEW.md"
echo ""
echo "Performance: Template-based approach with ${STYLE_VARIANTS}× speedup"
echo "========================================="

View File

@@ -0,0 +1,333 @@
#!/bin/bash
# Update CLAUDE.md for modules with two strategies
# Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]
# strategy: single-layer|multi-layer
# module_path: Path to the module directory
# tool: gemini|qwen|codex (default: gemini)
# model: Model name (optional, uses tool defaults)
#
# Default Models:
# gemini: gemini-2.5-flash
# qwen: coder-model
# codex: gpt5-codex
#
# Strategies:
# single-layer: Upward aggregation
# - Read: Current directory code + child CLAUDE.md files
# - Generate: Single ./CLAUDE.md in current directory
# - Use: Large projects, incremental bottom-up updates
#
# multi-layer: Downward distribution
# - Read: All files in current and subdirectories
# - Generate: CLAUDE.md for each directory containing files
# - Use: Small projects, full documentation generation
#
# Features:
# - Minimal prompts based on unified template
# - Respects .gitignore patterns
# - Path-focused processing (script only cares about paths)
# - Template-driven generation
# Build exclusion filters from .gitignore
build_exclusion_filters() {
local filters=""
# Common system/cache directories to exclude
local system_excludes=(
".git" "__pycache__" "node_modules" ".venv" "venv" "env"
"dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
"coverage" ".nyc_output" "logs" "tmp" "temp"
)
for exclude in "${system_excludes[@]}"; do
filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
done
# Find and parse .gitignore (current dir first, then git root)
local gitignore_file=""
# Check current directory first
if [ -f ".gitignore" ]; then
gitignore_file=".gitignore"
else
# Try to find git root and check for .gitignore there
local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
gitignore_file="$git_root/.gitignore"
fi
fi
# Parse .gitignore if found
if [ -n "$gitignore_file" ]; then
while IFS= read -r line; do
# Skip empty lines and comments
[[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue
# Remove trailing slash and whitespace
line=$(echo "$line" | sed 's|/$||' | xargs)
# Skip wildcards patterns (too complex for simple find)
[[ "$line" =~ \* ]] && continue
# Add to filters
filters+=" -not -path '*/$line' -not -path '*/$line/*'"
done < "$gitignore_file"
fi
echo "$filters"
}
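# Example (illustrative): count files while honoring the exclusions
#   filters=$(build_exclusion_filters)
#   eval "find . -type f $filters" | wc -l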
# Scan directory structure and generate structured information
scan_directory_structure() {
local target_path="$1"
local strategy="$2"
if [ ! -d "$target_path" ]; then
echo "Directory not found: $target_path"
return 1
fi
local exclusion_filters=$(build_exclusion_filters)
local structure_info=""
# Get basic directory info
local dir_name=$(basename "$target_path")
local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
structure_info+="Directory: $dir_name\n"
structure_info+="Total files: $total_files\n"
structure_info+="Total directories: $total_dirs\n\n"
if [ "$strategy" = "multi-layer" ]; then
# For multi-layer: show all subdirectories with file counts
structure_info+="Subdirectories with files:\n"
while IFS= read -r dir; do
if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
local rel_path=${dir#$target_path/}
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
if [ $file_count -gt 0 ]; then
structure_info+=" - $rel_path/ ($file_count files)\n"
fi
fi
done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
else
# For single-layer: show direct children only
structure_info+="Direct subdirectories:\n"
while IFS= read -r dir; do
if [ -n "$dir" ]; then
local dir_name=$(basename "$dir")
local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
local has_claude=$([ -f "$dir/CLAUDE.md" ] && echo " [has CLAUDE.md]" || echo "")
structure_info+=" - $dir_name/ ($file_count files)$has_claude\n"
fi
done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
fi
# Show main file types in current directory
structure_info+="\nCurrent directory files:\n"
local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' \\) $exclusion_filters 2>/dev/null" | wc -l)
local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)
structure_info+=" - Code files: $code_files\n"
structure_info+=" - Config files: $config_files\n"
structure_info+=" - Documentation: $doc_files\n"
printf "%b" "$structure_info"
}
update_module_claude() {
local strategy="$1"
local module_path="$2"
local tool="${3:-gemini}"
local model="$4"
# Validate parameters
if [ -z "$strategy" ] || [ -z "$module_path" ]; then
echo "❌ Error: Strategy and module path are required"
echo "Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]"
echo "Strategies: single-layer|multi-layer"
return 1
fi
# Validate strategy
if [ "$strategy" != "single-layer" ] && [ "$strategy" != "multi-layer" ]; then
echo "❌ Error: Invalid strategy '$strategy'"
echo "Valid strategies: single-layer, multi-layer"
return 1
fi
if [ ! -d "$module_path" ]; then
echo "❌ Error: Directory '$module_path' does not exist"
return 1
fi
# Set default models if not specified
if [ -z "$model" ]; then
case "$tool" in
gemini)
model="gemini-2.5-flash"
;;
qwen)
model="coder-model"
;;
codex)
model="gpt5-codex"
;;
*)
model=""
;;
esac
fi
# Build exclusion filters from .gitignore
local exclusion_filters=$(build_exclusion_filters)
# Check if directory has files (excluding gitignored paths)
local file_count=$(eval "find \"$module_path\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
if [ $file_count -eq 0 ]; then
echo "⚠️ Skipping '$module_path' - no files found (after .gitignore filtering)"
return 0
fi
# Use unified template for all modules
local template_path="$HOME/.claude/workflows/cli-templates/prompts/memory/02-document-module-structure.txt"
# Read template content directly
local template_content=""
if [ -f "$template_path" ]; then
template_content=$(cat "$template_path")
echo " 📋 Loaded template: $(wc -l < "$template_path") lines"
else
echo " ⚠️ Template not found: $template_path"
echo " Using fallback template..."
template_content="Create comprehensive CLAUDE.md documentation following standard structure with Purpose, Structure, Components, Dependencies, Integration, and Implementation sections."
fi
# Scan directory structure first
echo " 🔍 Scanning directory structure..."
local structure_info=$(scan_directory_structure "$module_path" "$strategy")
# Prepare logging info
local module_name=$(basename "$module_path")
echo "⚡ Updating: $module_path"
echo " Strategy: $strategy | Tool: $tool | Model: $model | Files: $file_count"
echo " Template: $(basename "$template_path") ($(echo "$template_content" | wc -l) lines)"
echo " Structure: Scanned $(echo "$structure_info" | wc -l) lines of structure info"
# Build minimal strategy-specific prompt with explicit paths and structure info
local final_prompt=""
if [ "$strategy" = "multi-layer" ]; then
# multi-layer strategy: read all, generate for each directory
final_prompt="Directory Structure Analysis:
$structure_info
Read: @**/*
Generate CLAUDE.md files:
- Primary: ./CLAUDE.md (current directory)
- Additional: CLAUDE.md in each subdirectory containing files
Template Guidelines:
$template_content
Instructions:
- Work bottom-up: deepest directories first
- Parent directories reference children
- Each CLAUDE.md file must be in its respective directory
- Follow the template guidelines above for consistent structure
- Use the structure analysis to understand directory hierarchy"
else
# single-layer strategy: read current + child CLAUDE.md, generate current only
final_prompt="Directory Structure Analysis:
$structure_info
Read: @*/CLAUDE.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.md @*.json @*.yaml @*.yml
Generate single file: ./CLAUDE.md
Template Guidelines:
$template_content
Instructions:
- Create exactly one CLAUDE.md file in the current directory
- Reference child CLAUDE.md files, do not duplicate their content
- Follow the template guidelines above for consistent structure
- Use the structure analysis to understand the current directory context"
fi
# Execute update
local start_time=$(date +%s)
echo " 🔄 Starting update..."
if cd "$module_path" 2>/dev/null; then
local tool_result=0
# Execute with selected tool
# NOTE: Model parameter (-m) is placed AFTER the prompt
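# e.g. (illustrative): gemini -p "<final_prompt>" -m gemini-2.5-flash --yolo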
case "$tool" in
qwen)
if [ "$model" = "coder-model" ]; then
# coder-model is default, -m is optional
qwen -p "$final_prompt" --yolo 2>&1
else
qwen -p "$final_prompt" -m "$model" --yolo 2>&1
fi
tool_result=$?
;;
codex)
codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
tool_result=$?
;;
gemini)
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
*)
echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
tool_result=$?
;;
esac
if [ $tool_result -eq 0 ]; then
local end_time=$(date +%s)
local duration=$((end_time - start_time))
echo " ✅ Completed in ${duration}s"
cd - > /dev/null
return 0
else
echo " ❌ Update failed for $module_path"
cd - > /dev/null
return 1
fi
else
echo " ❌ Cannot access directory: $module_path"
return 1
fi
}
# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
# Show help if no arguments or help requested
if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
echo "Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]"
echo ""
echo "Strategies:"
echo " single-layer - Read current dir code + child CLAUDE.md, generate ./CLAUDE.md"
echo " multi-layer - Read all files, generate CLAUDE.md for each directory"
echo ""
echo "Tools: gemini (default), qwen, codex"
echo "Models: Use tool defaults if not specified"
echo ""
echo "Examples:"
echo " ./update_module_claude.sh single-layer ./src/auth"
echo " ./update_module_claude.sh multi-layer ./components gemini gemini-2.5-flash"
exit 0
fi
update_module_claude "$@"
fi

View File

@@ -1,584 +0,0 @@
# Mermaid Utilities Library
Shared utilities for generating and validating Mermaid diagrams across all analysis skills.
## Sanitization Functions
### sanitizeId
Convert any text to a valid Mermaid node ID.
```javascript
/**
* Sanitize text to valid Mermaid node ID
* - Only alphanumeric and underscore allowed
* - Cannot start with number
* - Truncates to 50 chars max
*
* @param {string} text - Input text
* @returns {string} - Valid Mermaid ID
*/
function sanitizeId(text) {
if (!text) return '_empty';
return text
.replace(/[^a-zA-Z0-9_\u4e00-\u9fa5]/g, '_') // Allow Chinese chars
.replace(/^[0-9]/, '_$&') // Prefix number with _
.replace(/_+/g, '_') // Collapse multiple _
.substring(0, 50); // Limit length
}
// Examples:
// sanitizeId("User-Service") → "User_Service"
// sanitizeId("3rdParty") → "_3rdParty"
// sanitizeId("用户服务") → "用户服务"
```
### escapeLabel
Escape special characters for Mermaid labels.
```javascript
/**
* Escape special characters in Mermaid labels
* Uses HTML entity encoding for problematic chars
*
* @param {string} text - Label text
* @returns {string} - Escaped label
*/
function escapeLabel(text) {
if (!text) return '';
return text
.replace(/"/g, "'") // Avoid quote issues
.replace(/\(/g, '#40;') // (
.replace(/\)/g, '#41;') // )
.replace(/\{/g, '#123;') // {
.replace(/\}/g, '#125;') // }
.replace(/\[/g, '#91;') // [
.replace(/\]/g, '#93;') // ]
.replace(/</g, '#60;') // <
.replace(/>/g, '#62;') // >
.replace(/\|/g, '#124;') // |
.substring(0, 80); // Limit length
}
// Examples:
// escapeLabel("Process(data)") → "Process#40;data#41;"
// escapeLabel("Check {valid?}") → "Check #123;valid?#125;"
```
### sanitizeType
Sanitize type names for class diagrams.
```javascript
/**
* Sanitize type names for Mermaid classDiagram
* Removes generics syntax that causes issues
*
* @param {string} type - Type name
* @returns {string} - Sanitized type
*/
function sanitizeType(type) {
if (!type) return 'any';
return type
.replace(/<[^>]*>/g, '') // Remove generics <T>
.replace(/\|/g, ' or ') // Union types
.replace(/&/g, ' and ') // Intersection types
.replace(/\[\]/g, 'Array') // Array notation
.substring(0, 30);
}
// Examples:
// sanitizeType("Array<string>") → "Array"
// sanitizeType("string | number") → "string or number"
```
## Diagram Generation Functions
### generateFlowchartNode
Generate a flowchart node with proper shape.
```javascript
/**
* Generate flowchart node with shape
*
* @param {string} id - Node ID
* @param {string} label - Display label
* @param {string} type - Node type: start|end|process|decision|io|subroutine
* @returns {string} - Mermaid node definition
*/
function generateFlowchartNode(id, label, type = 'process') {
const safeId = sanitizeId(id);
const safeLabel = escapeLabel(label);
const shapes = {
start: `${safeId}(["${safeLabel}"])`, // Stadium shape
end: `${safeId}(["${safeLabel}"])`, // Stadium shape
process: `${safeId}["${safeLabel}"]`, // Rectangle
decision: `${safeId}{"${safeLabel}"}`, // Diamond
io: `${safeId}[/"${safeLabel}"/]`, // Parallelogram
subroutine: `${safeId}[["${safeLabel}"]]`, // Subroutine
database: `${safeId}[("${safeLabel}")]`, // Cylinder
manual: `${safeId}[/"${safeLabel}"\\]` // Trapezoid
};
return shapes[type] || shapes.process;
}
```
### generateFlowchartEdge
Generate a flowchart edge with optional label.
```javascript
/**
* Generate flowchart edge
*
* @param {string} from - Source node ID
* @param {string} to - Target node ID
* @param {string} label - Edge label (optional)
* @param {string} style - Edge style: solid|dashed|thick
* @returns {string} - Mermaid edge definition
*/
function generateFlowchartEdge(from, to, label = '', style = 'solid') {
const safeFrom = sanitizeId(from);
const safeTo = sanitizeId(to);
const safeLabel = label ? `|"${escapeLabel(label)}"|` : '';
const arrows = {
solid: '-->',
dashed: '-.->',
thick: '==>'
};
const arrow = arrows[style] || arrows.solid;
return ` ${safeFrom} ${arrow}${safeLabel} ${safeTo}`;
}
```
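For a quick sanity check, here is a sketch of the two helpers above (node names are hypothetical; expected return values shown as comments):
```javascript
// Illustrative calls against generateFlowchartNode/generateFlowchartEdge
generateFlowchartNode("check-user", "User exists?", "decision");
// → 'check_user{"User exists?"}'
generateFlowchartEdge("check-user", "verify-pwd", "yes", "dashed");
// → '  check_user -.->|"yes"| verify_pwd'
```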
### generateAlgorithmFlowchart (Enhanced)
Generate algorithm flowchart with branch/loop support.
```javascript
/**
* Generate algorithm flowchart with decision support
*
* @param {Object} algorithm - Algorithm definition
* - name: Algorithm name
* - inputs: [{name, type}]
* - outputs: [{name, type}]
* - steps: [{id, description, type, next: [id], conditions: [text]}]
* @returns {string} - Complete Mermaid flowchart
*/
function generateAlgorithmFlowchart(algorithm) {
let mermaid = 'flowchart TD\n';
// Start node
mermaid += ` START(["开始: ${escapeLabel(algorithm.name)}"])\n`;
// Input node (if has inputs)
if (algorithm.inputs?.length > 0) {
const inputList = algorithm.inputs.map(i => `${i.name}: ${i.type}`).join(', ');
mermaid += ` INPUT[/"Input: ${escapeLabel(inputList)}"/]\n`;
mermaid += ` START --> INPUT\n`;
}
// Process nodes
const steps = algorithm.steps || [];
for (const step of steps) {
const nodeId = sanitizeId(step.id || `STEP_${step.step_num}`);
if (step.type === 'decision') {
mermaid += ` ${nodeId}{"${escapeLabel(step.description)}"}\n`;
} else if (step.type === 'io') {
mermaid += ` ${nodeId}[/"${escapeLabel(step.description)}"/]\n`;
} else if (step.type === 'loop_start') {
mermaid += ` ${nodeId}[["Loop: ${escapeLabel(step.description)}"]]\n`;
} else {
mermaid += ` ${nodeId}["${escapeLabel(step.description)}"]\n`;
}
}
// Output node
const outputDesc = algorithm.outputs?.map(o => o.name).join(', ') || 'result';
mermaid += ` OUTPUT[/"Output: ${escapeLabel(outputDesc)}"/]\n`;
mermaid += ` END_(["End"])\n`;
// Connect first step to input/start
if (steps.length > 0) {
const firstStep = sanitizeId(steps[0].id || 'STEP_1');
if (algorithm.inputs?.length > 0) {
mermaid += ` INPUT --> ${firstStep}\n`;
} else {
mermaid += ` START --> ${firstStep}\n`;
}
}
// Connect steps based on next array
for (const step of steps) {
const nodeId = sanitizeId(step.id || `STEP_${step.step_num}`);
if (step.next && step.next.length > 0) {
step.next.forEach((nextId, index) => {
const safeNextId = sanitizeId(nextId);
const condition = step.conditions?.[index];
if (condition) {
mermaid += ` ${nodeId} -->|"${escapeLabel(condition)}"| ${safeNextId}\n`;
} else {
mermaid += ` ${nodeId} --> ${safeNextId}\n`;
}
});
} else if (!step.type?.includes('end')) {
// Default: connect to next step or output
const stepIndex = steps.indexOf(step);
if (stepIndex < steps.length - 1) {
const nextStep = sanitizeId(steps[stepIndex + 1].id || `STEP_${stepIndex + 2}`);
mermaid += ` ${nodeId} --> ${nextStep}\n`;
} else {
mermaid += ` ${nodeId} --> OUTPUT\n`;
}
}
}
// Connect output to end
mermaid += ` OUTPUT --> END_\n`;
return mermaid;
}
```
## Diagram Validation
### validateMermaidSyntax
Comprehensive Mermaid syntax validation.
```javascript
/**
* Validate Mermaid diagram syntax
*
* @param {string} content - Mermaid diagram content
* @returns {Object} - {valid: boolean, issues: string[]}
*/
function validateMermaidSyntax(content) {
const issues = [];
// Check 1: Diagram type declaration
if (!content.match(/^(graph|flowchart|classDiagram|sequenceDiagram|stateDiagram|erDiagram|gantt|pie|mindmap)/m)) {
issues.push('Missing diagram type declaration');
}
// Check 2: Undefined values
if (content.includes('undefined') || content.includes('null')) {
issues.push('Contains undefined/null values');
}
// Check 3: Invalid arrow syntax
if (content.match(/-->\s*-->/)) {
issues.push('Double arrow syntax error');
}
// Check 4: Unescaped special characters in labels
const labelMatches = content.match(/\["[^"]*[(){}[\]<>][^"]*"\]/g);
if (labelMatches?.some(m => !m.includes('#'))) {
issues.push('Unescaped special characters in labels');
}
// Check 5: Node ID starts with number
if (content.match(/\n\s*[0-9][a-zA-Z0-9_]*[\[\({]/)) {
issues.push('Node ID cannot start with number');
}
// Check 6: Nested subgraph syntax error
if (content.match(/subgraph\s+\S+\s*\n[^e]*subgraph/)) {
// This is actually valid, only flag if brackets don't match
const subgraphCount = (content.match(/subgraph/g) || []).length;
const endCount = (content.match(/\bend\b/g) || []).length;
if (subgraphCount > endCount) {
issues.push('Unbalanced subgraph/end blocks');
}
}
// Check 7: Invalid arrow type for diagram type
const diagramType = content.match(/^(graph|flowchart|classDiagram|sequenceDiagram)/m)?.[1];
if (diagramType === 'classDiagram' && content.includes('-->|')) {
issues.push('Invalid edge label syntax for classDiagram');
}
// Check 8: Empty node labels
if (content.match(/\[""\]|\{\}|\(\)/)) {
issues.push('Empty node labels detected');
}
// Check 9: Reserved keywords as IDs
const reserved = ['end', 'graph', 'subgraph', 'direction', 'class', 'click'];
for (const keyword of reserved) {
const pattern = new RegExp(`\\n\\s*${keyword}\\s*[\\[\\(\\{]`, 'i');
if (content.match(pattern)) {
issues.push(`Reserved keyword "${keyword}" used as node ID`);
}
}
// Check 10: Line length (Mermaid has issues with very long lines)
const lines = content.split('\n');
for (let i = 0; i < lines.length; i++) {
if (lines[i].length > 500) {
issues.push(`Line ${i + 1} exceeds 500 characters`);
}
}
return {
valid: issues.length === 0,
issues
};
}
```
### validateDiagramDirectory
Validate all diagrams in a directory.
```javascript
/**
* Validate all Mermaid diagrams in directory
*
* @param {string} diagramDir - Path to diagrams directory
* @returns {Object[]} - Array of {file, valid, issues}
*/
function validateDiagramDirectory(diagramDir) {
const files = Glob(`${diagramDir}/*.mmd`);
const results = [];
for (const file of files) {
const content = Read(file);
const validation = validateMermaidSyntax(content);
results.push({
file: file.split('/').pop(),
path: file,
valid: validation.valid,
issues: validation.issues,
lines: content.split('\n').length
});
}
return results;
}
```
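A minimal consumption sketch (the directory path is illustrative):
```javascript
// Report only the diagrams that failed validation
const results = validateDiagramDirectory('.workflow/diagrams');
for (const r of results.filter(x => !x.valid)) {
console.log(`${r.file}: ${r.issues.join('; ')}`);
}
```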
## Class Diagram Utilities
### generateClassDiagram
Generate class diagram with relationships.
```javascript
/**
* Generate class diagram from analysis data
*
* @param {Object} analysis - Data structure analysis
* - entities: [{name, type, properties, methods}]
* - relationships: [{from, to, type, label}]
* @param {Object} options - Generation options
* - maxClasses: Max classes to include (default: 15)
* - maxProperties: Max properties per class (default: 8)
* - maxMethods: Max methods per class (default: 6)
* @returns {string} - Mermaid classDiagram
*/
function generateClassDiagram(analysis, options = {}) {
const maxClasses = options.maxClasses || 15;
const maxProperties = options.maxProperties || 8;
const maxMethods = options.maxMethods || 6;
let mermaid = 'classDiagram\n';
const entities = (analysis.entities || []).slice(0, maxClasses);
// Generate classes
for (const entity of entities) {
const className = sanitizeId(entity.name);
mermaid += ` class ${className} {\n`;
// Properties
for (const prop of (entity.properties || []).slice(0, maxProperties)) {
const vis = {public: '+', private: '-', protected: '#'}[prop.visibility] || '+';
const type = sanitizeType(prop.type);
mermaid += ` ${vis}${type} ${prop.name}\n`;
}
// Methods
for (const method of (entity.methods || []).slice(0, maxMethods)) {
const vis = {public: '+', private: '-', protected: '#'}[method.visibility] || '+';
const params = (method.params || []).map(p => p.name).join(', ');
const returnType = sanitizeType(method.returnType || 'void');
mermaid += ` ${vis}${method.name}(${params}) ${returnType}\n`;
}
mermaid += ' }\n';
// Add stereotype if applicable
if (entity.type === 'interface') {
mermaid += ` <<interface>> ${className}\n`;
} else if (entity.type === 'abstract') {
mermaid += ` <<abstract>> ${className}\n`;
}
}
// Generate relationships
const arrows = {
inheritance: '--|>',
implementation: '..|>',
composition: '*--',
aggregation: 'o--',
association: '-->',
dependency: '..>'
};
for (const rel of (analysis.relationships || [])) {
const from = sanitizeId(rel.from);
const to = sanitizeId(rel.to);
const arrow = arrows[rel.type] || '-->';
const label = rel.label ? ` : ${escapeLabel(rel.label)}` : '';
// Only include if both entities exist
if (entities.some(e => sanitizeId(e.name) === from) &&
entities.some(e => sanitizeId(e.name) === to)) {
mermaid += ` ${from} ${arrow} ${to}${label}\n`;
}
}
return mermaid;
}
```
## Sequence Diagram Utilities
### generateSequenceDiagram
Generate sequence diagram from scenario.
```javascript
/**
* Generate sequence diagram from scenario
*
* @param {Object} scenario - Sequence scenario
* - name: Scenario name
* - actors: [{id, name, type}]
* - messages: [{from, to, description, type}]
* - blocks: [{type, condition, messages}]
* @returns {string} - Mermaid sequenceDiagram
*/
function generateSequenceDiagram(scenario) {
let mermaid = 'sequenceDiagram\n';
// Title
if (scenario.name) {
mermaid += ` title ${escapeLabel(scenario.name)}\n`;
}
// Participants
for (const actor of scenario.actors || []) {
const actorType = actor.type === 'external' ? 'actor' : 'participant';
mermaid += ` ${actorType} ${sanitizeId(actor.id)} as ${escapeLabel(actor.name)}\n`;
}
mermaid += '\n';
// Messages
for (const msg of scenario.messages || []) {
const from = sanitizeId(msg.from);
const to = sanitizeId(msg.to);
const desc = escapeLabel(msg.description);
let arrow;
switch (msg.type) {
case 'async': arrow = '->>'; break;
case 'response': arrow = '-->>'; break;
case 'create': arrow = '->>+'; break;
case 'destroy': arrow = '->>-'; break;
case 'self': arrow = '->>'; break;
default: arrow = '->>';
}
mermaid += ` ${from}${arrow}${to}: ${desc}\n`;
// Activation
if (msg.activate) {
mermaid += ` activate ${to}\n`;
}
if (msg.deactivate) {
mermaid += ` deactivate ${from}\n`;
}
// Notes
if (msg.note) {
mermaid += ` Note over ${to}: ${escapeLabel(msg.note)}\n`;
}
}
// Blocks (loops, alt, opt)
for (const block of scenario.blocks || []) {
switch (block.type) {
case 'loop':
mermaid += ` loop ${escapeLabel(block.condition)}\n`;
break;
case 'alt':
mermaid += ` alt ${escapeLabel(block.condition)}\n`;
break;
case 'opt':
mermaid += ` opt ${escapeLabel(block.condition)}\n`;
break;
}
for (const m of block.messages || []) {
mermaid += ` ${sanitizeId(m.from)}->>${sanitizeId(m.to)}: ${escapeLabel(m.description)}\n`;
}
mermaid += ' end\n';
}
return mermaid;
}
```
## Usage Examples
### Example 1: Algorithm with Branches
```javascript
const algorithm = {
name: "用户认证流程",
inputs: [{name: "credentials", type: "Object"}],
outputs: [{name: "token", type: "JWT"}],
steps: [
{id: "validate", description: "验证输入格式", type: "process"},
{id: "check_user", description: "用户是否存在?", type: "decision",
next: ["verify_pwd", "error_user"], conditions: ["是", "否"]},
{id: "verify_pwd", description: "验证密码", type: "process"},
{id: "pwd_ok", description: "密码正确?", type: "decision",
next: ["gen_token", "error_pwd"], conditions: ["是", "否"]},
{id: "gen_token", description: "生成 JWT Token", type: "process"},
{id: "error_user", description: "返回用户不存在", type: "io"},
{id: "error_pwd", description: "返回密码错误", type: "io"}
]
};
const flowchart = generateAlgorithmFlowchart(algorithm);
```
### Example 2: Validate Before Output
```javascript
const diagram = generateClassDiagram(analysis);
const validation = validateMermaidSyntax(diagram);
if (!validation.valid) {
console.log("Diagram has issues:", validation.issues);
// Fix issues or regenerate
} else {
Write(`${outputDir}/class-diagram.mmd`, diagram);
}
```
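### Example 3: Minimal Sequence Scenario
A small input sketch for `generateSequenceDiagram` (actors and messages are illustrative):
```javascript
const scenario = {
name: "Token refresh",
actors: [
{id: "client", name: "Client", type: "external"},
{id: "auth", name: "Auth Service", type: "service"}
],
messages: [
{from: "client", to: "auth", description: "POST /refresh", type: "async"},
{from: "auth", to: "client", description: "200 + new JWT", type: "response"}
]
};
const seq = generateSequenceDiagram(scenario);
// seq begins: sequenceDiagram / title Token refresh / actor client as Client ...
```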

View File

@@ -1,13 +1,13 @@
---
name: command-guide
description: Workflow command guide for Claude Code Workflow (78 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
description: Workflow command guide for Claude DMS3 (78 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
allowed-tools: Read, Grep, Glob, AskUserQuestion
version: 5.8.0
---
# Command Guide Skill
Comprehensive command guide for Claude Code Workflow (CCW) system covering 78 commands across 5 categories (workflow, cli, memory, task, general).
Comprehensive command guide for Claude DMS3 workflow system covering 78 commands across 5 categories (workflow, cli, memory, task, general).
## 🆕 What's New in v5.8.0
@@ -200,21 +200,21 @@ Comprehensive command guide for Claude Code Workflow (CCW) system covering 78 co
**Complex Query** (CLI-assisted analysis):
1. **Detect complexity indicators** (multi-command comparison, workflow analysis, best-practice questions)
2. **Design targeted analysis prompt** for gemini/qwen via CCW:
2. **Design targeted analysis prompt** for gemini/qwen:
- Frame user's question precisely
- Specify required analysis depth
- Request structured comparison/synthesis
```bash
ccw cli -p "
gemini -p "
PURPOSE: Analyze command documentation to answer user query
TASK: [extracted user question with context]
MODE: analysis
CONTEXT: @**/*
EXPECTED: Comprehensive answer with examples and recommendations
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on practical usage | analysis=READ-ONLY
" --tool gemini --cd ~/.claude/skills/command-guide/reference
" -m gemini-3-pro-preview-11-2025 --include-directories ~/.claude/skills/command-guide/reference
```
Note: Use `--cd` with absolute path `~/.claude/skills/command-guide/reference` for reference documentation access
Note: Use absolute path `~/.claude/skills/command-guide/reference` for reference documentation access
3. **Process and integrate CLI analysis**:
- Extract key insights from CLI output
- Add context-specific examples
@@ -385,4 +385,4 @@ This SKILL documentation is kept in sync with command implementations through a
- 4 issue templates for standardized problem reporting
- CLI-assisted complex query analysis with gemini/qwen integration
**Maintainer**: CCW Team
**Maintainer**: Claude DMS3 Team

View File

@@ -112,7 +112,7 @@
{
"name": "tech-research",
"command": "/memory:tech-research",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
@@ -509,13 +509,24 @@
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"arguments": "[--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
"difficulty": "Intermediate",
"file_path": "workflow/session/start.md"
},
{
"name": "workflow:status",
"command": "/workflow:status",
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
"arguments": "[optional: --project|task-id|--validate|--dashboard]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Beginner",
"file_path": "workflow/status.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",

View File

@@ -133,7 +133,7 @@
{
"name": "tech-research",
"command": "/memory:tech-research",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
@@ -358,6 +358,17 @@
"difficulty": "Intermediate",
"file_path": "workflow/review.md"
},
{
"name": "workflow:status",
"command": "/workflow:status",
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
"arguments": "[optional: --project|task-id|--validate|--dashboard]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Beginner",
"file_path": "workflow/status.md"
},
{
"name": "tdd-plan",
"command": "/workflow:tdd-plan",
@@ -586,7 +597,7 @@
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"arguments": "[--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",

View File

@@ -36,7 +36,7 @@
{
"name": "tech-research",
"command": "/memory:tech-research",
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
"description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
"category": "memory",
"subcategory": null,
@@ -224,7 +224,7 @@
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"arguments": "[--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",
@@ -713,6 +713,17 @@
"usage_scenario": "session-management",
"difficulty": "Intermediate",
"file_path": "workflow/session/resume.md"
},
{
"name": "workflow:status",
"command": "/workflow:status",
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
"arguments": "[optional: --project|task-id|--validate|--dashboard]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Beginner",
"file_path": "workflow/status.md"
}
],
"testing": [

View File

@@ -43,11 +43,22 @@
"difficulty": "Intermediate",
"file_path": "workflow/execute.md"
},
{
"name": "workflow:status",
"command": "/workflow:status",
"description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
"arguments": "[optional: --project|task-id|--validate|--dashboard]",
"category": "workflow",
"subcategory": null,
"usage_scenario": "session-management",
"difficulty": "Beginner",
"file_path": "workflow/status.md"
},
{
"name": "start",
"command": "/workflow:session:start",
"description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
"arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
"arguments": "[--auto|--new] [optional: task description for new session]",
"category": "workflow",
"subcategory": "session",
"usage_scenario": "general",

View File

@@ -16,9 +16,11 @@ description: |
color: yellow
---
You are a pure execution agent specialized in creating actionable implementation plans. You receive requirements and control flags from the command layer and execute planning tasks without complex decision-making logic.
## Overview
**Agent Role**: Pure execution agent that transforms user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria. Receives requirements and control flags from the command layer and executes planning tasks without complex decision-making logic.
**Agent Role**: Transform user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria.
**Core Capabilities**:
- Load and synthesize context from multiple sources (session metadata, context packages, brainstorming artifacts)
@@ -31,7 +33,7 @@ color: yellow
---
## 1. Input & Execution
## 1. Execution Process
### 1.1 Input Processing
@@ -48,7 +50,7 @@ color: yellow
- **Control flags**: DEEP_ANALYSIS_REQUIRED, etc.
- **Task requirements**: Direct task description
### 1.2 Execution Flow
### 1.2 Two-Phase Execution Flow
#### Phase 1: Context Loading & Assembly
@@ -86,27 +88,6 @@ color: yellow
6. Assess task complexity (simple/medium/complex)
```
**MCP Integration** (when `mcp_capabilities` available):
```javascript
// Exa Code Context (mcp_capabilities.exa_code = true)
mcp__exa__get_code_context_exa(
query="TypeScript OAuth2 JWT authentication patterns",
tokensNum="dynamic"
)
// Integration in flow_control.pre_analysis
{
"step": "local_codebase_exploration",
"action": "Explore codebase structure",
"commands": [
"bash(rg '^(function|class|interface).*[task_keyword]' --type ts -n --max-count 15)",
"bash(find . -name '*[task_keyword]*' -type f | grep -v node_modules | head -10)"
],
"output_to": "codebase_structure"
}
```
**Context Package Structure** (fields defined by context-search-agent):
**Always Present**:
@@ -188,6 +169,30 @@ if (contextPackage.brainstorm_artifacts?.role_analyses?.length > 0) {
5. Update session state for execution readiness
```
### 1.3 MCP Integration Guidelines
**Exa Code Context** (`mcp_capabilities.exa_code = true`):
```javascript
// Get best practices and examples
mcp__exa__get_code_context_exa(
query="TypeScript OAuth2 JWT authentication patterns",
tokensNum="dynamic"
)
```
**Integration in flow_control.pre_analysis**:
```json
{
"step": "local_codebase_exploration",
"action": "Explore codebase structure",
"commands": [
"bash(rg '^(function|class|interface).*[task_keyword]' --type ts -n --max-count 15)",
"bash(find . -name '*[task_keyword]*' -type f | grep -v node_modules | head -10)"
],
"output_to": "codebase_structure"
}
```
---
## 2. Output Specifications
@@ -203,69 +208,15 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
"id": "IMPL-N",
"title": "Descriptive task name",
"status": "pending|active|completed|blocked",
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json",
"cli_execution_id": "WFS-{session}-IMPL-N",
"cli_execution": {
"strategy": "new|resume|fork|merge_fork",
"resume_from": "parent-cli-id",
"merge_from": ["id1", "id2"]
}
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
}
```
**Field Descriptions**:
- `id`: Task identifier
- Single module format: `IMPL-N` (e.g., IMPL-001, IMPL-002)
- Multi-module format: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1, IMPL-C1)
- Prefix: A, B, C... (assigned by module detection order)
- Sequence: 1, 2, 3... (per-module increment)
- `id`: Task identifier (format: `IMPL-N`)
- `title`: Descriptive task name summarizing the work
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
- `cli_execution_id`: Unique CLI conversation ID (format: `{session_id}-{task_id}`)
- `cli_execution`: CLI execution strategy based on task dependencies
- `strategy`: Execution pattern (`new`, `resume`, `fork`, `merge_fork`)
- `resume_from`: Parent task's cli_execution_id (for resume/fork)
- `merge_from`: Array of parent cli_execution_ids (for merge_fork)
**CLI Execution Strategy Rules** (MANDATORY - apply to all tasks):
| Dependency Pattern | Strategy | CLI Command Pattern |
|--------------------|----------|---------------------|
| No `depends_on` | `new` | `--id {cli_execution_id}` |
| 1 parent, parent has 1 child | `resume` | `--resume {resume_from}` |
| 1 parent, parent has N children | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| N parents | `merge_fork` | `--resume {merge_from.join(',')} --id {cli_execution_id}` |
**Strategy Selection Algorithm**:
```javascript
function computeCliStrategy(task, allTasks) {
const deps = task.context?.depends_on || []
const childCount = allTasks.filter(t =>
t.context?.depends_on?.includes(task.id)
).length
if (deps.length === 0) {
return { strategy: "new" }
} else if (deps.length === 1) {
const parentTask = allTasks.find(t => t.id === deps[0])
const parentChildCount = allTasks.filter(t =>
t.context?.depends_on?.includes(deps[0])
).length
if (parentChildCount === 1) {
return { strategy: "resume", resume_from: parentTask.cli_execution_id }
} else {
return { strategy: "fork", resume_from: parentTask.cli_execution_id }
}
} else {
const mergeFrom = deps.map(depId =>
allTasks.find(t => t.id === depId).cli_execution_id
)
return { strategy: "merge_fork", merge_from: mergeFrom }
}
}
```
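As a sanity check, a hypothetical four-task graph exercises every branch of the algorithm (session and task IDs are illustrative):
```javascript
// IMPL-1 has two children, so both children fork; IMPL-4 merges both branches
const tasks = [
  { id: "IMPL-1", cli_execution_id: "WFS-demo-IMPL-1", context: {} },
  { id: "IMPL-2", cli_execution_id: "WFS-demo-IMPL-2", context: { depends_on: ["IMPL-1"] } },
  { id: "IMPL-3", cli_execution_id: "WFS-demo-IMPL-3", context: { depends_on: ["IMPL-1"] } },
  { id: "IMPL-4", cli_execution_id: "WFS-demo-IMPL-4", context: { depends_on: ["IMPL-2", "IMPL-3"] } }
]
tasks.map(t => computeCliStrategy(t, tasks))
// → [ {strategy: "new"},
//     {strategy: "fork", resume_from: "WFS-demo-IMPL-1"},
//     {strategy: "fork", resume_from: "WFS-demo-IMPL-1"},
//     {strategy: "merge_fork", merge_from: ["WFS-demo-IMPL-2", "WFS-demo-IMPL-3"]} ]
```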
#### Meta Object
@@ -274,14 +225,7 @@ function computeCliStrategy(task, allTasks) {
"meta": {
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
"execution_group": "parallel-abc123|null",
"module": "frontend|backend|shared|null",
"execution_config": {
"method": "agent|hybrid|cli",
"cli_tool": "codex|gemini|qwen|auto",
"enable_resume": true,
"previous_cli_id": "string|null"
}
"execution_group": "parallel-abc123|null"
}
}
```
@@ -290,12 +234,6 @@ function computeCliStrategy(task, allTasks) {
- `type`: Task category - `feature` (new functionality), `bugfix` (fix defects), `refactor` (restructure code), `test-gen` (generate tests), `test-fix` (fix failing tests), `docs` (documentation)
- `agent`: Assigned agent for execution
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
- `execution_config`: CLI execution settings (from userConfig in task-generate-agent)
- `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
- `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, or `auto`
- `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
- `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
@@ -453,7 +391,7 @@ function computeCliStrategy(task, allTasks) {
// Pattern: Project structure analysis
{
"step": "analyze_project_architecture",
"commands": ["bash(ccw tool exec get_modules_by_depth '{}')"],
"commands": ["bash(~/.claude/scripts/get_modules_by_depth.sh)"],
"output_to": "project_architecture"
},
@@ -470,14 +408,14 @@ function computeCliStrategy(task, allTasks) {
// Pattern: Gemini CLI deep analysis
{
"step": "gemini_analyze_[aspect]",
"command": "ccw cli -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY' --tool gemini --mode analysis --cd [path]",
"command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
"output_to": "analysis_result"
},
// Pattern: Qwen CLI analysis (fallback/alternative)
{
"step": "qwen_analyze_[aspect]",
"command": "ccw cli -p '[similar to gemini pattern]' --tool qwen --mode analysis --cd [path]",
"command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
"output_to": "analysis_result"
},
@@ -518,7 +456,7 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
4. **Command Composition Patterns**:
- **Single command**: `bash([simple_search])`
- **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
- **CLI analysis**: `ccw cli -p '[prompt]' --tool gemini --mode analysis --cd [path]`
- **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
- **MCP integration**: `mcp__[tool]__[function]([params])`
**Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
@@ -540,12 +478,11 @@ The `implementation_approach` supports **two execution modes** based on the pres
- Specified command executes the step directly
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
- **Required fields**: Same as default mode **PLUS** `command`, `resume_from` (optional)
- **Command patterns** (with resume support):
- `ccw cli -p '[prompt]' --tool codex --mode write --cd [path]`
- `ccw cli -p '[prompt]' --resume ${previousCliId} --tool codex --mode write` (resume from previous)
- `ccw cli -p '[prompt]' --tool gemini --mode write --cd [path]` (write mode)
- **Resume mechanism**: When step depends on previous CLI execution, include `--resume` with previous execution ID
- **Required fields**: Same as default mode **PLUS** `command`
- **Command patterns**:
- `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
- `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
- `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
**Semantic CLI Tool Selection**:
@@ -562,12 +499,12 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
**Task-Based Selection** (when no explicit user preference):
- **Implementation/coding**: Codex preferred for autonomous development
- **Analysis/exploration**: Gemini preferred for large context analysis
- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
- **Documentation**: Gemini/Qwen with write mode (`--approval-mode yolo`)
- **Testing**: Depends on complexity - simple=agent, complex=Codex
**Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
- Agent orchestrates task execution
- When step has `command` field, agent executes it via CCW CLI
- When step has `command` field, agent executes it via Bash
- When step has no `command` field, agent implements directly
- This maintains agent control while leveraging CLI tool power
@@ -621,26 +558,11 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
"step": 3,
"title": "Execute implementation using CLI tool",
"description": "Use Codex/Gemini for complex autonomous execution",
"command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
"command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
"modification_points": ["[Same as default mode]"],
"logic_flow": ["[Same as default mode]"],
"depends_on": [1, 2],
"output": "cli_implementation",
"cli_output_id": "step3_cli_id" // Store execution ID for resume
},
// === CLI MODE with Resume: Continue from previous CLI execution ===
{
"step": 4,
"title": "Continue implementation with context",
"description": "Resume from previous step with accumulated context",
"command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
"resume_from": "step3_cli_id", // Reference previous step's CLI ID
"modification_points": ["[Continue from step 3]"],
"logic_flow": ["[Build on previous output]"],
"depends_on": [3],
"output": "continued_implementation",
"cli_output_id": "step4_cli_id"
"output": "cli_implementation"
}
]
```
@@ -682,42 +604,10 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
- Analysis results (technical approach, architecture decisions)
- Brainstorming artifacts (role analyses, guidance specifications)
**Multi-Module Format** (when modules detected):
When multiple modules are detected (frontend/backend, etc.), organize IMPL_PLAN.md by module:
```markdown
# Implementation Plan
## Module A: Frontend (N tasks)
### IMPL-A1: [Task Title]
[Task details...]
### IMPL-A2: [Task Title]
[Task details...]
## Module B: Backend (N tasks)
### IMPL-B1: [Task Title]
[Task details...]
### IMPL-B2: [Task Title]
[Task details...]
## Cross-Module Dependencies
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
- IMPL-A2 → IMPL-B2 (UI state depends on Backend service)
```
**Cross-Module Dependency Notation**:
- During parallel planning, use `CROSS::{module}::{pattern}` format
- Example: `depends_on: ["CROSS::B::api-endpoint"]`
- Integration phase resolves to actual task IDs: `CROSS::B::api → IMPL-B1`
### 2.3 TODO_LIST.md Structure
Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
**Single Module Format**:
```markdown
# Tasks: {Session Topic}
@@ -731,54 +621,30 @@ Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
- `- [x]` = Completed task
```
**Multi-Module Format** (hierarchical by module):
```markdown
# Tasks: {Session Topic}
## Module A (Frontend)
- [ ] **IMPL-A1**: [Task Title] → [📋](./.task/IMPL-A1.json)
- [ ] **IMPL-A2**: [Task Title] → [📋](./.task/IMPL-A2.json)
## Module B (Backend)
- [ ] **IMPL-B1**: [Task Title] → [📋](./.task/IMPL-B1.json)
- [ ] **IMPL-B2**: [Task Title] → [📋](./.task/IMPL-B2.json)
## Cross-Module Dependencies
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
## Status Legend
- `- [ ]` = Pending task
- `- [x]` = Completed task
```
**Linking Rules**:
- Todo items → task JSON: `[📋](./.task/IMPL-XXX.json)`
- Completed tasks → summaries: `[✅](./.summaries/IMPL-XXX-summary.md)`
- Consistent ID schemes: `IMPL-N` (single) or `IMPL-{prefix}{seq}` (multi-module)
- Consistent ID schemes: IMPL-XXX
### 2.4 Complexity & Structure Selection
### 2.4 Complexity-Based Structure Selection
Use `analysis_results.complexity` or task count to determine structure:
**Single Module Mode**:
- **Simple Tasks** (≤5 tasks): Flat structure
- **Medium Tasks** (6-12 tasks): Flat structure
- **Complex Tasks** (>12 tasks): Re-scope required (maximum 12 tasks hard limit)
**Simple Tasks** (≤5 tasks):
- Flat structure: IMPL_PLAN.md + TODO_LIST.md + task JSONs
- All tasks at same level
**Multi-Module Mode** (N+1 parallel planning):
- **Per-module limit**: ≤9 tasks per module
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md
**Medium Tasks** (6-12 tasks):
- Flat structure: IMPL_PLAN.md + TODO_LIST.md + task JSONs
- All tasks at same level
**Multi-Module Detection Triggers**:
- Explicit frontend/backend separation (`src/frontend`, `src/backend`)
- Monorepo structure (`packages/*`, `apps/*`)
- Context-package dependency clustering (2+ distinct module groups)
**Complex Tasks** (>12 tasks):
- **Re-scope required**: Maximum 12 tasks hard limit
- If analysis_results contains >12 tasks, consolidate or request re-scoping
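Read as a selection rule, the thresholds above reduce to a small check (a sketch of the stated limits, not shipped agent code):
```javascript
// Sketch: map task count to plan structure per the limits above
function selectStructure(taskCount) {
  if (taskCount > 12) {
    return { action: "re-scope", reason: "exceeds 12-task hard limit" }
  }
  // Simple (≤5) and Medium (6-12) both use the flat layout
  return { action: "flat", files: ["IMPL_PLAN.md", "TODO_LIST.md", ".task/IMPL-*.json"] }
}
```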
---
## 3. Quality Standards
## 3. Quality & Standards
### 3.1 Quantification Requirements (MANDATORY)
@@ -804,46 +670,47 @@ Use `analysis_results.complexity` or task count to determine structure:
- [ ] Each implementation step has its own acceptance criteria
**Examples**:
- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- BAD: `"Implement new commands"`
- ✅ GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- ❌ BAD: `"All commands implemented successfully"`
### 3.2 Planning & Organization Standards
### 3.2 Planning Principles
**Planning Principles**:
- Each stage produces working, testable code
- Clear success criteria for each deliverable
- Dependencies clearly identified between stages
- Incremental progress over big bangs
**File Organization**:
- Session naming: `WFS-[topic-slug]`
- Task IDs:
- Single module: `IMPL-N` (e.g., IMPL-001, IMPL-002)
- Multi-module: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- Directory structure: flat task organization (all tasks in `.task/`)
### 3.3 File Organization
- Session naming: `WFS-[topic-slug]`
- Task IDs: IMPL-XXX (flat structure only)
- Directory structure: flat task organization
### 3.4 Document Standards
**Document Standards**:
- Proper linking between documents
- Consistent navigation and references
### 3.3 Guidelines Checklist
---
## 4. Key Reminders
**ALWAYS:**
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use provided context package: Extract all information from structured context
- Respect memory-first rule: Use provided content (already loaded from memory/file)
- Follow 6-field schema: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Assign CLI execution IDs**: Every task MUST have `cli_execution_id` (format: `{session_id}-{task_id}`)
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
- Use session paths: Construct all paths using provided session_id
- Link documents properly: Use correct linking format (📋 for JSON, ✅ for summaries)
- Run validation checklist: Verify all quantification requirements before finalizing task JSONs
- Apply the adapt-by-analogy (举一反三) principle: Adapt pre-analysis patterns to task-specific needs dynamically
- Follow template validation: Complete IMPL_PLAN.md template validation checklist before finalization
- **Apply Quantification Requirements**: All requirements, acceptance criteria, and modification points MUST include explicit counts and enumerations
- **Load IMPL_PLAN template**: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt) before generating IMPL_PLAN.md
- **Use provided context package**: Extract all information from structured context
- **Respect memory-first rule**: Use provided content (already loaded from memory/file)
- **Follow 6-field schema**: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Map artifacts**: Use artifacts_inventory to populate task.context.artifacts array
- **Add MCP integration**: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- **Validate task count**: Maximum 12 tasks hard limit, request re-scope if exceeded
- **Use session paths**: Construct all paths using provided session_id
- **Link documents properly**: Use correct linking format (📋 for JSON, ✅ for summaries)
- **Run validation checklist**: Verify all quantification requirements before finalizing task JSONs
- **Apply the adapt-by-analogy (举一反三) principle**: Adapt pre-analysis patterns to task-specific needs dynamically
- **Follow template validation**: Complete IMPL_PLAN.md template validation checklist before finalization
**NEVER:**
- Load files directly (use provided context package instead)

View File

@@ -67,7 +67,7 @@ Score = 0
**1. Project Structure**:
```bash
ccw tool exec get_modules_by_depth '{}'
~/.claude/scripts/get_modules_by_depth.sh
```
**2. Content Search**:
@@ -100,7 +100,7 @@ CONTEXT: @**/*
# Specific patterns
CONTEXT: @CLAUDE.md @src/**/* @*.ts
# Cross-directory (requires --includeDirs)
# Cross-directory (requires --include-directories)
CONTEXT: @**/* @../shared/**/* @../types/**/*
```
@@ -134,7 +134,7 @@ RULES: $(cat {selected_template}) | {constraints}
```
analyze|plan → gemini (qwen fallback) + mode=analysis
execute (simple|medium) → gemini (qwen fallback) + mode=write
execute (complex) → codex + mode=write
execute (complex) → codex + mode=auto
discuss → multi (gemini + codex parallel)
```
@@ -144,40 +144,43 @@ discuss → multi (gemini + codex parallel)
- Codex: `gpt-5` (default), `gpt5-codex` (large context)
- **Position**: `-m` after prompt, before flags
### Command Templates (CCW Unified CLI)
### Command Templates
**Gemini/Qwen (Analysis)**:
```bash
ccw cli -p "
cd {dir} && gemini -p "
PURPOSE: {goal}
TASK: {task}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {output}
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
" --tool gemini --mode analysis --cd {dir}
" -m gemini-2.5-pro
# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
# Qwen fallback: Replace 'gemini' with 'qwen'
```
**Gemini/Qwen (Write)**:
```bash
ccw cli -p "..." --tool gemini --mode write --cd {dir}
cd {dir} && gemini -p "..." --approval-mode yolo
```
**Codex (Write)**:
**Codex (Auto)**:
```bash
ccw cli -p "..." --tool codex --mode write --cd {dir}
codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access
# Resume: Add 'resume --last' after prompt
codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
```
**Cross-Directory** (Gemini/Qwen):
```bash
ccw cli -p "CONTEXT: @**/* @../shared/**/*" --tool gemini --mode analysis --cd src/auth --includeDirs ../shared
cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared
```
**Directory Scope**:
- `@` only references current directory + subdirectories
- External dirs: MUST use `--includeDirs` + explicit CONTEXT reference
- External dirs: MUST use `--include-directories` + explicit CONTEXT reference
**Timeout**: Simple 20min | Medium 40min | Complex 60min (Codex ×1.5)
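The timeout policy is simple enough to express directly (a sketch of the stated values, in minutes):
```javascript
// Sketch of the stated timeout policy; Codex gets a 1.5× multiplier
function timeoutFor(complexity, tool) {
  const base = { simple: 20, medium: 40, complex: 60 }[complexity]
  return tool === "codex" ? base * 1.5 : base
}
```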

View File

@@ -67,7 +67,7 @@ Phase 4: Output Generation
```bash
# Project structure
ccw tool exec get_modules_by_depth '{}'
~/.claude/scripts/get_modules_by_depth.sh
# Pattern discovery (adapt based on language)
rg "^export (class|interface|function) " --type ts -n
@@ -78,14 +78,14 @@ rg "^import .* from " -n | head -30
### Gemini Semantic Analysis (deep-scan, dependency-map)
```bash
ccw cli -p "
cd {dir} && gemini -p "
PURPOSE: {from prompt}
TASK: {from prompt}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {from prompt}
RULES: {from prompt, if template specified} | analysis=READ-ONLY
" --tool gemini --mode analysis --cd {dir}
"
```
**Fallback Chain**: Gemini → Qwen → Codex → Bash-only

View File

@@ -1,117 +1,140 @@
---
name: cli-lite-planning-agent
description: |
Generic planning agent for lite-plan and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans. Used by lite-plan workflow for Medium/High complexity tasks.
Core capabilities:
- Schema-driven output (plan-json-schema or fix-plan-json-schema)
- Task decomposition with dependency analysis
- CLI execution ID assignment for fork/merge strategies
- Multi-angle context integration (explorations or diagnoses)
- Task decomposition (1-10 tasks with IDs: T1, T2...)
- Dependency analysis (depends_on references)
- Flow control (parallel/sequential phases)
- Multi-angle exploration context integration
color: cyan
---
You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.
You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate planObject for downstream execution.
## Output Schema
**Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`
**planObject Structure**:
```javascript
{
summary: string, // 2-3 sentence overview
approach: string, // High-level strategy
tasks: [TaskObject], // 1-10 structured tasks
flow_control: { // Execution phases
execution_order: [{ phase, tasks, type }],
exit_conditions: { success, failure }
},
focus_paths: string[], // Affected files (aggregated)
estimated_time: string,
recommended_execution: "Agent" | "Codex",
complexity: "Low" | "Medium" | "High",
_metadata: { timestamp, source, planning_mode, exploration_angles, duration_seconds }
}
```
**TaskObject Structure**:
```javascript
{
id: string, // T1, T2, T3...
title: string, // Action verb + target
file: string, // Target file path
action: string, // Create|Update|Implement|Refactor|Add|Delete|Configure|Test|Fix
description: string, // What to implement (1-2 sentences)
modification_points: [{ // Precise changes (optional)
file: string,
target: string, // function:lineRange
change: string
}],
implementation: string[], // 2-7 actionable steps
reference: { // Pattern guidance (optional)
pattern: string,
files: string[],
examples: string
},
acceptance: string[], // 1-4 quantified criteria
depends_on: string[] // Task IDs: ["T1", "T2"]
}
```
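A hypothetical TaskObject instance, shown for shape only (file path, names, and criteria are illustrative; the optional `modification_points` and `reference` fields are omitted):
```javascript
const task = {
  id: "T1",
  title: "Add rate limiter to login endpoint",
  file: "src/auth/login.ts",
  action: "Update",
  description: "Wrap the login handler with a sliding-window rate limiter.",
  implementation: [
    "Create limiter middleware with a 5-requests/minute window",
    "Register the middleware on the login route",
    "Add a unit test covering the 429 response path"
  ],
  acceptance: ["1 middleware function added", "429 returned after 5 attempts in 60s"],
  depends_on: []
};
```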
## Input Context
```javascript
{
// Required
task_description: string, // Task or bug description
schema_path: string, // Schema reference path (plan-json-schema or fix-plan-json-schema)
session: { id, folder, artifacts },
// Context (one of these based on workflow)
explorationsContext: { [angle]: ExplorationResult } | null, // From lite-plan
diagnosesContext: { [angle]: DiagnosisResult } | null, // From lite-fix
contextAngles: string[], // Exploration or diagnosis angles
// Optional
task_description: string,
explorationsContext: { [angle]: ExplorationResult } | null,
explorationAngles: string[],
clarificationContext: { [question]: answer } | null,
complexity: "Low" | "Medium" | "High", // For lite-plan
severity: "Low" | "Medium" | "High" | "Critical", // For lite-fix
cli_config: { tool, template, timeout, fallback }
complexity: "Low" | "Medium" | "High",
cli_config: { tool, template, timeout, fallback },
session: { id, folder, artifacts }
}
```
## Schema-Driven Output
**CRITICAL**: Read the schema reference first to determine output structure:
- `plan-json-schema.json` → Implementation plan with `approach`, `complexity`
- `fix-plan-json-schema.json` → Fix plan with `root_cause`, `severity`, `risk_level`
```javascript
// Step 1: Always read schema first
const schema = Bash(`cat ${schema_path}`)
// Step 2: Generate plan conforming to schema
const planObject = generatePlanFromSchema(schema, context)
```
## Execution Flow
```
Phase 1: Schema & Context Loading
├─ Read schema reference (plan-json-schema or fix-plan-json-schema)
├─ Aggregate multi-angle context (explorations or diagnoses)
└─ Determine output structure from schema
Phase 2: CLI Execution
Phase 1: CLI Execution
├─ Aggregate multi-angle exploration findings
├─ Construct CLI command with planning template
├─ Execute Gemini (fallback: Qwen → degraded mode)
└─ Timeout: 60 minutes
Phase 3: Parsing & Enhancement
├─ Parse CLI output sections
Phase 2: Parsing & Enhancement
├─ Parse CLI output sections (Summary, Approach, Tasks, Flow Control)
├─ Validate and enhance task objects
└─ Infer missing fields from context
└─ Infer missing fields from exploration context
Phase 4: planObject Generation
├─ Build planObject conforming to schema
├─ Assign CLI execution IDs and strategies
├─ Generate flow_control from depends_on
└─ Return to orchestrator
Phase 3: planObject Generation
├─ Build planObject from parsed results
├─ Generate flow_control from depends_on if not provided
├─ Aggregate focus_paths from all tasks
└─ Return to orchestrator (lite-plan)
```
## CLI Command Template
```bash
ccw cli -p "
PURPOSE: Generate plan for {task_description}
cd {project_root} && {cli_tool} -p "
PURPOSE: Generate implementation plan for {complexity} task
TASK:
• Analyze task/bug description and context
• Break down into tasks following schema structure
• Identify dependencies and execution phases
• Analyze: {task_description}
• Break down into 1-10 tasks with: id, title, file, action, description, modification_points, implementation, reference, acceptance, depends_on
• Identify parallel vs sequential execution phases
MODE: analysis
CONTEXT: @**/* | Memory: {context_summary}
CONTEXT: @**/* | Memory: {exploration_summary}
EXPECTED:
## Summary
## Implementation Summary
[overview]
## High-Level Approach
[strategy]
## Task Breakdown
### T1: [Title] (or FIX1 for fix-plan)
**Scope**: [module/feature path]
### T1: [Title]
**File**: [path]
**Action**: [type]
**Description**: [what]
**Modification Points**: - [file]: [target] - [change]
**Implementation**: 1. [step]
**Acceptance/Verification**: - [quantified criterion]
**Reference**: - Pattern: [name] - Files: [paths] - Examples: [guidance]
**Acceptance**: - [quantified criterion]
**Depends On**: []
## Flow Control
**Execution Order**: - Phase parallel-1: [T1, T2] (independent)
**Exit Conditions**: - Success: [condition] - Failure: [condition]
## Time Estimate
**Total**: [time]
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
- Follow schema structure from {schema_path}
- Acceptance/verification must be quantified
- Dependencies use task IDs
- Acceptance must be quantified (counts, method names, metrics)
- Dependencies use task IDs (T1, T2)
- analysis=READ-ONLY
" --tool {cli_tool} --mode analysis --cd {project_root}
"
```
## Core Functions
@@ -256,51 +279,6 @@ function inferFile(task, ctx) {
}
```
### CLI Execution ID Assignment (MANDATORY)
```javascript
function assignCliExecutionIds(tasks, sessionId) {
const taskMap = new Map(tasks.map(t => [t.id, t]))
const childCount = new Map()
// Count children for each task
tasks.forEach(task => {
(task.depends_on || []).forEach(depId => {
childCount.set(depId, (childCount.get(depId) || 0) + 1)
})
})
tasks.forEach(task => {
task.cli_execution_id = `${sessionId}-${task.id}`
const deps = task.depends_on || []
if (deps.length === 0) {
task.cli_execution = { strategy: "new" }
} else if (deps.length === 1) {
const parent = taskMap.get(deps[0])
const parentChildCount = childCount.get(deps[0]) || 0
task.cli_execution = parentChildCount === 1
? { strategy: "resume", resume_from: parent.cli_execution_id }
: { strategy: "fork", resume_from: parent.cli_execution_id }
} else {
task.cli_execution = {
strategy: "merge_fork",
merge_from: deps.map(depId => taskMap.get(depId).cli_execution_id)
}
}
})
return tasks
}
```
**Strategy Rules**:
| depends_on | Parent Children | Strategy | CLI Command |
|------------|-----------------|----------|-------------|
| [] | - | `new` | `--id {cli_execution_id}` |
| [T1] | 1 | `resume` | `--resume {resume_from}` |
| [T1] | >1 | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| [T1,T2] | - | `merge_fork` | `--resume {ids.join(',')} --id {cli_execution_id}` |
### Flow Control Inference
```javascript
@@ -325,44 +303,21 @@ function inferFlowControl(tasks) {
### planObject Generation
```javascript
function generatePlanObject(parsed, enrichedContext, input, schemaType) {
function generatePlanObject(parsed, enrichedContext, input) {
const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext)
assignCliExecutionIds(tasks, input.session.id) // MANDATORY: Assign CLI execution IDs
const flow_control = parsed.flow_control?.execution_order?.length > 0 ? parsed.flow_control : inferFlowControl(tasks)
const focus_paths = [...new Set(tasks.flatMap(t => [t.file || t.scope, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
const focus_paths = [...new Set(tasks.flatMap(t => [t.file, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
// Base fields (common to both schemas)
const base = {
summary: parsed.summary || `Plan for: ${input.task_description.slice(0, 100)}`,
return {
summary: parsed.summary || `Implementation plan for: ${input.task_description.slice(0, 100)}`,
approach: parsed.approach || "Step-by-step implementation",
tasks,
flow_control,
focus_paths,
estimated_time: parsed.time_estimate || `${tasks.length * 30} minutes`,
recommended_execution: (input.complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
_metadata: {
timestamp: new Date().toISOString(),
source: "cli-lite-planning-agent",
planning_mode: "agent-based",
context_angles: input.contextAngles || [],
duration_seconds: Math.round((Date.now() - startTime) / 1000)
}
}
// Schema-specific fields
if (schemaType === 'fix-plan') {
return {
...base,
root_cause: parsed.root_cause || "Root cause from diagnosis",
strategy: parsed.strategy || "comprehensive_fix",
severity: input.severity || "Medium",
risk_level: parsed.risk_level || "medium"
}
} else {
return {
...base,
approach: parsed.approach || "Step-by-step implementation",
complexity: input.complexity || "Medium"
}
recommended_execution: input.complexity === "Low" ? "Agent" : "Codex",
complexity: input.complexity,
_metadata: { timestamp: new Date().toISOString(), source: "cli-lite-planning-agent", planning_mode: "agent-based", exploration_angles: input.explorationAngles || [], duration_seconds: Math.round((Date.now() - startTime) / 1000) }
}
}
```
@@ -428,12 +383,9 @@ function validateTask(task) {
## Key Reminders
**ALWAYS**:
- **Read schema first** to determine output structure
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
- Generate task IDs (T1, T2, T3...)
- Include depends_on (even if empty [])
- **Assign cli_execution_id** (`{sessionId}-{taskId}`)
- **Compute cli_execution strategy** based on depends_on
- Quantify acceptance/verification criteria
- Quantify acceptance criteria
- Generate flow_control from dependencies
- Handle CLI errors with fallback chain
@@ -442,5 +394,3 @@ function validateTask(task) {
- Use vague acceptance criteria
- Create circular dependencies
- Skip task validation
- **Skip CLI execution ID assignment**
- **Ignore schema structure**

View File

@@ -107,7 +107,7 @@ Phase 3: Task JSON Generation
**Template-Based Command Construction with Test Layer Awareness**:
```bash
ccw cli -p "
cd {project_root} && {cli_tool} -p "
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
TASK:
• Review {failed_tests.length} {test_type} test failures: [{test_names}]
@@ -134,7 +134,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
- Consider previous iteration failures
- Validate fix doesn't introduce new vulnerabilities
- analysis=READ-ONLY
" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
" {timeout_flag}
```
**Layer-Specific Guidance Injection**:
@@ -527,9 +527,9 @@ See: `.process/iteration-{iteration}-cli-output.txt`
1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
2. **Execute CLI**:
```bash
ccw cli -p "PURPOSE: Analyze integration test failure...
gemini -p "PURPOSE: Analyze integration test failure...
TASK: Examine component interactions, data flow, interface contracts...
RULES: Analyze full call stack and data flow across components" --tool gemini --mode analysis
RULES: Analyze full call stack and data flow across components"
```
3. **Parse Output**: Extract RCA, fix recommendation, and verification recommendation sections
4. **Generate Task JSON** (IMPL-fix-1.json):

View File

@@ -34,11 +34,10 @@ You are a code execution specialist focused on implementing high-quality, produc
- **context-package.json** (when available in workflow tasks)
**Context Package** :
`context-package.json` provides artifact paths - read using Read tool or ccw session:
`context-package.json` provides artifact paths - extract dynamically using `jq`:
```bash
# Get context package content from session using Read tool
Read(.workflow/active/${SESSION_ID}/.process/context-package.json)
# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
# Get role analysis paths from context package
jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
```
**Pre-Analysis: Smart Tech Stack Loading**:
@@ -122,9 +121,9 @@ When task JSON contains `flow_control.implementation_approach` array:
- If `command` field present, execute it; otherwise use agent capabilities
**CLI Command Execution (CLI Execute Mode)**:
When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session
When step contains `command` field with Codex CLI, execute via Bash tool. For Codex resume:
- First task (`depends_on: []`): `codex -C [path] --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
- Subsequent tasks (has `depends_on`): Add `resume --last` flag to maintain session context
**Test-Driven Development**:
- Write tests first (red → green → refactor)

View File

@@ -109,7 +109,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
3. **load_session_metadata**
- Action: Load session metadata
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
- Output: session_metadata
```
@@ -119,6 +119,17 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
- No dependency management
- Used for temporary context preparation
### NOT Handled by This Agent
**JSON format** (used by code-developer, test-fix-agent):
```json
"flow_control": {
"pre_analysis": [...],
"implementation_approach": [...]
}
```
This complete JSON format is stored in `.task/IMPL-*.json` files and handled by implementation agents, not conceptual-planning-agent.
### Role-Specific Analysis Dimensions
@@ -135,14 +146,14 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
### Output Integration
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into role output documents:
- Enhanced analysis documents with codebase insights and architectural patterns
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into the single role's output:
- Enhanced `analysis.md` with codebase insights and architectural patterns
- Role-specific technical recommendations based on existing conventions
- Pattern-based best practices from actual code examination
- Realistic feasibility assessments based on current implementation
**Codex Analysis Integration**: Autonomous analysis results provide comprehensive insights:
- Enhanced analysis documents with autonomous development recommendations
- Enhanced `analysis.md` with autonomous development recommendations
- Role-specific strategy based on intelligent system understanding
- Autonomous development approaches and implementation guidance
- Self-guided optimization and integration recommendations
@@ -155,7 +166,7 @@ When called, you receive:
- **User Context**: Specific requirements, constraints, and expectations from user discussion
- **Output Location**: Directory path for generated analysis files
- **Role Hint** (optional): Suggested role or role selection guidance
- **context-package.json** (CCW Workflow): Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- **ASSIGNED_ROLE** (optional): Specific role assignment
- **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions
@@ -218,23 +229,26 @@ Generate documents according to loaded role template specifications:
**Output Location**: `.workflow/WFS-[session]/.brainstorming/[assigned-role]/`
**Output Files**:
- **analysis.md**: Index document with overview (optionally with `@` references to sub-documents)
**Required Files**:
- **analysis.md**: Main role perspective analysis incorporating user context and role template
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
- **analysis-{slug}.md**: Section content documents (slug from section heading: lowercase, hyphens)
- Maximum 5 sub-documents (merge related sections if needed)
- **Content**: Analysis AND recommendations sections
- **Auto-split if large**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files: analysis.md, analysis-1.md, analysis-2.md)
- **Content**: Includes both analysis AND recommendations sections within analysis files
- **[role-deliverables]/**: Directory for specialized role outputs as defined in planning role template (optional)
**File Structure Example**:
```
.workflow/WFS-[session]/.brainstorming/system-architect/
├── analysis.md # Index with overview + @references
├── analysis-architecture-assessment.md # Section content
├── analysis-technology-evaluation.md # Section content
├── analysis-integration-strategy.md # Section content
└── analysis-recommendations.md # Section content (max 5 sub-docs total)
├── analysis.md # Main system architecture analysis with recommendations
├── analysis-1.md # (Optional) Continuation if content >800 lines
└── deliverables/ # (Optional) Additional role-specific outputs
    ├── technical-architecture.md # System design specifications
    ├── technology-stack.md # Technology selection rationale
    └── scalability-plan.md # Scaling strategy
NOTE: ALL files MUST start with 'analysis' prefix. Max 5 sub-documents.
NOTE: ALL brainstorming output files MUST start with 'analysis' prefix
FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefixed files
```
## Role-Specific Planning Process
@@ -254,10 +268,14 @@ NOTE: ALL files MUST start with 'analysis' prefix. Max 5 sub-documents.
- **Validate Against Template**: Ensure analysis meets role template requirements and standards
### 3. Brainstorming Documentation Phase
- **Create analysis.md**: Main document with overview (optionally with `@` references)
- **Create sub-documents**: `analysis-{slug}.md` for major sections (max 5)
- **Create analysis.md**: Generate comprehensive role perspective analysis in designated output directory
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
- **Content**: Include both analysis AND recommendations sections within analysis files
- **Auto-split**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
- **Generate Role Deliverables**: Create specialized outputs as defined in planning role template (optional)
- **Validate Output Structure**: Ensure all files saved to correct `.brainstorming/[role]/` directory
- **Naming Validation**: Verify ALL files start with `analysis` prefix
- **Naming Validation**: Verify NO files with `recommendations` prefix exist
- **Quality Review**: Ensure outputs meet role template standards and user requirements
## Role-Specific Analysis Framework
@@ -306,3 +324,5 @@ When analysis is complete, ensure:
- **Relevance**: Directly addresses user's specified requirements
- **Actionability**: Provides concrete next steps and recommendations
### Windows Path Format Guidelines
- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`

View File

@@ -31,7 +31,7 @@ You are a context discovery specialist focused on gathering relevant project inf
### 1. Reference Documentation (Project Standards)
**Tools**:
- `Read()` - Load CLAUDE.md, README.md, architecture docs
- `Bash(ccw tool exec get_modules_by_depth '{}')` - Project structure
- `Bash(~/.claude/scripts/get_modules_by_depth.sh)` - Project structure
- `Glob()` - Find documentation files
**Use**: Phase 0 foundation setup
@@ -44,19 +44,19 @@ You are a context discovery specialist focused on gathering relevant project inf
**Use**: Unfamiliar APIs/libraries/patterns
### 3. Existing Code Discovery
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="init", path=".")` - Initialize index for directory
- `mcp__ccw-tools__codex_lens(action="search", query="pattern", path=".")` - Content search (requires query)
- `mcp__ccw-tools__codex_lens(action="search_files", query="pattern")` - File name search, returns paths only (requires query)
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Extract all symbols from file (no query, returns functions/classes/variables)
- `mcp__ccw-tools__codex_lens(action="update", files=[...])` - Update index for specific files
**Primary (Code-Index MCP)**:
- `mcp__code-index__set_project_path()` - Initialize index
- `mcp__code-index__find_files(pattern)` - File pattern matching
- `mcp__code-index__search_code_advanced()` - Content search
- `mcp__code-index__get_file_summary()` - File structure analysis
- `mcp__code-index__refresh_index()` - Update index
**Fallback (CLI)**:
- `rg` (ripgrep) - Fast content search
- `find` - File discovery
- `Grep` - Pattern matching
**Priority**: CodexLens MCP > ripgrep > find > grep
**Priority**: Code-Index MCP > ripgrep > find > grep
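The priority chain can be sketched in the document's tool-call pseudocode (the `mcpAvailable` guard is an assumed helper, not a real API):
```javascript
// Sketch: prefer Code-Index MCP, fall back to ripgrep, then find/grep
function searchWithFallback(pattern) {
  if (mcpAvailable("code-index")) {  // assumed availability check
    return mcp__code-index__search_code_advanced({ pattern, output_mode: "files_with_matches" })
  }
  return bash(`rg "${pattern}" --files-with-matches`)  // CLI fallback
}
```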
## Simplified Execution Process (3 Phases)
@@ -77,11 +77,12 @@ if (file_exists(contextPackagePath)) {
**1.2 Foundation Setup**:
```javascript
// 1. Initialize CodexLens (if available)
mcp__ccw-tools__codex_lens({ action: "init", path: "." })
// 1. Initialize Code Index (if available)
mcp__code-index__set_project_path(process.cwd())
mcp__code-index__refresh_index()
// 2. Project Structure
bash(ccw tool exec get_modules_by_depth '{}')
bash(~/.claude/scripts/get_modules_by_depth.sh)
// 3. Load Documentation (if not in memory)
if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
@@ -211,18 +212,18 @@ mcp__exa__web_search_exa({
**Layer 1: File Pattern Discovery**
```javascript
// Primary: CodexLens MCP
const files = mcp__ccw-tools__codex_lens({ action: "search_files", query: "*{keyword}*" })
// Primary: Code-Index MCP
const files = mcp__code-index__find_files("*{keyword}*")
// Fallback: find . -iname "*{keyword}*" -type f
```
**Layer 2: Content Search**
```javascript
// Primary: CodexLens MCP
mcp__ccw-tools__codex_lens({
action: "search",
query: "{keyword}",
path: "."
// Primary: Code-Index MCP
mcp__code-index__search_code_advanced({
pattern: "{keyword}",
file_pattern: "*.ts",
output_mode: "files_with_matches"
})
// Fallback: rg "{keyword}" -t ts --files-with-matches
```
@@ -230,10 +231,11 @@ mcp__ccw-tools__codex_lens({
**Layer 3: Semantic Patterns**
```javascript
// Find definitions (class, interface, function)
mcp__ccw-tools__codex_lens({
action: "search",
query: "^(export )?(class|interface|type|function) .*{keyword}",
path: "."
mcp__code-index__search_code_advanced({
pattern: "^(export )?(class|interface|type|function) .*{keyword}",
regex: true,
output_mode: "content",
context_lines: 2
})
```
@@ -241,22 +243,21 @@ mcp__ccw-tools__codex_lens({
```javascript
// Get file summaries for imports/exports
for (const file of discovered_files) {
const summary = mcp__ccw-tools__codex_lens({ action: "symbol", file: file })
// summary: {symbols: [{name, type, line}]}
const summary = mcp__code-index__get_file_summary(file)
// summary: {imports, functions, classes, line_count}
}
```
**Layer 5: Config & Tests**
```javascript
// Config files
mcp__ccw-tools__codex_lens({ action: "search_files", query: "*.config.*" })
mcp__ccw-tools__codex_lens({ action: "search_files", query: "package.json" })
mcp__code-index__find_files("*.config.*")
mcp__code-index__find_files("package.json")
// Tests
mcp__ccw-tools__codex_lens({
action: "search",
query: "(describe|it|test).*{keyword}",
path: "."
mcp__code-index__search_code_advanced({
pattern: "(describe|it|test).*{keyword}",
file_pattern: "*.{test,spec}.*"
})
```
@@ -448,12 +449,7 @@ Calculate risk level based on:
{
"path": "system-architect/analysis.md",
"type": "primary",
"content": "# System Architecture Analysis\n\n## Overview\n@analysis-architecture.md\n@analysis-recommendations.md"
},
{
"path": "system-architect/analysis-architecture.md",
"type": "supplementary",
"content": "# Architecture Assessment\n\n..."
"content": "# System Architecture Analysis\n\n## Overview\n..."
}
]
}
@@ -559,14 +555,14 @@ Output: .workflow/session/{session}/.process/context-package.json
- Expose sensitive data (credentials, keys)
- Exceed file limits (50 total)
- Include binaries/generated files
- Use ripgrep if CodexLens available
- Use ripgrep if code-index available
**ALWAYS**:
- Initialize CodexLens in Phase 0
- Initialize code-index in Phase 0
- Execute get_modules_by_depth.sh
- Load CLAUDE.md/README.md (unless in memory)
- Execute all 3 discovery tracks
- Use CodexLens MCP as primary
- Use code-index MCP as primary
- Fallback to ripgrep only when needed
- Use Exa for unfamiliar APIs
- Apply multi-factor scoring
View File
@@ -61,9 +61,9 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
**Step 2** (CLI execution):
- Agent substitutes [target_folders] into command
- Agent executes CLI command via CCW:
- Agent executes CLI command via Bash tool:
```bash
ccw cli -p "
bash(cd src/modules && gemini --approval-mode yolo -p "
PURPOSE: Generate module documentation
TASK: Create API.md and README.md for each module
MODE: write
@@ -71,7 +71,7 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
./src/modules/api|code|code:3|dirs:0
EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
" --tool gemini --mode write --cd src/modules
")
```
4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
{
"step": "analyze_module_structure",
"action": "Deep analysis of module structure and API",
"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
"command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
"output_to": "module_analysis",
"on_error": "fail"
}
View File
@@ -8,7 +8,7 @@ You are a documentation update coordinator for complex projects. Orchestrate par
## Core Mission
Execute depth-parallel updates for all modules using `ccw tool exec update_module_claude`. **Every module path must be processed**.
Execute depth-parallel updates for all modules using `~/.claude/scripts/update_module_claude.sh`. **Every module path must be processed**.
## Input Context
@@ -42,12 +42,12 @@ TodoWrite([
# 3. Launch parallel jobs (max 4)
# Depth 5 example (Layer 3 - use multi-layer):
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/analysis","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/development","tool":"gemini"}' &
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/analysis" "gemini" &
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/development" "gemini" &
# Depth 1 example (Layer 2 - use single-layer):
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/auth","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/api","tool":"gemini"}' &
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/auth" "gemini" &
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/api" "gemini" &
# ... up to 4 concurrent jobs
# 4. Wait for all depth jobs to complete
View File
@@ -36,10 +36,10 @@ You are a test context discovery specialist focused on gathering test coverage i
**Use**: Phase 1 source context loading
### 2. Test Coverage Discovery
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="search_files", query="*.test.*")` - Find test files
- `mcp__ccw-tools__codex_lens(action="search", query="pattern")` - Search test patterns
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Analyze test structure
**Primary (Code-Index MCP)**:
- `mcp__code-index__find_files(pattern)` - Find test files (*.test.*, *.spec.*)
- `mcp__code-index__search_code_advanced()` - Search test patterns
- `mcp__code-index__get_file_summary()` - Analyze test structure
**Fallback (CLI)**:
- `rg` (ripgrep) - Fast test pattern search
@@ -120,10 +120,9 @@ for (const summary_path of summaries) {
**2.1 Existing Test Discovery**:
```javascript
// Method 1: CodexLens MCP (preferred)
const test_files = mcp__ccw-tools__codex_lens({
action: "search_files",
query: "*.test.* OR *.spec.* OR test_*.py OR *_test.go"
// Method 1: Code-Index MCP (preferred)
const test_files = mcp__code-index__find_files({
patterns: ["*.test.*", "*.spec.*", "*test_*.py", "*_test.go"]
});
// Method 2: Fallback CLI
View File
@@ -83,18 +83,17 @@ When task JSON contains implementation_approach array:
- L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
- L2 (Integration): `tests/integration/`, `*.integration.test.*`
- L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
- **context-package.json** (CCW Workflow): Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- Identify test commands from project configuration
```bash
# Detect test framework and multi-layered commands
if [ -f "package.json" ]; then
# Extract layer-specific test commands using Read tool or jq
PKG_JSON=$(cat package.json)
LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
# Extract layer-specific test commands
LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
LINT_CMD="ruff check . || flake8 ."
UNIT_CMD="pytest tests/unit/"
View File
@@ -191,7 +191,7 @@ target/
### Step 2: Workspace Analysis (MANDATORY FIRST)
```bash
# Analyze workspace structure
bash(ccw tool exec get_modules_by_depth '{"format":"json"}')
bash(~/.claude/scripts/get_modules_by_depth.sh json)
```
View File

@@ -101,10 +101,10 @@ src/ (depth 1) → SINGLE STRATEGY
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Get module structure with classification
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
// OR with path parameter
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
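A minimal parsing sketch for the line format above (assumed; `parseModuleLine` is an illustrative helper, and `bash_result` follows the convention used elsewhere in this document):

```javascript
// Parse one "depth:N|path:<PATH>|type:<code|navigation>|..." line into an object
function parseModuleLine(line) {
  const entry = {};
  for (const field of line.split("|")) {
    const idx = field.indexOf(":");
    if (idx > 0) entry[field.slice(0, idx)] = field.slice(idx + 1);
  }
  return { depth: Number(entry.depth), path: entry.path, type: entry.type };
}

// Collect module paths, types, and count
const modules = bash_result.stdout.trim().split("\n").map(parseModuleLine);
const codeModuleCount = modules.filter(m => m.type === "code").length;
```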
@@ -200,7 +200,7 @@ for (let layer of [3, 2, 1]) {
let strategy = module.depth >= 3 ? "full" : "single";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"${strategy}","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -263,7 +263,7 @@ MODULES:
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ccw tool exec generate_module_docs
EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
- Accepts strategy parameter: full | single
- Accepts folder type detection: code | navigation
- Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -273,7 +273,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"{{strategy}}","sourcePath":".","projectName":"{{project_name}}","tool":"${tool}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
run_in_background: false
})
exit_code=$?
@@ -322,7 +322,7 @@ let project_root = get_project_root();
report("Generating project README.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-readme","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -335,7 +335,7 @@ for (let tool of tool_order) {
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-architecture","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -350,7 +350,7 @@ if (bash_result.stdout.includes("API_FOUND")) {
report("Generating HTTP API documentation...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"http-api","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
View File
@@ -51,7 +51,7 @@ Orchestrates context-aware documentation generation/update for changed modules u
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});
// Detect changed modules
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -123,7 +123,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
return async () => {
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -207,21 +207,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_1}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_2}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_3}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module
View File
@@ -64,17 +64,12 @@ Lightweight planner that analyzes project structure, decomposes documentation wo
```bash
# Get target path, project name, and root
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
```
```javascript
// Create docs session (type: docs)
SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}-docs-{timestamp}\"")
// Parse output to get sessionId
```
# Create session directories (replace timestamp)
bash(mkdir -p .workflow/active/WFS-docs-{timestamp}/.{task,process,summaries})
```bash
# Update workflow-session.json with docs-specific fields
ccw session {sessionId} write workflow-session.json '{"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}'
# Create workflow-session.json (replace values)
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/active/WFS-docs-{timestamp}/workflow-session.json)
```
### Phase 2: Analyze Structure
@@ -85,10 +80,10 @@ ccw session {sessionId} write workflow-session.json '{"target_path":"{target_pat
```bash
# 1. Run folder analysis
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}')
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh)
# 2. Get top-level directories (first 2 path levels)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}' | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
# 3. Find existing docs (if directory exists)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
@@ -136,8 +131,7 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj
```bash
# Count existing docs from doc-planning-data.json
ccw session WFS-docs-{timestamp} read .process/doc-planning-data.json --raw | jq '.existing_docs.file_list | length'
# Or read entire process file and parse
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
```
**Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -191,10 +185,10 @@ Large Projects (single dir >10 docs):
```bash
# 1. Get top-level directories from doc-planning-data.json
ccw session WFS-docs-{timestamp} read .process/doc-planning-data.json --raw | jq -r '.top_level_dirs[]'
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
# 2. Get mode from workflow-session.json
ccw session WFS-docs-{timestamp} read workflow-session.json --raw | jq -r '.mode // "full"'
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
# 3. Check for HTTP API
bash(grep -rE "router\.|@Get|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")
@@ -223,7 +217,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo
**Task ID Calculation**:
```bash
group_count=$(ccw session WFS-docs-{timestamp} read .process/doc-planning-data.json --raw | jq '.groups.count')
group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
readme_id=$((group_count + 1)) # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))
@@ -236,12 +230,12 @@ api_id=$((group_count + 3))
| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|------|-------------|-----------|----------|---------------|------------|
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |
| **CLI** | true | implementation_approach | write | --approval-mode yolo | Execute CLI commands, validate output |
**Command Patterns**:
- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`
- Gemini/Qwen: `cd dir && gemini -p "..."`
- CLI Mode: `cd dir && gemini --approval-mode yolo -p "..."`
- Codex: `codex -C dir --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
@@ -286,8 +280,8 @@ api_id=$((group_count + 3))
"step": "load_precomputed_data",
"action": "Load Phase 2 analysis and extract group directories",
"commands": [
"ccw session ${session_id} read .process/doc-planning-data.json",
"ccw session ${session_id} read .process/doc-planning-data.json --raw | jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories'"
"bash(cat ${session_dir}/.process/doc-planning-data.json)",
"bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
],
"output_to": "phase2_context",
"note": "Single JSON file contains all Phase 2 analysis results"
@@ -332,7 +326,7 @@ api_id=$((group_count + 3))
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"depends_on": [1],
"output": "generated_docs"
}
@@ -364,7 +358,7 @@ api_id=$((group_count + 3))
},
{
"step": "analyze_project",
"command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
"command": "bash(gemini \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
"output_to": "project_outline"
}
],
@@ -404,7 +398,7 @@ api_id=$((group_count + 3))
"pre_analysis": [
{"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
{"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
{"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
{"step": "analyze_architecture", "command": "bash(gemini \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\")", "output_to": "arch_examples_outline"}
],
"implementation_approach": [
{
@@ -441,7 +435,7 @@ api_id=$((group_count + 3))
"pre_analysis": [
{"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
{"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
{"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
{"step": "analyze_api", "command": "bash(gemini \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\")", "output_to": "api_outline"}
],
"implementation_approach": [
{
@@ -602,7 +596,7 @@ api_id=$((group_count + 3))
| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|------|---------------|----------|---------------|------------|
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |
| **CLI (--cli-execute)** | implementation_approach | write | --approval-mode yolo | Executes CLI commands, validates output |
**Execution Flow**:
- **Phase 2**: Unified analysis once, results in `.process/`
View File
@@ -5,7 +5,7 @@ argument-hint: "[--tool gemini|qwen] \"task context description\""
allowed-tools: Task(*), Bash(*)
examples:
- /memory:load "Develop user authentication on top of the current frontend"
- /memory:load --tool qwen "Refactor the payment module API"
- /memory:load --tool qwen -p "Refactor the payment module API"
---
# Memory Load Command (/memory:load)
@@ -39,7 +39,7 @@ The command fully delegates to **universal-executor agent**, which autonomously:
1. **Analyzes Project Structure**: Executes `get_modules_by_depth.sh` to understand architecture
2. **Loads Documentation**: Reads CLAUDE.md, README.md and other key docs
3. **Extracts Keywords**: Derives core keywords from task description
4. **Discovers Files**: Uses CodexLens MCP or rg/find to locate relevant files
4. **Discovers Files**: Uses MCP code-index or rg/find to locate relevant files
5. **CLI Deep Analysis**: Executes Gemini/Qwen CLI for deep context analysis
6. **Generates Content Package**: Returns structured JSON core content package
@@ -109,7 +109,7 @@ Task(
1. **Project Structure**
\`\`\`bash
bash(ccw tool exec get_modules_by_depth '{}')
bash(~/.claude/scripts/get_modules_by_depth.sh)
\`\`\`
2. **Core Documentation**
@@ -136,7 +136,7 @@ Task(
Execute Gemini/Qwen CLI for deep analysis (saves main thread tokens):
\`\`\`bash
ccw cli -p "
cd . && ${tool} -p "
PURPOSE: Extract project core context for task: ${task_description}
TASK: Analyze project architecture, tech stack, key patterns, relevant files
MODE: analysis
@@ -147,7 +147,7 @@ RULES:
- Identify key architecture patterns and technical constraints
- Extract integration points and development standards
- Output concise, structured format
" --tool ${tool} --mode analysis
"
\`\`\`
### Step 4: Generate Core Content Package
@@ -212,7 +212,7 @@ Before returning:
### Example 2: Using Qwen Tool
```bash
/memory:load --tool qwen "Refactor the payment module API"
/memory:load --tool qwen -p "Refactor the payment module API"
```
The agent uses the Qwen CLI for analysis and returns the same structured package.
View File
@@ -1,314 +1,477 @@
---
name: tech-research
description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
description: "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---
# Tech Stack Rules Generator
# Tech Stack Research SKILL Generator
## Overview
**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.
**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates ALL work to agent. Agent produces files directly.
**Key Difference from SKILL Memory**:
- **SKILL**: Manual loading via `Skill(command: "tech-name")`
- **Rules**: Automatic loading when working with matching file paths
**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
**Output Structure**:
```
.claude/rules/tech/{tech-stack}/
├── core.md # paths: **/*.{ext} - Core principles
├── patterns.md # paths: src/**/*.{ext} - Implementation patterns
├── testing.md # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md # paths: *.config.* - Configuration rules
├── api.md # paths: **/api/**/* - API rules (backend only)
├── components.md # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json # Generation metadata
```
**Execution Paths**:
- **Full Path**: All 3 phases (no existing SKILL OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing SKILL found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated
**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`
---
**Agent Responsibility**:
- Agent does ALL the work: context reading, Exa research, content synthesis, file writing
- Orchestrator only provides context paths and waits for completion
## Core Rules
1. **Start Immediately**: First action is TodoWrite initialization
2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
3. **Template-Driven**: Agent reads templates before generating content
4. **Agent Produces Files**: Agent writes all rule files directly
5. **No Manual Loading**: Rules auto-activate when Claude works with matching files
1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Context Path Delegation**: Pass session directory or tech stack name to agent, let agent do discovery
3. **Agent Produces Files**: Agent directly writes all module files, orchestrator does NOT parse agent output
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
5. **No User Prompts**: Never ask user questions or wait for input between phases
6. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
7. **Lightweight Index**: Phase 3 only generates SKILL.md index by reading existing files
---
## 3-Phase Execution
### Phase 1: Prepare Context & Detect Tech Stack
### Phase 1: Prepare Context Paths
**Goal**: Detect input mode, extract tech stack info, determine file extensions
**Goal**: Detect input mode, prepare context paths for agent, check existing SKILL
**Input Mode Detection**:
```bash
# Get input parameter
input="$1"
# Detect mode
if [[ "$input" == WFS-* ]]; then
MODE="session"
SESSION_ID="$input"
# Read workflow-session.json to extract tech stack
CONTEXT_PATH=".workflow/${SESSION_ID}"
else
MODE="direct"
TECH_STACK_NAME="$input"
CONTEXT_PATH="$input" # Pass tech stack name as context
fi
```
**Tech Stack Analysis**:
```javascript
// Decompose composite tech stacks
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]
const TECH_EXTENSIONS = {
"typescript": "{ts,tsx}",
"javascript": "{js,jsx}",
"python": "py",
"rust": "rs",
"go": "go",
"java": "java",
"csharp": "cs",
"ruby": "rb",
"php": "php"
};
const FRAMEWORK_TYPE = {
"react": "frontend",
"vue": "frontend",
"angular": "frontend",
"nextjs": "fullstack",
"nuxt": "fullstack",
"fastapi": "backend",
"express": "backend",
"django": "backend",
"rails": "backend"
};
```
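A minimal decomposition sketch using the two maps above (assumed behavior; unknown components fall back to the first component and a generic extension):

```javascript
// "typescript-react-nextjs" → components, primary language, extensions, framework type
function analyzeTechStack(name) {
  const components = name.toLowerCase().split("-");
  const primary = components.find(c => c in TECH_EXTENSIONS) || components[0];
  const frameworks = components.filter(c => c in FRAMEWORK_TYPE);
  return {
    COMPONENTS: components,
    PRIMARY_LANG: primary,
    FILE_EXT: TECH_EXTENSIONS[primary] || "*",
    FRAMEWORK_TYPE: frameworks.length
      ? FRAMEWORK_TYPE[frameworks[frameworks.length - 1]]
      : "library"
  };
}
// analyzeTechStack("typescript-react-nextjs")
// → { PRIMARY_LANG: "typescript", FILE_EXT: "{ts,tsx}", FRAMEWORK_TYPE: "fullstack", ... }
```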
**Check Existing Rules**:
**Check Existing SKILL**:
```bash
# For session mode, peek at session to get tech stack name
if [[ "$MODE" == "session" ]]; then
bash(test -f ".workflow/${SESSION_ID}/workflow-session.json")
Read(.workflow/${SESSION_ID}/workflow-session.json)
# Extract tech_stack_name (minimal extraction)
fi
# Normalize and check
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
bash(test -d ".claude/skills/${normalized_name}" && echo "exists" || echo "not_exists")
bash(find ".claude/skills/${normalized_name}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```
**Skip Decision**:
- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
- If `--regenerate` → Delete existing and regenerate
```javascript
if (existing_files > 0 && !regenerate_flag) {
SKIP_GENERATION = true
message = "Tech stack SKILL already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
bash(rm -rf ".claude/skills/${normalized_name}")
SKIP_GENERATION = false
message = "Regenerating tech stack SKILL from scratch."
} else {
SKIP_GENERATION = false
message = "No existing SKILL found, generating new tech stack documentation."
}
```
**Output Variables**:
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean
- `MODE`: `session` or `direct`
- `SESSION_ID`: Session ID (if session mode)
- `CONTEXT_PATH`: Path to session directory OR tech stack name
- `TECH_STACK_NAME`: Extracted or provided tech stack name
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
**TodoWrite**: Mark phase 1 completed
**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress
---
### Phase 2: Agent Produces Path-Conditional Rules
### Phase 2: Agent Produces All Files
**Skip Condition**: Skipped if `SKIP_GENERATION = true`
**Goal**: Delegate to agent for Exa research and rule file generation
**Goal**: Delegate EVERYTHING to agent - context reading, Exa research, content synthesis, and file writing
**Agent Task Specification**:
**Template Files**:
```
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt # Agent instructions
├── rule-core.txt # Core principles template
├── rule-patterns.txt # Implementation patterns template
├── rule-testing.txt # Testing rules template
├── rule-config.txt # Configuration rules template
├── rule-api.txt # API rules template (backend)
└── rule-components.txt # Component rules template (frontend)
```
**Agent Task**:
```javascript
Task({
Task(
subagent_type: "general-purpose",
description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
prompt: `
You are generating path-conditional rules for Claude Code.
description: "Generate tech stack SKILL: {CONTEXT_PATH}",
prompt: "
Generate a complete tech stack SKILL package with Exa research.
## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/
**Context Provided**:
- Mode: {MODE}
- Context Path: {CONTEXT_PATH}
## Instructions
**Templates Available**:
- Module Format: ~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt
- SKILL Index: ~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt
Read the agent prompt template for detailed instructions:
$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)
**Your Responsibilities**:
## Execution Steps
1. **Extract Tech Stack Information**:
1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion
IF MODE == 'session':
- Read `.workflow/active/{session_id}/workflow-session.json`
- Read `.workflow/active/{session_id}/.process/context-package.json`
- Extract tech_stack: {language, frameworks, libraries}
- Build tech stack name: \"{language}-{framework1}-{framework2}\"
- Example: \"typescript-react-nextjs\"
## Variable Substitutions
IF MODE == 'direct':
- Tech stack name = CONTEXT_PATH
- Parse composite: split by '-' delimiter
- Example: \"typescript-react-nextjs\" → [\"typescript\", \"react\", \"nextjs\"]
Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
2. **Execute Exa Research** (4-6 parallel queries):
Base Queries (always execute):
- mcp__exa__get_code_context_exa(query: \"{tech} core principles best practices 2025\", tokensNum: 8000)
- mcp__exa__get_code_context_exa(query: \"{tech} common patterns architecture examples\", tokensNum: 7000)
- mcp__exa__web_search_exa(query: \"{tech} configuration setup tooling 2025\", numResults: 5)
- mcp__exa__get_code_context_exa(query: \"{tech} testing strategies\", tokensNum: 5000)
Component Queries (if composite):
- For each additional component:
mcp__exa__get_code_context_exa(query: \"{main_tech} {component} integration\", tokensNum: 5000)
3. **Read Module Format Template**:
Read template for structure guidance:
```bash
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt)
```
4. **Synthesize Content into 6 Modules**:
Follow template structure from tech-module-format.txt:
- **principles.md** - Core concepts, philosophies (~3K tokens)
- **patterns.md** - Implementation patterns with code examples (~5K tokens)
- **practices.md** - Best practices, anti-patterns, pitfalls (~4K tokens)
- **testing.md** - Testing strategies, frameworks (~3K tokens)
- **config.md** - Setup, configuration, tooling (~3K tokens)
- **frameworks.md** - Framework integration (only if composite, ~4K tokens)
Each module follows template format:
- Frontmatter (YAML)
- Main sections with clear headings
- Code examples from Exa research
- Best practices sections
- References to Exa sources
5. **Write Files Directly**:
```javascript
// Create directory
bash(mkdir -p \".claude/skills/{tech_stack_name}\")
// Write each module file using Write tool
Write({ file_path: \".claude/skills/{tech_stack_name}/principles.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/patterns.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/practices.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/testing.md\", content: ... })
Write({ file_path: \".claude/skills/{tech_stack_name}/config.md\", content: ... })
// Write frameworks.md only if composite
// Write metadata.json
Write({
file_path: \".claude/skills/{tech_stack_name}/metadata.json\",
content: JSON.stringify({
tech_stack_name,
components,
is_composite,
generated_at: timestamp,
source: \"exa-research\",
research_summary: { total_queries, total_sources }
})
})
```
6. **Report Completion**:
Provide summary:
- Tech stack name
- Files created (count)
- Exa queries executed
- Sources consulted
**CRITICAL**:
- MUST read external template files before generating content (step 3 for modules, step 4 for index)
- You have FULL autonomy - read files, execute Exa, synthesize content, write files
- Do NOT return JSON or structured data - produce actual .md files
- Handle errors gracefully (Exa failures, missing files, template read failures)
- If tech stack cannot be determined, ask orchestrator to clarify
"
)
```
**Completion Criteria**:
- 4-6 rule files written with proper `paths` frontmatter
- Agent task executed successfully
- 5-6 modular files written to `.claude/skills/{tech_stack_name}/`
- metadata.json written
- Agent reports files created
- Agent reports completion
**TodoWrite**: Mark phase 2 completed
**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
---
### Phase 3: Verify & Report
### Phase 3: Generate SKILL.md Index
**Goal**: Verify generated files and provide usage summary
**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
**Goal**: Read generated module files and create SKILL.md index with loading recommendations
**Steps**:
1. **Verify Files**:
1. **Verify Generated Files**:
```bash
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
bash(find ".claude/skills/${TECH_STACK_NAME}" -name "*.md" -type f | sort)
```
2. **Validate Frontmatter**:
```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```
3. **Read Metadata**:
2. **Read metadata.json**:
```javascript
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
Read(.claude/skills/${TECH_STACK_NAME}/metadata.json)
// Extract: tech_stack_name, components, is_composite, research_summary
```
4. **Generate Summary Report**:
3. **Read Module Headers** (optional, first 20 lines):
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/principles.md, limit: 20)
// Repeat for other modules
```
Tech Stack Rules Generated
Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/
4. **Read SKILL Index Template**:
Files Created:
├── core.md → paths: **/*.{ext}
├── patterns.md → paths: src/**/*.{ext}
├── testing.md → paths: **/*.{test,spec}.{ext}
├── config.md → paths: *.config.*
├── api.md → paths: **/api/**/* (if backend)
└── components.md → paths: **/components/**/* (if frontend)
Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required
Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```javascript
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt)
```
5. **Generate SKILL.md Index**:
Follow template from tech-skill-index.txt with variable substitutions:
- `{TECH_STACK_NAME}`: From metadata.json
- `{MAIN_TECH}`: Primary technology
- `{ISO_TIMESTAMP}`: Current timestamp
- `{QUERY_COUNT}`: From research_summary
- `{SOURCE_COUNT}`: From research_summary
- Conditional sections for composite tech stacks
Template provides structure for:
- Frontmatter with metadata
- Overview and tech stack description
- Module organization (Core/Practical/Config sections)
- Loading recommendations (Quick/Implementation/Complete)
- Usage guidelines and auto-trigger keywords
- Research metadata and version history
6. **Write SKILL.md**:
```javascript
Write({
file_path: `.claude/skills/${TECH_STACK_NAME}/SKILL.md`,
content: generatedIndexMarkdown
})
```
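A minimal substitution sketch for step 5 (assumed; `template` holds the index template text and `meta` the parsed metadata.json):

```javascript
// Fill placeholders from metadata.json before writing SKILL.md
const generatedIndexMarkdown = template
  .replaceAll("{TECH_STACK_NAME}", meta.tech_stack_name)
  .replaceAll("{MAIN_TECH}", meta.components[0])
  .replaceAll("{ISO_TIMESTAMP}", new Date().toISOString())
  .replaceAll("{QUERY_COUNT}", String(meta.research_summary.total_queries))
  .replaceAll("{SOURCE_COUNT}", String(meta.research_summary.total_sources));
```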
**Completion Criteria**:
- SKILL.md index written
- All module files verified
- Loading recommendations included
**TodoWrite**: Mark phase 3 completed
**Final Report**:
```
Tech Stack SKILL Package Complete
Tech Stack: {TECH_STACK_NAME}
Location: .claude/skills/{TECH_STACK_NAME}/
Files: SKILL.md + 5-6 modules + metadata.json
Exa Research: {queries} queries, {sources} sources
Usage: Skill(command: "{TECH_STACK_NAME}")
```
---
## Path Pattern Reference
## Implementation Details
| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |
### TodoWrite Patterns
| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |
**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "in_progress", "activeForm": "Preparing context paths"},
{"content": "Agent produces all module files", "status": "pending", "activeForm": "Agent producing files"},
{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```
**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "in_progress", ...},
{"content": "Generate SKILL.md index", "status": "pending", ...}
]})
// After Phase 2
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
// After Phase 3
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...},
{"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```
**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
{"content": "Prepare context paths", "status": "completed", ...},
{"content": "Agent produces all module files", "status": "completed", ...}, // Skipped
{"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```
### Execution Flow
**Full Path**:
```
User → TodoWrite Init → Phase 1 (prepare) → Phase 2 (agent writes files) → Phase 3 (write index) → Report
```
**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```
### Error Handling
**Phase 1 Errors**:
- Invalid session ID: Report error, verify session exists
- Missing context-package: Warn, fall back to direct mode
- No tech stack detected: Ask user to specify tech stack name
**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if fails again
- Exa API failures: Agent handles internally with retries
- Incomplete results: Warn user, proceed with partial data if minimum sections available
**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration
---
## Parameters
```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate] [--tool <gemini|qwen>]
```
**Arguments**:
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules
- **session-id | tech-stack-name**: Input source (auto-detected by WFS- prefix)
- Session mode: `WFS-user-auth-v2` - Extract tech stack from workflow
- Direct mode: `"typescript"`, `"typescript-react-nextjs"` - User specifies
- **--regenerate**: Force regenerate existing SKILL (deletes and recreates)
- **--tool**: Reserved for future CLI integration (default: gemini)
---
## Examples
### Single Language
**Generated File Structure** (for all examples):
```
.claude/skills/{tech-stack}/
├── SKILL.md # Index (Phase 3)
├── principles.md # Agent (Phase 2)
├── patterns.md # Agent
├── practices.md # Agent
├── testing.md # Agent
├── config.md # Agent
├── frameworks.md # Agent (if composite)
└── metadata.json # Agent
```
### Direct Mode - Single Stack
```bash
/memory:tech-research "typescript"
```
**Output**: `.claude/rules/tech/typescript/` with 4 rule files
**Workflow**:
1. Phase 1: Detects direct mode, checks existing SKILL
2. Phase 2: Agent executes 4 Exa queries, writes 5 modules
3. Phase 3: Generates SKILL.md index
### Frontend Stack
### Direct Mode - Composite Stack
```bash
/memory:tech-research "typescript-react"
/memory:tech-research "typescript-react-nextjs"
```
**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)
**Workflow**:
1. Phase 1: Decomposes into ["typescript", "react", "nextjs"]
2. Phase 2: Agent executes 6 Exa queries (4 base + 2 components), writes 6 modules (adds frameworks.md)
3. Phase 3: Generates SKILL.md index with framework integration
### Backend Stack
```bash
/memory:tech-research "python-fastapi"
```
**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)
### From Session
### Session Mode - Extract from Workflow
```bash
/memory:tech-research WFS-user-auth-20251104
```
**Workflow**: Extract tech stack from session → Generate rules
**Workflow**:
1. Phase 1: Reads session, extracts tech stack: `python-fastapi-sqlalchemy`
2. Phase 2: Agent researches Python + FastAPI + SQLAlchemy, writes 6 modules
3. Phase 3: Generates SKILL.md index
---
### Regenerate Existing
## Comparison: Rules vs SKILL
```bash
/memory:tech-research "react" --regenerate
```
**Workflow**:
1. Phase 1: Deletes existing SKILL due to --regenerate
2. Phase 2: Agent executes fresh Exa research (latest 2025 practices)
3. Phase 3: Generates updated SKILL.md
### Skip Path - Fast Update
```bash
/memory:tech-research "python"
```
**Scenario**: SKILL already exists with 7 files
**Workflow**:
1. Phase 1: Detects existing SKILL, sets SKIP_GENERATION = true
2. Phase 2: **SKIPPED**
3. Phase 3: Updates SKILL.md index only (5-10x faster)
| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |
**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup
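For illustration, a path-conditional rule file couples `paths` frontmatter with its rule body - a hypothetical sketch, assuming the layout listed under Output Structure above:

```markdown
---
paths: src/**/*.{ts,tsx}
---
# TypeScript Implementation Patterns

<!-- Loaded automatically whenever a matching file is edited; no Skill() call needed -->
- Pattern rules generated by the agent from Exa research go here
```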
View File
@@ -99,10 +99,10 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
// Get module structure
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
// OR with --path
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
```
**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.
@@ -185,7 +185,7 @@ for (let layer of [3, 2, 1]) {
let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"${strategy}","path":".","tool":"${tool}"}'`,
command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -244,7 +244,7 @@ MODULES:
TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}
EXECUTION SCRIPT: ccw tool exec update_module_claude
EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
- Accepts strategy parameter: multi-layer | single-layer
- Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -252,7 +252,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"{{strategy}}","path":".","tool":"${tool}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}"`,
run_in_background: false
})
exit_code=$?
View File
@@ -41,7 +41,7 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a
```javascript
// Detect changed modules
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -102,7 +102,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
return async () => {
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"${tool}"}'`,
command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -184,21 +184,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_1}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_2}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_3}}"}'`,
command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}"`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module
Some files were not shown because too many files have changed in this diff.