Mirror of https://github.com/catlog22/Claude-Code-Workflow.git, synced 2026-02-06 01:54:11 +08:00.
Compare commits
238 Commits
.claude/CLAUDE.md (new file, 33 lines)

@@ -0,0 +1,33 @@

# Claude Instructions

- **CLI Tools Usage**: @~/.claude/workflows/cli-tools-usage.md
- **Coding Philosophy**: @~/.claude/workflows/coding-philosophy.md
- **Context Requirements**: @~/.claude/workflows/context-tools-ace.md
- **File Modification**: @~/.claude/workflows/file-modification.md
- **CLI Endpoints Config**: @.claude/cli-tools.json

## CLI Endpoints

**Strictly follow the @.claude/cli-tools.json configuration**

Available CLI endpoints are dynamically defined by the config file:
- Built-in tools and their enable/disable status
- Custom API endpoints registered via the Dashboard
- Managed through the CCW Dashboard Status page

## Tool Execution

### Agent Calls
- **Always use `run_in_background: false`** for Task tool agent calls: `Task({ subagent_type: "xxx", prompt: "...", run_in_background: false })` to ensure synchronous execution and immediate result visibility
- **TaskOutput usage**: Only use `TaskOutput({ task_id: "xxx", block: false })` plus a sleep loop to poll completion status. NEVER read intermediate output during agent/CLI execution - wait for the final result only

### CLI Tool Calls (ccw cli)
- **Always use `run_in_background: true`** for the Bash tool when calling ccw cli:
  ```
  Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
  ```
- **After a CLI call**: If there are no other tasks, stop immediately - let the CLI execute in the background; do NOT poll with TaskOutput
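Taken together, a minimal sketch of these two rules in the tool-call style used above (the `task_id`/`completed` field names and the 10-second interval are illustrative assumptions, not documented API):

```javascript
// Rule 1: agent calls run synchronously - the result is available on return.
const result = Task({ subagent_type: "code-developer", prompt: "Implement IMPL-1", run_in_background: false })

// Rule 2: if a task does run in the background, poll only for completion;
// never read intermediate output mid-execution.
let out = TaskOutput({ task_id: "abc123", block: false })   // hypothetical task_id
while (!out.completed) {                                    // field name assumed
  Bash({ command: "sleep 10", run_in_background: false })   // illustrative interval
  out = TaskOutput({ task_id: "abc123", block: false })
}
// Consume the final result only.
```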

## Code Diagnostics

- **Prefer `mcp__ide__getDiagnostics`** for code error checking over shell-based TypeScript compilation
.claude/active_memory_config.json (new file, 4 lines)

@@ -0,0 +1,4 @@

{
  "interval": "manual",
  "tool": "gemini"
}
@@ -203,7 +203,13 @@ Generate individual `.task/IMPL-*.json` files with the following structure:

  "id": "IMPL-N",
  "title": "Descriptive task name",
  "status": "pending|active|completed|blocked",
  "context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
  "context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json",
  "cli_execution_id": "WFS-{session}-IMPL-N",
  "cli_execution": {
    "strategy": "new|resume|fork|merge_fork",
    "resume_from": "parent-cli-id",
    "merge_from": ["id1", "id2"]
  }
}
```
@@ -216,6 +222,50 @@ Generate individual `.task/IMPL-*.json` files with the following structure:

- `title`: Descriptive task name summarizing the work
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
- `cli_execution_id`: Unique CLI conversation ID (format: `{session_id}-{task_id}`)
- `cli_execution`: CLI execution strategy based on task dependencies
  - `strategy`: Execution pattern (`new`, `resume`, `fork`, `merge_fork`)
  - `resume_from`: Parent task's cli_execution_id (for resume/fork)
  - `merge_from`: Array of parent cli_execution_ids (for merge_fork)

**CLI Execution Strategy Rules** (MANDATORY - apply to all tasks):

| Dependency Pattern | Strategy | CLI Command Pattern |
|--------------------|----------|---------------------|
| No `depends_on` | `new` | `--id {cli_execution_id}` |
| 1 parent, parent has 1 child | `resume` | `--resume {resume_from}` |
| 1 parent, parent has N children | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| N parents | `merge_fork` | `--resume {merge_from.join(',')} --id {cli_execution_id}` |

**Strategy Selection Algorithm**:
```javascript
function computeCliStrategy(task, allTasks) {
  const deps = task.context?.depends_on || []
  const childCount = allTasks.filter(t =>
    t.context?.depends_on?.includes(task.id)
  ).length

  if (deps.length === 0) {
    return { strategy: "new" }
  } else if (deps.length === 1) {
    const parentTask = allTasks.find(t => t.id === deps[0])
    const parentChildCount = allTasks.filter(t =>
      t.context?.depends_on?.includes(deps[0])
    ).length

    if (parentChildCount === 1) {
      return { strategy: "resume", resume_from: parentTask.cli_execution_id }
    } else {
      return { strategy: "fork", resume_from: parentTask.cli_execution_id }
    }
  } else {
    const mergeFrom = deps.map(depId =>
      allTasks.find(t => t.id === depId).cli_execution_id
    )
    return { strategy: "merge_fork", merge_from: mergeFrom }
  }
}
```
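For example (hypothetical task IDs, with `context.depends_on` shaped as in the schema above), a graph where IMPL-2 and IMPL-3 both depend on IMPL-1 resolves to `new` for the root and `fork` for both children, since the shared parent has two children:

```javascript
const tasks = [
  { id: "IMPL-1", cli_execution_id: "WFS-demo-IMPL-1", context: { depends_on: [] } },
  { id: "IMPL-2", cli_execution_id: "WFS-demo-IMPL-2", context: { depends_on: ["IMPL-1"] } },
  { id: "IMPL-3", cli_execution_id: "WFS-demo-IMPL-3", context: { depends_on: ["IMPL-1"] } },
]

computeCliStrategy(tasks[0], tasks) // { strategy: "new" }
computeCliStrategy(tasks[1], tasks) // { strategy: "fork", resume_from: "WFS-demo-IMPL-1" }
computeCliStrategy(tasks[2], tasks) // { strategy: "fork", resume_from: "WFS-demo-IMPL-1" }
```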
#### Meta Object

@@ -225,7 +275,13 @@ Generate individual `.task/IMPL-*.json` files with the following structure:

    "type": "feature|bugfix|refactor|test-gen|test-fix|docs",
    "agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
    "execution_group": "parallel-abc123|null",
    "module": "frontend|backend|shared|null"
    "module": "frontend|backend|shared|null",
    "execution_config": {
      "method": "agent|hybrid|cli",
      "cli_tool": "codex|gemini|qwen|auto",
      "enable_resume": true,
      "previous_cli_id": "string|null"
    }
  }
}
```

@@ -235,6 +291,11 @@ Generate individual `.task/IMPL-*.json` files with the following structure:

- `agent`: Assigned agent for execution
- `execution_group`: Parallelization group ID (tasks with the same ID can run concurrently) or `null` for sequential tasks
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
- `execution_config`: CLI execution settings (from userConfig in task-generate-agent)
  - `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
  - `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, or `auto`
  - `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
  - `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)

**Test Task Extensions** (for type="test-gen" or type="test-fix"):
@@ -392,7 +453,7 @@ Generate individual `.task/IMPL-*.json` files with the following structure:

// Pattern: Project structure analysis
{
  "step": "analyze_project_architecture",
  "commands": ["bash(~/.claude/scripts/get_modules_by_depth.sh)"],
  "commands": ["bash(ccw tool exec get_modules_by_depth '{}')"],
  "output_to": "project_architecture"
},

@@ -409,14 +470,14 @@ Generate individual `.task/IMPL-*.json` files with the following structure:

// Pattern: Gemini CLI deep analysis
{
  "step": "gemini_analyze_[aspect]",
  "command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
  "command": "ccw cli -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY' --tool gemini --mode analysis --cd [path]",
  "output_to": "analysis_result"
},

// Pattern: Qwen CLI analysis (fallback/alternative)
{
  "step": "qwen_analyze_[aspect]",
  "command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
  "command": "ccw cli -p '[similar to gemini pattern]' --tool qwen --mode analysis --cd [path]",
  "output_to": "analysis_result"
},
@@ -457,7 +518,7 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:

4. **Command Composition Patterns**:
   - **Single command**: `bash([simple_search])`
   - **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
   - **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
   - **CLI analysis**: `ccw cli -p '[prompt]' --tool gemini --mode analysis --cd [path]`
   - **MCP integration**: `mcp__[tool]__[function]([params])`

**Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
@@ -479,11 +540,12 @@ The `implementation_approach` supports **two execution modes** based on the presence of the `command` field:

- Specified command executes the step directly
- Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
- **Use for**: Large-scale features, complex refactoring, or when the user explicitly requests CLI tool usage
- **Required fields**: Same as default mode **PLUS** `command`
- **Command patterns**:
  - `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
  - `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
  - `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
- **Required fields**: Same as default mode **PLUS** `command`, `resume_from` (optional)
- **Command patterns** (with resume support):
  - `ccw cli -p '[prompt]' --tool codex --mode write --cd [path]`
  - `ccw cli -p '[prompt]' --resume ${previousCliId} --tool codex --mode write` (resume from previous)
  - `ccw cli -p '[prompt]' --tool gemini --mode write --cd [path]` (write mode)
- **Resume mechanism**: When a step depends on a previous CLI execution, include `--resume` with the previous execution ID

**Semantic CLI Tool Selection**:

@@ -500,12 +562,12 @@ Agent determines CLI tool usage per-step based on user semantics and task nature

**Task-Based Selection** (when no explicit user preference):
- **Implementation/coding**: Codex preferred for autonomous development
- **Analysis/exploration**: Gemini preferred for large context analysis
- **Documentation**: Gemini/Qwen with write mode (`--approval-mode yolo`)
- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
- **Testing**: Depends on complexity - simple=agent, complex=Codex

**Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
- Agent orchestrates task execution
- When a step has a `command` field, the agent executes it via Bash
- When a step has a `command` field, the agent executes it via CCW CLI
- When a step has no `command` field, the agent implements directly
- This maintains agent control while leveraging CLI tool power
@@ -559,11 +621,26 @@ Agent determines CLI tool usage per-step based on user semantics and task nature

  "step": 3,
  "title": "Execute implementation using CLI tool",
  "description": "Use Codex/Gemini for complex autonomous execution",
  "command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
  "command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
  "modification_points": ["[Same as default mode]"],
  "logic_flow": ["[Same as default mode]"],
  "depends_on": [1, 2],
  "output": "cli_implementation"
  "output": "cli_implementation",
  "cli_output_id": "step3_cli_id" // Store execution ID for resume
},

// === CLI MODE with Resume: Continue from previous CLI execution ===
{
  "step": 4,
  "title": "Continue implementation with context",
  "description": "Resume from previous step with accumulated context",
  "command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
  "resume_from": "step3_cli_id", // Reference previous step's CLI ID
  "modification_points": ["[Continue from step 3]"],
  "logic_flow": ["[Build on previous output]"],
  "depends_on": [3],
  "output": "continued_implementation",
  "cli_output_id": "step4_cli_id"
}
]
```
@@ -759,6 +836,8 @@ Use `analysis_results.complexity` or task count to determine structure:

- Use provided context package: Extract all information from structured context
- Respect memory-first rule: Use provided content (already loaded from memory/file)
- Follow 6-field schema: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Assign CLI execution IDs**: Every task MUST have `cli_execution_id` (format: `{session_id}-{task_id}`)
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
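A compact sketch of how the last three rules might be enforced (the helper name and error handling are illustrative assumptions, not part of the agent spec):

```javascript
function validateGeneratedTasks(tasks, sessionId) {
  // Hard limit: maximum 12 tasks, otherwise request re-scope
  if (tasks.length > 12) throw new Error(`${tasks.length} tasks exceeds the 12-task hard limit - re-scope required`)
  for (const task of tasks) {
    // Every task must carry a cli_execution_id of the form {session_id}-{task_id}
    if (task.cli_execution_id !== `${sessionId}-${task.id}`) throw new Error(`${task.id}: missing or malformed cli_execution_id`)
    // Strategy must have been computed from depends_on
    if (!["new", "resume", "fork", "merge_fork"].includes(task.cli_execution?.strategy)) throw new Error(`${task.id}: cli_execution.strategy not set`)
  }
  return tasks
}
```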
@@ -67,7 +67,7 @@ Score = 0

**1. Project Structure**:
```bash
~/.claude/scripts/get_modules_by_depth.sh
ccw tool exec get_modules_by_depth '{}'
```

**2. Content Search**:

@@ -100,7 +100,7 @@ CONTEXT: @**/*

# Specific patterns
CONTEXT: @CLAUDE.md @src/**/* @*.ts

# Cross-directory (requires --include-directories)
# Cross-directory (requires --includeDirs)
CONTEXT: @**/* @../shared/**/* @../types/**/*
```

@@ -134,7 +134,7 @@ RULES: $(cat {selected_template}) | {constraints}

```
analyze|plan → gemini (qwen fallback) + mode=analysis
execute (simple|medium) → gemini (qwen fallback) + mode=write
execute (complex) → codex + mode=auto
execute (complex) → codex + mode=write
discuss → multi (gemini + codex parallel)
```
@@ -144,43 +144,40 @@ discuss → multi (gemini + codex parallel)

- Codex: `gpt-5` (default), `gpt5-codex` (large context)
- **Position**: `-m` after prompt, before flags

### Command Templates
### Command Templates (CCW Unified CLI)

**Gemini/Qwen (Analysis)**:
```bash
cd {dir} && gemini -p "
ccw cli -p "
PURPOSE: {goal}
TASK: {task}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {output}
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
" -m gemini-2.5-pro
" --tool gemini --mode analysis --cd {dir}

# Qwen fallback: Replace 'gemini' with 'qwen'
# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
```

**Gemini/Qwen (Write)**:
```bash
cd {dir} && gemini -p "..." --approval-mode yolo
ccw cli -p "..." --tool gemini --mode write --cd {dir}
```

**Codex (Auto)**:
**Codex (Write)**:
```bash
codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access

# Resume: Add 'resume --last' after prompt
codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
ccw cli -p "..." --tool codex --mode write --cd {dir}
```

**Cross-Directory** (Gemini/Qwen):
```bash
cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared
ccw cli -p "CONTEXT: @**/* @../shared/**/*" --tool gemini --mode analysis --cd src/auth --includeDirs ../shared
```

**Directory Scope**:
- `@` only references the current directory + subdirectories
- External dirs: MUST use `--include-directories` + explicit CONTEXT reference
- External dirs: MUST use `--includeDirs` + explicit CONTEXT reference

**Timeout**: Simple 20min | Medium 40min | Complex 60min (Codex ×1.5)
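The timeout rule maps mechanically to minutes; a tiny illustrative helper (the function name is assumed, not part of the spec):

```javascript
// Base timeouts from the rule above, in minutes; Codex gets a 1.5x multiplier.
function timeoutMinutes(complexity, tool) {
  const base = { simple: 20, medium: 40, complex: 60 }[complexity]
  return tool === "codex" ? base * 1.5 : base
}

timeoutMinutes("complex", "codex") // 90
```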
@@ -67,7 +67,7 @@ Phase 4: Output Generation

```bash
# Project structure
~/.claude/scripts/get_modules_by_depth.sh
ccw tool exec get_modules_by_depth '{}'

# Pattern discovery (adapt based on language)
rg "^export (class|interface|function) " --type ts -n

@@ -78,14 +78,14 @@ rg "^import .* from " -n | head -30

### Gemini Semantic Analysis (deep-scan, dependency-map)

```bash
cd {dir} && gemini -p "
ccw cli -p "
PURPOSE: {from prompt}
TASK: {from prompt}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {from prompt}
RULES: {from prompt, if template specified} | analysis=READ-ONLY
"
" --tool gemini --mode analysis --cd {dir}
```

**Fallback Chain**: Gemini → Qwen → Codex → Bash-only
@@ -1,140 +1,117 @@

---
name: cli-lite-planning-agent
description: |
  Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans. Used by lite-plan workflow for Medium/High complexity tasks.
  Generic planning agent for lite-plan and lite-fix workflows. Generates structured plan JSON based on provided schema reference.

  Core capabilities:
  - Task decomposition (1-10 tasks with IDs: T1, T2...)
  - Dependency analysis (depends_on references)
  - Flow control (parallel/sequential phases)
  - Multi-angle exploration context integration
  - Schema-driven output (plan-json-schema or fix-plan-json-schema)
  - Task decomposition with dependency analysis
  - CLI execution ID assignment for fork/merge strategies
  - Multi-angle context integration (explorations or diagnoses)
color: cyan
---

You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with the lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate a planObject for downstream execution.
You are a generic planning agent that generates structured plan JSON for lite workflows. The output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate a planObject conforming to the specified schema.

## Output Schema

**Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`

**planObject Structure**:
```javascript
{
  summary: string,          // 2-3 sentence overview
  approach: string,         // High-level strategy
  tasks: [TaskObject],      // 1-10 structured tasks
  flow_control: {           // Execution phases
    execution_order: [{ phase, tasks, type }],
    exit_conditions: { success, failure }
  },
  focus_paths: string[],    // Affected files (aggregated)
  estimated_time: string,
  recommended_execution: "Agent" | "Codex",
  complexity: "Low" | "Medium" | "High",
  _metadata: { timestamp, source, planning_mode, exploration_angles, duration_seconds }
}
```

**TaskObject Structure**:
```javascript
{
  id: string,               // T1, T2, T3...
  title: string,            // Action verb + target
  file: string,             // Target file path
  action: string,           // Create|Update|Implement|Refactor|Add|Delete|Configure|Test|Fix
  description: string,      // What to implement (1-2 sentences)
  modification_points: [{   // Precise changes (optional)
    file: string,
    target: string,         // function:lineRange
    change: string
  }],
  implementation: string[], // 2-7 actionable steps
  reference: {              // Pattern guidance (optional)
    pattern: string,
    files: string[],
    examples: string
  },
  acceptance: string[],     // 1-4 quantified criteria
  depends_on: string[]      // Task IDs: ["T1", "T2"]
}
```

## Input Context

```javascript
{
  task_description: string,
  explorationsContext: { [angle]: ExplorationResult } | null,
  explorationAngles: string[],
  // Required
  task_description: string, // Task or bug description
  schema_path: string,      // Schema reference path (plan-json-schema or fix-plan-json-schema)
  session: { id, folder, artifacts },

  // Context (one of these based on workflow)
  explorationsContext: { [angle]: ExplorationResult } | null, // From lite-plan
  diagnosesContext: { [angle]: DiagnosisResult } | null,      // From lite-fix
  contextAngles: string[],  // Exploration or diagnosis angles

  // Optional
  clarificationContext: { [question]: answer } | null,
  complexity: "Low" | "Medium" | "High",
  cli_config: { tool, template, timeout, fallback },
  session: { id, folder, artifacts }
  complexity: "Low" | "Medium" | "High",            // For lite-plan
  severity: "Low" | "Medium" | "High" | "Critical", // For lite-fix
  cli_config: { tool, template, timeout, fallback }
}
```

## Schema-Driven Output

**CRITICAL**: Read the schema reference first to determine the output structure:
- `plan-json-schema.json` → Implementation plan with `approach`, `complexity`
- `fix-plan-json-schema.json` → Fix plan with `root_cause`, `severity`, `risk_level`

```javascript
// Step 1: Always read the schema first
const schema = Bash(`cat ${schema_path}`)

// Step 2: Generate a plan conforming to the schema
const planObject = generatePlanFromSchema(schema, context)
```

## Execution Flow

```
Phase 1: CLI Execution
├─ Aggregate multi-angle exploration findings
Phase 1: Schema & Context Loading
├─ Read schema reference (plan-json-schema or fix-plan-json-schema)
├─ Aggregate multi-angle context (explorations or diagnoses)
└─ Determine output structure from schema

Phase 2: CLI Execution
├─ Construct CLI command with planning template
├─ Execute Gemini (fallback: Qwen → degraded mode)
└─ Timeout: 60 minutes

Phase 2: Parsing & Enhancement
├─ Parse CLI output sections (Summary, Approach, Tasks, Flow Control)
Phase 3: Parsing & Enhancement
├─ Parse CLI output sections
├─ Validate and enhance task objects
└─ Infer missing fields from exploration context
└─ Infer missing fields from context

Phase 3: planObject Generation
├─ Build planObject from parsed results
├─ Generate flow_control from depends_on if not provided
├─ Aggregate focus_paths from all tasks
└─ Return to orchestrator (lite-plan)
Phase 4: planObject Generation
├─ Build planObject conforming to schema
├─ Assign CLI execution IDs and strategies
├─ Generate flow_control from depends_on
└─ Return to orchestrator
```

## CLI Command Template

```bash
cd {project_root} && {cli_tool} -p "
PURPOSE: Generate implementation plan for {complexity} task
ccw cli -p "
PURPOSE: Generate plan for {task_description}
TASK:
• Analyze: {task_description}
• Break down into 1-10 tasks with: id, title, file, action, description, modification_points, implementation, reference, acceptance, depends_on
• Identify parallel vs sequential execution phases
• Analyze task/bug description and context
• Break down into tasks following schema structure
• Identify dependencies and execution phases
MODE: analysis
CONTEXT: @**/* | Memory: {exploration_summary}
CONTEXT: @**/* | Memory: {context_summary}
EXPECTED:
## Implementation Summary
## Summary
[overview]

## High-Level Approach
[strategy]

## Task Breakdown
### T1: [Title]
**File**: [path]
### T1: [Title] (or FIX1 for fix-plan)
**Scope**: [module/feature path]
**Action**: [type]
**Description**: [what]
**Modification Points**: - [file]: [target] - [change]
**Implementation**: 1. [step]
**Reference**: - Pattern: [name] - Files: [paths] - Examples: [guidance]
**Acceptance**: - [quantified criterion]
**Acceptance/Verification**: - [quantified criterion]
**Depends On**: []

## Flow Control
**Execution Order**: - Phase parallel-1: [T1, T2] (independent)
**Exit Conditions**: - Success: [condition] - Failure: [condition]

## Time Estimate
**Total**: [time]

RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
- Acceptance must be quantified (counts, method names, metrics)
- Dependencies use task IDs (T1, T2)
- Follow schema structure from {schema_path}
- Acceptance/verification must be quantified
- Dependencies use task IDs
- analysis=READ-ONLY
"
" --tool {cli_tool} --mode analysis --cd {project_root}
```

## Core Functions
@@ -279,6 +256,51 @@ function inferFile(task, ctx) {

}
```

### CLI Execution ID Assignment (MANDATORY)

```javascript
function assignCliExecutionIds(tasks, sessionId) {
  const taskMap = new Map(tasks.map(t => [t.id, t]))
  const childCount = new Map()

  // Count children for each task
  tasks.forEach(task => {
    (task.depends_on || []).forEach(depId => {
      childCount.set(depId, (childCount.get(depId) || 0) + 1)
    })
  })

  tasks.forEach(task => {
    task.cli_execution_id = `${sessionId}-${task.id}`
    const deps = task.depends_on || []

    if (deps.length === 0) {
      task.cli_execution = { strategy: "new" }
    } else if (deps.length === 1) {
      const parent = taskMap.get(deps[0])
      const parentChildCount = childCount.get(deps[0]) || 0
      task.cli_execution = parentChildCount === 1
        ? { strategy: "resume", resume_from: parent.cli_execution_id }
        : { strategy: "fork", resume_from: parent.cli_execution_id }
    } else {
      task.cli_execution = {
        strategy: "merge_fork",
        merge_from: deps.map(depId => taskMap.get(depId).cli_execution_id)
      }
    }
  })
  return tasks
}
```

**Strategy Rules**:

| depends_on | Parent Children | Strategy | CLI Command |
|------------|-----------------|----------|-------------|
| [] | - | `new` | `--id {cli_execution_id}` |
| [T1] | 1 | `resume` | `--resume {resume_from}` |
| [T1] | >1 | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| [T1,T2] | - | `merge_fork` | `--resume {ids.join(',')} --id {cli_execution_id}` |

### Flow Control Inference

```javascript
@@ -303,21 +325,44 @@ function inferFlowControl(tasks) {

### planObject Generation

```javascript
function generatePlanObject(parsed, enrichedContext, input) {
function generatePlanObject(parsed, enrichedContext, input, schemaType) {
  const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext)
  assignCliExecutionIds(tasks, input.session.id) // MANDATORY: Assign CLI execution IDs
  const flow_control = parsed.flow_control?.execution_order?.length > 0 ? parsed.flow_control : inferFlowControl(tasks)
  const focus_paths = [...new Set(tasks.flatMap(t => [t.file, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
  const focus_paths = [...new Set(tasks.flatMap(t => [t.file || t.scope, ...t.modification_points.map(m => m.file)]).filter(Boolean))]

  return {
    summary: parsed.summary || `Implementation plan for: ${input.task_description.slice(0, 100)}`,
    approach: parsed.approach || "Step-by-step implementation",
  // Base fields (common to both schemas)
  const base = {
    summary: parsed.summary || `Plan for: ${input.task_description.slice(0, 100)}`,
    tasks,
    flow_control,
    focus_paths,
    estimated_time: parsed.time_estimate || `${tasks.length * 30} minutes`,
    recommended_execution: input.complexity === "Low" ? "Agent" : "Codex",
    complexity: input.complexity,
    _metadata: { timestamp: new Date().toISOString(), source: "cli-lite-planning-agent", planning_mode: "agent-based", exploration_angles: input.explorationAngles || [], duration_seconds: Math.round((Date.now() - startTime) / 1000) }
    recommended_execution: (input.complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
    _metadata: {
      timestamp: new Date().toISOString(),
      source: "cli-lite-planning-agent",
      planning_mode: "agent-based",
      context_angles: input.contextAngles || [],
      duration_seconds: Math.round((Date.now() - startTime) / 1000)
    }
  }

  // Schema-specific fields
  if (schemaType === 'fix-plan') {
    return {
      ...base,
      root_cause: parsed.root_cause || "Root cause from diagnosis",
      strategy: parsed.strategy || "comprehensive_fix",
      severity: input.severity || "Medium",
      risk_level: parsed.risk_level || "medium"
    }
  } else {
    return {
      ...base,
      approach: parsed.approach || "Step-by-step implementation",
      complexity: input.complexity || "Medium"
    }
  }
}
```
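A hypothetical call for each schema type (inputs abbreviated; names as in the function above):

```javascript
// lite-plan: base fields plus approach/complexity
const plan = generatePlanObject(parsed, enrichedContext, { ...input, complexity: "Medium" }, "plan")

// lite-fix: base fields plus root_cause/strategy/severity/risk_level
const fixPlan = generatePlanObject(parsed, enrichedContext, { ...input, severity: "High" }, "fix-plan")
```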
@@ -383,9 +428,12 @@ function validateTask(task) {

## Key Reminders

**ALWAYS**:
- Generate task IDs (T1, T2, T3...)
- **Read schema first** to determine output structure
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
- Include depends_on (even if empty [])
- Quantify acceptance criteria
- **Assign cli_execution_id** (`{sessionId}-{taskId}`)
- **Compute cli_execution strategy** based on depends_on
- Quantify acceptance/verification criteria
- Generate flow_control from dependencies
- Handle CLI errors with fallback chain

@@ -394,3 +442,5 @@ function validateTask(task) {

- Use vague acceptance criteria
- Create circular dependencies
- Skip task validation
- **Skip CLI execution ID assignment**
- **Ignore schema structure**
@@ -107,7 +107,7 @@ Phase 3: Task JSON Generation

**Template-Based Command Construction with Test Layer Awareness**:
```bash
cd {project_root} && {cli_tool} -p "
ccw cli -p "
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
TASK:
• Review {failed_tests.length} {test_type} test failures: [{test_names}]

@@ -134,7 +134,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |

- Consider previous iteration failures
- Validate that the fix doesn't introduce new vulnerabilities
- analysis=READ-ONLY
" {timeout_flag}
" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
```

**Layer-Specific Guidance Injection**:

@@ -527,9 +527,9 @@ See: `.process/iteration-{iteration}-cli-output.txt`

1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
2. **Execute CLI**:
   ```bash
   gemini -p "PURPOSE: Analyze integration test failure...
   ccw cli -p "PURPOSE: Analyze integration test failure...
   TASK: Examine component interactions, data flow, interface contracts...
   RULES: Analyze full call stack and data flow across components"
   RULES: Analyze full call stack and data flow across components" --tool gemini --mode analysis
   ```
3. **Parse Output**: Extract the RCA, fix-recommendation, and verification-recommendation sections
4. **Generate Task JSON** (IMPL-fix-1.json):
@@ -34,10 +34,11 @@ You are a code execution specialist focused on implementing high-quality, production-ready code

- **context-package.json** (when available in workflow tasks)

**Context Package**:
`context-package.json` provides artifact paths - extract dynamically using `jq`:
`context-package.json` provides artifact paths - read using the Read tool or ccw session:
```bash
# Get role analysis paths from context package
jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
# Get context package content from the session using the Read tool
Read(.workflow/active/${SESSION_ID}/.process/context-package.json)
# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
```

**Pre-Analysis: Smart Tech Stack Loading**:

@@ -121,9 +122,9 @@ When task JSON contains `flow_control.implementation_approach` array:

- If `command` field is present, execute it; otherwise use agent capabilities

**CLI Command Execution (CLI Execute Mode)**:
When a step contains a `command` field with Codex CLI, execute via the Bash tool. For Codex resume:
- First task (`depends_on: []`): `codex -C [path] --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
- Subsequent tasks (has `depends_on`): Add `resume --last` flag to maintain session context
When a step contains a `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain the session

**Test-Driven Development**:
- Write tests first (red → green → refactor)
@@ -109,7 +109,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm

3. **load_session_metadata**
   - Action: Load session metadata
   - Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
   - Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
   - Output: session_metadata
```

@@ -155,7 +155,7 @@ When called, you receive:

- **User Context**: Specific requirements, constraints, and expectations from user discussion
- **Output Location**: Directory path for generated analysis files
- **Role Hint** (optional): Suggested role or role selection guidance
- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- **context-package.json** (CCW Workflow): Artifact paths catalog - use the Read tool to get the context package from `.workflow/active/{session}/.process/context-package.json`
- **ASSIGNED_ROLE** (optional): Specific role assignment
- **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions
@@ -31,7 +31,7 @@ You are a context discovery specialist focused on gathering relevant project information

### 1. Reference Documentation (Project Standards)
**Tools**:
- `Read()` - Load CLAUDE.md, README.md, architecture docs
- `Bash(~/.claude/scripts/get_modules_by_depth.sh)` - Project structure
- `Bash(ccw tool exec get_modules_by_depth '{}')` - Project structure
- `Glob()` - Find documentation files

**Use**: Phase 0 foundation setup

@@ -44,19 +44,19 @@ You are a context discovery specialist focused on gathering relevant project information

**Use**: Unfamiliar APIs/libraries/patterns

### 3. Existing Code Discovery
**Primary (Code-Index MCP)**:
- `mcp__code-index__set_project_path()` - Initialize index
- `mcp__code-index__find_files(pattern)` - File pattern matching
- `mcp__code-index__search_code_advanced()` - Content search
- `mcp__code-index__get_file_summary()` - File structure analysis
- `mcp__code-index__refresh_index()` - Update index
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="init", path=".")` - Initialize index for a directory
- `mcp__ccw-tools__codex_lens(action="search", query="pattern", path=".")` - Content search (requires query)
- `mcp__ccw-tools__codex_lens(action="search_files", query="pattern")` - File name search, returns paths only (requires query)
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Extract all symbols from a file (no query; returns functions/classes/variables)
- `mcp__ccw-tools__codex_lens(action="update", files=[...])` - Update index for specific files

**Fallback (CLI)**:
- `rg` (ripgrep) - Fast content search
- `find` - File discovery
- `Grep` - Pattern matching

**Priority**: Code-Index MCP > ripgrep > find > grep
**Priority**: CodexLens MCP > ripgrep > find > grep

## Simplified Execution Process (3 Phases)
@@ -77,12 +77,11 @@ if (file_exists(contextPackagePath)) {

**1.2 Foundation Setup**:
```javascript
// 1. Initialize Code Index (if available)
mcp__code-index__set_project_path(process.cwd())
mcp__code-index__refresh_index()
// 1. Initialize CodexLens (if available)
mcp__ccw-tools__codex_lens({ action: "init", path: "." })

// 2. Project Structure
bash(~/.claude/scripts/get_modules_by_depth.sh)
bash(ccw tool exec get_modules_by_depth '{}')

// 3. Load Documentation (if not in memory)
if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)

@@ -212,18 +211,18 @@ mcp__exa__web_search_exa({

**Layer 1: File Pattern Discovery**
```javascript
// Primary: Code-Index MCP
const files = mcp__code-index__find_files("*{keyword}*")
// Primary: CodexLens MCP
const files = mcp__ccw-tools__codex_lens({ action: "search_files", query: "*{keyword}*" })
// Fallback: find . -iname "*{keyword}*" -type f
```

**Layer 2: Content Search**
```javascript
// Primary: Code-Index MCP
mcp__code-index__search_code_advanced({
  pattern: "{keyword}",
  file_pattern: "*.ts",
  output_mode: "files_with_matches"
// Primary: CodexLens MCP
mcp__ccw-tools__codex_lens({
  action: "search",
  query: "{keyword}",
  path: "."
})
// Fallback: rg "{keyword}" -t ts --files-with-matches
```

@@ -231,11 +230,10 @@ mcp__code-index__search_code_advanced({

**Layer 3: Semantic Patterns**
```javascript
// Find definitions (class, interface, function)
mcp__code-index__search_code_advanced({
  pattern: "^(export )?(class|interface|type|function) .*{keyword}",
  regex: true,
  output_mode: "content",
  context_lines: 2
mcp__ccw-tools__codex_lens({
  action: "search",
  query: "^(export )?(class|interface|type|function) .*{keyword}",
  path: "."
})
```

@@ -243,21 +241,22 @@ mcp__code-index__search_code_advanced({

```javascript
// Get file summaries for imports/exports
for (const file of discovered_files) {
  const summary = mcp__code-index__get_file_summary(file)
  // summary: {imports, functions, classes, line_count}
  const summary = mcp__ccw-tools__codex_lens({ action: "symbol", file: file })
  // summary: {symbols: [{name, type, line}]}
}
```

**Layer 5: Config & Tests**
```javascript
// Config files
mcp__code-index__find_files("*.config.*")
mcp__code-index__find_files("package.json")
mcp__ccw-tools__codex_lens({ action: "search_files", query: "*.config.*" })
mcp__ccw-tools__codex_lens({ action: "search_files", query: "package.json" })

// Tests
mcp__code-index__search_code_advanced({
  pattern: "(describe|it|test).*{keyword}",
  file_pattern: "*.{test,spec}.*"
mcp__ccw-tools__codex_lens({
  action: "search",
  query: "(describe|it|test).*{keyword}",
  path: "."
})
```
@@ -560,14 +559,14 @@ Output: .workflow/session/{session}/.process/context-package.json

- Expose sensitive data (credentials, keys)
- Exceed file limits (50 total)
- Include binaries/generated files
- Use ripgrep if code-index is available
- Use ripgrep if CodexLens is available

**ALWAYS**:
- Initialize code-index in Phase 0
- Initialize CodexLens in Phase 0
- Execute get_modules_by_depth.sh
- Load CLAUDE.md/README.md (unless in memory)
- Execute all 3 discovery tracks
- Use code-index MCP as primary
- Use CodexLens MCP as primary
- Fallback to ripgrep only when needed
- Use Exa for unfamiliar APIs
- Apply multi-factor scoring
@@ -61,9 +61,9 @@ The agent supports **two execution modes** based on the task JSON's `meta.cli_execution` field

**Step 2** (CLI execution):
- Agent substitutes [target_folders] into the command
- Agent executes CLI command via the Bash tool:
- Agent executes CLI command via CCW:
  ```bash
  bash(cd src/modules && gemini --approval-mode yolo -p "
  ccw cli -p "
  PURPOSE: Generate module documentation
  TASK: Create API.md and README.md for each module
  MODE: write

@@ -71,7 +71,7 @@ The agent supports **two execution modes** based on the task JSON's `meta.cli_execution` field

  ./src/modules/api|code|code:3|dirs:0
  EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
  RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
  ")
  " --tool gemini --mode write --cd src/modules
  ```

4. **CLI Execution** (Gemini CLI):

@@ -216,7 +216,7 @@ Before completion, verify:

{
  "step": "analyze_module_structure",
  "action": "Deep analysis of module structure and API",
  "command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
  "command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
  "output_to": "module_analysis",
  "on_error": "fail"
}
@@ -8,7 +8,7 @@ You are a documentation update coordinator for complex projects. Orchestrate parallel updates

## Core Mission

Execute depth-parallel updates for all modules using `~/.claude/scripts/update_module_claude.sh`. **Every module path must be processed**.
Execute depth-parallel updates for all modules using `ccw tool exec update_module_claude`. **Every module path must be processed**.

## Input Context

@@ -42,12 +42,12 @@ TodoWrite([

# 3. Launch parallel jobs (max 4)

# Depth 5 example (Layer 3 - use multi-layer):
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/analysis" "gemini" &
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/development" "gemini" &
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/analysis","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/development","tool":"gemini"}' &

# Depth 1 example (Layer 2 - use single-layer):
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/auth" "gemini" &
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/api" "gemini" &
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/auth","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/api","tool":"gemini"}' &
# ... up to 4 concurrent jobs

# 4. Wait for all depth jobs to complete
@@ -36,10 +36,10 @@ You are a test context discovery specialist focused on gathering test coverage information

**Use**: Phase 1 source context loading

### 2. Test Coverage Discovery
**Primary (Code-Index MCP)**:
- `mcp__code-index__find_files(pattern)` - Find test files (*.test.*, *.spec.*)
- `mcp__code-index__search_code_advanced()` - Search test patterns
- `mcp__code-index__get_file_summary()` - Analyze test structure
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="search_files", query="*.test.*")` - Find test files
- `mcp__ccw-tools__codex_lens(action="search", query="pattern")` - Search test patterns
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Analyze test structure

**Fallback (CLI)**:
- `rg` (ripgrep) - Fast test pattern search

@@ -120,9 +120,10 @@ for (const summary_path of summaries) {

**2.1 Existing Test Discovery**:
```javascript
// Method 1: Code-Index MCP (preferred)
const test_files = mcp__code-index__find_files({
  patterns: ["*.test.*", "*.spec.*", "*test_*.py", "*_test.go"]
// Method 1: CodexLens MCP (preferred)
const test_files = mcp__ccw-tools__codex_lens({
  action: "search_files",
  query: "*.test.* OR *.spec.* OR test_*.py OR *_test.go"
});

// Method 2: Fallback CLI
@@ -83,17 +83,18 @@ When task JSON contains implementation_approach array:

- L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
- L2 (Integration): `tests/integration/`, `*.integration.test.*`
- L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- **context-package.json** (CCW Workflow): Use the Read tool to get the context package from `.workflow/active/{session}/.process/context-package.json`
- Identify test commands from project configuration

```bash
# Detect test framework and multi-layered commands
if [ -f "package.json" ]; then
  # Extract layer-specific test commands
  LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
  UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
  INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
  E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
  # Extract layer-specific test commands using the Read tool or jq
  PKG_JSON=$(cat package.json)
  LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
  UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
  INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
  E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
  LINT_CMD="ruff check . || flake8 ."
  UNIT_CMD="pytest tests/unit/"
.claude/cli-tools.json (new file, 47 lines)

@@ -0,0 +1,47 @@

{
  "version": "1.0.0",
  "tools": {
    "gemini": {
      "enabled": true,
      "isBuiltin": true,
      "command": "gemini",
      "description": "Google AI for code analysis"
    },
    "qwen": {
      "enabled": true,
      "isBuiltin": true,
      "command": "qwen",
      "description": "Alibaba AI assistant"
    },
    "codex": {
      "enabled": true,
      "isBuiltin": true,
      "command": "codex",
      "description": "OpenAI code generation"
    },
    "claude": {
      "enabled": true,
      "isBuiltin": true,
      "command": "claude",
      "description": "Anthropic AI assistant"
    }
  },
  "customEndpoints": [],
  "defaultTool": "gemini",
  "settings": {
    "promptFormat": "plain",
    "smartContext": {
      "enabled": false,
      "maxFiles": 10
    },
    "nativeResume": true,
    "recursiveQuery": true,
    "cache": {
      "injectionMode": "auto",
      "defaultPrefix": "",
      "defaultSuffix": ""
    },
    "codeIndexMcp": "ace"
  },
  "$schema": "./cli-tools.schema.json"
}
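As a small illustration of consuming this config, the enabled tools and the default can be read like so (a sketch in Node, not CCW's actual loader; only fields visible in the JSON above are used):

```javascript
const fs = require("fs")

// Parse the endpoint config shown above.
const config = JSON.parse(fs.readFileSync(".claude/cli-tools.json", "utf8"))

// Collect tools whose "enabled" flag is true.
const enabledTools = Object.entries(config.tools)
  .filter(([, tool]) => tool.enabled)
  .map(([name, tool]) => `${name} (${tool.command})`)

console.log(`default: ${config.defaultTool}`)
console.log(`enabled: ${enabledTools.join(", ")}`)
```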
516
.claude/commands/clean.md
Normal file
516
.claude/commands/clean.md
Normal file
@@ -0,0 +1,516 @@
---
name: clean
description: Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution
argument-hint: "[--dry-run] [\"focus area\"]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Glob(*), Bash(*), Write(*)
---

# Clean Command (/clean)

## Overview

Intelligent cleanup command that explores the codebase to identify the development mainline, discovers artifacts that have drifted from it, and safely removes stale sessions, abandoned documents, and dead code.

**Core capabilities:**
- Mainline detection: Identify active development branches and core modules
- Drift analysis: Find sessions, documents, and code that deviate from the mainline
- Intelligent discovery: cli-explore-agent-based artifact scanning
- Safe execution: Confirmation-based cleanup with dry-run preview

## Usage

```bash
/clean                # Full intelligent cleanup (explore → analyze → confirm → execute)
/clean --dry-run      # Explore and analyze only, no execution
/clean "auth module"  # Focus cleanup on a specific area
```

## Execution Process

```
Phase 1: Mainline Detection
├─ Analyze git history for development trends
├─ Identify core modules (high commit frequency)
├─ Map active vs stale branches
└─ Build mainline profile

Phase 2: Drift Discovery (cli-explore-agent)
├─ Scan workflow sessions for orphaned artifacts
├─ Identify documents drifted from mainline
├─ Detect dead code and unused exports
└─ Generate cleanup manifest

Phase 3: Confirmation
├─ Display cleanup summary by category
├─ Show impact analysis (files, size, risk)
└─ AskUserQuestion: Select categories to clean

Phase 4: Execution (unless --dry-run)
├─ Execute cleanup by category
├─ Update manifests and indexes
└─ Report results
```

## Implementation

### Phase 1: Mainline Detection

**Session Setup**:
```javascript
// Shifts the clock by +8h so the ISO string reads as UTC+8 wall time
// (the trailing "Z" is then only nominal)
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `clean-${dateStr}`
const sessionFolder = `.workflow/.clean/${sessionId}`

Bash(`mkdir -p ${sessionFolder}`)
```

**Step 1.1: Git History Analysis**
```bash
# Get commit frequency by directory (last 30 days)
bash(git log --since="30 days ago" --name-only --pretty=format: | grep -v "^$" | cut -d/ -f1-2 | sort | uniq -c | sort -rn | head -20)

# Get recent active branches
bash(git for-each-ref --sort=-committerdate refs/heads/ --format='%(refname:short) %(committerdate:relative)' | head -10)

# Get files with most recent changes
bash(git log --since="7 days ago" --name-only --pretty=format: | grep -v "^$" | sort | uniq -c | sort -rn | head -30)
```

**Step 1.2: Build Mainline Profile**
```javascript
const mainlineProfile = {
  coreModules: [],     // High-frequency directories
  activeFiles: [],     // Recently modified files
  activeBranches: [],  // Branches with recent commits
  staleThreshold: {
    sessions: 7,       // Days
    branches: 30,
    documents: 14
  },
  timestamp: getUtc8ISOString()
}

// Parse git log output to identify core modules:
// modules with >5 commits in the last 30 days are core;
// modules with 0 commits in the last 30 days are potentially stale.
// A sketch of this classification follows the block.

Write(`${sessionFolder}/mainline-profile.json`, JSON.stringify(mainlineProfile, null, 2))
```
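
A minimal sketch of that classification, assuming the `uniq -c` output from Step 1.1 and a hypothetical `allDirs` listing of on-disk module directories (directories with zero commits never appear in the frequency output, so staleness is inferred by set difference):

```javascript
// Hypothetical helper: classify modules from `git log` frequency output.
// `uniqCountOutput` lines look like "  12 src/core" (count, directory).
const classifyModules = (uniqCountOutput, allDirs, coreThreshold = 5) => {
  const commitCounts = new Map()
  for (const line of uniqCountOutput.trim().split('\n')) {
    const m = line.trim().match(/^(\d+)\s+(.+)$/)
    if (m) commitCounts.set(m[2], Number(m[1]))
  }
  return {
    coreModules: [...commitCounts]
      .filter(([, count]) => count > coreThreshold)
      .map(([dir]) => dir),
    potentiallyStale: allDirs.filter(dir => !commitCounts.has(dir))
  }
}
```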

---

### Phase 2: Drift Discovery

**Launch cli-explore-agent for intelligent artifact scanning**:

```javascript
Task(
  subagent_type="cli-explore-agent",
  run_in_background=false,
  description="Discover stale artifacts",
  prompt=`
## Task Objective
Discover artifacts that have drifted from the development mainline. Identify stale sessions, abandoned documents, and dead code for cleanup.

## Context
- **Session Folder**: ${sessionFolder}
- **Mainline Profile**: ${sessionFolder}/mainline-profile.json
- **Focus Area**: ${focusArea || "entire project"}

## Discovery Categories

### Category 1: Stale Workflow Sessions
Scan and analyze workflow session directories:

**Locations to scan**:
- .workflow/active/WFS-* (active sessions)
- .workflow/archives/WFS-* (archived sessions)
- .workflow/.lite-plan/* (lite-plan sessions)
- .workflow/.debug/DBG-* (debug sessions)

**Staleness criteria**:
- Active sessions: No modification >7 days + no related git commits
- Archives: >30 days old + no feature references in project.json
- Lite-plan: >7 days old + plan.json not executed
- Debug: >3 days old + issue not in recent commits

**Analysis steps**:
1. List all session directories with modification times
2. Cross-reference with git log (are session topics in recent commits?)
3. Check manifest.json for orphan entries
4. Identify sessions with .archiving marker (interrupted)

### Category 2: Drifted Documents
Scan documentation that no longer aligns with code:

**Locations to scan**:
- .claude/rules/tech/* (generated tech rules)
- .workflow/.scratchpad/* (temporary notes)
- **/CLAUDE.md (module documentation)
- **/README.md (outdated descriptions)

**Drift criteria**:
- Tech rules: Referenced files no longer exist
- Scratchpad: Any file (always temporary)
- Module docs: Describe functions/classes that were removed
- READMEs: Reference deleted directories

**Analysis steps**:
1. Parse document content for file/function references
2. Verify referenced entities still exist in codebase
3. Flag documents with >30% broken references

### Category 3: Dead Code
Identify code that is no longer used:

**Scan patterns**:
- Unused exports (exported but never imported)
- Orphan files (not imported anywhere)
- Commented-out code blocks (>10 lines)
- TODO/FIXME comments >90 days old

**Analysis steps**:
1. Build import graph using rg/grep
2. Identify exports with no importers
3. Find files not in import graph
4. Scan for large comment blocks

## Output Format

Write to: ${sessionFolder}/cleanup-manifest.json

\`\`\`json
{
  "generated_at": "ISO timestamp",
  "mainline_summary": {
    "core_modules": ["src/core", "src/api"],
    "active_branches": ["main", "feature/auth"],
    "health_score": 0.85
  },
  "discoveries": {
    "stale_sessions": [
      {
        "path": ".workflow/active/WFS-old-feature",
        "type": "active",
        "age_days": 15,
        "reason": "No related commits in 15 days",
        "size_kb": 1024,
        "risk": "low"
      }
    ],
    "drifted_documents": [
      {
        "path": ".claude/rules/tech/deprecated-lib",
        "type": "tech_rules",
        "broken_references": 5,
        "total_references": 6,
        "drift_percentage": 83,
        "reason": "Referenced library removed",
        "risk": "low"
      }
    ],
    "dead_code": [
      {
        "path": "src/utils/legacy.ts",
        "type": "orphan_file",
        "reason": "Not imported by any file",
        "last_modified": "2025-10-01",
        "risk": "medium"
      }
    ]
  },
  "summary": {
    "total_items": 12,
    "total_size_mb": 45.2,
    "by_category": {
      "stale_sessions": 5,
      "drifted_documents": 4,
      "dead_code": 3
    },
    "by_risk": {
      "low": 8,
      "medium": 3,
      "high": 1
    }
  }
}
\`\`\`

## Execution Commands

\`\`\`bash
# Session directories (parentheses keep -type d applying to both name tests)
find .workflow -type d \( -name "WFS-*" -o -name "DBG-*" \) 2>/dev/null

# Check modification times (Linux/Mac)
stat -c "%Y %n" .workflow/active/WFS-* 2>/dev/null

# Check modification times (Windows PowerShell via bash)
powershell -Command "Get-ChildItem '.workflow/active/WFS-*' | ForEach-Object { Write-Output \"$($_.LastWriteTime) $($_.FullName)\" }"

# Find orphan exports (TypeScript)
rg "export (const|function|class|interface|type)" --type ts -l

# Find imports
rg "import.*from" --type ts

# Find large comment blocks
rg "^\\s*/\\*" -A 10 --type ts

# Find old TODOs
rg "TODO|FIXME" --type ts -n
\`\`\`

## Success Criteria
- [ ] All session directories scanned with age calculation
- [ ] Documents cross-referenced with existing code
- [ ] Dead code detection via import graph analysis
- [ ] cleanup-manifest.json written with complete data
- [ ] Each item has risk level and cleanup reason
`
)
```
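
Category 3's import-graph steps above are described only in prose; a minimal Node sketch of the orphan-file check, assuming ripgrep (`rg`) is on PATH and plain ES-style relative import specifiers (path aliases, `index` re-exports, and dynamic imports are out of scope; helper names are hypothetical):

```javascript
// Hypothetical sketch: flag TypeScript files that export symbols but are
// never referenced by a relative import specifier anywhere in the repo.
const { execSync } = require('node:child_process')
const path = require('node:path')

const run = (cmd) => {
  try { return execSync(cmd, { encoding: 'utf8' }).trim().split('\n').filter(Boolean) }
  catch { return [] }  // rg exits non-zero when there are no matches
}

// Files that export something
const exporters = run(`rg -l "export (const|function|class|interface|type)" --type ts`)

// rg -n -o -r '$1' prints "file:line:specifier" for each relative import
const imported = new Set()
for (const line of run(`rg -n -o -r '$1' "from '(\\.[^']+)'" --type ts`)) {
  const [file, , ...rest] = line.split(':')
  const spec = rest.join(':')
  imported.add(path.resolve(path.dirname(file), spec).replace(/\.(ts|tsx)$/, ''))
}

const orphans = exporters.filter(f =>
  !imported.has(path.resolve(f).replace(/\.(ts|tsx)$/, ''))
)
console.log('Orphan candidates:', orphans)
```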

---

### Phase 3: Confirmation

**Step 3.1: Display Summary**

The summary table references a `getRiskSummary` display helper that this command does not define; a sketch follows the block.

```javascript
const manifest = JSON.parse(Read(`${sessionFolder}/cleanup-manifest.json`))

console.log(`
## Cleanup Discovery Report

**Mainline Health**: ${Math.round(manifest.mainline_summary.health_score * 100)}%
**Core Modules**: ${manifest.mainline_summary.core_modules.join(', ')}

### Summary
| Category | Count | Size | Risk |
|----------|-------|------|------|
| Stale Sessions | ${manifest.summary.by_category.stale_sessions} | - | ${getRiskSummary('sessions')} |
| Drifted Documents | ${manifest.summary.by_category.drifted_documents} | - | ${getRiskSummary('documents')} |
| Dead Code | ${manifest.summary.by_category.dead_code} | - | ${getRiskSummary('code')} |

**Total**: ${manifest.summary.total_items} items, ~${manifest.summary.total_size_mb} MB

### Stale Sessions
${manifest.discoveries.stale_sessions.map(s =>
  `- ${s.path} (${s.age_days}d, ${s.risk}): ${s.reason}`
).join('\n')}

### Drifted Documents
${manifest.discoveries.drifted_documents.map(d =>
  `- ${d.path} (${d.drift_percentage}% broken, ${d.risk}): ${d.reason}`
).join('\n')}

### Dead Code
${manifest.discoveries.dead_code.map(c =>
  `- ${c.path} (${c.type}, ${c.risk}): ${c.reason}`
).join('\n')}
`)
```
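
One plausible implementation of `getRiskSummary` (hypothetical), reading the risk counts out of the manifest loaded above:

```javascript
// Hypothetical helper: summarize risk levels for one discovery category,
// e.g. "3 low, 1 medium". Maps the table's labels onto manifest keys.
const getRiskSummary = (category) => {
  const key = {
    sessions: 'stale_sessions',
    documents: 'drifted_documents',
    code: 'dead_code'
  }[category]
  const counts = { low: 0, medium: 0, high: 0 }
  for (const item of manifest.discoveries[key] || []) counts[item.risk]++
  return Object.entries(counts)
    .filter(([, n]) => n > 0)
    .map(([risk, n]) => `${n} ${risk}`)
    .join(', ') || 'none'
}
```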

**Step 3.2: Dry-Run Exit**
```javascript
// flags: the raw command-line flags passed to /clean
if (flags.includes('--dry-run')) {
  console.log(`
---
**Dry-run mode**: No changes made.
Manifest saved to: ${sessionFolder}/cleanup-manifest.json

To execute cleanup: /clean
`)
  return
}
```

**Step 3.3: User Confirmation**
```javascript
AskUserQuestion({
  questions: [
    {
      question: "Which categories to clean?",
      header: "Categories",
      multiSelect: true,
      options: [
        {
          label: "Sessions",
          description: `${manifest.summary.by_category.stale_sessions} stale workflow sessions`
        },
        {
          label: "Documents",
          description: `${manifest.summary.by_category.drifted_documents} drifted documents`
        },
        {
          label: "Dead Code",
          description: `${manifest.summary.by_category.dead_code} unused code files`
        }
      ]
    },
    {
      question: "Risk level to include?",
      header: "Risk",
      multiSelect: false,
      options: [
        { label: "Low only", description: "Safest - only obviously stale items" },
        { label: "Low + Medium", description: "Recommended - includes likely unused items" },
        { label: "All", description: "Aggressive - includes high-risk items" }
      ]
    }
  ]
})
```

---

### Phase 4: Execution

**Step 4.1: Filter Items by Selection**
```javascript
// userSelection holds the AskUserQuestion answers from Step 3.3
const selectedCategories = userSelection.categories  // ['Sessions', 'Documents', ...]
const riskLevel = userSelection.risk                 // 'Low only', 'Low + Medium', 'All'

const riskFilter = {
  'Low only': ['low'],
  'Low + Medium': ['low', 'medium'],
  'All': ['low', 'medium', 'high']
}[riskLevel]

const itemsToClean = []

if (selectedCategories.includes('Sessions')) {
  itemsToClean.push(...manifest.discoveries.stale_sessions.filter(s => riskFilter.includes(s.risk)))
}
if (selectedCategories.includes('Documents')) {
  itemsToClean.push(...manifest.discoveries.drifted_documents.filter(d => riskFilter.includes(d.risk)))
}
if (selectedCategories.includes('Dead Code')) {
  itemsToClean.push(...manifest.discoveries.dead_code.filter(c => riskFilter.includes(c.risk)))
}

TodoWrite({
  todos: itemsToClean.map(item => ({
    content: `Clean: ${item.path}`,
    status: "pending",
    activeForm: `Cleaning ${item.path}`
  }))
})
```

**Step 4.2: Execute Cleanup**
```javascript
const results = { deleted: [], failed: [], skipped: [] }

for (const item of itemsToClean) {
  TodoWrite({ todos: [...] })  // Mark current as in_progress

  try {
    // Sessions, documents, and dead-code paths are all removed the same way;
    // a dead export inside a live file would need an edit rather than a delete.
    Bash({ command: `rm -rf "${item.path}"` })

    results.deleted.push(item.path)
    TodoWrite({ todos: [...] })  // Mark as completed
  } catch (error) {
    results.failed.push({ path: item.path, error: error.message })
  }
}
```

**Step 4.3: Update Manifests**
```javascript
// Update archives manifest if sessions were deleted
if (selectedCategories.includes('Sessions')) {
  const archiveManifestPath = '.workflow/archives/manifest.json'
  if (fileExists(archiveManifestPath)) {
    const archiveManifest = JSON.parse(Read(archiveManifestPath))
    const deletedSessionIds = results.deleted
      .filter(p => p.includes('WFS-'))
      .map(p => p.split('/').pop())

    const updatedManifest = archiveManifest.filter(entry =>
      !deletedSessionIds.includes(entry.session_id)
    )

    Write(archiveManifestPath, JSON.stringify(updatedManifest, null, 2))
  }
}

// Update project.json if features referenced deleted sessions
const projectPath = '.workflow/project.json'
if (fileExists(projectPath)) {
  const project = JSON.parse(Read(projectPath))
  const deletedPaths = new Set(results.deleted)

  project.features = project.features.filter(f =>
    !deletedPaths.has(f.traceability?.archive_path)
  )

  project.statistics.total_features = project.features.length
  project.statistics.last_updated = getUtc8ISOString()

  Write(projectPath, JSON.stringify(project, null, 2))
}
```

**Step 4.4: Report Results**
```javascript
console.log(`
## Cleanup Complete

**Deleted**: ${results.deleted.length} items
**Failed**: ${results.failed.length} items
**Skipped**: ${results.skipped.length} items

### Deleted Items
${results.deleted.map(p => `- ${p}`).join('\n')}

${results.failed.length > 0 ? `
### Failed Items
${results.failed.map(f => `- ${f.path}: ${f.error}`).join('\n')}
` : ''}

Cleanup manifest archived to: ${sessionFolder}/cleanup-manifest.json
`)
```

---

## Session Folder Structure

```
.workflow/.clean/clean-{YYYY-MM-DD}/
├── mainline-profile.json    # Git history analysis
└── cleanup-manifest.json    # Discovery results
```

## Risk Level Definitions

| Risk | Description | Examples |
|------|-------------|----------|
| **Low** | Safe to delete, no dependencies | Empty sessions, scratchpad files, 100% broken docs |
| **Medium** | Likely unused, verify before delete | Orphan files, old archives, partially broken docs |
| **High** | May have hidden dependencies | Files with some imports, recent modifications |

## Error Handling

| Situation | Action |
|-----------|--------|
| No git repository | Skip mainline detection, use file timestamps only |
| Session in use (.archiving) | Skip with warning |
| Permission denied | Report error, continue with others |
| Manifest parse error | Regenerate from filesystem scan |
| Empty discovery | Report "codebase is clean" |

## Related Commands

- `/workflow:session:complete` - Properly archive active sessions
- `/memory:compact` - Save session memory before cleanup
- `/workflow:status` - View current workflow state
@@ -191,7 +191,7 @@ target/

### Step 2: Workspace Analysis (MANDATORY FIRST)
```bash
# Analyze workspace structure
bash(~/.claude/scripts/get_modules_by_depth.sh json)
bash(ccw tool exec get_modules_by_depth '{"format":"json"}')
```

### Step 3: Technology Detection
383 .claude/commands/memory/compact.md Normal file
@@ -0,0 +1,383 @@
---
name: compact
description: Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and saving it via the MCP core_memory tool
argument-hint: "[optional: session description]"
allowed-tools: mcp__ccw-tools__core_memory(*), Read(*)
examples:
  - /memory:compact
  - /memory:compact "completed core-memory module"
---

# Memory Compact Command (/memory:compact)

## 1. Overview

The `memory:compact` command **compresses current session working memory** into structured text optimized for **session recovery**, extracts critical information, and saves it to persistent storage via the MCP `core_memory` tool.

**Core Philosophy**:
- **Session Recovery First**: Capture everything needed to resume work seamlessly
- **Minimize Re-exploration**: Include file paths, decisions, and state to avoid redundant analysis
- **Preserve Train of Thought**: Keep notes and hypotheses for complex debugging
- **Actionable State**: Record last action result and known issues

## 2. Parameters

- `"session description"` (Optional): Session description to supplement objective
  - Example: "completed core-memory module"
  - Example: "debugging JWT refresh - suspected memory leak"

## 3. Structured Output Format

```markdown
## Session ID
[WFS-ID if workflow session active, otherwise (none)]

## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]

## Objective
[High-level goal - the "North Star" of this session]

## Execution Plan
[CRITICAL: Embed the LATEST plan in its COMPLETE and DETAILED form]

### Source: [workflow | todo | user-stated | inferred]

<details>
<summary>Full Execution Plan (Click to expand)</summary>

[PRESERVE COMPLETE PLAN VERBATIM - DO NOT SUMMARIZE]
- ALL phases, tasks, subtasks
- ALL file paths (absolute)
- ALL dependencies and prerequisites
- ALL acceptance criteria
- ALL status markers ([x] done, [ ] pending)
- ALL notes and context

Example:
## Phase 1: Setup
- [x] Initialize project structure
  - Created D:\Claude_dms3\src\core\index.ts
  - Added dependencies: lodash, zod
- [ ] Configure TypeScript
  - Update tsconfig.json for strict mode

## Phase 2: Implementation
- [ ] Implement core API
  - Target: D:\Claude_dms3\src\api\handler.ts
  - Dependencies: Phase 1 complete
  - Acceptance: All tests pass

</details>

## Working Files (Modified)
[Absolute paths to actively modified files]
- D:\Claude_dms3\src\file1.ts (role: main implementation)
- D:\Claude_dms3\tests\file1.test.ts (role: unit tests)

## Reference Files (Read-Only)
[Absolute paths to context files - NOT modified but essential for understanding]
- D:\Claude_dms3\.claude\CLAUDE.md (role: project instructions)
- D:\Claude_dms3\src\types\index.ts (role: type definitions)
- D:\Claude_dms3\package.json (role: dependencies)

## Last Action
[Last significant action and its result/status]

## Decisions
- [Decision]: [Reasoning]
- [Decision]: [Reasoning]

## Constraints
- [User-specified limitation or preference]

## Dependencies
- [Added/changed packages or environment requirements]

## Known Issues
- [Deferred bug or edge case]

## Changes Made
- [Completed modification]

## Pending
- [Next step] or (none)

## Notes
[Unstructured thoughts, hypotheses, debugging trails]
```

## 4. Field Definitions

| Field | Purpose | Recovery Value |
|-------|---------|----------------|
| **Session ID** | Workflow session identifier (WFS-*) | Links memory to specific stateful task execution |
| **Project Root** | Absolute path to project directory | Enables correct path resolution in new sessions |
| **Objective** | Ultimate goal of the session | Prevents losing track of broader feature |
| **Execution Plan** | Complete plan from any source (verbatim) | Preserves full planning context, avoids re-planning |
| **Working Files** | Actively modified files (absolute paths) | Immediately identifies where work was happening |
| **Reference Files** | Read-only context files (absolute paths) | Eliminates re-exploration for critical context |
| **Last Action** | Final tool output/status | Immediate state awareness (success/failure) |
| **Decisions** | Architectural choices + reasoning | Prevents re-litigating settled decisions |
| **Constraints** | User-imposed limitations | Maintains personalized coding style |
| **Dependencies** | Package/environment changes | Prevents missing dependency errors |
| **Known Issues** | Deferred bugs/edge cases | Ensures issues aren't forgotten |
| **Changes Made** | Completed modifications | Clear record of what was done |
| **Pending** | Next steps | Immediate action items |
| **Notes** | Hypotheses, debugging trails | Preserves "train of thought" |

## 5. Execution Flow

### Step 1: Analyze Current Session

Extract the following from conversation history:

```javascript
const sessionAnalysis = {
  sessionId: "",       // WFS-* if workflow session active, null otherwise
  projectRoot: "",     // Absolute path: D:\Claude_dms3
  objective: "",       // High-level goal (1-2 sentences)
  executionPlan: {
    source: "workflow" | "todo" | "user-stated" | "inferred",
    content: ""        // Full plan content - ALWAYS preserve COMPLETE and DETAILED form
  },
  workingFiles: [],    // {absolutePath, role} - modified files
  referenceFiles: [],  // {absolutePath, role} - read-only context files
  lastAction: "",      // Last significant action + result
  decisions: [],       // {decision, reasoning}
  constraints: [],     // User-specified limitations
  dependencies: [],    // Added/changed packages
  knownIssues: [],     // Deferred bugs
  changesMade: [],     // Completed modifications
  pending: [],         // Next steps
  notes: ""            // Unstructured thoughts
}
```

### Step 2: Generate Structured Text

```javascript
// Helper: Generate execution plan section
const generateExecutionPlan = (plan) => {
  const sourceLabels = {
    'workflow': 'workflow (IMPL_PLAN.md)',
    'todo': 'todo (TodoWrite)',
    'user-stated': 'user-stated',
    'inferred': 'inferred'
  };

  // CRITICAL: Preserve complete plan content verbatim - DO NOT summarize
  return `### Source: ${sourceLabels[plan.source] || plan.source}

<details>
<summary>Full Execution Plan (Click to expand)</summary>

${plan.content}

</details>`;
};

const structuredText = `## Session ID
${sessionAnalysis.sessionId || '(none)'}

## Project Root
${sessionAnalysis.projectRoot}

## Objective
${sessionAnalysis.objective}

## Execution Plan
${generateExecutionPlan(sessionAnalysis.executionPlan)}

## Working Files (Modified)
${sessionAnalysis.workingFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}

## Reference Files (Read-Only)
${sessionAnalysis.referenceFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}

## Last Action
${sessionAnalysis.lastAction}

## Decisions
${sessionAnalysis.decisions.map(d => `- ${d.decision}: ${d.reasoning}`).join('\n') || '(none)'}

## Constraints
${sessionAnalysis.constraints.map(c => `- ${c}`).join('\n') || '(none)'}

## Dependencies
${sessionAnalysis.dependencies.map(d => `- ${d}`).join('\n') || '(none)'}

## Known Issues
${sessionAnalysis.knownIssues.map(i => `- ${i}`).join('\n') || '(none)'}

## Changes Made
${sessionAnalysis.changesMade.map(c => `- ${c}`).join('\n') || '(none)'}

## Pending
${sessionAnalysis.pending.length > 0
  ? sessionAnalysis.pending.map(p => `- ${p}`).join('\n')
  : '(none)'}

## Notes
${sessionAnalysis.notes || '(none)'}`
```

### Step 3: Import to Core Memory via MCP

Use the MCP `core_memory` tool to save the structured text:

```javascript
mcp__ccw-tools__core_memory({
  operation: "import",
  text: structuredText
})
```

Or via CLI (pipe structured text to import):

```bash
# Write structured text to temp file, then import
echo "$structuredText" | ccw core-memory import

# Or from a file
ccw core-memory import --file /path/to/session-memory.md
```

**Response Format**:
```json
{
  "operation": "import",
  "id": "CMEM-YYYYMMDD-HHMMSS",
  "message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```

### Step 4: Report Recovery ID

After successful import, **clearly display the Recovery ID** to the user:

```
╔════════════════════════════════════════════════════════════════════════════╗
║  ✓ Session Memory Saved                                                    ║
║                                                                            ║
║  Recovery ID: CMEM-YYYYMMDD-HHMMSS                                         ║
║                                                                            ║
║  To restore: "Please import memory <ID>"                                   ║
║    (MCP: core_memory export | CLI: ccw core-memory export --id <ID>)      ║
╚════════════════════════════════════════════════════════════════════════════╝
```

## 6. Quality Checklist

Before generating:
- [ ] Session ID captured if workflow session active (WFS-*)
- [ ] Project Root is absolute path (e.g., D:\Claude_dms3)
- [ ] Objective clearly states the "North Star" goal
- [ ] Execution Plan: COMPLETE plan preserved VERBATIM (no summarization)
- [ ] Plan Source: Clearly identified (workflow | todo | user-stated | inferred)
- [ ] Plan Details: ALL phases, tasks, file paths, dependencies, status markers included
- [ ] All file paths are ABSOLUTE (not relative)
- [ ] Working Files: 3-8 modified files with roles
- [ ] Reference Files: Key context files (CLAUDE.md, types, configs)
- [ ] Last Action captures final state (success/failure)
- [ ] Decisions include reasoning, not just choices
- [ ] Known Issues records deferred bugs so they aren't forgotten
- [ ] Notes preserve debugging hypotheses if any

## 7. Path Resolution Rules

### Project Root Detection
1. Check current working directory from environment
2. Look for project markers: `.git/`, `package.json`, `.claude/`
3. Use the topmost directory containing these markers (a minimal sketch follows)
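
A minimal Node sketch of that detection (helper name hypothetical):

```javascript
// Hypothetical sketch: walk up from cwd, remembering every directory that
// contains a project marker, and return the topmost one (rule 3 above).
const fs = require('node:fs')
const path = require('node:path')

const detectProjectRoot = (startDir = process.cwd()) => {
  const markers = ['.git', 'package.json', '.claude']
  let dir = path.resolve(startDir)
  let topmost = null
  while (true) {
    if (markers.some(m => fs.existsSync(path.join(dir, m)))) topmost = dir
    const parent = path.dirname(dir)
    if (parent === dir) break  // reached the filesystem root
    dir = parent
  }
  return topmost || startDir
}
```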

### Absolute Path Conversion
```javascript
// Convert relative to absolute
const toAbsolutePath = (relativePath, projectRoot) => {
  if (path.isAbsolute(relativePath)) return relativePath;
  return path.join(projectRoot, relativePath);
};

// Example: "src/api/auth.ts" → "D:\Claude_dms3\src\api\auth.ts"
```

### Reference File Categories
| Category | Examples | Priority |
|----------|----------|----------|
| Project Config | `.claude/CLAUDE.md`, `package.json`, `tsconfig.json` | High |
| Type Definitions | `src/types/*.ts`, `*.d.ts` | High |
| Related Modules | Parent/sibling modules with shared interfaces | Medium |
| Test Files | Corresponding test files for modified code | Medium |
| Documentation | `README.md`, `ARCHITECTURE.md` | Low |

## 8. Plan Detection (Priority Order)

### Priority 1: Workflow Session (IMPL_PLAN.md)
```javascript
// Check for active workflow session
const manifest = await mcp__ccw-tools__session_manager({
  operation: "list",
  location: "active"
});

if (manifest.sessions?.length > 0) {
  const session = manifest.sessions[0];
  const plan = await mcp__ccw-tools__session_manager({
    operation: "read",
    session_id: session.id,
    content_type: "plan"
  });
  sessionAnalysis.sessionId = session.id;
  sessionAnalysis.executionPlan.source = "workflow";
  sessionAnalysis.executionPlan.content = plan.content;
}
```

### Priority 2: TodoWrite (Current Session Todos)
```javascript
// Extract from conversation - look for TodoWrite tool calls
// Preserve COMPLETE todo list with all details
const todos = extractTodosFromConversation();
if (todos.length > 0) {
  sessionAnalysis.executionPlan.source = "todo";
  // Format todos with full context - preserve status markers
  sessionAnalysis.executionPlan.content = todos.map(t =>
    `- [${t.status === 'completed' ? 'x' : t.status === 'in_progress' ? '>' : ' '}] ${t.content}`
  ).join('\n');
}
```

### Priority 3: User-Stated Plan
```javascript
// Look for explicit plan statements in user messages:
// - "Here's my plan: 1. ... 2. ... 3. ..."
// - "I want to: first..., then..., finally..."
// - Numbered or bulleted lists describing steps
const userPlan = extractUserStatedPlan();
if (userPlan) {
  sessionAnalysis.executionPlan.source = "user-stated";
  sessionAnalysis.executionPlan.content = userPlan;
}
```

### Priority 4: Inferred Plan
```javascript
// If no explicit plan, infer from:
// - Task description and breakdown discussion
// - Sequence of actions taken
// - Outstanding work mentioned
const inferredPlan = inferPlanFromDiscussion();
if (inferredPlan) {
  sessionAnalysis.executionPlan.source = "inferred";
  sessionAnalysis.executionPlan.content = inferredPlan;
}
```

## 9. Notes

- **Timing**: Execute at task completion or before context switch
- **Frequency**: Once per independent task or milestone
- **Recovery**: New session can immediately continue with full context
- **Knowledge Graph**: Entity relationships auto-extracted for visualization
- **Absolute Paths**: Critical for cross-session recovery on different machines

@@ -101,10 +101,10 @@ src/ (depth 1) → SINGLE STRATEGY
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});

// Get module structure with classification
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});

// OR with path parameter
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
```

**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
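
A minimal sketch of that parse (`output` stands for the raw command output and is an assumption here):

```javascript
// Parse "depth:N|path:<PATH>|type:<code|navigation>|..." lines into objects,
// splitting each field on the first ':' only.
const parseModuleLine = (line) =>
  Object.fromEntries(
    line.split('|')
      .filter(field => field.includes(':'))
      .map(field => {
        const i = field.indexOf(':')
        return [field.slice(0, i), field.slice(i + 1)]
      })
  )

const modules = output.trim().split('\n').map(parseModuleLine)
const codeModules = modules.filter(m => m.type === 'code')
console.log(`${modules.length} modules (${codeModules.length} code)`)
```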

@@ -200,7 +200,7 @@ for (let layer of [3, 2, 1]) {
let strategy = module.depth >= 3 ? "full" : "single";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"${strategy}","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -263,7 +263,7 @@ MODULES:

TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}

EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
EXECUTION SCRIPT: ccw tool exec generate_module_docs
- Accepts strategy parameter: full | single
- Accepts folder type detection: code | navigation
- Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -273,7 +273,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"{{strategy}}","sourcePath":".","projectName":"{{project_name}}","tool":"${tool}"}'`,
run_in_background: false
})
exit_code=$?
@@ -322,7 +322,7 @@ let project_root = get_project_root();
report("Generating project README.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-readme","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -335,7 +335,7 @@ for (let tool of tool_order) {
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-architecture","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -350,7 +350,7 @@ if (bash_result.stdout.includes("API_FOUND")) {
report("Generating HTTP API documentation...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"http-api","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {

@@ -51,7 +51,7 @@ Orchestrates context-aware documentation generation/update for changed modules u
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});

// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});

// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});
@@ -123,7 +123,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
return async () => {
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -207,21 +207,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_1}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
→ Failure: Try tool 2
2. Try tool 2:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_2}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
→ Failure: Try tool 3
3. Try tool 3:
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_3}}"}'`,
run_in_background: false
})
→ Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module

@@ -85,10 +85,10 @@ bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","pro

```bash
# 1. Run folder analysis
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}')

# 2. Get top-level directories (first 2 path levels)
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}' | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)

# 3. Find existing docs (if directory exists)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)
@@ -235,12 +235,12 @@ api_id=$((group_count + 3))
| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|------|-------------|-----------|----------|---------------|------------|
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
| **CLI** | true | implementation_approach | write | --approval-mode yolo | Execute CLI commands, validate output |
| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |

**Command Patterns**:
- Gemini/Qwen: `cd dir && gemini -p "..."`
- CLI Mode: `cd dir && gemini --approval-mode yolo -p "..."`
- Codex: `codex -C dir --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`

**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
@@ -331,7 +331,7 @@ api_id=$((group_count + 3))
{
"step": 2,
"title": "Batch generate documentation via CLI",
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
"command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
"depends_on": [1],
"output": "generated_docs"
}
@@ -363,7 +363,7 @@ api_id=$((group_count + 3))
},
{
"step": "analyze_project",
"command": "bash(gemini \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
"command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
"output_to": "project_outline"
}
],
@@ -403,7 +403,7 @@ api_id=$((group_count + 3))
"pre_analysis": [
{"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
{"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
{"step": "analyze_architecture", "command": "bash(gemini \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\")", "output_to": "arch_examples_outline"}
{"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
],
"implementation_approach": [
{
@@ -440,7 +440,7 @@ api_id=$((group_count + 3))
"pre_analysis": [
{"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
{"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
{"step": "analyze_api", "command": "bash(gemini \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\")", "output_to": "api_outline"}
{"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
],
"implementation_approach": [
{
@@ -601,7 +601,7 @@ api_id=$((group_count + 3))
| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|------|---------------|----------|---------------|------------|
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
| **CLI (--cli-execute)** | implementation_approach | write | --approval-mode yolo | Executes CLI commands, validates output |
| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |

**Execution Flow**:
- **Phase 2**: Unified analysis once, results in `.process/`

@@ -5,7 +5,7 @@ argument-hint: "[--tool gemini|qwen] \"task context description\""
allowed-tools: Task(*), Bash(*)
examples:
- /memory:load "Build user authentication on top of the current frontend"
- /memory:load --tool qwen -p "Refactor the payment module API"
- /memory:load --tool qwen "Refactor the payment module API"
---

# Memory Load Command (/memory:load)
@@ -39,7 +39,7 @@ The command fully delegates to **universal-executor agent**, which autonomously:
1. **Analyzes Project Structure**: Executes `get_modules_by_depth.sh` to understand architecture
2. **Loads Documentation**: Reads CLAUDE.md, README.md and other key docs
3. **Extracts Keywords**: Derives core keywords from task description
4. **Discovers Files**: Uses MCP code-index or rg/find to locate relevant files
4. **Discovers Files**: Uses CodexLens MCP or rg/find to locate relevant files
5. **CLI Deep Analysis**: Executes Gemini/Qwen CLI for deep context analysis
6. **Generates Content Package**: Returns structured JSON core content package

@@ -109,7 +109,7 @@ Task(
1. **Project Structure**
\`\`\`bash
bash(~/.claude/scripts/get_modules_by_depth.sh)
bash(ccw tool exec get_modules_by_depth '{}')
\`\`\`

2. **Core Documentation**
@@ -136,7 +136,7 @@ Task(
Execute Gemini/Qwen CLI for deep analysis (saves main thread tokens):

\`\`\`bash
cd . && ${tool} -p "
ccw cli -p "
PURPOSE: Extract project core context for task: ${task_description}
TASK: Analyze project architecture, tech stack, key patterns, relevant files
MODE: analysis
@@ -147,7 +147,7 @@ RULES:
- Identify key architecture patterns and technical constraints
- Extract integration points and development standards
- Output concise, structured format
"
" --tool ${tool} --mode analysis
\`\`\`

### Step 4: Generate Core Content Package
@@ -212,7 +212,7 @@ Before returning:
### Example 2: Using Qwen Tool

```bash
/memory:load --tool qwen -p "Refactor the payment module API"
/memory:load --tool qwen "Refactor the payment module API"
```

Agent uses Qwen CLI for analysis, returns same structured package.
310 .claude/commands/memory/tech-research-rules.md Normal file
@@ -0,0 +1,310 @@
---
name: tech-research-rules
description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---

# Tech Stack Rules Generator

## Overview

**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.

**Output Structure**:
```
.claude/rules/tech/{tech-stack}/
├── core.md        # paths: **/*.{ext} - Core principles
├── patterns.md    # paths: src/**/*.{ext} - Implementation patterns
├── testing.md     # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md      # paths: *.config.* - Configuration rules
├── api.md         # paths: **/api/**/* - API rules (backend only)
├── components.md  # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json  # Generation metadata
```

**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`

---

## Core Rules

1. **Start Immediately**: First action is TodoWrite initialization
2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
3. **Template-Driven**: Agent reads templates before generating content
4. **Agent Produces Files**: Agent writes all rule files directly
5. **No Manual Loading**: Rules auto-activate when Claude works with matching files

---

## 3-Phase Execution

### Phase 1: Prepare Context & Detect Tech Stack

**Goal**: Detect input mode, extract tech stack info, determine file extensions

**Input Mode Detection**:
```bash
input="$1"

if [[ "$input" == WFS-* ]]; then
  MODE="session"
  SESSION_ID="$input"
  # Read workflow-session.json to extract tech stack
else
  MODE="direct"
  TECH_STACK_NAME="$input"
fi
```

**Tech Stack Analysis**:
```javascript
// Decompose composite tech stacks:
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]
// (a sketch of the decomposition follows the block)

const TECH_EXTENSIONS = {
  "typescript": "{ts,tsx}",
  "javascript": "{js,jsx}",
  "python": "py",
  "rust": "rs",
  "go": "go",
  "java": "java",
  "csharp": "cs",
  "ruby": "rb",
  "php": "php"
};

const FRAMEWORK_TYPE = {
  "react": "frontend",
  "vue": "frontend",
  "angular": "frontend",
  "nextjs": "fullstack",
  "nuxt": "fullstack",
  "fastapi": "backend",
  "express": "backend",
  "django": "backend",
  "rails": "backend"
};
```
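
The decomposition itself is only described in the comment above; a minimal sketch over the two lookup tables (helper name hypothetical; on conflicting framework types the first listed component wins):

```javascript
// Hypothetical sketch: split a composite tech-stack name into components and
// derive the Phase 1 output variables from TECH_EXTENSIONS / FRAMEWORK_TYPE.
const decomposeTechStack = (name) => {
  const components = name.toLowerCase().split('-')
  const primaryLang = components.find(c => c in TECH_EXTENSIONS) || components[0]
  const frameworkType = components.map(c => FRAMEWORK_TYPE[c]).find(Boolean) || 'library'
  return {
    components,
    primaryLang,
    fileExt: TECH_EXTENSIONS[primaryLang] || '*',
    frameworkType
  }
}

// decomposeTechStack("typescript-react-nextjs")
// → { components: ["typescript","react","nextjs"], primaryLang: "typescript",
//     fileExt: "{ts,tsx}", frameworkType: "frontend" }
```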

**Check Existing Rules**:
```bash
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```

**Skip Decision** (sketched in bash below):
- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
- If `--regenerate` → Delete existing and regenerate
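
A sketch of that decision (how the `REGENERATE` flag is parsed from `--regenerate` is assumed):

```bash
# Hypothetical sketch of the skip decision
if [[ "$existing_count" -gt 0 && "$REGENERATE" != "true" ]]; then
  SKIP_GENERATION=true
else
  SKIP_GENERATION=false
  [[ "$REGENERATE" == "true" ]] && rm -rf "${rules_dir}"
fi
```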

**Output Variables**:
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean

**TodoWrite**: Mark phase 1 completed

---

### Phase 2: Agent Produces Path-Conditional Rules

**Skip Condition**: Skipped if `SKIP_GENERATION = true`

**Goal**: Delegate to agent for Exa research and rule file generation

**Template Files**:
```
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt  # Agent instructions
├── rule-core.txt                # Core principles template
├── rule-patterns.txt            # Implementation patterns template
├── rule-testing.txt             # Testing rules template
├── rule-config.txt              # Configuration rules template
├── rule-api.txt                 # API rules template (backend)
└── rule-components.txt          # Component rules template (frontend)
```

**Agent Task**:

```javascript
Task({
  subagent_type: "general-purpose",
  description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
  prompt: `
You are generating path-conditional rules for Claude Code.

## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/

## Instructions

Read the agent prompt template for detailed instructions:
$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)

## Execution Steps

1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion

## Variable Substitutions

Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
```

**Completion Criteria**:
- 4-6 rule files written with proper `paths` frontmatter
- metadata.json written
- Agent reports files created

**TodoWrite**: Mark phase 2 completed

---

### Phase 3: Verify & Report

**Goal**: Verify generated files and provide usage summary

**Steps**:

1. **Verify Files**:
```bash
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
```

2. **Validate Frontmatter**:
```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```

3. **Read Metadata**:
```javascript
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
```

4. **Generate Summary Report**:
```
Tech Stack Rules Generated

Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/

Files Created:
├── core.md       → paths: **/*.{ext}
├── patterns.md   → paths: src/**/*.{ext}
├── testing.md    → paths: **/*.{test,spec}.{ext}
├── config.md     → paths: *.config.*
├── api.md        → paths: **/api/**/* (if backend)
└── components.md → paths: **/components/**/* (if frontend)

Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required

Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```

**TodoWrite**: Mark phase 3 completed

---

## Path Pattern Reference

| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |

| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |

---

## Parameters

```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
```

**Arguments**:
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules

---

## Examples

### Single Language

```bash
/memory:tech-research "typescript"
```

**Output**: `.claude/rules/tech/typescript/` with 4 rule files

### Frontend Stack

```bash
/memory:tech-research "typescript-react"
```

**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)

### Backend Stack

```bash
/memory:tech-research "python-fastapi"
```

**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)

### From Session

```bash
/memory:tech-research WFS-user-auth-20251104
```

**Workflow**: Extract tech stack from session → Generate rules

---

## Comparison: Rules vs SKILL

| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |

**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup
@@ -1,477 +0,0 @@

---
name: tech-research
description: "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if it exists)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---

# Tech Stack Research SKILL Generator

## Overview

**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates ALL work to the agent. The agent produces files directly.

**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next.

**Execution Paths**:
- **Full Path**: All 3 phases (no existing SKILL, OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing SKILL found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: The SKILL index is always generated or updated

**Agent Responsibility**:
- The agent does ALL the work: context reading, Exa research, content synthesis, file writing
- The orchestrator only provides context paths and waits for completion
## Core Rules

1. **Start Immediately**: First action is TodoWrite initialization; second action is Phase 1 execution
2. **Context Path Delegation**: Pass the session directory or tech stack name to the agent; let the agent do discovery
3. **Agent Produces Files**: The agent writes all module files directly; the orchestrator does NOT parse agent output
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute the next phase
5. **No User Prompts**: Never ask the user questions or wait for input between phases
6. **Track Progress**: Update TodoWrite after EVERY phase completion before starting the next phase
7. **Lightweight Index**: Phase 3 only generates the SKILL.md index by reading existing files
---

## 3-Phase Execution

### Phase 1: Prepare Context Paths

**Goal**: Detect the input mode, prepare context paths for the agent, and check for an existing SKILL

**Input Mode Detection**:
```bash
# Get input parameter
input="$1"

# Detect mode
if [[ "$input" == WFS-* ]]; then
  MODE="session"
  SESSION_ID="$input"
  CONTEXT_PATH=".workflow/${SESSION_ID}"
else
  MODE="direct"
  TECH_STACK_NAME="$input"
  CONTEXT_PATH="$input"  # Pass the tech stack name as context
fi
```
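For illustration, the two invocation styles resolve as follows (values are examples):

```bash
# /memory:tech-research WFS-user-auth-20251104
#   → MODE=session, CONTEXT_PATH=".workflow/WFS-user-auth-20251104"
# /memory:tech-research "typescript-react"
#   → MODE=direct, CONTEXT_PATH="typescript-react"
```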

**Check Existing SKILL**:
```bash
# For session mode, peek at the session to get the tech stack name
if [[ "$MODE" == "session" ]]; then
  bash(test -f ".workflow/${SESSION_ID}/workflow-session.json")
  Read(.workflow/${SESSION_ID}/workflow-session.json)
  # Extract tech_stack_name (minimal extraction)
fi

# Normalize and check
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
bash(test -d ".claude/skills/${normalized_name}" && echo "exists" || echo "not_exists")
bash(find ".claude/skills/${normalized_name}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```

**Skip Decision**:
```javascript
if (existing_files > 0 && !regenerate_flag) {
  SKIP_GENERATION = true
  message = "Tech stack SKILL already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
  bash(rm -rf ".claude/skills/${normalized_name}")
  SKIP_GENERATION = false
  message = "Regenerating tech stack SKILL from scratch."
} else {
  SKIP_GENERATION = false
  message = "No existing SKILL found, generating new tech stack documentation."
}
```

**Output Variables**:
- `MODE`: `session` or `direct`
- `SESSION_ID`: Session ID (if session mode)
- `CONTEXT_PATH`: Path to the session directory OR the tech stack name
- `TECH_STACK_NAME`: Extracted or provided tech stack name
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2

**TodoWrite**:
- If skipping: mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: mark phase 1 completed, phase 2 in_progress

---

### Phase 2: Agent Produces All Files

**Skip Condition**: Skipped if `SKIP_GENERATION = true`

**Goal**: Delegate EVERYTHING to the agent - context reading, Exa research, content synthesis, and file writing

**Agent Task Specification**:

```
Task(
  subagent_type: "general-purpose",
  description: "Generate tech stack SKILL: {CONTEXT_PATH}",
  prompt: "
    Generate a complete tech stack SKILL package with Exa research.

    **Context Provided**:
    - Mode: {MODE}
    - Context Path: {CONTEXT_PATH}

    **Templates Available**:
    - Module Format: ~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt
    - SKILL Index: ~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt

    **Your Responsibilities**:

    1. **Extract Tech Stack Information**:

       IF MODE == 'session':
       - Read `.workflow/active/{session_id}/workflow-session.json`
       - Read `.workflow/active/{session_id}/.process/context-package.json`
       - Extract tech_stack: {language, frameworks, libraries}
       - Build tech stack name: \"{language}-{framework1}-{framework2}\"
       - Example: \"typescript-react-nextjs\"

       IF MODE == 'direct':
       - Tech stack name = CONTEXT_PATH
       - Parse composite: split by '-' delimiter
       - Example: \"typescript-react-nextjs\" → [\"typescript\", \"react\", \"nextjs\"]

    2. **Execute Exa Research** (4-6 parallel queries):

       Base Queries (always execute):
       - mcp__exa__get_code_context_exa(query: \"{tech} core principles best practices 2025\", tokensNum: 8000)
       - mcp__exa__get_code_context_exa(query: \"{tech} common patterns architecture examples\", tokensNum: 7000)
       - mcp__exa__web_search_exa(query: \"{tech} configuration setup tooling 2025\", numResults: 5)
       - mcp__exa__get_code_context_exa(query: \"{tech} testing strategies\", tokensNum: 5000)

       Component Queries (if composite):
       - For each additional component:
         mcp__exa__get_code_context_exa(query: \"{main_tech} {component} integration\", tokensNum: 5000)

    3. **Read Module Format Template**:

       Read the template for structure guidance:
       Read(~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt)

    4. **Synthesize Content into 6 Modules**:

       Follow the template structure from tech-module-format.txt:
       - **principles.md** - Core concepts, philosophies (~3K tokens)
       - **patterns.md** - Implementation patterns with code examples (~5K tokens)
       - **practices.md** - Best practices, anti-patterns, pitfalls (~4K tokens)
       - **testing.md** - Testing strategies, frameworks (~3K tokens)
       - **config.md** - Setup, configuration, tooling (~3K tokens)
       - **frameworks.md** - Framework integration (only if composite, ~4K tokens)

       Each module follows the template format:
       - Frontmatter (YAML)
       - Main sections with clear headings
       - Code examples from Exa research
       - Best practices sections
       - References to Exa sources

    5. **Write Files Directly**:

       // Create the directory
       bash(mkdir -p \".claude/skills/{tech_stack_name}\")

       // Write each module file using the Write tool
       Write({ file_path: \".claude/skills/{tech_stack_name}/principles.md\", content: ... })
       Write({ file_path: \".claude/skills/{tech_stack_name}/patterns.md\", content: ... })
       Write({ file_path: \".claude/skills/{tech_stack_name}/practices.md\", content: ... })
       Write({ file_path: \".claude/skills/{tech_stack_name}/testing.md\", content: ... })
       Write({ file_path: \".claude/skills/{tech_stack_name}/config.md\", content: ... })
       // Write frameworks.md only if composite

       // Write metadata.json
       Write({
         file_path: \".claude/skills/{tech_stack_name}/metadata.json\",
         content: JSON.stringify({
           tech_stack_name,
           components,
           is_composite,
           generated_at: timestamp,
           source: \"exa-research\",
           research_summary: { total_queries, total_sources }
         })
       })

    6. **Report Completion**:

       Provide a summary:
       - Tech stack name
       - Files created (count)
       - Exa queries executed
       - Sources consulted

    **CRITICAL**:
    - MUST read the external template files before generating content
    - You have FULL autonomy - read files, execute Exa, synthesize content, write files
    - Do NOT return JSON or structured data - produce actual .md files
    - Handle errors gracefully (Exa failures, missing files, template read failures)
    - If the tech stack cannot be determined, ask the orchestrator to clarify
  "
)
```

**Completion Criteria**:
- Agent task executed successfully
- 5-6 modular files written to `.claude/skills/{tech_stack_name}/`
- metadata.json written
- Agent reports completion

**TodoWrite**: Mark phase 2 completed, phase 3 in_progress

---

### Phase 3: Generate SKILL.md Index

**Note**: This phase **ALWAYS executes** - it generates or updates the SKILL index.

**Goal**: Read the generated module files and create a SKILL.md index with loading recommendations

**Steps**:

1. **Verify Generated Files**:
   ```bash
   bash(find ".claude/skills/${TECH_STACK_NAME}" -name "*.md" -type f | sort)
   ```

2. **Read metadata.json**:
   ```javascript
   Read(.claude/skills/${TECH_STACK_NAME}/metadata.json)
   // Extract: tech_stack_name, components, is_composite, research_summary
   ```

3. **Read Module Headers** (optional, first 20 lines):
   ```javascript
   Read(.claude/skills/${TECH_STACK_NAME}/principles.md, limit: 20)
   // Repeat for the other modules
   ```

4. **Read SKILL Index Template**:
   ```javascript
   Read(~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt)
   ```

5. **Generate SKILL.md Index**:

   Follow the template from tech-skill-index.txt with variable substitutions:
   - `{TECH_STACK_NAME}`: From metadata.json
   - `{MAIN_TECH}`: Primary technology
   - `{ISO_TIMESTAMP}`: Current timestamp
   - `{QUERY_COUNT}`: From research_summary
   - `{SOURCE_COUNT}`: From research_summary
   - Conditional sections for composite tech stacks

   The template provides structure for:
   - Frontmatter with metadata
   - Overview and tech stack description
   - Module organization (Core/Practical/Config sections)
   - Loading recommendations (Quick/Implementation/Complete)
   - Usage guidelines and auto-trigger keywords
   - Research metadata and version history

6. **Write SKILL.md**:
   ```javascript
   Write({
     file_path: `.claude/skills/${TECH_STACK_NAME}/SKILL.md`,
     content: generatedIndexMarkdown
   })
   ```
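Step 5's substitution can be pictured as simple token replacement - a sketch only, since the real index is synthesized rather than templated mechanically, and `renderIndexTemplate` plus its inputs are illustrative names:

```javascript
// Substitute {PLACEHOLDER} tokens in the index template (hypothetical helper)
function renderIndexTemplate(template, vars) {
  return template.replace(/\{([A-Z_]+)\}/g, (m, key) => vars[key] ?? m)
}

const generatedIndexMarkdown = renderIndexTemplate(indexTemplate, {
  TECH_STACK_NAME: metadata.tech_stack_name,
  MAIN_TECH: metadata.components[0],
  ISO_TIMESTAMP: new Date().toISOString(),
  QUERY_COUNT: String(metadata.research_summary.total_queries),
  SOURCE_COUNT: String(metadata.research_summary.total_sources)
})
```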

**Completion Criteria**:
- SKILL.md index written
- All module files verified
- Loading recommendations included

**TodoWrite**: Mark phase 3 completed

**Final Report**:
```
Tech Stack SKILL Package Complete

Tech Stack: {TECH_STACK_NAME}
Location: .claude/skills/{TECH_STACK_NAME}/

Files: SKILL.md + 5-6 modules + metadata.json
Exa Research: {queries} queries, {sources} sources

Usage: Skill(command: "{TECH_STACK_NAME}")
```

---

## Implementation Details

### TodoWrite Patterns

**Initialization** (before Phase 1):
```javascript
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "in_progress", "activeForm": "Preparing context paths"},
  {"content": "Agent produces all module files", "status": "pending", "activeForm": "Agent producing files"},
  {"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```

**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "in_progress", ...},
  {"content": "Generate SKILL.md index", "status": "pending", ...}
]})

// After Phase 2
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "completed", ...},
  {"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})

// After Phase 3
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "completed", ...},
  {"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```

**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "completed", ...}, // Skipped
  {"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```

### Execution Flow

**Full Path**:
```
User → TodoWrite Init → Phase 1 (prepare) → Phase 2 (agent writes files) → Phase 3 (write index) → Report
```

**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```

### Error Handling

**Phase 1 Errors**:
- Invalid session ID: report the error, verify the session exists
- Missing context-package: warn, fall back to direct mode
- No tech stack detected: ask the user to specify a tech stack name

**Phase 2 Errors (Agent)**:
- Agent task fails: retry once, report if it fails again
- Exa API failures: the agent handles these internally with retries
- Incomplete results: warn the user, proceed with partial data if the minimum sections are available

**Phase 3 Errors**:
- Write failures: report which files failed
- Missing files: note them in SKILL.md, suggest regeneration

---

## Parameters

```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate] [--tool <gemini|qwen>]
```

**Arguments**:
- **session-id | tech-stack-name**: Input source (auto-detected by the `WFS-` prefix)
  - Session mode: `WFS-user-auth-v2` - extract the tech stack from the workflow
  - Direct mode: `"typescript"`, `"typescript-react-nextjs"` - user specifies
- **--regenerate**: Force regeneration of an existing SKILL (deletes and recreates)
- **--tool**: Reserved for future CLI integration (default: gemini)

---

## Examples

**Generated File Structure** (for all examples):
```
.claude/skills/{tech-stack}/
├── SKILL.md        # Index (Phase 3)
├── principles.md   # Agent (Phase 2)
├── patterns.md     # Agent
├── practices.md    # Agent
├── testing.md      # Agent
├── config.md       # Agent
├── frameworks.md   # Agent (if composite)
└── metadata.json   # Agent
```

### Direct Mode - Single Stack

```bash
/memory:tech-research "typescript"
```

**Workflow**:
1. Phase 1: Detects direct mode, checks for an existing SKILL
2. Phase 2: Agent executes 4 Exa queries, writes 5 modules
3. Phase 3: Generates the SKILL.md index

### Direct Mode - Composite Stack

```bash
/memory:tech-research "typescript-react-nextjs"
```

**Workflow**:
1. Phase 1: Decomposes into ["typescript", "react", "nextjs"]
2. Phase 2: Agent executes 6 Exa queries (4 base + 2 component), writes 6 modules (adds frameworks.md)
3. Phase 3: Generates the SKILL.md index with framework integration

### Session Mode - Extract from Workflow

```bash
/memory:tech-research WFS-user-auth-20251104
```

**Workflow**:
1. Phase 1: Reads the session, extracts the tech stack: `python-fastapi-sqlalchemy`
2. Phase 2: Agent researches Python + FastAPI + SQLAlchemy, writes 6 modules
3. Phase 3: Generates the SKILL.md index

### Regenerate Existing

```bash
/memory:tech-research "react" --regenerate
```

**Workflow**:
1. Phase 1: Deletes the existing SKILL due to --regenerate
2. Phase 2: Agent executes fresh Exa research (latest 2025 practices)
3. Phase 3: Generates an updated SKILL.md

### Skip Path - Fast Update

```bash
/memory:tech-research "python"
```

**Scenario**: SKILL already exists with 7 files

**Workflow**:
1. Phase 1: Detects the existing SKILL, sets SKIP_GENERATION = true
2. Phase 2: **SKIPPED**
3. Phase 3: Updates the SKILL.md index only (5-10x faster)
@@ -99,10 +99,10 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});

// Get module structure
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});

// OR with --path
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
```

**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.

@@ -185,7 +185,7 @@ for (let layer of [3, 2, 1]) {
  let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
  for (let tool of tool_order) {
    Bash({
      command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`,
      command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"${strategy}","path":".","tool":"${tool}"}'`,
      run_in_background: false
    });
    if (bash_result.exit_code === 0) {
@@ -244,7 +244,7 @@ MODULES:

TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}

EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
EXECUTION SCRIPT: ccw tool exec update_module_claude
- Accepts strategy parameter: multi-layer | single-layer
- Tool execution via direct CLI commands (gemini/qwen/codex)

@@ -252,7 +252,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
   for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
     Bash({
       command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}"`,
       command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"{{strategy}}","path":".","tool":"${tool}"}'`,
       run_in_background: false
     })
     exit_code=$?
@@ -41,7 +41,7 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a

```javascript
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});

// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});

@@ -102,7 +102,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
  return async () => {
    for (let tool of tool_order) {
      Bash({
        command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`,
        command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"${tool}"}'`,
        run_in_background: false
      });
      if (bash_result.exit_code === 0) {
@@ -184,21 +184,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}"`,
     command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_1}}"}'`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
   → Failure: Try tool 2
2. Try tool 2:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}"`,
     command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_2}}"}'`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
   → Failure: Try tool 3
3. Try tool 3:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}"`,
     command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_3}}"}'`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module
@@ -187,7 +187,7 @@ Objectives:

3. Use Gemini for aggregation (optional):
   Command pattern:
   cd .workflow/.archives/{session_id} && gemini -p "
   ccw cli -p "
   PURPOSE: Extract lessons and conflicts from workflow session
   TASK:
   • Analyze IMPL_PLAN and lessons from manifest
@@ -198,7 +198,7 @@ Objectives:
   CONTEXT: @IMPL_PLAN.md @workflow-session.json
   EXPECTED: Structured lessons and conflicts in JSON format
   RULES: Template reference from skill-aggregation.txt
   "
   " --tool gemini --mode analysis --cd .workflow/.archives/{session_id}

3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):

@@ -334,7 +334,7 @@ Objectives:
   - Sort sessions by date

2. Use Gemini for final aggregation:
   gemini -p "
   ccw cli -p "
   PURPOSE: Aggregate lessons and conflicts from all workflow sessions
   TASK:
   • Group successes by functional domain
@@ -345,7 +345,7 @@ Objectives:
   CONTEXT: [Provide aggregated JSON data]
   EXPECTED: Final aggregated structure for SKILL documents
   RULES: Template reference from skill-aggregation.txt
   "
   " --tool gemini --mode analysis

3. Read templates for formatting (same 4 templates as single mode)
@@ -81,6 +81,7 @@ ELSE:
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
     run_in_background=false,
     prompt="Generate API designer analysis addressing topic framework

## Framework Integration Required
@@ -136,6 +137,7 @@ Task(subagent_type="conceptual-planning-agent",
# For existing analysis updates
IF update_mode = "incremental":
  Task(subagent_type="conceptual-planning-agent",
       run_in_background=false,
       prompt="Update existing API designer analysis

## Current Analysis Context

@@ -128,6 +128,7 @@ for (let i = 0; i < allQuestions.length; i += BATCH_SIZE) {
```javascript
Task(
  subagent_type="context-search-agent",
  run_in_background=false,
  description="Gather project context for brainstorm",
  prompt=`
Execute context-search-agent in BRAINSTORM MODE (Phase 1-2 only).

@@ -81,6 +81,7 @@ ELSE:
**Framework-Based Analysis** (when guidance-specification.md exists):
```bash
Task(subagent_type="conceptual-planning-agent",
     run_in_background=false,
     prompt="Generate system architect analysis addressing topic framework

## Framework Integration Required
@@ -136,6 +137,7 @@ Task(subagent_type="conceptual-planning-agent",
# For existing analysis updates
IF update_mode = "incremental":
  Task(subagent_type="conceptual-planning-agent",
       run_in_background=false,
       prompt="Update existing system architect analysis

## Current Analysis Context
.claude/commands/workflow/debug.md (new file, 321 lines)
@@ -0,0 +1,321 @@

---
name: debug
description: Interactive hypothesis-driven debugging with NDJSON logging, iterating until resolved
argument-hint: "\"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---

# Workflow Debug Command (/workflow:debug)

## Overview

Evidence-based interactive debugging command. It systematically identifies root causes through hypothesis-driven logging and iterative verification.

**Core workflow**: Explore → Add Logging → Reproduce → Analyze Log → Fix → Verify

## Usage

```bash
/workflow:debug <BUG_DESCRIPTION>

# Arguments
<bug-description>   Bug description, error message, or stack trace (required)
```

## Execution Process

```
Session Detection:
├─ Check if a debug session exists for this bug
├─ EXISTS + debug.log has content → Analyze mode
└─ NOT_FOUND or empty log → Explore mode

Explore Mode:
├─ Locate error source in codebase
├─ Generate testable hypotheses (dynamic count)
├─ Add NDJSON logging instrumentation
└─ Output: hypothesis list + await user reproduction

Analyze Mode:
├─ Parse debug.log, validate each hypothesis
└─ Decision:
   ├─ Confirmed → Fix root cause
   ├─ Inconclusive → Add more logging, iterate
   └─ All rejected → Generate new hypotheses

Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```

## Implementation

### Session Setup & Mode Detection

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)

const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`

// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0

const mode = logHasContent ? 'analyze' : 'explore'

if (!sessionExists) {
  bash(`mkdir -p ${sessionFolder}`)
}
```
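As a concrete trace (the date is illustrative):

```javascript
// bug_description = "Stack Length 未找到 registered 0"
// bugSlug       → "stack-length-registered-0"   (runs of non-alphanumerics collapse to '-')
// dateStr       → "2025-12-18"                  (UTC+8 date)
// sessionId     → "DBG-stack-length-registered-0-2025-12-18"
// debugLogPath  → ".workflow/.debug/DBG-stack-length-registered-0-2025-12-18/debug.log"
```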
---
### Explore Mode

**Step 1.1: Locate Error Source**

```javascript
// Extract keywords from the bug description
const keywords = extractErrorKeywords(bug_description)
// e.g., ['Stack Length', '未找到', 'registered 0']

// Search the codebase for error locations
for (const keyword of keywords) {
  Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
}

// Identify affected files and functions
const affectedLocations = [...] // from search results
```

**Step 1.2: Generate Hypotheses (Dynamic)**

```javascript
// Hypothesis categories based on error pattern
const HYPOTHESIS_PATTERNS = {
  "not found|missing|undefined|未找到": "data_mismatch",
  "0|empty|zero|registered 0": "logic_error",
  "timeout|connection|sync": "integration_issue",
  "type|format|parse": "type_mismatch"
}

// Generate hypotheses based on the actual issue (NOT a fixed count)
function generateHypotheses(bugDescription, affectedLocations) {
  const hypotheses = []

  // Analyze the bug and create targeted hypotheses
  // Each hypothesis has:
  // - id: H1, H2, ... (dynamic count)
  // - description: what might be wrong
  // - testable_condition: what to log
  // - logging_point: where to add instrumentation

  return hypotheses // could be 1, 3, 5, or more
}

const hypotheses = generateHypotheses(bug_description, affectedLocations)
```
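One way the pattern table above could drive generation - a sketch only, since the real command reasons more freely about the bug text:

```javascript
// Hypothetical pattern-driven generator; fields mirror the comment spec above
function generateHypotheses(bugDescription, affectedLocations) {
  const hypotheses = []
  let n = 1
  for (const [pattern, category] of Object.entries(HYPOTHESIS_PATTERNS)) {
    if (new RegExp(pattern, 'i').test(bugDescription)) {
      hypotheses.push({
        id: `H${n++}`,
        category,
        description: `Possible ${category} near ${affectedLocations[0]?.file ?? 'unknown'}`,
        testable_condition: `Log the values relevant to ${category}`,
        logging_point: affectedLocations[0] ?? null
      })
    }
  }
  return hypotheses
}
```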

**Step 1.3: Add NDJSON Instrumentation**

For each hypothesis, add logging at the relevant location:

**Python template**:
```python
# region debug [H{n}]
try:
    import json, time
    _dbg = {
        "sid": "{sessionId}",
        "hid": "H{n}",
        "loc": "{file}:{line}",
        "msg": "{testable_condition}",
        "data": {
            # Capture relevant values here
        },
        "ts": int(time.time() * 1000)
    }
    with open(r"{debugLogPath}", "a", encoding="utf-8") as _f:
        _f.write(json.dumps(_dbg, ensure_ascii=False) + "\n")
except: pass
# endregion
```

**JavaScript/TypeScript template**:
```javascript
// region debug [H{n}]
try {
  require('fs').appendFileSync("{debugLogPath}", JSON.stringify({
    sid: "{sessionId}",
    hid: "H{n}",
    loc: "{file}:{line}",
    msg: "{testable_condition}",
    data: { /* Capture relevant values */ },
    ts: Date.now()
  }) + "\n");
} catch(_) {}
// endregion
```

**Output to user**:
```
## Hypotheses Generated

Based on error "{bug_description}", generated {n} hypotheses:

{hypotheses.map(h => `
### ${h.id}: ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
`).join('')}

**Debug log**: ${debugLogPath}

**Next**: Run the reproduction steps, then come back for analysis.
```

---

### Analyze Mode

```javascript
// Parse the NDJSON log
const entries = Read(debugLogPath).split('\n')
  .filter(l => l.trim())
  .map(l => JSON.parse(l))

// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')

// Validate each hypothesis
for (const [hid, logs] of Object.entries(byHypothesis)) {
  const hypothesis = hypotheses.find(h => h.id === hid)
  const latestLog = logs[logs.length - 1]

  // Check whether the evidence confirms or rejects the hypothesis
  const verdict = evaluateEvidence(hypothesis, latestLog.data)
  // Returns: 'confirmed' | 'rejected' | 'inconclusive'
}
```
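The `groupBy` and `evaluateEvidence` helpers are left abstract above. A minimal sketch, assuming each hypothesis carries a predicate-style `expect` field (a field this document does not define):

```javascript
// Group log entries by a key field
function groupBy(items, key) {
  return items.reduce((acc, item) => {
    (acc[item[key]] = acc[item[key]] || []).push(item)
    return acc
  }, {})
}

// Map captured data to a verdict; hypothetical expect(data) → true/false
function evaluateEvidence(hypothesis, data) {
  if (data === undefined || data === null) return 'inconclusive'
  try {
    return hypothesis.expect(data) ? 'confirmed' : 'rejected'
  } catch {
    return 'inconclusive'
  }
}
```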

**Output**:
```
## Evidence Analysis

Analyzed ${entries.length} log entries.

${results.map(r => `
### ${r.id}: ${r.description}
- **Status**: ${r.verdict}
- **Evidence**: ${JSON.stringify(r.evidence)}
- **Reason**: ${r.reason}
`).join('')}

${confirmedHypothesis ? `
## Root Cause Identified

**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}

Ready to fix.
` : `
## Need More Evidence

Add more logging or refine hypotheses.
`}
```

---

### Fix & Cleanup

```javascript
// Apply the fix based on the confirmed hypothesis
// ... Edit affected files

// After the user verifies the fix works:

// Remove debug instrumentation (search for region markers)
const instrumentedFiles = Grep({
  pattern: "# region debug|// region debug",
  output_mode: "files_with_matches"
})

for (const file of instrumentedFiles) {
  // Remove content between region markers
  removeDebugRegions(file)
}

console.log(`
## Debug Complete

- Root cause: ${confirmedHypothesis.description}
- Fix applied to: ${modifiedFiles.join(', ')}
- Debug instrumentation removed
`)
```
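`removeDebugRegions` is referenced but not defined. A minimal sketch, assuming Node's `fs` and the exact `region debug` / `endregion` markers from the templates above:

```javascript
const fs = require('fs')

// Strip everything between "region debug" and "endregion" markers, inclusive
function removeDebugRegions(file) {
  const lines = fs.readFileSync(file, 'utf-8').split('\n')
  const kept = []
  let inRegion = false
  for (const line of lines) {
    if (/(#|\/\/) region debug/.test(line)) { inRegion = true; continue }
    if (inRegion && /(#|\/\/) endregion/.test(line)) { inRegion = false; continue }
    if (!inRegion) kept.push(line)
  }
  fs.writeFileSync(file, kept.join('\n'))
}
```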
---

## Debug Log Format (NDJSON)

Each line is a JSON object:

```json
{"sid":"DBG-xxx-2025-12-18","hid":"H1","loc":"file.py:func:42","msg":"Check dict keys","data":{"keys":["a","b"],"target":"c","found":false},"ts":1734567890123}
```

| Field | Description |
|-------|-------------|
| `sid` | Session ID |
| `hid` | Hypothesis ID (H1, H2, ...) |
| `loc` | Code location |
| `msg` | What's being tested |
| `data` | Captured values |
| `ts` | Timestamp (ms) |
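Because each line is standalone JSON, standard jq filters work line by line when inspecting the log by hand (assuming jq is installed):

```bash
# Entries for hypothesis H1 only
jq -c 'select(.hid == "H1")' .workflow/.debug/DBG-xxx-2025-12-18/debug.log

# Count entries per hypothesis
jq -r '.hid' .workflow/.debug/DBG-xxx-2025-12-18/debug.log | sort | uniq -c
```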

## Session Folder

```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log       # NDJSON log (main artifact)
└── resolution.md   # Summary after fix (optional)
```

## Iteration Flow

```
First Call (/workflow:debug "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Generate hypotheses, add logging
└─ Await user reproduction

After Reproduction (/workflow:debug "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, evaluate hypotheses
└─ Decision:
   ├─ Confirmed → Fix → User verify
   │  ├─ Fixed → Cleanup → Done
   │  └─ Not fixed → Add logging → Iterate
   ├─ Inconclusive → Add logging → Iterate
   └─ All rejected → New hypotheses → Iterate

Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```

## Error Handling

| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify the reproduction triggered the instrumented code path |
| All hypotheses rejected | Generate new hypotheses with broader scope |
| Fix doesn't work | Iterate with more granular logging |
| >5 iterations | Escalate to `/workflow:lite-fix` with the collected evidence |
@@ -56,6 +56,7 @@ Phase 2: Planning Document Validation
└─ Validate .task/ contains IMPL-*.json files

Phase 3: TodoWrite Generation
├─ Update session status to "active" (Step 0)
├─ Parse TODO_LIST.md for task statuses
├─ Generate TodoWrite for entire workflow
└─ Prepare session context paths

@@ -67,7 +68,9 @@ Phase 4: Execution Strategy & Task Execution
├─ Get next in_progress task from TodoWrite
├─ Lazy load task JSON
├─ Launch agent with task context
├─ Mark task completed
├─ Mark task completed (update IMPL-*.json status)
│  # Quick fix: update task status for the ccw dashboard
│  # TS=$(date -Iseconds) && jq --arg ts "$TS" '.status="completed" | .status_history=(.status_history // [])+[{"from":"in_progress","to":"completed","changed_at":$ts}]' IMPL-X.json > tmp.json && mv tmp.json IMPL-X.json
└─ Advance to next task

Phase 5: Completion
@@ -78,6 +81,7 @@ Phase 5: Completion

Resume Mode (--resume-session):
├─ Skip Phase 1 & Phase 2
└─ Entry Point: Phase 3 (TodoWrite Generation)
   ├─ Update session status to "active" (if not already)
   └─ Continue: Phase 4 → Phase 5
```
@@ -177,6 +181,16 @@ bash(cat .workflow/active/${sessionId}/workflow-session.json)
### Phase 3: TodoWrite Generation
**Applies to**: Both normal and resume modes (resume mode entry point)

**Step 0: Update Session Status to Active**
Before generating TodoWrite, update the session status from "planning" to "active":
```bash
# Update session status (idempotent - safe to run if already active)
jq '.status = "active" | .execution_started_at = (.execution_started_at // (now | todate))' \
  .workflow/active/${sessionId}/workflow-session.json > tmp.json && \
  mv tmp.json .workflow/active/${sessionId}/workflow-session.json
```
This ensures the dashboard shows the session as "ACTIVE" during execution.

**Process**:
1. **Create TodoWrite List**: Generate the task list from TODO_LIST.md (not from task JSONs)
   - Parse TODO_LIST.md to extract all tasks with current statuses
@@ -378,6 +392,7 @@ TodoWrite({

```bash
Task(subagent_type="{meta.agent}",
     run_in_background=false,
     prompt="Execute task: {task.title}

{[FLOW_CONTROL]}
@@ -86,13 +86,14 @@ bash(cp .workflow/project.json .workflow/project.json.backup)
```javascript
Task(
  subagent_type="cli-explore-agent",
  run_in_background=false,
  description="Deep project analysis",
  prompt=`
Analyze project for workflow initialization and generate .workflow/project.json.

## MANDATORY FIRST STEPS
1. Execute: cat ~/.claude/workflows/cli-templates/schemas/project-json-schema.json (get schema reference)
2. Execute: ~/.claude/scripts/get_modules_by_depth.sh (get project structure)
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)

## Task
Generate complete project.json with:
@@ -258,6 +258,33 @@ TodoWrite({

### Step 3: Launch Execution

**Executor Resolution** (task-level executor takes precedence over the global setting):
```javascript
// Get the task's executor (prefer executorAssignments, fall back to the global executionMethod)
function getTaskExecutor(task) {
  const assignments = executionContext?.executorAssignments || {}
  if (assignments[task.id]) {
    return assignments[task.id].executor // 'gemini' | 'codex' | 'agent'
  }
  // Fallback: map from the global executionMethod
  const method = executionContext?.executionMethod || 'Auto'
  if (method === 'Agent') return 'agent'
  if (method === 'Codex') return 'codex'
  // Auto: decide by complexity
  return planObject.complexity === 'Low' ? 'agent' : 'codex'
}

// Group tasks by executor
function groupTasksByExecutor(tasks) {
  const groups = { gemini: [], codex: [], agent: [] }
  tasks.forEach(task => {
    const executor = getTaskExecutor(task)
    groups[executor].push(task)
  })
  return groups
}
```
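To make the fallback behavior concrete, a usage sketch (task IDs and values are hypothetical):

```javascript
// T1 is explicitly assigned; T2 falls back to Auto + Medium complexity → codex
executionContext = {
  executorAssignments: { T1: { executor: "gemini", reason: "analysis-heavy task" } },
  executionMethod: "Auto"
}
planObject = { complexity: "Medium" }

groupTasksByExecutor([{ id: "T1" }, { id: "T2" }])
// → { gemini: [{id:"T1"}], codex: [{id:"T2"}], agent: [] }
```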

**Execution Flow**: Parallel batches concurrently → Sequential batches in order
```javascript
const parallel = executionCalls.filter(c => c.executionType === "parallel")
@@ -283,8 +310,9 @@ for (const call of sequential) {
**Option A: Agent Execution**

When to use:
- `executionMethod = "Agent"`
- `executionMethod = "Auto" AND complexity = "Low"`
- `getTaskExecutor(task) === "agent"`
- or `executionMethod = "Agent"` (global fallback)
- or `executionMethod = "Auto" AND complexity = "Low"` (global fallback)

**Task Formatting Principle**: Each task is a self-contained checklist. The agent only needs to know what THIS task requires, not its position or relation to other tasks.
@@ -333,6 +361,7 @@ Complete each task according to its "Done when" checklist.

Task(
  subagent_type="code-developer",
  run_in_background=false,
  description=batch.taskSummary,
  prompt=formatBatchPrompt({
    tasks: batch.tasks,
@@ -399,8 +428,9 @@ function extractRelatedFiles(tasks) {
**Option B: CLI Execution (Codex)**

When to use:
- `executionMethod = "Codex"`
- `executionMethod = "Auto" AND complexity = "Medium" or "High"`
- `getTaskExecutor(task) === "codex"`
- or `executionMethod = "Codex"` (global fallback)
- or `executionMethod = "Auto" AND complexity = "Medium/High"` (global fallback)

**Task Formatting Principle**: Same as Agent - each task is a self-contained checklist. No task numbering or position awareness.
@@ -473,10 +503,10 @@ Detailed plan: ${executionContext.session.artifacts.plan}`)
  return prompt
}

codex --full-auto exec "${buildCLIPrompt(batch)}" --skip-git-repo-check -s danger-full-access
ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write
```

**Execution with tracking**:
**Execution with fixed IDs** (predictable ID pattern):
```javascript
// Launch CLI in foreground (NOT background)
// Timeout based on complexity: Low=40min, Medium=60min, High=100min
@@ -486,15 +516,57 @@ const timeoutByComplexity = {
  "High": 6000000 // 100 minutes
}

// Generate a fixed execution ID: ${sessionId}-${groupId}
// This enables predictable ID lookup without relying on resume context chains
const sessionId = executionContext?.session?.id || 'standalone'
const fixedExecutionId = `${sessionId}-${batch.groupId}` // e.g., "implement-auth-2025-12-13-P1"

// Check if resuming from a previous failed execution
const previousCliId = batch.resumeFromCliId || null

// Build the command with the fixed ID (and optional resume for continuation)
const cli_command = previousCliId
  ? `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId} --resume ${previousCliId}`
  : `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId}`

bash_result = Bash(
  command=cli_command,
  timeout=timeoutByComplexity[planObject.complexity] || 3600000
)

// The execution ID is now predictable: ${fixedExecutionId}
// It can also be extracted from output: "ID: implement-auth-2025-12-13-P1"
const cliExecutionId = fixedExecutionId

// Update TodoWrite when execution completes
```

**Result Collection**: After completion, analyze output and collect result following `executionResult` structure
**Resume on Failure** (with fixed ID):
```javascript
// If the execution failed or timed out, offer a resume option
if (bash_result.status === 'failed' || bash_result.status === 'timeout') {
  console.log(`
⚠️ Execution incomplete. Resume available:
  Fixed ID: ${fixedExecutionId}
  Lookup:   ccw cli detail ${fixedExecutionId}
  Resume:   ccw cli -p "Continue tasks" --resume ${fixedExecutionId} --tool codex --mode write --id ${fixedExecutionId}-retry
`)

  // Store for a potential retry in the same session
  batch.resumeFromCliId = fixedExecutionId
}
```

**Result Collection**: After completion, analyze output and collect the result following the `executionResult` structure (include `cliExecutionId` for resume capability)

**Option C: CLI Execution (Gemini)**

When to use: `getTaskExecutor(task) === "gemini"` (analysis-type tasks)

```bash
# Use the same formatBatchPrompt as Option B, switching tool and mode
ccw cli -p "${formatBatchPrompt(batch)}" --tool gemini --mode analysis --id ${sessionId}-${batch.groupId}
```

### Step 4: Progress Tracking

@@ -541,15 +613,30 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-q
# - Report findings directly

# Method 2: Gemini Review (recommended)
gemini -p "[Shared Prompt Template with artifacts]"
ccw cli -p "[Shared Prompt Template with artifacts]" --tool gemini --mode analysis
# CONTEXT includes: @**/* @${plan.json} [@${exploration.json}]

# Method 3: Qwen Review (alternative)
qwen -p "[Shared Prompt Template with artifacts]"
ccw cli -p "[Shared Prompt Template with artifacts]" --tool qwen --mode analysis
# Same prompt as Gemini, different execution engine

# Method 4: Codex Review (autonomous)
codex --full-auto exec "[Verify plan acceptance criteria at ${plan.json}]" --skip-git-repo-check -s danger-full-access
ccw cli -p "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode write
```

**Multi-Round Review with Fixed IDs**:
```javascript
// Generate a fixed review ID
const reviewId = `${sessionId}-review`

// First review pass with the fixed ID
const reviewResult = Bash(`ccw cli -p "[Review prompt]" --tool gemini --mode analysis --id ${reviewId}`)

// If issues are found, continue the review dialog with a fixed ID chain
if (hasUnresolvedIssues(reviewResult)) {
  // Resume with follow-up questions
  Bash(`ccw cli -p "Clarify the security concerns you mentioned" --resume ${reviewId} --tool gemini --mode analysis --id ${reviewId}-followup`)
}
```

**Implementation Note**: Replace the `[Shared Prompt Template with artifacts]` placeholder with the actual template content, substituting:
@@ -623,8 +710,10 @@ console.log(`✓ Development index: [${category}] ${entry.title}`)
| Empty file | File exists but no content | Error: "File is empty: {path}. Provide task description." |
| Invalid Enhanced Task JSON | JSON missing required fields | Warning: "Missing required fields. Treating as plain text." |
| Malformed JSON | JSON parsing fails | Treat as plain text (expected for non-JSON files) |
| Execution failure | Agent/Codex crashes | Display error, save partial progress, suggest retry |
| Execution failure | Agent/Codex crashes | Display error, use fixed ID `${sessionId}-${groupId}` for resume: `ccw cli -p "Continue" --resume <fixed-id> --id <fixed-id>-retry` |
| Execution timeout | CLI exceeded timeout | Use the fixed ID to resume with an extended timeout |
| Codex unavailable | Codex not installed | Show installation instructions, offer Agent execution |
| Fixed ID not found | Custom ID lookup failed | Check `ccw cli history`, verify date directories |

## Data Structures

@@ -646,10 +735,15 @@ Passed from lite-plan via global variable:
  explorationAngles: string[],        // List of exploration angles
  explorationManifest: {...} | null,  // Exploration manifest
  clarificationContext: {...} | null,
  executionMethod: "Agent" | "Codex" | "Auto",
  executionMethod: "Agent" | "Codex" | "Auto",  // global default
  codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
  originalUserInput: string,

  // Task-level executor assignments (take precedence over executionMethod)
  executorAssignments: {
    [taskId]: { executor: "gemini" | "codex" | "agent", reason: string }
  },

  // Session artifacts location (saved by lite-plan)
  session: {
    id: string,  // Session identifier: {taskSlug}-{shortTimestamp}
@@ -679,8 +773,20 @@ Collected after each execution call completes:
  tasksSummary: string,       // Brief description of tasks handled
  completionSummary: string,  // What was completed
  keyOutputs: string,         // Files created/modified, key changes
  notes: string               // Important context for next execution
  notes: string,              // Important context for next execution
  fixedCliId: string | null   // Fixed CLI execution ID (e.g., "implement-auth-2025-12-13-P1")
}
```

Appended to the `previousExecutionResults` array for context continuity in multi-execution scenarios.

**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.

**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
```bash
# Look up the previous execution
ccw cli detail ${fixedCliId}

# Resume with a new fixed ID for the retry
ccw cli -p "Continue from where we left off" --resume ${fixedCliId} --tool codex --mode write --id ${fixedCliId}-retry
```

@@ -164,6 +164,7 @@ Launching ${selectedAngles.length} parallel diagnoses...
const diagnosisTasks = selectedAngles.map((angle, index) =>
  Task(
    subagent_type="cli-explore-agent",
    run_in_background=false,
    description=`Diagnose: ${angle}`,
    prompt=`
## Task Objective
@@ -177,7 +178,7 @@ Execute **${angle}** diagnosis for bug root cause analysis. Analyze codebase fro

## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ~/.claude/scripts/get_modules_by_depth.sh (project structure)
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{error_keyword_from_bug}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/diagnosis-json-schema.json (get output schema reference)

@@ -400,6 +401,7 @@ Write(`${sessionFolder}/fix-plan.json`, JSON.stringify(fixPlan, null, 2))
```javascript
Task(
  subagent_type="cli-lite-planning-agent",
  run_in_background=false,
  description="Generate detailed fix plan",
  prompt=`
Generate fix plan and write fix-plan.json.

@@ -15,7 +15,7 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
- Intelligent task analysis with automatic exploration detection
- Dynamic code exploration (cli-explore-agent) when codebase understanding is needed
- Interactive clarification after exploration to gather missing information
- Adaptive planning strategy (direct Claude vs cli-lite-planning-agent) based on complexity
- Adaptive planning: Low complexity → direct Claude; Medium/High → cli-lite-planning-agent
- Two-step confirmation: plan display → multi-dimensional input collection
- Execution dispatch with complete context handoff to lite-execute

@@ -38,7 +38,7 @@ Phase 1: Task Analysis & Exploration
├─ Parse input (description or .md file)
├─ Intelligent complexity assessment (Low/Medium/High)
├─ Exploration decision (auto-detect or --explore flag)
├─ ⚠️ Context protection: If file reading ≥50k chars → force cli-explore-agent
├─ Context protection: If file reading ≥50k chars → force cli-explore-agent
└─ Decision:
   ├─ needsExploration=true → Launch parallel cli-explore-agents (1-4 based on complexity)
   └─ needsExploration=false → Skip to Phase 2/3
@@ -140,11 +140,17 @@ function selectAngles(taskDescription, count) {

const selectedAngles = selectAngles(task_description, complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1))

// Planning strategy determination
const planningStrategy = complexity === 'Low'
  ? 'Direct Claude Planning'
  : 'cli-lite-planning-agent'

console.log(`
## Exploration Plan

Task Complexity: ${complexity}
Selected Angles: ${selectedAngles.join(', ')}
Planning Strategy: ${planningStrategy}

Launching ${selectedAngles.length} parallel explorations...
`)
@@ -152,11 +158,16 @@ Launching ${selectedAngles.length} parallel explorations...

**Launch Parallel Explorations** - The orchestrator assigns an angle to each agent:

**⚠️ CRITICAL - NO BACKGROUND EXECUTION**:
- **MUST NOT use `run_in_background: true`** - exploration results are REQUIRED before planning

```javascript
// Launch agents with pre-assigned angles
const explorationTasks = selectedAngles.map((angle, index) =>
  Task(
    subagent_type="cli-explore-agent",
    run_in_background=false, // ⚠️ MANDATORY: must wait for results
    description=`Explore: ${angle}`,
    prompt=`
## Task Objective
@@ -170,7 +181,7 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro

## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ~/.claude/scripts/get_modules_by_depth.sh (project structure)
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)

@@ -304,14 +315,11 @@ explorations.forEach(exp => {
  }
})

// Deduplicate exact same questions only
const seen = new Set()
const dedupedClarifications = allClarifications.filter(c => {
  const key = c.question.toLowerCase()
  if (seen.has(key)) return false
  seen.add(key)
  return true
})
// Intelligent deduplication: analyze allClarifications by intent
// - Identify questions with similar intent across different angles
// - Merge similar questions: combine options, consolidate context
// - Produce dedupedClarifications with unique intents only
const dedupedClarifications = intelligentMerge(allClarifications)

// Multi-round clarification: batch questions (max 4 per round)
if (dedupedClarifications.length > 0) {
@@ -351,12 +359,34 @@ if (dedupedClarifications.length > 0) {

**IMPORTANT**: Phase 3 is **planning only** - NO code execution. All execution happens in Phase 5 via lite-execute.

**Executor Assignment** (Claude assigns intelligently, after the plan is generated):

```javascript
// Assignment rules (highest priority first):
// 1. User explicitly specifies: "use gemini to analyze..." → gemini, "implement with codex..." → codex
// 2. Default → agent

const executorAssignments = {} // { taskId: { executor: 'gemini'|'codex'|'agent', reason: string } }
plan.tasks.forEach(task => {
  // Claude semantically analyzes each task against the rules above and assigns an executor
  executorAssignments[task.id] = { executor: '...', reason: '...' }
})
```

**Low Complexity** - Direct planning by Claude:
```javascript
// Step 1: Read schema
const schema = Bash(`cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`)

// Step 2: Generate plan following schema (Claude directly, no agent)
// Step 2: ⚠️ MANDATORY - Read and review ALL exploration files
const manifest = JSON.parse(Read(`${sessionFolder}/explorations-manifest.json`))
manifest.explorations.forEach(exp => {
  const explorationData = Read(exp.path)
  console.log(`\n### Exploration: ${exp.angle}\n${explorationData}`)
})

// Step 3: Generate plan following schema (Claude directly, no agent)
// ⚠️ The plan MUST incorporate insights from the exploration files read in Step 2
const plan = {
  summary: "...",
  approach: "...",
@@ -367,10 +397,10 @@ const plan = {
  _metadata: { timestamp: getUtc8ISOString(), source: "direct-planning", planning_mode: "direct" }
}

// Step 3: Write plan to session folder
// Step 4: Write plan to session folder
Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))

// Step 4: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
// Step 5: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
```

**Medium/High Complexity** - Invoke cli-lite-planning-agent:
|
||||
@@ -378,6 +408,7 @@ Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))
```javascript
Task(
  subagent_type="cli-lite-planning-agent",
  run_in_background=false,
  description="Generate detailed implementation plan",
  prompt=`
Generate implementation plan and write plan.json.
@@ -407,23 +438,9 @@ ${JSON.stringify(clarificationContext) || "None"}
${complexity}

## Requirements
Generate plan.json with:
- summary: 2-3 sentence overview
- approach: High-level implementation strategy (incorporating insights from all exploration angles)
- tasks: 2-7 structured tasks (**IMPORTANT: group by feature/module, NOT by file**)
  - **Task Granularity Principle**: Each task = one complete feature unit or module
  - title: action verb + target module/feature (e.g., "Implement auth token refresh")
  - scope: module path (src/auth/) or feature name, prefer module-level over single file
  - action, description
  - modification_points: ALL files to modify for this feature (group related changes)
  - implementation (3-7 steps covering all modification_points)
  - reference (pattern, files, examples)
  - acceptance (2-4 criteria for the entire feature)
  - depends_on: task IDs this task depends on (use sparingly, only for true dependencies)
  - estimated_time, recommended_execution, complexity
- _metadata:
  - timestamp, source, planning_mode
  - exploration_angles: ${JSON.stringify(manifest.explorations.map(e => e.angle))}
Generate plan.json following the schema obtained above. Key constraints:
- tasks: 2-7 structured tasks (**group by feature/module, NOT by file**)
- _metadata.exploration_angles: ${JSON.stringify(manifest.explorations.map(e => e.angle))}

## Task Grouping Rules
1. **Group by feature**: All changes for one feature = one task (even if 3-5 files)
@@ -435,10 +452,10 @@ Generate plan.json with:
7. **Prefer parallel**: Most tasks should be independent (no depends_on)

## Execution
1. Read ALL exploration files for comprehensive context
1. Read schema file (cat command above)
2. Execute CLI planning using Gemini (Qwen fallback)
3. Synthesize findings from multiple exploration angles
4. Parse output and structure plan
3. Read ALL exploration files for comprehensive context
4. Synthesize findings and generate plan following schema
5. Write JSON: Write('${sessionFolder}/plan.json', jsonContent)
6. Return brief completion summary
`

@@ -535,9 +552,13 @@ executionContext = {
  explorationAngles: manifest.explorations.map(e => e.angle),
  explorationManifest: manifest,
  clarificationContext: clarificationContext || null,
  executionMethod: userSelection.execution_method,
  executionMethod: userSelection.execution_method, // global default; can be overridden per task by executorAssignments
  codeReviewTool: userSelection.code_review_tool,
  originalUserInput: task_description,

  // Task-level executor assignments (take precedence over the global executionMethod)
  executorAssignments: executorAssignments, // { taskId: { executor, reason } }

  session: {
    id: sessionId,
    folder: sessionFolder,

@@ -61,7 +61,7 @@ Phase 2: Context Gathering
  ├─ Tasks attached: Analyze structure → Identify integration → Generate package
  └─ Output: contextPath + conflict_risk

Phase 3: Conflict Resolution (conditional)
Phase 3: Conflict Resolution
  └─ Decision (conflict_risk check):
     ├─ conflict_risk ≥ medium → Execute /workflow:tools:conflict-resolution
     │   ├─ Tasks attached: Detect conflicts → Present to user → Apply strategies
@@ -168,7 +168,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st

---

### Phase 3: Conflict Resolution (Optional - auto-triggered by conflict risk)
### Phase 3: Conflict Resolution

**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"

@@ -185,10 +185,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]

**Parse Output**:
- Extract: Execution status (success/skipped/failed)
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
- Verify: conflict-resolution.json file path (if executed)

**Validation**:
- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)

**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 3.5
@@ -497,7 +497,7 @@ Return summary to user
- Parse context path from Phase 2 output, store in memory
- **Extract conflict_risk from context-package.json**: Determine Phase 3 execution
- **If conflict_risk ≥ medium**: Launch Phase 3 conflict-resolution with sessionId and contextPath
- Wait for Phase 3 to finish executing (if executed), verify CONFLICT_RESOLUTION.md created
- Wait for Phase 3 to finish executing (if executed), verify conflict-resolution.json created
- **If conflict_risk is none/low**: Skip Phase 3, proceed directly to Phase 4
- **Build Phase 4 command**: `/workflow:tools:task-generate-agent --session [sessionId]`
- Pass session ID to Phase 4 command

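The conflict_risk gating described in these steps reduces to a small branch. The following is an illustrative sketch only, written in the document's Read/SlashCommand pseudo-API with field names taken from context-package.json as shown above:

```javascript
// Phase 3 gate: run conflict resolution only for medium/high risk
const contextPackage = JSON.parse(Read(contextPath));
const risk = contextPackage.conflict_detection?.conflict_risk ?? "none";

if (risk === "medium" || risk === "high") {
  SlashCommand(command=`/workflow:tools:conflict-resolution --session ${sessionId} --context ${contextPath}`);
  // Then verify .workflow/active/<sessionId>/.process/conflict-resolution.json exists
} else {
  // conflict_risk none/low: skip Phase 3 and continue with the next phase
}
```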
@@ -391,6 +391,7 @@ done
```javascript
Task(
  subagent_type="cli-explore-agent",
  run_in_background=false,
  description=`Execute ${dimension} review analysis via Deep Scan`,
  prompt=`
## Task Objective
@@ -476,6 +477,7 @@ Task(
```javascript
Task(
  subagent_type="cli-explore-agent",
  run_in_background=false,
  description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
  prompt=`
## Task Objective

@@ -401,6 +401,7 @@ git log --since="${sessionCreatedAt}" --name-only --pretty=format: | sort -u
```javascript
Task(
  subagent_type="cli-explore-agent",
  run_in_background=false,
  description=`Execute ${dimension} review analysis via Deep Scan`,
  prompt=`
## Task Objective
@@ -487,6 +488,7 @@ Task(
```javascript
Task(
  subagent_type="cli-explore-agent",
  run_in_background=false,
  description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
  prompt=`
## Task Objective

@@ -112,11 +112,15 @@ After bash validation, the model takes control to:

1. **Load Context**: Read completed task summaries and changed files
```bash
# Load implementation summaries
cat .workflow/active/${sessionId}/.summaries/IMPL-*.md
# Load implementation summaries (iterate through .summaries/ directory)
for summary in .workflow/active/${sessionId}/.summaries/*.md; do
  cat "$summary"
done

# Load test results (if available)
cat .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
for test_summary in .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md; do
  [ -f "$test_summary" ] || continue  # guard against unmatched glob (a `2>/dev/null` inside the loop list is invalid bash)
  cat "$test_summary"
done

# Get changed files
git log --since="$(cat .workflow/active/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
@@ -132,51 +136,53 @@ After bash validation, the model takes control to:
```
- Use Gemini for security analysis:
```bash
cd .workflow/active/${sessionId} && gemini -p "
ccw cli -p "
PURPOSE: Security audit of completed implementation
TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
EXPECTED: Security findings report with severity levels
RULES: Focus on OWASP Top 10, authentication, authorization, data validation, injection risks
" --approval-mode yolo
" --tool gemini --mode write --cd .workflow/active/${sessionId}
```

**Architecture Review** (`--type=architecture`):
- Use Qwen for architecture analysis:
```bash
cd .workflow/active/${sessionId} && qwen -p "
ccw cli -p "
PURPOSE: Architecture compliance review
TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
EXPECTED: Architecture assessment with recommendations
RULES: Check for patterns, separation of concerns, modularity, scalability
" --approval-mode yolo
" --tool qwen --mode write --cd .workflow/active/${sessionId}
```

**Quality Review** (`--type=quality`):
- Use Gemini for code quality:
```bash
cd .workflow/active/${sessionId} && gemini -p "
ccw cli -p "
PURPOSE: Code quality and best practices review
TASK: Assess code readability, maintainability, adherence to best practices
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
EXPECTED: Quality assessment with improvement suggestions
RULES: Check for code smells, duplication, complexity, naming conventions
" --approval-mode yolo
" --tool gemini --mode write --cd .workflow/active/${sessionId}
```

**Action Items Review** (`--type=action-items`):
- Verify all requirements and acceptance criteria met:
```bash
# Load task requirements and acceptance criteria
find .workflow/active/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
  "Task: " + .id + "\n" +
  "Requirements: " + (.context.requirements | join(", ")) + "\n" +
  "Acceptance: " + (.context.acceptance | join(", "))
' {} \;
for task_file in .workflow/active/${sessionId}/.task/*.json; do
  cat "$task_file" | jq -r '
    "Task: " + .id + "\n" +
    "Requirements: " + (.context.requirements | join(", ")) + "\n" +
    "Acceptance: " + (.context.acceptance | join(", "))
  '
done

# Check implementation summaries against requirements
cd .workflow/active/${sessionId} && gemini -p "
ccw cli -p "
PURPOSE: Verify all requirements and acceptance criteria are met
TASK: Cross-check implementation summaries against original requirements
CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
@@ -190,7 +196,7 @@ After bash validation, the model takes control to:
- Verify all acceptance criteria are met
- Flag any incomplete or missing action items
- Assess deployment readiness
" --approval-mode yolo
" --tool gemini --mode write --cd .workflow/active/${sessionId}
```

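The hunks above all apply one mechanical migration. As a hypothetical helper expressing that pattern (the `buildCcwCommand` name is invented for illustration; the flags are exactly those shown in the diffs):

```javascript
// Illustrative mapping from the old direct-CLI form to the new ccw wrapper.
// Old:  cd <dir> && gemini -p "<prompt>" --approval-mode yolo
// New:  ccw cli -p "<prompt>" --tool gemini --mode write --cd <dir>
function buildCcwCommand(prompt, tool, dir, mode = "write") {
  return `ccw cli -p "${prompt}" --tool ${tool} --mode ${mode} --cd ${dir}`
}
```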
@@ -8,493 +8,146 @@ examples:

# Complete Workflow Session (/workflow:session:complete)

## Overview
Mark the currently active workflow session as complete, analyze it for lessons learned, move it to the archive directory, and remove the active flag marker.
Mark the currently active workflow session as complete, archive it, and update manifests.

## Usage
```bash
/workflow:session:complete            # Complete current active session
/workflow:session:complete --detailed # Show detailed completion summary
```

## Implementation Flow

### Phase 1: Pre-Archival Preparation (Transactional Setup)

**Purpose**: Find active session, create archiving marker to prevent concurrent operations. Session remains in active location for agent processing.

#### Step 1.1: Find Active Session and Get Name
```bash
# Find active session directory
bash(find .workflow/active/ -name "WFS-*" -type d | head -1)

# Extract session name from directory path
bash(basename .workflow/active/WFS-session-name)
```
**Output**: Session name `WFS-session-name`

#### Step 1.2: Check for Existing Archiving Marker (Resume Detection)
```bash
# Check if session is already being archived
bash(test -f .workflow/active/WFS-session-name/.archiving && echo "RESUMING" || echo "NEW")
```

**If RESUMING**:
- Previous archival attempt was interrupted
- Skip to Phase 2 to resume agent analysis

**If NEW**:
- Continue to Step 1.3

#### Step 1.3: Create Archiving Marker
```bash
# Mark session as "archiving in progress"
bash(touch .workflow/active/WFS-session-name/.archiving)
```
**Purpose**:
- Prevents concurrent operations on this session
- Enables recovery if archival fails
- Session remains in `.workflow/active/` for agent analysis

**Result**: Session still at `.workflow/active/WFS-session-name/` with `.archiving` marker

### Phase 2: Agent Analysis (In-Place Processing)

**Purpose**: Agent analyzes session WHILE STILL IN ACTIVE LOCATION. Generates metadata but does NOT move files or update manifest.

#### Agent Invocation

Invoke `universal-executor` agent to analyze session and prepare archive metadata.

**Agent Task**:
```
Task(
  subagent_type="universal-executor",
  description="Analyze session for archival",
  prompt=`
Analyze workflow session for archival preparation. Session is STILL in active location.

## Context
- Session: .workflow/active/WFS-session-name/
- Status: Marked as archiving (.archiving marker present)
- Location: Active sessions directory (NOT archived yet)

## Tasks

1. **Extract session data** from workflow-session.json
   - session_id, description/topic, started_at, completed_at, status
   - If status != "completed", update it with timestamp

2. **Count files**: tasks (.task/*.json) and summaries (.summaries/*.md)

3. **Extract review data** (if .review/ exists):
   - Count dimension results: .review/dimensions/*.json
   - Count deep-dive results: .review/iterations/*.json
   - Extract findings summary from dimension JSONs (total, critical, high, medium, low)
   - Check fix results if .review/fixes/ exists (fixed_count, failed_count)
   - Build review_metrics: {dimensions_analyzed, total_findings, severity_distribution, fix_success_rate}

4. **Generate lessons**: Use gemini with ~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt
   - Return: {successes, challenges, watch_patterns}
   - If review data exists, include review-specific lessons (common issue patterns, effective fixes)

5. **Build archive entry**:
   - Calculate: duration_hours, success_rate, tags (3-5 keywords)
   - Construct complete JSON with session_id, description, archived_at, metrics, tags, lessons
   - Include archive_path: ".workflow/archives/WFS-session-name" (future location)
   - If review data exists, include review_metrics in metrics object

6. **Extract feature metadata** (for Phase 4):
   - Parse IMPL_PLAN.md for title (first # heading)
   - Extract description (first paragraph, max 200 chars)
   - Generate feature tags (3-5 keywords from content)

7. **Return result**: Complete metadata package for atomic commit
   {
     "status": "success",
     "session_id": "WFS-session-name",
     "archive_entry": {
       "session_id": "...",
       "description": "...",
       "archived_at": "...",
       "archive_path": ".workflow/archives/WFS-session-name",
       "metrics": {
         "duration_hours": 2.5,
         "tasks_completed": 5,
         "summaries_generated": 3,
         "review_metrics": { // Optional, only if .review/ exists
           "dimensions_analyzed": 4,
           "total_findings": 15,
           "severity_distribution": {"critical": 1, "high": 3, "medium": 8, "low": 3},
           "fix_success_rate": 0.87 // Optional, only if .review/fixes/ exists
         }
       },
       "tags": [...],
       "lessons": {...}
     },
     "feature_metadata": {
       "title": "...",
       "description": "...",
       "tags": [...]
     }
   }

## Important Constraints
- DO NOT move or delete any files
- DO NOT update manifest.json yet
- Session remains in .workflow/active/ during analysis
- Return complete metadata package for orchestrator to commit atomically

## Error Handling
- On failure: return {"status": "error", "task": "...", "message": "..."}
- Do NOT modify any files on error
`
)
```

**Expected Output**:
- Agent returns complete metadata package
- Session remains in `.workflow/active/` with `.archiving` marker
- No files moved or manifests updated yet

### Phase 3: Atomic Commit (Transactional File Operations)

**Purpose**: Atomically commit all changes. Only execute if Phase 2 succeeds.

#### Step 3.1: Create Archive Directory
```bash
bash(mkdir -p .workflow/archives/)
```

#### Step 3.2: Move Session to Archive
```bash
bash(mv .workflow/active/WFS-session-name .workflow/archives/WFS-session-name)
```
**Result**: Session now at `.workflow/archives/WFS-session-name/`

#### Step 3.3: Update Manifest
```bash
# Read current manifest (or create empty array if not exists)
bash(test -f .workflow/archives/manifest.json && cat .workflow/archives/manifest.json || echo "[]")
```

**JSON Update Logic**:
```javascript
// Read agent result from Phase 2
const agentResult = JSON.parse(agentOutput);
const archiveEntry = agentResult.archive_entry;

// Read existing manifest
let manifest = [];
try {
  const manifestContent = Read('.workflow/archives/manifest.json');
  manifest = JSON.parse(manifestContent);
} catch {
  manifest = []; // Initialize if not exists
}

// Append new entry
manifest.push(archiveEntry);

// Write back
Write('.workflow/archives/manifest.json', JSON.stringify(manifest, null, 2));
```

#### Step 3.4: Remove Archiving Marker
```bash
bash(rm .workflow/archives/WFS-session-name/.archiving)
```
**Result**: Clean archived session without temporary markers

**Output Confirmation**:
```
✓ Session "${sessionId}" archived successfully
  Location: .workflow/archives/WFS-session-name/
  Lessons: ${archiveEntry.lessons.successes.length} successes, ${archiveEntry.lessons.challenges.length} challenges
  Manifest: Updated with ${manifest.length} total sessions
  ${reviewMetrics ? `Review: ${reviewMetrics.total_findings} findings across ${reviewMetrics.dimensions_analyzed} dimensions, ${Math.round(reviewMetrics.fix_success_rate * 100)}% fixed` : ''}
```

### Phase 4: Update Project Feature Registry

**Purpose**: Record completed session as a project feature in `.workflow/project.json`.

**Execution**: Uses feature metadata from Phase 2 agent result to update project registry.

#### Step 4.1: Check Project State Exists
```bash
bash(test -f .workflow/project.json && echo "EXISTS" || echo "SKIP")
```

**If SKIP**: Output warning and skip Phase 4
```
WARNING: No project.json found. Run /workflow:session:start to initialize.
```

#### Step 4.2: Extract Feature Information from Agent Result

**Data Processing** (Uses Phase 2 agent output):
```javascript
// Extract feature metadata from agent result
const agentResult = JSON.parse(agentOutput);
const featureMeta = agentResult.feature_metadata;

// Data already prepared by agent:
const title = featureMeta.title;
const description = featureMeta.description;
const tags = featureMeta.tags;

// Create feature ID (lowercase slug)
const featureId = title.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 50);
```

#### Step 4.3: Update project.json
## Pre-defined Commands

```bash
# Read current project state
bash(cat .workflow/project.json)
# Phase 1: Find active session
SESSION_PATH=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d | head -1)
SESSION_ID=$(basename "$SESSION_PATH")

# Phase 3: Move to archive
mkdir -p .workflow/archives/
mv .workflow/active/$SESSION_ID .workflow/archives/$SESSION_ID

# Cleanup marker
rm -f .workflow/archives/$SESSION_ID/.archiving
```

**JSON Update Logic**:
```javascript
// Read existing project.json (created by /workflow:init)
// Note: overview field is managed by /workflow:init, not modified here
const projectMeta = JSON.parse(Read('.workflow/project.json'));
const currentTimestamp = new Date().toISOString();
const currentDate = currentTimestamp.split('T')[0]; // YYYY-MM-DD
## Key Files to Read

// Extract tags from IMPL_PLAN.md (simple keyword extraction)
const tags = extractTags(planContent); // e.g., ["auth", "security"]
**For manifest.json generation**, read ONLY these files:

// Build feature object with complete metadata
const newFeature = {
  id: featureId,
  title: title,
  description: description,
  status: "completed",
  tags: tags,
  timeline: {
    created_at: currentTimestamp,
    implemented_at: currentDate,
    updated_at: currentTimestamp
  },
  traceability: {
    session_id: sessionId,
    archive_path: archivePath, // e.g., ".workflow/archives/WFS-auth-system"
    commit_hash: getLatestCommitHash() || "" // Optional: git rev-parse HEAD
  },
  docs: [], // Placeholder for future doc links
  relations: [] // Placeholder for feature dependencies
};
| File | Extract |
|------|---------|
| `$SESSION_PATH/workflow-session.json` | session_id, description, started_at, status |
| `$SESSION_PATH/IMPL_PLAN.md` | title (first # heading), description (first paragraph) |
| `$SESSION_PATH/.tasks/*.json` | count files |
| `$SESSION_PATH/.summaries/*.md` | count files |
| `$SESSION_PATH/.review/dimensions/*.json` | count + findings summary (optional) |

// Add new feature to array
projectMeta.features.push(newFeature);
## Execution Flow

// Update statistics
projectMeta.statistics.total_features = projectMeta.features.length;
projectMeta.statistics.total_sessions += 1;
projectMeta.statistics.last_updated = currentTimestamp;
### Phase 1: Find Session (2 commands)

// Write back
Write('.workflow/project.json', JSON.stringify(projectMeta, null, 2));
```bash
# 1. Find and extract session
SESSION_PATH=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d | head -1)
SESSION_ID=$(basename "$SESSION_PATH")

# 2. Check/create archiving marker
test -f "$SESSION_PATH/.archiving" && echo "RESUMING" || touch "$SESSION_PATH/.archiving"
```

**Helper Functions**:
```javascript
// Extract tags from IMPL_PLAN.md content
function extractTags(planContent) {
  const tags = [];
**Output**: `SESSION_ID` = e.g., `WFS-auth-feature`

  // Look for common keywords
  const keywords = {
    'auth': /authentication|login|oauth|jwt/i,
    'security': /security|encrypt|hash|token/i,
    'api': /api|endpoint|rest|graphql/i,
    'ui': /component|page|interface|frontend/i,
    'database': /database|schema|migration|sql/i,
    'test': /test|testing|spec|coverage/i
  };
### Phase 2: Generate Manifest Entry (Read-only)

  for (const [tag, pattern] of Object.entries(keywords)) {
    if (pattern.test(planContent)) {
      tags.push(tag);
Read the key files above, then build this structure:

```json
{
  "session_id": "<from workflow-session.json>",
  "description": "<from workflow-session.json>",
  "archived_at": "<current ISO timestamp>",
  "archive_path": ".workflow/archives/<SESSION_ID>",
  "metrics": {
    "duration_hours": "<(completed_at - started_at) / 3600000>",
    "tasks_completed": "<count .tasks/*.json>",
    "summaries_generated": "<count .summaries/*.md>",
    "review_metrics": {
      "dimensions_analyzed": "<count .review/dimensions/*.json>",
      "total_findings": "<sum from dimension JSONs>"
    }
  }

  return tags.slice(0, 5); // Max 5 tags
}

// Get latest git commit hash (optional)
function getLatestCommitHash() {
  try {
    const result = Bash({
      command: "git rev-parse --short HEAD 2>/dev/null",
      description: "Get latest commit hash"
    });
    return result.trim();
  } catch {
    return "";
  },
  "tags": ["<3-5 keywords from IMPL_PLAN.md>"],
  "lessons": {
    "successes": ["<key wins>"],
    "challenges": ["<difficulties>"],
    "watch_patterns": ["<patterns to monitor>"]
  }
}
```

#### Step 4.4: Output Confirmation
**Lessons Generation**: Use gemini with `~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt`

```
✓ Feature "${title}" added to project registry
  ID: ${featureId}
  Session: ${sessionId}
  Location: .workflow/project.json
### Phase 3: Atomic Commit (4 commands)

```bash
# 1. Create archive directory
mkdir -p .workflow/archives/

# 2. Move session
mv .workflow/active/$SESSION_ID .workflow/archives/$SESSION_ID

# 3. Update manifest.json (Read → Append → Write)
# Read: .workflow/archives/manifest.json (or [])
# Append: archive_entry from Phase 2
# Write: updated JSON

# 4. Remove marker
rm -f .workflow/archives/$SESSION_ID/.archiving
```

**Error Handling**:
- If project.json malformed: Output error, skip update
- If feature_metadata missing from agent result: Skip Phase 4
- If extraction fails: Use minimal defaults
**Output**:
```
✓ Session "$SESSION_ID" archived successfully
  Location: .workflow/archives/$SESSION_ID/
  Manifest: Updated with N total sessions
```

**Phase 4 Total Commands**: 1 bash read + JSON manipulation
### Phase 4: Update project.json (Optional)

**Skip if**: `.workflow/project.json` doesn't exist

```bash
# Check
test -f .workflow/project.json || echo "SKIP"
```

**If exists**, add feature entry:

```json
{
  "id": "<slugified title>",
  "title": "<from IMPL_PLAN.md>",
  "status": "completed",
  "tags": ["<from Phase 2>"],
  "timeline": { "implemented_at": "<date>" },
  "traceability": { "session_id": "<SESSION_ID>", "archive_path": "<path>" }
}
```

**Output**:
```
✓ Feature added to project registry
```

## Error Recovery

### If Agent Fails (Phase 2)
| Phase | Symptom | Recovery |
|-------|---------|----------|
| 1 | No active session | `No active session found` |
| 2 | Analysis fails | Remove marker: `rm $SESSION_PATH/.archiving`, retry |
| 3 | Move fails | Session safe in active/, fix issue, retry |
| 3 | Manifest fails | Session in archives/, manually add entry, remove marker |

**Symptoms**:
- Agent returns `{"status": "error", ...}`
- Agent crashes or times out
- Analysis incomplete
## Quick Reference

**Recovery Steps**:
```bash
# Session still in .workflow/active/WFS-session-name
# Remove archiving marker
bash(rm .workflow/active/WFS-session-name/.archiving)
```

**User Notification**:
Phase 1: find session → create .archiving marker
Phase 2: read key files → build manifest entry (no writes)
Phase 3: mkdir → mv → update manifest.json → rm marker
Phase 4: update project.json features array (optional)
```
ERROR: Session archival failed during analysis phase
Reason: [error message from agent]
Session remains active in: .workflow/active/WFS-session-name

Recovery:
1. Fix any issues identified in error message
2. Retry: /workflow:session:complete

Session state: SAFE (no changes committed)
```

### If Move Fails (Phase 3)

**Symptoms**:
- `mv` command fails
- Permission denied
- Disk full

**Recovery Steps**:
```bash
# Archiving marker still present
# Session still in .workflow/active/ (move failed)
# No manifest updated yet
```

**User Notification**:
```
ERROR: Session archival failed during move operation
Reason: [mv error message]
Session remains in: .workflow/active/WFS-session-name

Recovery:
1. Fix filesystem issues (permissions, disk space)
2. Retry: /workflow:session:complete
   - System will detect .archiving marker
   - Will resume from Phase 2 (agent analysis)

Session state: SAFE (analysis complete, ready to retry move)
```

### If Manifest Update Fails (Phase 3)

**Symptoms**:
- JSON parsing error
- Write permission denied
- Session moved but manifest not updated

**Recovery Steps**:
```bash
# Session moved to .workflow/archives/WFS-session-name
# Manifest NOT updated
# Archiving marker still present in archived location
```

**User Notification**:
```
ERROR: Session archived but manifest update failed
Reason: [error message]
Session location: .workflow/archives/WFS-session-name

Recovery:
1. Fix manifest.json issues (syntax, permissions)
2. Manual manifest update:
   - Add archive entry from agent output
   - Remove .archiving marker: rm .workflow/archives/WFS-session-name/.archiving

Session state: PARTIALLY COMPLETE (session archived, manifest needs update)
```

## Workflow Execution Strategy

### Transactional Four-Phase Approach

**Phase 1: Pre-Archival Preparation** (Marker creation)
- Find active session and extract name
- Check for existing `.archiving` marker (resume detection)
- Create `.archiving` marker if new
- **No data processing** - just state tracking
- **Total**: 2-3 bash commands (find + marker check/create)

**Phase 2: Agent Analysis** (Read-only data processing)
- Extract all session data from active location
- Count tasks and summaries
- Extract review data if .review/ exists (dimension results, findings, fix results)
- Generate lessons learned analysis (including review-specific lessons if applicable)
- Extract feature metadata from IMPL_PLAN.md
- Build complete archive + feature metadata package (with review_metrics if applicable)
- **No file modifications** - pure analysis
- **Total**: 1 agent invocation

**Phase 3: Atomic Commit** (Transactional file operations)
- Create archive directory
- Move session to archive location
- Update manifest.json with archive entry
- Remove `.archiving` marker
- **All-or-nothing**: Either all succeed or session remains in safe state
- **Total**: 4 bash commands + JSON manipulation

**Phase 4: Project Registry Update** (Optional feature tracking)
- Check project.json exists
- Use feature metadata from Phase 2 agent result
- Build feature object with complete traceability
- Update project statistics
- **Independent**: Can fail without affecting archival
- **Total**: 1 bash read + JSON manipulation

### Transactional Guarantees

**State Consistency**:
- Session NEVER in inconsistent state
- `.archiving` marker enables safe resume
- Agent failure leaves session in recoverable state
- Move/manifest operations grouped in Phase 3

**Failure Isolation**:
- Phase 1 failure: No changes made
- Phase 2 failure: Session still active, can retry
- Phase 3 failure: Clear error state, manual recovery documented
- Phase 4 failure: Does not affect archival success

**Resume Capability**:
- Detect interrupted archival via `.archiving` marker
- Resume from Phase 2 (skip marker creation)
- Idempotent operations (safe to retry)

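Taken together, the four phases read as one resume-safe transaction. The sketch below is illustrative only; it reuses the document's Bash/Read/Write pseudo-API, and `archiveEntry` is assumed to be the result of the Phase 2 analysis:

```javascript
// Resume-safe archival skeleton: marker first, analysis read-only, commit last
const sessionPath = Bash(`find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d | head -1`).trim();
const sessionId = Bash(`basename "${sessionPath}"`).trim();

// Phase 1: create (or detect) the .archiving marker
const marker = `${sessionPath}/.archiving`;
const resuming = Bash(`test -f ${marker} && echo RESUMING || echo NEW`).trim() === "RESUMING";
if (!resuming) Bash(`touch ${marker}`);

// Phase 2: read-only analysis builds archiveEntry (see the agent task above)

// Phase 3: atomic commit - runs only after Phase 2 succeeds
Bash(`mkdir -p .workflow/archives/`);
Bash(`mv ${sessionPath} .workflow/archives/${sessionId}`);
let manifest = [];
try { manifest = JSON.parse(Read('.workflow/archives/manifest.json')); } catch { manifest = []; }
manifest.push(archiveEntry);
Write('.workflow/archives/manifest.json', JSON.stringify(manifest, null, 2));
Bash(`rm -f .workflow/archives/${sessionId}/.archiving`);
```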
@@ -164,10 +164,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]

**Parse Output**:
- Extract: Execution status (success/skipped/failed)
- Verify: CONFLICT_RESOLUTION.md file path (if executed)
- Verify: conflict-resolution.json file path (if executed)

**Validation**:
- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)

**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 5
@@ -402,7 +402,7 @@ TDD Workflow Orchestrator
│  ├─ Phase 4.1: Detect conflicts with CLI
│  ├─ Phase 4.2: Present conflicts to user
│  └─ Phase 4.3: Apply resolution strategies
│     └─ Returns: CONFLICT_RESOLUTION.md ← COLLAPSED
│     └─ Returns: conflict-resolution.json ← COLLAPSED
│  ELSE:
│     └─ Skip to Phase 5
│

@@ -77,18 +77,32 @@ find .workflow/active/ -name "WFS-*" -type d | head -1 | sed 's/.*\///'

```bash
# Load all task JSONs
find .workflow/active/{sessionId}/.task/ -name '*.json'
for task_file in .workflow/active/{sessionId}/.task/*.json; do
  cat "$task_file"
done

# Extract task IDs
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
for task_file in .workflow/active/{sessionId}/.task/*.json; do
  cat "$task_file" | jq -r '.id'
done

# Check dependencies
find .workflow/active/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
find .workflow/active/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
# Check dependencies - read tasks and filter for IMPL/REFACTOR
for task_file in .workflow/active/{sessionId}/.task/IMPL-*.json; do
  cat "$task_file" | jq -r '.context.depends_on[]?'
done

for task_file in .workflow/active/{sessionId}/.task/REFACTOR-*.json; do
  cat "$task_file" | jq -r '.context.depends_on[]?'
done

# Check meta fields
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
for task_file in .workflow/active/{sessionId}/.task/*.json; do
  cat "$task_file" | jq -r '.meta.tdd_phase'
done

for task_file in .workflow/active/{sessionId}/.task/*.json; do
  cat "$task_file" | jq -r '.meta.agent'
done
```

**Validation**:
@@ -127,7 +141,7 @@ find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent
**Gemini analysis for comprehensive TDD compliance report**

```bash
cd project-root && gemini -p "
ccw cli -p "
PURPOSE: Generate TDD compliance report
TASK: Analyze TDD workflow execution and generate quality report
CONTEXT: @{.workflow/active/{sessionId}/.task/*.json,.workflow/active/{sessionId}/.summaries/*,.workflow/active/{sessionId}/.process/tdd-cycle-report.md}
@@ -139,7 +153,7 @@ EXPECTED:
- Red-Green-Refactor cycle validation
- Best practices adherence assessment
RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
" > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
" --tool gemini --mode analysis --cd project-root > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
```

**Output**: TDD_COMPLIANCE_REPORT.md

@@ -221,6 +221,7 @@ return "conservative";
```javascript
Task(
  subagent_type="cli-planning-agent",
  run_in_background=false,
  description=`Analyze test failures (iteration ${N}) - ${strategy} strategy`,
  prompt=`
## Task Objective
@@ -271,6 +272,7 @@ Task(
```javascript
Task(
  subagent_type="test-fix-agent",
  run_in_background=false,
  description=`Execute ${task.meta.type}: ${task.title}`,
  prompt=`
## Task Objective

@@ -108,7 +108,7 @@ Phase 4: Apply Modifications

**Agent Delegation**:
```javascript
Task(subagent_type="cli-execution-agent", prompt=`
Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
## Context
- Session: {session_id}
- Risk: {conflict_risk}
@@ -133,7 +133,7 @@ Task(subagent_type="cli-execution-agent", prompt=`
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)

Primary (Gemini):
cd {project_root} && gemini -p "
ccw cli -p "
PURPOSE: Detect conflicts between plan and codebase, using exploration insights
TASK:
• **Review pre-identified conflict_indicators from exploration results**
@@ -152,7 +152,7 @@ Task(subagent_type="cli-execution-agent", prompt=`
- ModuleOverlap conflicts with overlap_analysis
- Targeted clarification questions
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
"
" --tool gemini --mode analysis --cd {project_root}

Fallback: Qwen (same prompt) → Claude (manual analysis)

@@ -169,7 +169,7 @@ Task(subagent_type="cli-execution-agent", prompt=`

### 4. Return Structured Conflict Data

⚠️ DO NOT generate CONFLICT_RESOLUTION.md file
⚠️ Output to conflict-resolution.json (generated in Phase 4)

Return JSON format for programmatic processing:

@@ -467,14 +467,30 @@ selectedStrategies.forEach(item => {

console.log(`\nApplying ${modifications.length} modifications...`);

// 2. Apply each modification using Edit tool
// 2. Apply each modification using Edit tool (with fallback to context-package.json)
const appliedModifications = [];
const failedModifications = [];
const fallbackConstraints = []; // For files that don't exist

modifications.forEach((mod, idx) => {
  try {
    console.log(`[${idx + 1}/${modifications.length}] Modifying ${mod.file}...`);

    // Check if target file exists (brainstorm files may not exist in lite workflow)
    if (!file_exists(mod.file)) {
      console.log(`  ⚠️ File does not exist; recording it in context-package.json as a constraint`);
      fallbackConstraints.push({
        source: "conflict-resolution",
        conflict_id: mod.conflict_id,
        target_file: mod.file,
        section: mod.section,
        change_type: mod.change_type,
        content: mod.new_content,
        rationale: mod.rationale
      });
      return; // Skip to next modification
    }

    if (mod.change_type === "update") {
      Edit({
        file_path: mod.file,
@@ -502,14 +518,45 @@ modifications.forEach((mod, idx) => {
  }
});

// 3. Update context-package.json with resolution details
// 2b. Generate conflict-resolution.json output file
const resolutionOutput = {
  session_id: sessionId,
  resolved_at: new Date().toISOString(),
  summary: {
    total_conflicts: conflicts.length,
    resolved_with_strategy: selectedStrategies.length,
    custom_handling: customConflicts.length,
    fallback_constraints: fallbackConstraints.length
  },
  resolved_conflicts: selectedStrategies.map(s => ({
    conflict_id: s.conflict_id,
    strategy_name: s.strategy.name,
    strategy_approach: s.strategy.approach,
    clarifications: s.clarifications || [],
    modifications_applied: s.strategy.modifications?.filter(m =>
      appliedModifications.some(am => am.conflict_id === s.conflict_id)
    ) || []
  })),
  custom_conflicts: customConflicts.map(c => ({
    id: c.id,
    brief: c.brief,
    category: c.category,
    suggestions: c.suggestions,
    overlap_analysis: c.overlap_analysis || null
  })),
  planning_constraints: fallbackConstraints, // Constraints for files that don't exist
  failed_modifications: failedModifications
};

const resolutionPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
console.log(`\n📄 Conflict resolution results saved: ${resolutionPath}`);

// 3. Update context-package.json with resolution details (reference to JSON file)
const contextPackage = JSON.parse(Read(contextPath));
contextPackage.conflict_detection.conflict_risk = "resolved";
contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => ({
  conflict_id: s.conflict_id,
  strategy_name: s.strategy.name,
  clarifications: s.clarifications
}));
contextPackage.conflict_detection.resolution_file = resolutionPath; // Reference to detailed JSON
contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => s.conflict_id);
contextPackage.conflict_detection.custom_conflicts = customConflicts.map(c => c.id);
contextPackage.conflict_detection.resolved_at = new Date().toISOString();
Write(contextPath, JSON.stringify(contextPackage, null, 2));
@@ -582,12 +629,50 @@ return {
✓ Agent log saved to .workflow/active/{session_id}/.chat/
```

## Output Format: Agent JSON Response
## Output Format

### Primary Output: conflict-resolution.json

**Path**: `.workflow/active/{session_id}/.process/conflict-resolution.json`

**Schema**:
```json
{
  "session_id": "WFS-xxx",
  "resolved_at": "ISO timestamp",
  "summary": {
    "total_conflicts": 3,
    "resolved_with_strategy": 2,
    "custom_handling": 1,
    "fallback_constraints": 0
  },
  "resolved_conflicts": [
    {
      "conflict_id": "CON-001",
      "strategy_name": "Strategy name",
      "strategy_approach": "Implementation approach",
      "clarifications": [],
      "modifications_applied": []
    }
  ],
  "custom_conflicts": [
    {
      "id": "CON-002",
      "brief": "Conflict summary",
      "category": "ModuleOverlap",
      "suggestions": ["Suggestion 1", "Suggestion 2"],
      "overlap_analysis": null
    }
  ],
  "planning_constraints": [],
  "failed_modifications": []
}
```

### Secondary: Agent JSON Response (stdout)

**Focus**: Structured conflict data with actionable modifications for programmatic processing.

**Format**: JSON to stdout (NO file generation)

**Structure**: Defined in Phase 2, Step 4 (agent prompt)

### Key Requirements
@@ -635,11 +720,12 @@ If Edit tool fails mid-application:
- Requires: `conflict_risk ≥ medium`

**Output**:
- Modified files:
- Generated file:
  - `.workflow/active/{session_id}/.process/conflict-resolution.json` (primary output)
- Modified files (if exist):
  - `.workflow/active/{session_id}/.brainstorm/guidance-specification.md`
  - `.workflow/active/{session_id}/.brainstorm/{role}/analysis.md`
- `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved)
- NO report file generation
  - `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved, resolution_file reference)

**User Interaction**:
- **Iterative conflict processing**: One conflict at a time, not in batches
@@ -667,7 +753,7 @@ If Edit tool fails mid-application:
✓ guidance-specification.md updated with resolved conflicts
✓ Role analyses (*.md) updated with resolved conflicts
✓ context-package.json marked as "resolved" with clarification records
✓ No CONFLICT_RESOLUTION.md file generated
✓ conflict-resolution.json generated with full resolution details
✓ Modification summary includes:
  - Total conflicts
  - Resolved with strategy (count)

@@ -121,6 +121,7 @@ const sessionFolder = `.workflow/active/${session_id}/.process`;
const explorationTasks = selectedAngles.map((angle, index) =>
  Task(
    subagent_type="cli-explore-agent",
    run_in_background=false,
    description=`Explore: ${angle}`,
    prompt=`
## Task Objective
@@ -135,7 +136,7 @@ Execute **${angle}** exploration for task planning context. Analyze codebase fro

## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ~/.claude/scripts/get_modules_by_depth.sh (project structure)
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)

@@ -215,6 +216,7 @@ Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationM
```javascript
Task(
  subagent_type="context-search-agent",
  run_in_background=false,
  description="Gather comprehensive context for plan",
  prompt=`
## Execution Mode
@@ -237,7 +239,7 @@ Execute complete context-search-agent workflow for implementation planning:
### Phase 1: Initialization & Pre-Analysis
1. **Project State Loading**: Read and parse `.workflow/project.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components. If the file doesn't exist, proceed with fresh analysis.
2. **Detection**: Check for existing context-package (early exit if valid)
3. **Foundation**: Initialize code-index, get project structure, load docs
3. **Foundation**: Initialize CodexLens, get project structure, load docs
4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state

### Phase 2: Multi-Source Context Discovery

@@ -28,6 +28,12 @@ Input Parsing:
  ├─ Parse flags: --session
  └─ Validation: session_id REQUIRED

Phase 0: User Configuration (Interactive)
  ├─ Question 1: Supplementary materials/guidelines?
  ├─ Question 2: Execution method preference (Agent/CLI/Hybrid)
  ├─ Question 3: CLI tool preference (if CLI selected)
  └─ Store: userConfig for agent prompt

Phase 1: Context Preparation & Module Detection (Command)
  ├─ Assemble session paths (metadata, context package, output dirs)
  ├─ Provide metadata (session_id, execution_mode, mcp_capabilities)
@@ -57,6 +63,82 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)

## Document Generation Lifecycle

### Phase 0: User Configuration (Interactive)

**Purpose**: Collect user preferences before task generation to ensure generated tasks match execution expectations.

**User Questions**:
```javascript
AskUserQuestion({
  questions: [
    {
      question: "Do you have supplementary materials or guidelines to include?",
      header: "Materials",
      multiSelect: false,
      options: [
        { label: "No additional materials", description: "Use existing context only" },
        { label: "Provide file paths", description: "I'll specify paths to include" },
        { label: "Provide inline content", description: "I'll paste content directly" }
      ]
    },
    {
      question: "Select execution method for generated tasks:",
      header: "Execution",
      multiSelect: false,
      options: [
        { label: "Agent (Recommended)", description: "Claude agent executes tasks directly" },
        { label: "Hybrid", description: "Agent orchestrates, calls CLI for complex steps" },
        { label: "CLI Only", description: "All execution via CLI tools (codex/gemini/qwen)" }
      ]
    },
    {
      question: "If using CLI, which tool do you prefer?",
      header: "CLI Tool",
      multiSelect: false,
      options: [
        { label: "Codex (Recommended)", description: "Best for implementation tasks" },
        { label: "Gemini", description: "Best for analysis and large context" },
        { label: "Qwen", description: "Alternative analysis tool" },
        { label: "Auto", description: "Let agent decide per-task" }
      ]
    }
  ]
})
```

**Handle Materials Response**:
```javascript
if (userConfig.materials === "Provide file paths") {
  // Follow-up question for file paths
  const pathsResponse = AskUserQuestion({
    questions: [{
      question: "Enter file paths to include (comma-separated or one per line):",
      header: "Paths",
      multiSelect: false,
      options: [
        { label: "Enter paths", description: "Provide paths in text input" }
      ]
    }]
  })
  userConfig.supplementaryPaths = parseUserPaths(pathsResponse)
}
```

**Build userConfig**:
```javascript
const userConfig = {
  supplementaryMaterials: {
    type: "none|paths|inline",
    content: [...], // Parsed paths or inline content
  },
  executionMethod: "agent|hybrid|cli",
  preferredCliTool: "codex|gemini|qwen|auto",
  enableResume: true // Always enable resume for CLI executions
}
```

**Pass to Agent**: Include `userConfig` in agent prompt for Phase 2A/2B.

### Phase 1: Context Preparation & Module Detection (Command Responsibility)

**Command prepares session paths, metadata, and detects module structure.**

@@ -89,6 +171,14 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
3. **Auto Module Detection** (determines single vs parallel mode):
```javascript
function autoDetectModules(contextPackage, projectRoot) {
  // === Complexity Gate: Only parallelize for High complexity ===
  const complexity = contextPackage.metadata?.complexity || 'Medium';
  if (complexity !== 'High') {
    // Force single agent mode for Low/Medium complexity
    // This maximizes agent context reuse for related tasks
    return [{ name: 'main', prefix: '', paths: ['.'] }];
  }

  // Priority 1: Explicit frontend/backend separation
  if (exists('src/frontend') && exists('src/backend')) {
    return [
@@ -112,8 +202,9 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
```

**Decision Logic**:
- `complexity !== 'High'` → Force Phase 2A (Single Agent, maximize context reuse)
- `modules.length == 1` → Phase 2A (Single Agent, original flow)
- `modules.length >= 2` → Phase 2B + Phase 3 (N+1 Parallel)
- `modules.length >= 2 && complexity == 'High'` → Phase 2B + Phase 3 (N+1 Parallel)

**Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description, not by flags.

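As a compact illustration of the decision table above (the `selectPlanningPhase` name is invented; the logic mirrors autoDetectModules and the bullets it feeds):

```javascript
// Illustrative decision: pick the planning phase from module count + complexity
function selectPlanningPhase(modules, complexity) {
  // Low/Medium complexity always collapses to a single agent (Phase 2A),
  // because autoDetectModules already returns a single 'main' module there
  if (complexity !== 'High' || modules.length === 1) return 'Phase 2A (Single Agent)';
  return 'Phase 2B + Phase 3 (N+1 Parallel)'; // modules.length >= 2 && complexity == 'High'
}
```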
@@ -127,6 +218,7 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
|
||||
```javascript
|
||||
Task(
|
||||
subagent_type="action-planning-agent",
|
||||
run_in_background=false,
|
||||
description="Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
|
||||
prompt=`
|
||||
## TASK OBJECTIVE
|
||||
@@ -150,10 +242,21 @@ Output:
|
||||
Session ID: {session-id}
|
||||
MCP Capabilities: {exa_code, exa_web, code_index}
|
||||
|
||||
## USER CONFIGURATION (from Phase 0)
|
||||
Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
|
||||
Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
|
||||
Supplementary Materials: ${userConfig.supplementaryMaterials}
|
||||
|
||||
## CLI TOOL SELECTION
|
||||
Determine CLI tool usage per-step based on user's task description:
|
||||
- If user specifies "use Codex/Gemini/Qwen for X" → Add command field to relevant steps
|
||||
- Default: Agent execution (no command field) unless user explicitly requests CLI
|
||||
Based on userConfig.executionMethod:
|
||||
- "agent": No command field in implementation_approach steps
|
||||
- "hybrid": Add command field to complex steps only (agent handles simple steps)
|
||||
- "cli": Add command field to ALL implementation_approach steps
|
||||
|
||||
CLI Resume Support (MANDATORY for all CLI commands):
|
||||
- Use --resume parameter to continue from previous task execution
|
||||
- Read previous task's cliExecutionId from session state
|
||||
- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
|
||||
|
||||
## EXPLORATION CONTEXT (from context-package.exploration_results)
|
||||
- Load exploration_results from context-package.json
|
||||
@@ -163,6 +266,13 @@ Determine CLI tool usage per-step based on user's task description:
- Use aggregated_insights.all_integration_points for precise modification locations
- Use conflict_indicators for risk-aware task sequencing

## CONFLICT RESOLUTION CONTEXT (if exists)
- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
- If exists, load .process/conflict-resolution.json:
  - Apply planning_constraints as task constraints (for brainstorm-less workflows)
  - Reference resolved_conflicts for implementation approach alignment
  - Handle custom_conflicts with explicit task notes

## EXPECTED DELIVERABLES
1. Task JSON Files (.task/IMPL-*.json)
   - 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
@@ -170,6 +280,7 @@ Determine CLI tool usage per-step based on user's task description:
   - Artifacts integration from context package
   - **focus_paths enhanced with exploration critical_files**
   - Flow control with pre_analysis steps (include exploration integration_points analysis)
   - **CLI Execution IDs and strategies (MANDATORY)**

2. Implementation Plan (IMPL_PLAN.md)
   - Context analysis and artifact references
@@ -181,6 +292,27 @@ Determine CLI tool usage per-step based on user's task description:
   - Links to task JSONs and summaries
   - Matches task JSON hierarchy

## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
Each task JSON MUST include:
- **cli_execution_id**: Unique ID for CLI execution (format: `{session_id}-{task_id}`)
- **cli_execution**: Strategy object based on depends_on:
  - No deps → `{ "strategy": "new" }`
  - 1 dep (single child) → `{ "strategy": "resume", "resume_from": "parent-cli-id" }`
  - 1 dep (multiple children) → `{ "strategy": "fork", "resume_from": "parent-cli-id" }`
  - N deps → `{ "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }`

**CLI Execution Strategy Rules**:
1. **new**: Task has no dependencies - starts fresh CLI conversation
2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation

**Execution Command Patterns**:
- new: `ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]`
- resume: `ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write`
- fork: `ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write`
- merge_fork: `ccw cli -p "[prompt]" --resume [merge_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write`
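
A minimal sketch of how a planner could derive the strategy object from `depends_on` under these rules (helper names and the `childCount` map are illustrative assumptions, not the actual implementation; the agent specification is authoritative):

```javascript
// Sketch only: childCount maps a parent task ID to its number of dependent tasks;
// task IDs map to CLI execution IDs as `${sessionId}-${taskId}`.
function deriveCliExecution(task, childCount, sessionId) {
  const deps = task.depends_on ?? [];
  const cliId = (id) => `${sessionId}-${id}`;
  if (deps.length === 0) return { strategy: 'new' };
  if (deps.length === 1) {
    return childCount[deps[0]] === 1
      ? { strategy: 'resume', resume_from: cliId(deps[0]) }   // sole child: continue conversation
      : { strategy: 'fork', resume_from: cliId(deps[0]) };    // sibling exists: branch
  }
  return { strategy: 'merge_fork', merge_from: deps.map(cliId) };
}
```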

## QUALITY STANDARDS
Hard Constraints:
- Task count <= 18 (hard limit - request re-scope if exceeded)
@@ -203,7 +335,9 @@ Hard Constraints:

**Condition**: `modules.length >= 2` (multi-module detected)

**Purpose**: Launch N action-planning-agents simultaneously, one per module, for parallel task generation.
**Purpose**: Launch N action-planning-agents simultaneously, one per module, for parallel task JSON generation.

**Note**: Phase 2B agents generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md are generated by Phase 3 Coordinator.

**Parallel Agent Invocation**:
```javascript
@@ -211,27 +345,50 @@ Hard Constraints:
const planningTasks = modules.map(module =>
  Task(
    subagent_type="action-planning-agent",
    description=`Plan ${module.name} module`,
    run_in_background=false,
    description=`Generate ${module.name} module task JSONs`,
    prompt=`
## SCOPE
## TASK OBJECTIVE
Generate task JSON files for ${module.name} module within workflow session

IMPORTANT: This is PLANNING ONLY - generate task JSONs, NOT implementing code.
IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md are generated by Phase 3 Coordinator.

CRITICAL: Follow progressive loading strategy in agent specification

## MODULE SCOPE
- Module: ${module.name} (${module.type})
- Focus Paths: ${module.paths.join(', ')}
- Task ID Prefix: IMPL-${module.prefix}
- Task Limit: ≤9 tasks
- Other Modules: ${otherModules.join(', ')}
- Cross-module deps format: "CROSS::{module}::{pattern}"

## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Context Package: .workflow/active/{session-id}/.process/context-package.json
Output:
- Task Dir: .workflow/active/{session-id}/.task/

## INSTRUCTIONS
- Generate tasks ONLY for ${module.name} module
- Use task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
- For cross-module dependencies, use: depends_on: ["CROSS::B::api-endpoint"]
- Maximum 9 tasks per module
## CONTEXT METADATA
Session ID: {session-id}
MCP Capabilities: {exa_code, exa_web, code_index}

## CROSS-MODULE DEPENDENCIES
- Use placeholder: depends_on: ["CROSS::{module}::{pattern}"]
- Example: depends_on: ["CROSS::B::api-endpoint"]
- Phase 3 Coordinator resolves to actual task IDs

## EXPECTED DELIVERABLES
Task JSON Files (.task/IMPL-${module.prefix}*.json):
- 6-field schema per agent specification
- Task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
- Focus ONLY on ${module.name} module scope

## SUCCESS CRITERIA
- Task JSONs saved to .task/ with IMPL-${module.prefix}* naming
- Cross-module dependencies use CROSS:: placeholder format
- Return task count and brief summary
`
  )
);
@@ -255,37 +412,79 @@ await Promise.all(planningTasks);
- Prefix: A, B, C... (assigned by detection order)
- Sequence: 1, 2, 3... (per-module increment)

### Phase 3: Integration (+1 Coordinator, Multi-Module Only)
### Phase 3: Integration (+1 Coordinator Agent, Multi-Module Only)

**Condition**: Only executed when `modules.length >= 2`

**Purpose**: Collect all module tasks, resolve cross-module dependencies, generate unified documents.
**Purpose**: Collect all module tasks, resolve cross-module dependencies, generate unified IMPL_PLAN.md and TODO_LIST.md documents.

**Integration Logic**:
**Coordinator Agent Invocation**:
```javascript
// 1. Collect all module task JSONs
const allTasks = glob('.task/IMPL-*.json').map(loadJson);
// Wait for all Phase 2B agents to complete
const moduleResults = await Promise.all(planningTasks);

// 2. Resolve cross-module dependencies
for (const task of allTasks) {
  if (task.depends_on) {
    task.depends_on = task.depends_on.map(dep => {
      if (dep.startsWith('CROSS::')) {
        // CROSS::B::api-endpoint → find matching IMPL-B* task
        const [, targetModule, pattern] = dep.match(/CROSS::(\w+)::(.+)/);
        return findTaskByModuleAndPattern(allTasks, targetModule, pattern);
      }
      return dep;
    });
  }
}
// Launch +1 Coordinator Agent
Task(
  subagent_type="action-planning-agent",
  run_in_background=false,
  description="Integrate module tasks and generate unified documents",
  prompt=`
## TASK OBJECTIVE
Integrate all module task JSONs, resolve cross-module dependencies, and generate unified IMPL_PLAN.md and TODO_LIST.md

// 3. Generate unified IMPL_PLAN.md (grouped by module)
generateIMPL_PLAN(allTasks, modules);
IMPORTANT: This is INTEGRATION ONLY - consolidate existing task JSONs, NOT creating new tasks.

// 4. Generate TODO_LIST.md (hierarchical structure)
generateTODO_LIST(allTasks, modules);
## SESSION PATHS
Input:
- Session Metadata: .workflow/active/{session-id}/workflow-session.json
- Context Package: .workflow/active/{session-id}/.process/context-package.json
- Task JSONs: .workflow/active/{session-id}/.task/IMPL-*.json (from Phase 2B)
Output:
- Updated Task JSONs: .workflow/active/{session-id}/.task/IMPL-*.json (resolved dependencies)
- IMPL_PLAN: .workflow/active/{session-id}/IMPL_PLAN.md
- TODO_LIST: .workflow/active/{session-id}/TODO_LIST.md

## CONTEXT METADATA
Session ID: {session-id}
Modules: ${modules.map(m => m.name + '(' + m.prefix + ')').join(', ')}
Module Count: ${modules.length}

## INTEGRATION STEPS
1. Collect all .task/IMPL-*.json, group by module prefix
2. Resolve CROSS:: dependencies → actual task IDs, update task JSONs
3. Generate IMPL_PLAN.md (multi-module format per agent specification)
4. Generate TODO_LIST.md (hierarchical format per agent specification)

## CROSS-MODULE DEPENDENCY RESOLUTION
- Pattern: CROSS::{module}::{pattern} → IMPL-{module}* matching title/context
- Example: CROSS::B::api-endpoint → IMPL-B1 (if B1 title contains "api-endpoint")
- Log unresolved as warnings

## EXPECTED DELIVERABLES
1. Updated Task JSONs with resolved dependency IDs
2. IMPL_PLAN.md - multi-module format with cross-dependency section
3. TODO_LIST.md - hierarchical by module with cross-dependency section

## SUCCESS CRITERIA
- No CROSS:: placeholders remaining in task JSONs
- IMPL_PLAN.md and TODO_LIST.md generated with multi-module structure
- Return: task count, per-module breakdown, resolved dependency count
`
)
```

**Note**: IMPL_PLAN.md and TODO_LIST.md structure definitions are in `action-planning-agent.md`.
**Dependency Resolution Algorithm**:
```javascript
function resolveCrossModuleDependency(placeholder, allTasks) {
  const [, targetModule, pattern] = placeholder.match(/CROSS::(\w+)::(.+)/);
  const candidates = allTasks.filter(t =>
    t.id.startsWith(`IMPL-${targetModule}`) &&
    (t.title.toLowerCase().includes(pattern.toLowerCase()) ||
     t.context?.description?.toLowerCase().includes(pattern.toLowerCase()))
  );
  return candidates.length > 0
    ? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id
    : placeholder; // Keep for manual resolution
}
```

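For example, against a hypothetical Phase 2B output the resolver behaves like this (sketch only; the two module-B tasks are illustrative):

```javascript
// Sketch only: two illustrative module-B tasks as Phase 2B might emit them.
const allTasks = [
  { id: 'IMPL-B1', title: 'Add api-endpoint for sessions', context: {} },
  { id: 'IMPL-B2', title: 'Wire backend persistence', context: {} },
];
resolveCrossModuleDependency('CROSS::B::api-endpoint', allTasks);  // → 'IMPL-B1'
resolveCrossModuleDependency('CROSS::B::missing-thing', allTasks); // → placeholder kept for manual resolution
```
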
@@ -113,7 +113,7 @@ Phase 2: Agent Execution (Document Generation)
    // Existing test patterns and coverage analysis
  },
  "mcp_capabilities": {
    "code_index": true,
    "codex_lens": true,
    "exa_code": true,
    "exa_web": true
  }
@@ -152,9 +152,14 @@ Phase 2: Agent Execution (Document Generation)
   roleAnalysisPaths.forEach(path => Read(path));
   ```

5. **Load Conflict Resolution** (from context-package.json, if exists)
5. **Load Conflict Resolution** (from conflict-resolution.json, if exists)
   ```javascript
   if (contextPackage.brainstorm_artifacts.conflict_resolution?.exists) {
   // Check for new conflict-resolution.json format
   if (contextPackage.conflict_detection?.resolution_file) {
     Read(contextPackage.conflict_detection.resolution_file)  // .process/conflict-resolution.json
   }
   // Fallback: legacy brainstorm_artifacts path
   else if (contextPackage.brainstorm_artifacts?.conflict_resolution?.exists) {
     Read(contextPackage.brainstorm_artifacts.conflict_resolution.path)
   }
   ```
@@ -189,6 +194,7 @@ const templatePath = hasCliExecuteFlag
```javascript
Task(
  subagent_type="action-planning-agent",
  run_in_background=false,
  description="Generate TDD task JSON and implementation plan",
  prompt=`
## Execution Context
@@ -223,7 +229,7 @@ If conflict_risk was medium/high, modifications have been applied to:
- **guidance-specification.md**: Design decisions updated to resolve conflicts
- **Role analyses (*.md)**: Recommendations adjusted for compatibility
- **context-package.json**: Marked as "resolved" with conflict IDs
- NO separate CONFLICT_RESOLUTION.md file (conflicts resolved in-place)
- Conflict resolution results stored in conflict-resolution.json

### MCP Analysis Results (Optional)
**Code Structure**: {mcp_code_index_results}
@@ -333,7 +339,7 @@ Generate all three documents and report completion status:
- TDD cycles configured: N cycles with quantified test cases
- Artifacts integrated: synthesis-spec, guidance-specification, N role analyses
- Test context integrated: existing patterns and coverage
- MCP enhancements: code-index, exa-research
- MCP enhancements: CodexLens, exa-research
- Session ready for TDD execution: /workflow:execute
`
)
@@ -373,10 +379,12 @@ const agentContext = {
    .flatMap(role => role.files)
    .map(file => Read(file.path)),

  // Load conflict resolution if exists (from context package)
  conflict_resolution: brainstorm_artifacts.conflict_resolution?.exists
    ? Read(brainstorm_artifacts.conflict_resolution.path)
    : null,
  // Load conflict resolution if exists (prefer new JSON format)
  conflict_resolution: context_package.conflict_detection?.resolution_file
    ? Read(context_package.conflict_detection.resolution_file)  // .process/conflict-resolution.json
    : (brainstorm_artifacts?.conflict_resolution?.exists
        ? Read(brainstorm_artifacts.conflict_resolution.path)
        : null),

  // Optional MCP enhancements
  mcp_analysis: executeMcpDiscovery()
@@ -408,7 +416,7 @@ This section provides quick reference for TDD task JSON structure. For complete
│   ├── IMPL-3.2.json                # Complex feature subtask (if needed)
│   └── ...
└── .process/
    ├── CONFLICT_RESOLUTION.md       # Conflict resolution strategies (if conflict_risk ≥ medium)
    ├── conflict-resolution.json     # Conflict resolution results (if conflict_risk ≥ medium)
    ├── test-context-package.json    # Test coverage analysis
    ├── context-package.json         # Input from context-gather
    ├── context_package_path         # Path to smart context package

@@ -76,6 +76,7 @@ Phase 3: Output Validation (Command)
```javascript
Task(
  subagent_type="cli-execution-agent",
  run_in_background=false,
  description="Analyze test coverage gaps and generate test strategy",
  prompt=`
## TASK OBJECTIVE
@@ -89,7 +90,7 @@ Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.t

## EXECUTION STEPS
1. Execute Gemini analysis:
   cd .workflow/active/{test_session_id}/.process && gemini -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --approval-mode yolo
   ccw cli -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --tool gemini --mode write --cd .workflow/active/{test_session_id}/.process

2. Generate TEST_ANALYSIS_RESULTS.md:
   Synthesize gemini-test-analysis.md into standardized format for task generation

@@ -86,6 +86,7 @@ if (file_exists(testContextPath)) {
```javascript
Task(
  subagent_type="test-context-search-agent",
  run_in_background=false,
  description="Gather test coverage context",
  prompt=`
You are executing as test-context-search-agent (.claude/agents/test-context-search-agent.md).

@@ -94,6 +94,7 @@ Phase 2: Test Document Generation (Agent)
```javascript
Task(
  subagent_type="action-planning-agent",
  run_in_background=false,
  description="Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
  prompt=`
## TASK OBJECTIVE

@@ -320,7 +320,7 @@ Read({base_path}/prototypes/{target}-style-{style_id}-layout-{layout_id}.html)

### Step 1: Run Preview Generation Script
```bash
bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
bash(ccw tool exec ui_generate_preview '{"prototypesDir":"{base_path}/prototypes"}')
```

**Script generates**:
@@ -432,7 +432,7 @@ bash(test -f {base_path}/prototypes/compare.html && echo "exists")
bash(mkdir -p {base_path}/prototypes)

# Run preview script
bash(~/.claude/scripts/ui-generate-preview.sh "{base_path}/prototypes")
bash(ccw tool exec ui_generate_preview '{"prototypesDir":"{base_path}/prototypes"}')
```

## Output Structure
@@ -467,7 +467,7 @@ ERROR: Agent assembly failed
→ Check inputs exist, validate JSON structure

ERROR: Script permission denied
→ chmod +x ~/.claude/scripts/ui-generate-preview.sh
→ Verify ccw tool is available: ccw tool list
```

### Recovery Strategies

@@ -106,7 +106,7 @@ echo " Output: $base_path"

# 3. Discover files using script
discovery_file="${intermediates_dir}/discovered-files.json"
~/.claude/scripts/discover-design-files.sh "$source" "$discovery_file"
ccw tool exec discover_design_files '{"sourceDir":"'"$source"'","outputPath":"'"$discovery_file"'"}'

echo " Output: $discovery_file"
```
@@ -161,6 +161,7 @@ echo "[Phase 1] Starting parallel agent analysis (3 agents)"

```javascript
Task(subagent_type="ui-design-agent",
  run_in_background=false,
  prompt="[STYLE_TOKENS_EXTRACTION]
Extract visual design tokens from code files using code import extraction pattern.

@@ -180,14 +181,14 @@ Task(subagent_type="ui-design-agent",
- Pattern: rg → Extract values → Compare → If different → Read full context with comments → Record conflict
- Alternative (if many files): Execute CLI analysis for comprehensive report:
  \`\`\`bash
  cd ${source} && gemini -p \"
  ccw cli -p \"
  PURPOSE: Detect color token conflicts across all CSS/SCSS/JS files
  TASK: • Scan all files for color definitions • Identify conflicting values • Extract semantic comments
  MODE: analysis
  CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
  EXPECTED: JSON report listing conflicts with file:line, values, semantic context
  RULES: Focus on core tokens | Report ALL variants | analysis=READ-ONLY
  \"
  \" --tool gemini --mode analysis --cd ${source}
  \`\`\`

**Step 1: Load file list**
@@ -276,6 +277,7 @@ Task(subagent_type="ui-design-agent",

```javascript
Task(subagent_type="ui-design-agent",
  run_in_background=false,
  prompt="[ANIMATION_TOKEN_GENERATION_TASK]
Extract animation tokens from code files using code import extraction pattern.

@@ -295,14 +297,14 @@ Task(subagent_type="ui-design-agent",
- Pattern: rg → Identify animation types → Map framework usage → Prioritize extraction targets
- Alternative (if complex framework mix): Execute CLI analysis for comprehensive report:
  \`\`\`bash
  cd ${source} && gemini -p \"
  ccw cli -p \"
  PURPOSE: Detect animation frameworks and patterns
  TASK: • Identify frameworks • Map animation patterns • Categorize by complexity
  MODE: analysis
  CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
  EXPECTED: JSON report listing frameworks, animation types, file locations
  RULES: Focus on framework consistency | Map all animations | analysis=READ-ONLY
  \"
  \" --tool gemini --mode analysis --cd ${source}
  \`\`\`

**Step 1: Load file list**
@@ -355,6 +357,7 @@ Task(subagent_type="ui-design-agent",

```javascript
Task(subagent_type="ui-design-agent",
  run_in_background=false,
  prompt="[LAYOUT_TEMPLATE_GENERATION_TASK]
Extract layout patterns from code files using code import extraction pattern.

@@ -374,14 +377,14 @@ Task(subagent_type="ui-design-agent",
- Pattern: rg → Count occurrences → Classify by frequency → Prioritize universal components
- Alternative (if large codebase): Execute CLI analysis for comprehensive categorization:
  \`\`\`bash
  cd ${source} && gemini -p \"
  ccw cli -p \"
  PURPOSE: Classify components as universal vs specialized
  TASK: • Identify UI components • Classify reusability • Map layout systems
  MODE: analysis
  CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts @**/*.html
  EXPECTED: JSON report categorizing components, layout patterns, naming conventions
  RULES: Focus on component reusability | Identify layout systems | analysis=READ-ONLY
  \"
  \" --tool gemini --mode analysis --cd ${source}
  \`\`\`

**Step 1: Load file list**

@@ -1,35 +0,0 @@
#!/bin/bash
# Classify folders by type for documentation generation
# Usage: get_modules_by_depth.sh | classify-folders.sh
# Output: folder_path|folder_type|code:N|dirs:N

while IFS='|' read -r depth_info path_info files_info types_info claude_info; do
  # Extract folder path from format "path:./src/modules"
  folder_path=$(echo "$path_info" | cut -d':' -f2-)

  # Skip if path extraction failed
  [[ -z "$folder_path" || ! -d "$folder_path" ]] && continue

  # Count code files (maxdepth 1)
  code_files=$(find "$folder_path" -maxdepth 1 -type f \
    \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \
    -o -name "*.py" -o -name "*.go" -o -name "*.java" -o -name "*.rs" \
    -o -name "*.c" -o -name "*.cpp" -o -name "*.cs" \) \
    2>/dev/null | wc -l)

  # Count subdirectories
  subfolders=$(find "$folder_path" -maxdepth 1 -type d \
    -not -path "$folder_path" 2>/dev/null | wc -l)

  # Determine folder type
  if [[ $code_files -gt 0 ]]; then
    folder_type="code"        # API.md + README.md
  elif [[ $subfolders -gt 0 ]]; then
    folder_type="navigation"  # README.md only
  else
    folder_type="skip"        # Empty or no relevant content
  fi

  # Output classification result
  echo "${folder_path}|${folder_type}|code:${code_files}|dirs:${subfolders}"
done
@@ -1,225 +0,0 @@
#!/bin/bash
# Convert design-tokens.json to tokens.css with Google Fonts import and global font rules
# Usage: cat design-tokens.json | ./convert_tokens_to_css.sh > tokens.css
# Or: ./convert_tokens_to_css.sh < design-tokens.json > tokens.css

# Read JSON from stdin
json_input=$(cat)

# Extract metadata for header comment
style_name=$(echo "$json_input" | jq -r '.meta.name // "Unknown Style"' 2>/dev/null || echo "Design Tokens")

# Generate header
cat <<EOF
/* ========================================
   Design Tokens: ${style_name}
   Auto-generated from design-tokens.json
   ======================================== */

EOF

# ========================================
# Google Fonts Import Generation
# ========================================
# Extract font families and generate Google Fonts import URL
fonts=$(echo "$json_input" | jq -r '
  .typography.font_family | to_entries[] | .value
' 2>/dev/null | sed "s/'//g" | cut -d',' -f1 | sort -u)

# Build Google Fonts URL
google_fonts_url="https://fonts.googleapis.com/css2?"
font_params=""

while IFS= read -r font; do
  # Skip system fonts and empty lines
  if [[ -z "$font" ]] || [[ "$font" =~ ^(system-ui|sans-serif|serif|monospace|cursive|fantasy)$ ]]; then
    continue
  fi

  # Special handling for common web fonts with weights
  case "$font" in
    "Comic Neue")
      font_params+="family=Comic+Neue:wght@300;400;700&"
      ;;
    "Patrick Hand"|"Caveat"|"Dancing Script"|"Architects Daughter"|"Indie Flower"|"Shadows Into Light"|"Permanent Marker")
      # URL-encode font name and add common weights
      encoded_font=$(echo "$font" | sed 's/ /+/g')
      font_params+="family=${encoded_font}:wght@400;700&"
      ;;
    "Segoe Print"|"Bradley Hand"|"Chilanka")
      # These are system fonts, skip
      ;;
    *)
      # Generic font: add with default weights
      encoded_font=$(echo "$font" | sed 's/ /+/g')
      font_params+="family=${encoded_font}:wght@400;500;600;700&"
      ;;
  esac
done <<< "$fonts"

# Generate @import if we have fonts
if [[ -n "$font_params" ]]; then
  # Remove trailing &
  font_params="${font_params%&}"
  echo "/* Import Web Fonts */"
  echo "@import url('${google_fonts_url}${font_params}&display=swap');"
  echo ""
fi

# ========================================
# CSS Custom Properties Generation
# ========================================
echo ":root {"

# Colors - Brand
echo " /* Colors - Brand */"
echo "$json_input" | jq -r '
  .colors.brand | to_entries[] |
  " --color-brand-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Colors - Surface
echo " /* Colors - Surface */"
echo "$json_input" | jq -r '
  .colors.surface | to_entries[] |
  " --color-surface-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Colors - Semantic
echo " /* Colors - Semantic */"
echo "$json_input" | jq -r '
  .colors.semantic | to_entries[] |
  " --color-semantic-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Colors - Text
echo " /* Colors - Text */"
echo "$json_input" | jq -r '
  .colors.text | to_entries[] |
  " --color-text-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Colors - Border
echo " /* Colors - Border */"
echo "$json_input" | jq -r '
  .colors.border | to_entries[] |
  " --color-border-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Typography - Font Family
echo " /* Typography - Font Family */"
echo "$json_input" | jq -r '
  .typography.font_family | to_entries[] |
  " --font-family-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Typography - Font Size
echo " /* Typography - Font Size */"
echo "$json_input" | jq -r '
  .typography.font_size | to_entries[] |
  " --font-size-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Typography - Font Weight
echo " /* Typography - Font Weight */"
echo "$json_input" | jq -r '
  .typography.font_weight | to_entries[] |
  " --font-weight-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Typography - Line Height
echo " /* Typography - Line Height */"
echo "$json_input" | jq -r '
  .typography.line_height | to_entries[] |
  " --line-height-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Typography - Letter Spacing
echo " /* Typography - Letter Spacing */"
echo "$json_input" | jq -r '
  .typography.letter_spacing | to_entries[] |
  " --letter-spacing-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Spacing
echo " /* Spacing */"
echo "$json_input" | jq -r '
  .spacing | to_entries[] |
  " --spacing-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Border Radius
echo " /* Border Radius */"
echo "$json_input" | jq -r '
  .border_radius | to_entries[] |
  " --border-radius-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Shadows
echo " /* Shadows */"
echo "$json_input" | jq -r '
  .shadows | to_entries[] |
  " --shadow-\(.key): \(.value);"
' 2>/dev/null

echo ""

# Breakpoints
echo " /* Breakpoints */"
echo "$json_input" | jq -r '
  .breakpoints | to_entries[] |
  " --breakpoint-\(.key): \(.value);"
' 2>/dev/null

echo "}"
echo ""

# ========================================
# Global Font Application
# ========================================
echo "/* ========================================"
echo " Global Font Application"
echo " ======================================== */"
echo ""
echo "body {"
echo " font-family: var(--font-family-body);"
echo " font-size: var(--font-size-base);"
echo " line-height: var(--line-height-normal);"
echo " color: var(--color-text-primary);"
echo " background-color: var(--color-surface-background);"
echo "}"
echo ""
echo "h1, h2, h3, h4, h5, h6, legend {"
echo " font-family: var(--font-family-heading);"
echo "}"
echo ""
echo "/* Reset default margins for better control */"
echo "* {"
echo " margin: 0;"
echo " padding: 0;"
echo " box-sizing: border-box;"
echo "}"
@@ -1,157 +0,0 @@
#!/bin/bash
# Detect modules affected by git changes or recent modifications
# Usage: detect_changed_modules.sh [format]
# format: list|grouped|paths (default: paths)
#
# Features:
# - Respects .gitignore patterns (current directory or git root)
# - Detects git changes (staged, unstaged, or last commit)
# - Falls back to recently modified files (last 24 hours)

# Build exclusion filters from .gitignore
build_exclusion_filters() {
  local filters=""

  # Common system/cache directories to exclude
  local system_excludes=(
    ".git" "__pycache__" "node_modules" ".venv" "venv" "env"
    "dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
    "coverage" ".nyc_output" "logs" "tmp" "temp"
  )

  for exclude in "${system_excludes[@]}"; do
    filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
  done

  # Find and parse .gitignore (current dir first, then git root)
  local gitignore_file=""

  # Check current directory first
  if [ -f ".gitignore" ]; then
    gitignore_file=".gitignore"
  else
    # Try to find git root and check for .gitignore there
    local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
    if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
      gitignore_file="$git_root/.gitignore"
    fi
  fi

  # Parse .gitignore if found
  if [ -n "$gitignore_file" ]; then
    while IFS= read -r line; do
      # Skip empty lines and comments
      [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue

      # Remove trailing slash and whitespace
      line=$(echo "$line" | sed 's|/$||' | xargs)

      # Skip wildcards patterns (too complex for simple find)
      [[ "$line" =~ \* ]] && continue

      # Add to filters
      filters+=" -not -path '*/$line' -not -path '*/$line/*'"
    done < "$gitignore_file"
  fi

  echo "$filters"
}

detect_changed_modules() {
  local format="${1:-paths}"
  local changed_files=""
  local affected_dirs=""
  local exclusion_filters=$(build_exclusion_filters)

  # Step 1: Try to get git changes (staged + unstaged)
  if git rev-parse --git-dir > /dev/null 2>&1; then
    changed_files=$(git diff --name-only HEAD 2>/dev/null; git diff --name-only --cached 2>/dev/null)

    # If no changes in working directory, check last commit
    if [ -z "$changed_files" ]; then
      changed_files=$(git diff --name-only HEAD~1 HEAD 2>/dev/null)
    fi
  fi

  # Step 2: If no git changes, find recently modified source files (last 24 hours)
  # Apply exclusion filters from .gitignore
  if [ -z "$changed_files" ]; then
    changed_files=$(eval "find . -type f \( \
      -name '*.md' -o \
      -name '*.js' -o -name '*.ts' -o -name '*.jsx' -o -name '*.tsx' -o \
      -name '*.py' -o -name '*.go' -o -name '*.rs' -o \
      -name '*.java' -o -name '*.cpp' -o -name '*.c' -o -name '*.h' -o \
      -name '*.sh' -o -name '*.ps1' -o \
      -name '*.json' -o -name '*.yaml' -o -name '*.yml' \
      \) $exclusion_filters -mtime -1 2>/dev/null")
  fi

  # Step 3: Extract unique parent directories
  if [ -n "$changed_files" ]; then
    affected_dirs=$(echo "$changed_files" | \
      sed 's|/[^/]*$||' | \
      grep -v '^\.$' | \
      sort -u)

    # Add current directory if files are in root
    if echo "$changed_files" | grep -q '^[^/]*$'; then
      affected_dirs=$(echo -e ".\n$affected_dirs" | sort -u)
    fi
  fi

  # Step 4: Output in requested format
  case "$format" in
    "list")
      if [ -n "$affected_dirs" ]; then
        echo "$affected_dirs" | while read dir; do
          if [ -d "$dir" ]; then
            local file_count=$(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l)
            local depth=$(echo "$dir" | tr -cd '/' | wc -c)
            if [ "$dir" = "." ]; then depth=0; fi

            local types=$(find "$dir" -maxdepth 1 -type f -name "*.*" 2>/dev/null | \
              grep -E '\.[^/]*$' | sed 's/.*\.//' | sort -u | tr '\n' ',' | sed 's/,$//')
            local has_claude="no"
            [ -f "$dir/CLAUDE.md" ] && has_claude="yes"
            echo "depth:$depth|path:$dir|files:$file_count|types:[$types]|has_claude:$has_claude|status:changed"
          fi
        done
      fi
      ;;

    "grouped")
      if [ -n "$affected_dirs" ]; then
        echo "📊 Affected modules by changes:"
        # Group by depth
        echo "$affected_dirs" | while read dir; do
          if [ -d "$dir" ]; then
            local depth=$(echo "$dir" | tr -cd '/' | wc -c)
            if [ "$dir" = "." ]; then depth=0; fi
            local claude_indicator=""
            [ -f "$dir/CLAUDE.md" ] && claude_indicator=" [✓]"
            echo "$depth:$dir$claude_indicator"
          fi
        done | sort -n | awk -F: '
          {
            if ($1 != prev_depth) {
              if (prev_depth != "") print ""
              print " 📁 Depth " $1 ":"
              prev_depth = $1
            }
            print " - " $2 " (changed)"
          }'
      else
        echo "📊 No recent changes detected"
      fi
      ;;

    "paths"|*)
      echo "$affected_dirs"
      ;;
  esac
}

# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
  detect_changed_modules "$@"
fi
@@ -1,83 +0,0 @@
#!/usr/bin/env bash
# discover-design-files.sh - Discover design-related files and output JSON
# Usage: discover-design-files.sh <source_dir> <output_json>

set -euo pipefail

source_dir="${1:-.}"
output_json="${2:-discovered-files.json}"

# Function to find and format files as JSON array
find_files() {
  local pattern="$1"
  local files
  files=$(eval "find \"$source_dir\" -type f $pattern \
    ! -path \"*/node_modules/*\" \
    ! -path \"*/dist/*\" \
    ! -path \"*/.git/*\" \
    ! -path \"*/build/*\" \
    ! -path \"*/coverage/*\" \
    2>/dev/null | sort || true")

  local count
  if [ -z "$files" ]; then
    count=0
  else
    count=$(echo "$files" | grep -c . || echo 0)
  fi
  local json_files=""

  if [ "$count" -gt 0 ]; then
    json_files=$(echo "$files" | awk '{printf "\"%s\"%s\n", $0, (NR<'$count'?",":"")}' | tr '\n' ' ')
  fi

  echo "$count|$json_files"
}

# Discover CSS/SCSS files
css_result=$(find_files '\( -name "*.css" -o -name "*.scss" \)')
css_count=${css_result%%|*}
css_files=${css_result#*|}

# Discover JS/TS files (all framework files)
js_result=$(find_files '\( -name "*.js" -o -name "*.ts" -o -name "*.jsx" -o -name "*.tsx" -o -name "*.mjs" -o -name "*.cjs" -o -name "*.vue" -o -name "*.svelte" \)')
js_count=${js_result%%|*}
js_files=${js_result#*|}

# Discover HTML files
html_result=$(find_files '-name "*.html"')
html_count=${html_result%%|*}
html_files=${html_result#*|}

# Calculate total
total_count=$((css_count + js_count + html_count))

# Generate JSON
cat > "$output_json" << EOF
{
  "discovery_time": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "source_directory": "$(cd "$source_dir" && pwd)",
  "file_types": {
    "css": {
      "count": $css_count,
      "files": [${css_files}]
    },
    "js": {
      "count": $js_count,
      "files": [${js_files}]
    },
    "html": {
      "count": $html_count,
      "files": [${html_files}]
    }
  },
  "total_files": $total_count
}
EOF

# Ensure file is fully written and synchronized to disk
# This prevents race conditions when the file is immediately read by another process
sync "$output_json" 2>/dev/null || sync  # Sync specific file, fallback to full sync
sleep 0.1  # Additional safety: 100ms delay for filesystem metadata update

echo "Discovered: CSS=$css_count, JS=$js_count, HTML=$html_count (Total: $total_count)" >&2
@@ -1,243 +0,0 @@
/**
 * Animation & Transition Extraction Script
 *
 * Extracts CSS animations, transitions, and transform patterns from a live web page.
 * This script runs in the browser context via Chrome DevTools Protocol.
 *
 * @returns {Object} Structured animation data
 */
(() => {
  const extractionTimestamp = new Date().toISOString();
  const currentUrl = window.location.href;

  /**
   * Parse transition shorthand or individual properties
   */
  function parseTransition(element, computedStyle) {
    const transition = computedStyle.transition || computedStyle.webkitTransition;

    if (!transition || transition === 'none' || transition === 'all 0s ease 0s') {
      return null;
    }

    // Parse shorthand: "property duration easing delay"
    const transitions = [];
    const parts = transition.split(/,\s*/);

    parts.forEach(part => {
      const match = part.match(/^(\S+)\s+([\d.]+m?s)\s+(\S+)(?:\s+([\d.]+m?s))?/);
      if (match) {
        transitions.push({
          property: match[1],
          duration: match[2],
          easing: match[3],
          delay: match[4] || '0s'
        });
      }
    });

    return transitions.length > 0 ? transitions : null;
  }

  /**
   * Extract animation name and properties
   */
  function parseAnimation(element, computedStyle) {
    const animationName = computedStyle.animationName || computedStyle.webkitAnimationName;

    if (!animationName || animationName === 'none') {
      return null;
    }

    return {
      name: animationName,
      duration: computedStyle.animationDuration || computedStyle.webkitAnimationDuration,
      easing: computedStyle.animationTimingFunction || computedStyle.webkitAnimationTimingFunction,
      delay: computedStyle.animationDelay || computedStyle.webkitAnimationDelay || '0s',
      iterationCount: computedStyle.animationIterationCount || computedStyle.webkitAnimationIterationCount || '1',
      direction: computedStyle.animationDirection || computedStyle.webkitAnimationDirection || 'normal',
      fillMode: computedStyle.animationFillMode || computedStyle.webkitAnimationFillMode || 'none'
    };
  }

  /**
   * Extract transform value
   */
  function parseTransform(computedStyle) {
    const transform = computedStyle.transform || computedStyle.webkitTransform;

    if (!transform || transform === 'none') {
      return null;
    }

    return transform;
  }

  /**
   * Get element selector (simplified for readability)
   */
  function getSelector(element) {
    if (element.id) {
      return `#${element.id}`;
    }

    if (element.className && typeof element.className === 'string') {
      const classes = element.className.trim().split(/\s+/).slice(0, 2).join('.');
      if (classes) {
        return `.${classes}`;
      }
    }

    return element.tagName.toLowerCase();
  }

  /**
   * Extract all stylesheets and find @keyframes rules
   */
  function extractKeyframes() {
    const keyframes = {};

    try {
      // Iterate through all stylesheets
      Array.from(document.styleSheets).forEach(sheet => {
        try {
          // Skip external stylesheets due to CORS
          if (sheet.href && !sheet.href.startsWith(window.location.origin)) {
            return;
          }

          Array.from(sheet.cssRules || sheet.rules || []).forEach(rule => {
            // Check for @keyframes rules
            if (rule.type === CSSRule.KEYFRAMES_RULE || rule.type === CSSRule.WEBKIT_KEYFRAMES_RULE) {
              const name = rule.name;
              const frames = {};

              Array.from(rule.cssRules || []).forEach(keyframe => {
                const key = keyframe.keyText; // e.g., "0%", "50%", "100%"
                frames[key] = keyframe.style.cssText;
              });

              keyframes[name] = frames;
            }
          });
        } catch (e) {
          // Skip stylesheets that can't be accessed (CORS)
          console.warn('Cannot access stylesheet:', sheet.href, e.message);
        }
      });
    } catch (e) {
      console.error('Error extracting keyframes:', e);
    }

    return keyframes;
  }

  /**
   * Scan visible elements for animations and transitions
   */
  function scanElements() {
    const elements = document.querySelectorAll('*');
    const transitionData = [];
    const animationData = [];
    const transformData = [];

    const uniqueTransitions = new Set();
    const uniqueAnimations = new Set();
    const uniqueEasings = new Set();
    const uniqueDurations = new Set();

    elements.forEach(element => {
      // Skip invisible elements
      const rect = element.getBoundingClientRect();
      if (rect.width === 0 && rect.height === 0) {
        return;
      }

      const computedStyle = window.getComputedStyle(element);

      // Extract transitions
      const transitions = parseTransition(element, computedStyle);
      if (transitions) {
        const selector = getSelector(element);
        transitions.forEach(t => {
          const key = `${t.property}-${t.duration}-${t.easing}`;
          if (!uniqueTransitions.has(key)) {
            uniqueTransitions.add(key);
            transitionData.push({
              selector,
              ...t
            });
            uniqueEasings.add(t.easing);
            uniqueDurations.add(t.duration);
          }
        });
      }

      // Extract animations
      const animation = parseAnimation(element, computedStyle);
      if (animation) {
        const selector = getSelector(element);
        const key = `${animation.name}-${animation.duration}`;
        if (!uniqueAnimations.has(key)) {
          uniqueAnimations.add(key);
          animationData.push({
            selector,
            ...animation
          });
          uniqueEasings.add(animation.easing);
          uniqueDurations.add(animation.duration);
        }
      }

      // Extract transforms (on hover/active, we only get current state)
      const transform = parseTransform(computedStyle);
      if (transform) {
        const selector = getSelector(element);
        transformData.push({
          selector,
          transform
        });
      }
    });

    return {
      transitions: transitionData,
      animations: animationData,
      transforms: transformData,
      uniqueEasings: Array.from(uniqueEasings),
      uniqueDurations: Array.from(uniqueDurations)
    };
  }

  /**
   * Main extraction function
   */
  function extractAnimations() {
    const elementData = scanElements();
    const keyframes = extractKeyframes();

    return {
      metadata: {
        timestamp: extractionTimestamp,
        url: currentUrl,
        method: 'chrome-devtools',
        version: '1.0.0'
      },
      transitions: elementData.transitions,
      animations: elementData.animations,
      transforms: elementData.transforms,
      keyframes: keyframes,
      summary: {
        total_transitions: elementData.transitions.length,
        total_animations: elementData.animations.length,
        total_transforms: elementData.transforms.length,
        total_keyframes: Object.keys(keyframes).length,
        unique_easings: elementData.uniqueEasings,
        unique_durations: elementData.uniqueDurations
      }
    };
  }

  // Execute extraction
  return extractAnimations();
})();
@@ -1,118 +0,0 @@
/**
 * Extract Computed Styles from DOM
 *
 * This script extracts real CSS computed styles from a webpage's DOM
 * to provide accurate design tokens for UI replication.
 *
 * Usage: Execute this function via Chrome DevTools evaluate_script
 */

(() => {
  /**
   * Extract unique values from a set and sort them
   */
  const uniqueSorted = (set) => {
    return Array.from(set)
      .filter(v => v && v !== 'none' && v !== '0px' && v !== 'rgba(0, 0, 0, 0)')
      .sort();
  };

  /**
   * Parse rgb/rgba to OKLCH format (placeholder - returns original for now)
   */
  const toOKLCH = (color) => {
    // TODO: Implement actual RGB to OKLCH conversion
    // For now, return the original color with a note
    return `${color} /* TODO: Convert to OKLCH */`;
  };

  /**
   * Extract only key styles from an element
   */
  const extractKeyStyles = (element) => {
    const s = window.getComputedStyle(element);
    return {
      color: s.color,
      bg: s.backgroundColor,
      borderRadius: s.borderRadius,
      boxShadow: s.boxShadow,
      fontSize: s.fontSize,
      fontWeight: s.fontWeight,
      padding: s.padding,
      margin: s.margin
    };
  };

  /**
   * Main extraction function - extract all critical design tokens
   */
  const extractDesignTokens = () => {
    // Include all key UI elements
    const selectors = [
      'button', '.btn', '[role="button"]',
      'input', 'textarea', 'select',
      'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
      '.card', 'article', 'section',
      'a', 'p', 'nav', 'header', 'footer'
    ];

    // Collect all design tokens
    const tokens = {
      colors: new Set(),
      borderRadii: new Set(),
      shadows: new Set(),
      fontSizes: new Set(),
      fontWeights: new Set(),
      spacing: new Set()
    };

    // Extract from all elements
    selectors.forEach(selector => {
      try {
        const elements = document.querySelectorAll(selector);
        elements.forEach(element => {
          const s = extractKeyStyles(element);

          // Collect all tokens (no limits)
          if (s.color && s.color !== 'rgba(0, 0, 0, 0)') tokens.colors.add(s.color);
          if (s.bg && s.bg !== 'rgba(0, 0, 0, 0)') tokens.colors.add(s.bg);
          if (s.borderRadius && s.borderRadius !== '0px') tokens.borderRadii.add(s.borderRadius);
          if (s.boxShadow && s.boxShadow !== 'none') tokens.shadows.add(s.boxShadow);
          if (s.fontSize) tokens.fontSizes.add(s.fontSize);
          if (s.fontWeight) tokens.fontWeights.add(s.fontWeight);

          // Extract all spacing values
          [s.padding, s.margin].forEach(val => {
            if (val && val !== '0px') {
              val.split(' ').forEach(v => {
                if (v && v !== '0px') tokens.spacing.add(v);
              });
            }
          });
        });
      } catch (e) {
        console.warn(`Error: ${selector}`, e);
      }
    });

    // Return all tokens (no element details to save context)
    return {
      metadata: {
        extractedAt: new Date().toISOString(),
        url: window.location.href,
        method: 'computed-styles'
      },
      tokens: {
        colors: uniqueSorted(tokens.colors),
        borderRadii: uniqueSorted(tokens.borderRadii), // ALL radius values
        shadows: uniqueSorted(tokens.shadows),         // ALL shadows
        fontSizes: uniqueSorted(tokens.fontSizes),
        fontWeights: uniqueSorted(tokens.fontWeights),
        spacing: uniqueSorted(tokens.spacing)
      }
    };
  };

  // Execute and return results
  return extractDesignTokens();
})();
@@ -1,411 +0,0 @@
/**
 * Extract Layout Structure from DOM - Enhanced Version
 *
 * Extracts real layout information from DOM to provide accurate
 * structural data for UI replication.
 *
 * Features:
 * - Framework detection (Nuxt.js, Next.js, React, Vue, Angular)
 * - Multi-strategy container detection (strict → relaxed → class-based → framework-specific)
 * - Intelligent main content detection with common class names support
 * - Supports modern SPA frameworks
 * - Detects non-semantic main containers (.main, .content, etc.)
 * - Progressive exploration: Auto-discovers missing selectors when standard patterns fail
 * - Suggests new class names to add to script based on actual page structure
 *
 * Progressive Exploration:
 * When fewer than 3 main containers are found, the script automatically:
 * 1. Analyzes all large visible containers (≥500×300px)
 * 2. Extracts class name patterns (main/content/wrapper/container/page/etc.)
 * 3. Suggests new selectors to add to the script
 * 4. Returns exploration data in result.exploration
 *
 * Usage: Execute via Chrome DevTools evaluate_script
 * Version: 2.2.0
 */

(() => {
  /**
   * Get element's bounding box relative to viewport
   */
  const getBounds = (element) => {
    const rect = element.getBoundingClientRect();
    return {
      x: Math.round(rect.x),
      y: Math.round(rect.y),
      width: Math.round(rect.width),
      height: Math.round(rect.height)
    };
  };

  /**
   * Extract layout properties from an element
   */
  const extractLayoutProps = (element) => {
    const s = window.getComputedStyle(element);

    return {
      // Core layout
      display: s.display,
      position: s.position,

      // Flexbox
      flexDirection: s.flexDirection,
      justifyContent: s.justifyContent,
      alignItems: s.alignItems,
      flexWrap: s.flexWrap,
      gap: s.gap,

      // Grid
      gridTemplateColumns: s.gridTemplateColumns,
      gridTemplateRows: s.gridTemplateRows,
      gridAutoFlow: s.gridAutoFlow,

      // Dimensions
      width: s.width,
      height: s.height,
      maxWidth: s.maxWidth,
      minWidth: s.minWidth,

      // Spacing
      padding: s.padding,
      margin: s.margin
    };
  };

  /**
   * Identify layout pattern for an element
   */
  const identifyPattern = (props) => {
    const { display, flexDirection, gridTemplateColumns } = props;

    if (display === 'flex' || display === 'inline-flex') {
      if (flexDirection === 'column') return 'flex-column';
      if (flexDirection === 'row') return 'flex-row';
      return 'flex';
    }

    if (display === 'grid') {
      const cols = gridTemplateColumns;
      if (cols && cols !== 'none') {
        const colCount = cols.split(' ').length;
        return `grid-${colCount}col`;
      }
      return 'grid';
    }

    if (display === 'block') return 'block';

    return display;
  };

  /**
   * Detect frontend framework
   */
  const detectFramework = () => {
    if (document.querySelector('#__nuxt')) return { name: 'Nuxt.js', version: 'unknown' };
    if (document.querySelector('#__next')) return { name: 'Next.js', version: 'unknown' };
    if (document.querySelector('[data-reactroot]')) return { name: 'React', version: 'unknown' };
    if (document.querySelector('[ng-version]')) return { name: 'Angular', version: 'unknown' };
    if (window.Vue) return { name: 'Vue.js', version: window.Vue.version || 'unknown' };
    return { name: 'Unknown', version: 'unknown' };
  };

  /**
   * Build layout tree recursively
   */
  const buildLayoutTree = (element, depth = 0, maxDepth = 3) => {
    if (depth > maxDepth) return null;

    const props = extractLayoutProps(element);
    const bounds = getBounds(element);
    const pattern = identifyPattern(props);

    // Get semantic role
    const tagName = element.tagName.toLowerCase();
    const classes = Array.from(element.classList).slice(0, 3); // Max 3 classes
    const role = element.getAttribute('role');

    // Build node
    const node = {
      tag: tagName,
      classes: classes,
      role: role,
      pattern: pattern,
      bounds: bounds,
      layout: {
        display: props.display,
        position: props.position
      }
    };

    // Add flex/grid specific properties
    if (props.display === 'flex' || props.display === 'inline-flex') {
      node.layout.flexDirection = props.flexDirection;
      node.layout.justifyContent = props.justifyContent;
      node.layout.alignItems = props.alignItems;
      node.layout.gap = props.gap;
    }

    if (props.display === 'grid') {
      node.layout.gridTemplateColumns = props.gridTemplateColumns;
      node.layout.gridTemplateRows = props.gridTemplateRows;
      node.layout.gap = props.gap;
    }

    // Process children for container elements
    if (props.display === 'flex' || props.display === 'grid' || props.display === 'block') {
      const children = Array.from(element.children);
      if (children.length > 0 && children.length < 50) { // Limit to 50 children
        node.children = children
          .map(child => buildLayoutTree(child, depth + 1, maxDepth))
          .filter(child => child !== null);
      }
    }

    return node;
  };

  /**
   * Find main layout containers with multi-strategy approach
   */
  const findMainContainers = () => {
    const containers = [];
    const found = new Set();

    // Strategy 1: Strict selectors (body direct children)
    const strictSelectors = [
      'body > header',
      'body > nav',
      'body > main',
      'body > footer'
    ];

    // Strategy 2: Relaxed selectors (any level)
    const relaxedSelectors = [
      'header',
      'nav',
      'main',
      'footer',
      '[role="banner"]',
      '[role="navigation"]',
      '[role="main"]',
      '[role="contentinfo"]'
    ];

    // Strategy 3: Common class-based main content selectors
    const commonClassSelectors = [
      '.main',
      '.content',
      '.main-content',
      '.page-content',
      '.container.main',
      '.wrapper > .main',
      'div[class*="main-wrapper"]',
      'div[class*="content-wrapper"]'
    ];

    // Strategy 4: Framework-specific selectors
    const frameworkSelectors = [
      '#__nuxt header', '#__nuxt .main', '#__nuxt main', '#__nuxt footer',
      '#__next header', '#__next .main', '#__next main', '#__next footer',
      '#app header', '#app .main', '#app main', '#app footer',
      '[data-app] header', '[data-app] .main', '[data-app] main', '[data-app] footer'
    ];

    // Try all strategies
    const allSelectors = [...strictSelectors, ...relaxedSelectors, ...commonClassSelectors, ...frameworkSelectors];

    allSelectors.forEach(selector => {
      try {
        const elements = document.querySelectorAll(selector);
        elements.forEach(element => {
          // Avoid duplicates and invisible elements
          if (!found.has(element) && element.offsetParent !== null) {
            found.add(element);
            const tree = buildLayoutTree(element, 0, 3);
            if (tree && tree.bounds.width > 0 && tree.bounds.height > 0) {
              containers.push(tree);
            }
          }
        });
      } catch (e) {
        console.warn(`Selector failed: ${selector}`, e);
      }
    });

    // Fallback: If no containers found, use body's direct children
    if (containers.length === 0) {
      Array.from(document.body.children).forEach(child => {
        if (child.offsetParent !== null && !found.has(child)) {
          const tree = buildLayoutTree(child, 0, 2);
          if (tree && tree.bounds.width > 100 && tree.bounds.height > 100) {
            containers.push(tree);
          }
        }
      });
    }

    return containers;
  };

  /**
   * Progressive exploration: Discover main containers when standard selectors fail
   * Analyzes large visible containers and suggests class name patterns
   */
  const exploreMainContainers = () => {
    const candidates = [];
    const minWidth = 500;
    const minHeight = 300;

    // Find all large visible divs
    const allDivs = document.querySelectorAll('div');
    allDivs.forEach(div => {
      const rect = div.getBoundingClientRect();
      const style = window.getComputedStyle(div);

      // Filter: large size, visible, not header/footer
      if (rect.width >= minWidth &&
          rect.height >= minHeight &&
          div.offsetParent !== null &&
          !div.closest('header') &&
          !div.closest('footer')) {

        const classes = Array.from(div.classList);
        const area = rect.width * rect.height;

        candidates.push({
          element: div,
          classes: classes,
          area: area,
          bounds: {
            width: Math.round(rect.width),
            height: Math.round(rect.height)
          },
          display: style.display,
          depth: getElementDepth(div)
        });
      }
    });

    // Sort by area (largest first) and take top candidates
    candidates.sort((a, b) => b.area - a.area);

    // Extract unique class patterns from top candidates
    const classPatterns = new Set();
    candidates.slice(0, 20).forEach(c => {
      c.classes.forEach(cls => {
        // Identify potential main content class patterns
        if (cls.match(/main|content|container|wrapper|page|body|layout|app/i)) {
          classPatterns.add(cls);
        }
      });
    });

    return {
      candidates: candidates.slice(0, 10).map(c => ({
        classes: c.classes,
        bounds: c.bounds,
        display: c.display,
        depth: c.depth
      })),
      suggestedSelectors: Array.from(classPatterns).map(cls => `.${cls}`)
    };
  };

  /**
   * Get element depth in DOM tree
   */
  const getElementDepth = (element) => {
    let depth = 0;
    let current = element;
    while (current.parentElement) {
      depth++;
      current = current.parentElement;
    }
    return depth;
  };

  /**
   * Analyze layout patterns
   */
  const analyzePatterns = (containers) => {
    const patterns = {
      flexColumn: 0,
      flexRow: 0,
      grid: 0,
      sticky: 0,
      fixed: 0
    };

    const analyze = (node) => {
      if (!node) return;

      if (node.pattern === 'flex-column') patterns.flexColumn++;
      if (node.pattern === 'flex-row') patterns.flexRow++;
      if (node.pattern && node.pattern.startsWith('grid')) patterns.grid++;
      if (node.layout.position === 'sticky') patterns.sticky++;
      if (node.layout.position === 'fixed') patterns.fixed++;

      if (node.children) {
        node.children.forEach(analyze);
      }
    };

    containers.forEach(analyze);
|
||||
return patterns;
|
||||
};
|
||||
|
||||
/**
|
||||
* Main extraction function with progressive exploration
|
||||
*/
|
||||
const extractLayout = () => {
|
||||
const framework = detectFramework();
|
||||
const containers = findMainContainers();
|
||||
const patterns = analyzePatterns(containers);
|
||||
|
||||
// Progressive exploration: if too few containers found, explore and suggest
|
||||
let exploration = null;
|
||||
const minExpectedContainers = 3; // At least header, main, footer
|
||||
|
||||
if (containers.length < minExpectedContainers) {
|
||||
exploration = exploreMainContainers();
|
||||
|
||||
// Add warning message
|
||||
exploration.warning = `Only ${containers.length} containers found. Consider adding these selectors to the script:`;
|
||||
exploration.recommendation = exploration.suggestedSelectors.join(', ');
|
||||
}
|
||||
|
||||
const result = {
|
||||
metadata: {
|
||||
extractedAt: new Date().toISOString(),
|
||||
url: window.location.href,
|
||||
framework: framework,
|
||||
method: 'layout-structure-enhanced',
|
||||
version: '2.2.0'
|
||||
},
|
||||
statistics: {
|
||||
totalContainers: containers.length,
|
||||
patterns: patterns
|
||||
},
|
||||
structure: containers
|
||||
};
|
||||
|
||||
// Add exploration results if triggered
|
||||
if (exploration) {
|
||||
result.exploration = {
|
||||
triggered: true,
|
||||
reason: 'Insufficient containers found with standard selectors',
|
||||
discoveredCandidates: exploration.candidates,
|
||||
suggestedSelectors: exploration.suggestedSelectors,
|
||||
warning: exploration.warning,
|
||||
recommendation: exploration.recommendation
|
||||
};
|
||||
}
|
||||
|
||||
return result;
|
||||
};
|
||||
|
||||
// Execute and return results
|
||||
return extractLayout();
|
||||
})();
|
||||
@@ -1,713 +0,0 @@
#!/bin/bash
# Generate documentation for modules and projects with multiple strategies
# Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]
#   strategy: full|single|project-readme|project-architecture|http-api
#   source_path: Path to the source module directory (or project root for project-level docs)
#   project_name: Project name for output path (e.g., "myproject")
#   tool: gemini|qwen|codex (default: gemini)
#   model: Model name (optional, uses tool defaults)
#
# Default Models:
#   gemini: gemini-2.5-flash
#   qwen: coder-model
#   codex: gpt5-codex
#
# Module-Level Strategies:
#   full: Full documentation generation
#     - Read: All files in current and subdirectories (@**/*)
#     - Generate: API.md + README.md for each directory containing code files
#     - Use: Deep directories (Layer 3), comprehensive documentation
#
#   single: Single-layer documentation
#     - Read: Current directory code + child API.md/README.md files
#     - Generate: API.md + README.md only in current directory
#     - Use: Upper layers (Layer 1-2), incremental updates
#
# Project-Level Strategies:
#   project-readme: Project overview documentation
#     - Read: All module API.md and README.md files
#     - Generate: README.md (project root)
#     - Use: After all module docs are generated
#
#   project-architecture: System design documentation
#     - Read: All module docs + project README
#     - Generate: ARCHITECTURE.md + EXAMPLES.md
#     - Use: After project README is generated
#
#   http-api: HTTP API documentation
#     - Read: API route files + existing docs
#     - Generate: api/README.md
#     - Use: For projects with HTTP APIs
#
# Output Structure:
#   Module docs: .workflow/docs/{project_name}/{source_path}/API.md
#   Module docs: .workflow/docs/{project_name}/{source_path}/README.md
#   Project docs: .workflow/docs/{project_name}/README.md
#   Project docs: .workflow/docs/{project_name}/ARCHITECTURE.md
#   Project docs: .workflow/docs/{project_name}/EXAMPLES.md
#   API docs: .workflow/docs/{project_name}/api/README.md
#
# Features:
#   - Path mirroring: source structure → docs structure
#   - Template-driven generation
#   - Respects .gitignore patterns
#   - Detects code vs navigation folders
#   - Tool fallback support

# Build exclusion filters from .gitignore
build_exclusion_filters() {
    local filters=""

    # Common system/cache directories to exclude
    local system_excludes=(
        ".git" "__pycache__" "node_modules" ".venv" "venv" "env"
        "dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
        "coverage" ".nyc_output" "logs" "tmp" "temp" ".workflow"
    )

    for exclude in "${system_excludes[@]}"; do
        filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
    done

    # Find and parse .gitignore (current dir first, then git root)
    local gitignore_file=""

    # Check current directory first
    if [ -f ".gitignore" ]; then
        gitignore_file=".gitignore"
    else
        # Try to find git root and check for .gitignore there
        local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
        if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
            gitignore_file="$git_root/.gitignore"
        fi
    fi

    # Parse .gitignore if found
    if [ -n "$gitignore_file" ]; then
        while IFS= read -r line; do
            # Skip empty lines and comments
            [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue

            # Remove trailing slash and whitespace
            line=$(echo "$line" | sed 's|/$||' | xargs)

            # Skip wildcard patterns (too complex for simple find)
            [[ "$line" =~ \* ]] && continue

            # Add to filters
            filters+=" -not -path '*/$line' -not -path '*/$line/*'"
        done < "$gitignore_file"
    fi

    echo "$filters"
}

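# Illustrative sketch (not part of the original script): the function emits a
# single string of quoted -not -path fragments, so callers must expand it with
# eval rather than plain quoting. For example, assuming a .gitignore that
# contains "secrets/", usage might look like:
#
#   filters=$(build_exclusion_filters)
#   eval "find ./src -type f $filters" | wc -l
#
# The "./src" path and "secrets/" entry are hypothetical; the eval pattern is
# how the functions below consume the filters.
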
# Detect folder type (code vs navigation)
detect_folder_type() {
    local target_path="$1"
    local exclusion_filters="$2"

    # Count code files (primary indicators)
    local code_count=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)

    if [ $code_count -gt 0 ]; then
        echo "code"
    else
        echo "navigation"
    fi
}

# Scan directory structure and generate structured information
scan_directory_structure() {
    local target_path="$1"
    local strategy="$2"

    if [ ! -d "$target_path" ]; then
        echo "Directory not found: $target_path"
        return 1
    fi

    local exclusion_filters=$(build_exclusion_filters)
    local structure_info=""

    # Get basic directory info
    local dir_name=$(basename "$target_path")
    local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
    local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
    local folder_type=$(detect_folder_type "$target_path" "$exclusion_filters")

    structure_info+="Directory: $dir_name\n"
    structure_info+="Total files: $total_files\n"
    structure_info+="Total directories: $total_dirs\n"
    structure_info+="Folder type: $folder_type\n\n"

    if [ "$strategy" = "full" ]; then
        # For full: show all subdirectories with file counts
        structure_info+="Subdirectories with files:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
                local rel_path=${dir#$target_path/}
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                if [ $file_count -gt 0 ]; then
                    local subdir_type=$(detect_folder_type "$dir" "$exclusion_filters")
                    structure_info+="  - $rel_path/ ($file_count files, type: $subdir_type)\n"
                fi
            fi
        done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
    else
        # For single: show direct children only
        structure_info+="Direct subdirectories:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ]; then
                local dir_name=$(basename "$dir")
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                local has_api=$([ -f "$dir/API.md" ] && echo " [has API.md]" || echo "")
                local has_readme=$([ -f "$dir/README.md" ] && echo " [has README.md]" || echo "")
                structure_info+="  - $dir_name/ ($file_count files)$has_api$has_readme\n"
            fi
        done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
    fi

    # Show main file types in current directory
    structure_info+="\nCurrent directory files:\n"
    local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)

    structure_info+="  - Code files: $code_files\n"
    structure_info+="  - Config files: $config_files\n"
    structure_info+="  - Documentation: $doc_files\n"

    printf "%b" "$structure_info"
}

# Calculate output path based on source path and project name
calculate_output_path() {
    local source_path="$1"
    local project_name="$2"
    local project_root="$3"

    # Get absolute path of source (normalize to Unix-style path)
    local abs_source=$(cd "$source_path" && pwd)

    # Normalize project root to same format
    local norm_project_root=$(cd "$project_root" && pwd)

    # Calculate relative path from project root
    local rel_path="${abs_source#$norm_project_root}"

    # Remove leading slash if present
    rel_path="${rel_path#/}"

    # If source is project root, use project name directly
    if [ "$abs_source" = "$norm_project_root" ] || [ -z "$rel_path" ]; then
        echo "$norm_project_root/.workflow/docs/$project_name"
    else
        echo "$norm_project_root/.workflow/docs/$project_name/$rel_path"
    fi
}

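# Worked example (illustrative; paths hypothetical): with project root /repo,
# the path mirroring behaves as follows:
#
#   calculate_output_path /repo/src/auth myproject /repo
#   # -> /repo/.workflow/docs/myproject/src/auth
#
#   calculate_output_path /repo myproject /repo
#   # -> /repo/.workflow/docs/myproject   (source == project root collapses)
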
generate_module_docs() {
    local strategy="$1"
    local source_path="$2"
    local project_name="$3"
    local tool="${4:-gemini}"
    local model="$5"

    # Validate parameters
    if [ -z "$strategy" ] || [ -z "$source_path" ] || [ -z "$project_name" ]; then
        echo "❌ Error: Strategy, source path, and project name are required"
        echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
        echo "Module strategies: full, single"
        echo "Project strategies: project-readme, project-architecture, http-api"
        return 1
    fi

    # Validate strategy
    local valid_strategies=("full" "single" "project-readme" "project-architecture" "http-api")
    local strategy_valid=false
    for valid_strategy in "${valid_strategies[@]}"; do
        if [ "$strategy" = "$valid_strategy" ]; then
            strategy_valid=true
            break
        fi
    done

    if [ "$strategy_valid" = false ]; then
        echo "❌ Error: Invalid strategy '$strategy'"
        echo "Valid module strategies: full, single"
        echo "Valid project strategies: project-readme, project-architecture, http-api"
        return 1
    fi

    if [ ! -d "$source_path" ]; then
        echo "❌ Error: Source directory '$source_path' does not exist"
        return 1
    fi

    # Set default models if not specified
    if [ -z "$model" ]; then
        case "$tool" in
            gemini)
                model="gemini-2.5-flash"
                ;;
            qwen)
                model="coder-model"
                ;;
            codex)
                model="gpt5-codex"
                ;;
            *)
                model=""
                ;;
        esac
    fi

    # Build exclusion filters
    local exclusion_filters=$(build_exclusion_filters)

    # Get project root
    local project_root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)

    # Determine if this is a project-level strategy
    local is_project_level=false
    if [[ "$strategy" =~ ^project- ]] || [ "$strategy" = "http-api" ]; then
        is_project_level=true
    fi

    # Calculate output path
    local output_path
    if [ "$is_project_level" = true ]; then
        # Project-level docs go to project root
        if [ "$strategy" = "http-api" ]; then
            output_path="$project_root/.workflow/docs/$project_name/api"
        else
            output_path="$project_root/.workflow/docs/$project_name"
        fi
    else
        output_path=$(calculate_output_path "$source_path" "$project_name" "$project_root")
    fi

    # Create output directory
    mkdir -p "$output_path"

    # Detect folder type (only for module-level strategies)
    local folder_type=""
    if [ "$is_project_level" = false ]; then
        folder_type=$(detect_folder_type "$source_path" "$exclusion_filters")
    fi

    # Load templates based on strategy
    local api_template=""
    local readme_template=""
    local template_content=""

    if [ "$is_project_level" = true ]; then
        # Project-level templates
        case "$strategy" in
            project-readme)
                local proj_readme_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-readme.txt"
                if [ -f "$proj_readme_path" ]; then
                    template_content=$(cat "$proj_readme_path")
                    echo "  📋 Loaded Project README template: $(wc -l < "$proj_readme_path") lines"
                fi
                ;;
            project-architecture)
                local arch_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-architecture.txt"
                local examples_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-examples.txt"
                if [ -f "$arch_path" ]; then
                    template_content=$(cat "$arch_path")
                    echo "  📋 Loaded Architecture template: $(wc -l < "$arch_path") lines"
                fi
                if [ -f "$examples_path" ]; then
                    template_content="$template_content

EXAMPLES TEMPLATE:
$(cat "$examples_path")"
                    echo "  📋 Loaded Examples template: $(wc -l < "$examples_path") lines"
                fi
                ;;
            http-api)
                local api_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
                if [ -f "$api_path" ]; then
                    template_content=$(cat "$api_path")
                    echo "  📋 Loaded HTTP API template: $(wc -l < "$api_path") lines"
                fi
                ;;
        esac
    else
        # Module-level templates
        local api_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
        local readme_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/module-readme.txt"
        local nav_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/folder-navigation.txt"

        if [ "$folder_type" = "code" ]; then
            if [ -f "$api_template_path" ]; then
                api_template=$(cat "$api_template_path")
                echo "  📋 Loaded API template: $(wc -l < "$api_template_path") lines"
            fi
            if [ -f "$readme_template_path" ]; then
                readme_template=$(cat "$readme_template_path")
                echo "  📋 Loaded README template: $(wc -l < "$readme_template_path") lines"
            fi
        else
            # Navigation folder uses navigation template
            if [ -f "$nav_template_path" ]; then
                readme_template=$(cat "$nav_template_path")
                echo "  📋 Loaded Navigation template: $(wc -l < "$nav_template_path") lines"
            fi
        fi
    fi

    # Scan directory structure (only for module-level strategies)
    local structure_info=""
    if [ "$is_project_level" = false ]; then
        echo "  🔍 Scanning directory structure..."
        structure_info=$(scan_directory_structure "$source_path" "$strategy")
    fi

    # Prepare logging info
    local module_name=$(basename "$source_path")

    echo "⚡ Generating docs: $source_path → $output_path"
    echo "   Strategy: $strategy | Tool: $tool | Model: $model | Type: $folder_type"
    echo "   Output: $output_path"

    # Build strategy-specific prompt
    local final_prompt=""

    # Project-level strategies
    if [ "$strategy" = "project-readme" ]; then
        final_prompt="PURPOSE: Generate comprehensive project overview documentation

PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)

Read: @.workflow/docs/$project_name/**/*.md

Context: All module documentation files from the project

Generate ONE documentation file in current directory:
- README.md - Project root documentation

Template:
$template_content

Instructions:
- Create README.md in CURRENT DIRECTORY
- Synthesize information from all module docs
- Include project overview, getting started, and navigation
- Create clear module navigation with links
- Follow template structure exactly"

    elif [ "$strategy" = "project-architecture" ]; then
        final_prompt="PURPOSE: Generate system design and usage examples documentation

PROJECT: $project_name
OUTPUT: Current directory (files will be moved to final location)

Read: @.workflow/docs/$project_name/**/*.md

Context: All project documentation including module docs and project README

Generate TWO documentation files in current directory:
1. ARCHITECTURE.md - System architecture and design patterns
2. EXAMPLES.md - End-to-end usage examples

Template:
$template_content

Instructions:
- Create both ARCHITECTURE.md and EXAMPLES.md in CURRENT DIRECTORY
- Synthesize architectural patterns from module documentation
- Document system structure, module relationships, and design decisions
- Provide practical code examples and usage scenarios
- Follow template structure for both files"

    elif [ "$strategy" = "http-api" ]; then
        final_prompt="PURPOSE: Generate HTTP API reference documentation

PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)

Read: @**/*.{ts,js,py,go,rs} @.workflow/docs/$project_name/**/*.md

Context: API route files and existing documentation

Generate ONE documentation file in current directory:
- README.md - HTTP API documentation (in api/ subdirectory)

Template:
$template_content

Instructions:
- Create README.md in CURRENT DIRECTORY
- Document all HTTP endpoints (routes, methods, parameters, responses)
- Include authentication requirements and error codes
- Provide request/response examples
- Follow template structure (Part B: HTTP API documentation)"

    # Module-level strategies
    elif [ "$strategy" = "full" ]; then
        # Full strategy: read all files, generate for each directory
        if [ "$folder_type" = "code" ]; then
            final_prompt="PURPOSE: Generate comprehensive API and module documentation

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)

Read: @**/*

Generate TWO documentation files in current directory:
1. API.md - Code API documentation (functions, classes, interfaces)
Template:
$api_template

2. README.md - Module overview documentation
Template:
$readme_template

Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- If subdirectories contain code files, generate their docs too (recursive)
- Work bottom-up: deepest directories first
- Follow template structure exactly
- Use structure analysis for context"
        else
            # Navigation folder - README only
            final_prompt="PURPOSE: Generate navigation documentation for folder structure

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)

Read: @**/*

Generate ONE documentation file in current directory:
- README.md - Navigation and folder overview

Template:
$readme_template

Instructions:
- Create README.md in CURRENT DIRECTORY
- Focus on folder structure and navigation
- Link to subdirectory documentation
- Use structure analysis for context"
        fi
    else
        # Single strategy: read current + child docs only
        if [ "$folder_type" = "code" ]; then
            final_prompt="PURPOSE: Generate API and module documentation for current directory

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)

Read: @*/API.md @*/README.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.go @*.rs @*.md @*.json @*.yaml @*.yml

Generate TWO documentation files in current directory:
1. API.md - Code API documentation
Template:
$api_template

2. README.md - Module overview
Template:
$readme_template

Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- Reference child documentation, do not duplicate
- Follow template structure
- Use structure analysis for current directory context"
        else
            # Navigation folder - README only
            final_prompt="PURPOSE: Generate navigation documentation

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (file will be moved to final location)

Read: @*/API.md @*/README.md @*.md

Generate ONE documentation file in current directory:
- README.md - Navigation and overview

Template:
$readme_template

Instructions:
- Create README.md in CURRENT DIRECTORY
- Link to child documentation
- Use structure analysis for navigation context"
        fi
    fi

    # Execute documentation generation
    local start_time=$(date +%s)
    echo "  🔄 Starting documentation generation..."

    if cd "$source_path" 2>/dev/null; then
        local tool_result=0

        # Store current output path for CLI context
        export DOC_OUTPUT_PATH="$output_path"

        # Record git HEAD before CLI execution (to detect unwanted auto-commits)
        local git_head_before=""
        if git rev-parse --git-dir >/dev/null 2>&1; then
            git_head_before=$(git rev-parse HEAD 2>/dev/null)
        fi

        # Execute with selected tool
        case "$tool" in
            qwen)
                if [ "$model" = "coder-model" ]; then
                    qwen -p "$final_prompt" --yolo 2>&1
                else
                    qwen -p "$final_prompt" -m "$model" --yolo 2>&1
                fi
                tool_result=$?
                ;;
            codex)
                codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
                tool_result=$?
                ;;
            gemini)
                gemini -p "$final_prompt" -m "$model" --yolo 2>&1
                tool_result=$?
                ;;
            *)
                echo "  ⚠️ Unknown tool: $tool, defaulting to gemini"
                gemini -p "$final_prompt" -m "$model" --yolo 2>&1
                tool_result=$?
                ;;
        esac

        # Move generated files to output directory
        local docs_created=0
        local moved_files=""

        if [ $tool_result -eq 0 ]; then
            if [ "$is_project_level" = true ]; then
                # Project-level documentation files
                case "$strategy" in
                    project-readme)
                        if [ -f "README.md" ]; then
                            mv "README.md" "$output_path/README.md" 2>/dev/null && {
                                docs_created=$((docs_created + 1))
                                moved_files+="README.md "
                            }
                        fi
                        ;;
                    project-architecture)
                        if [ -f "ARCHITECTURE.md" ]; then
                            mv "ARCHITECTURE.md" "$output_path/ARCHITECTURE.md" 2>/dev/null && {
                                docs_created=$((docs_created + 1))
                                moved_files+="ARCHITECTURE.md "
                            }
                        fi
                        if [ -f "EXAMPLES.md" ]; then
                            mv "EXAMPLES.md" "$output_path/EXAMPLES.md" 2>/dev/null && {
                                docs_created=$((docs_created + 1))
                                moved_files+="EXAMPLES.md "
                            }
                        fi
                        ;;
                    http-api)
                        if [ -f "README.md" ]; then
                            mv "README.md" "$output_path/README.md" 2>/dev/null && {
                                docs_created=$((docs_created + 1))
                                moved_files+="api/README.md "
                            }
                        fi
                        ;;
                esac
            else
                # Module-level documentation files
                # Check and move API.md if it exists
                if [ "$folder_type" = "code" ] && [ -f "API.md" ]; then
                    mv "API.md" "$output_path/API.md" 2>/dev/null && {
                        docs_created=$((docs_created + 1))
                        moved_files+="API.md "
                    }
                fi

                # Check and move README.md if it exists
                if [ -f "README.md" ]; then
                    mv "README.md" "$output_path/README.md" 2>/dev/null && {
                        docs_created=$((docs_created + 1))
                        moved_files+="README.md "
                    }
                fi
            fi
        fi

        # Check if CLI tool auto-committed (and revert if needed)
        if [ -n "$git_head_before" ]; then
            local git_head_after=$(git rev-parse HEAD 2>/dev/null)
            if [ "$git_head_before" != "$git_head_after" ]; then
                echo "  ⚠️ Detected unwanted auto-commit by CLI tool, reverting..."
                git reset --soft "$git_head_before" 2>/dev/null
                echo "  ✅ Auto-commit reverted (files remain staged)"
            fi
        fi

        if [ $docs_created -gt 0 ]; then
            local end_time=$(date +%s)
            local duration=$((end_time - start_time))
            echo "  ✅ Generated $docs_created doc(s) in ${duration}s: $moved_files"
            cd - > /dev/null
            return 0
        else
            echo "  ❌ Documentation generation failed for $source_path"
            cd - > /dev/null
            return 1
        fi
    else
        echo "  ❌ Cannot access directory: $source_path"
        return 1
    fi
}

# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    # Show help if no arguments or help requested
    if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
        echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
        echo ""
        echo "Module-Level Strategies:"
        echo "  full   - Generate docs for all subdirectories with code"
        echo "  single - Generate docs only for current directory"
        echo ""
        echo "Project-Level Strategies:"
        echo "  project-readme       - Generate project root README.md"
        echo "  project-architecture - Generate ARCHITECTURE.md + EXAMPLES.md"
        echo "  http-api             - Generate HTTP API documentation (api/README.md)"
        echo ""
        echo "Tools: gemini (default), qwen, codex"
        echo "Models: Use tool defaults if not specified"
        echo ""
        echo "Module Examples:"
        echo "  ./generate_module_docs.sh full ./src/auth myproject"
        echo "  ./generate_module_docs.sh single ./components myproject gemini"
        echo ""
        echo "Project Examples:"
        echo "  ./generate_module_docs.sh project-readme . myproject"
        echo "  ./generate_module_docs.sh project-architecture . myproject qwen"
        echo "  ./generate_module_docs.sh http-api . myproject"
        exit 0
    fi

    generate_module_docs "$@"
fi
@@ -1,166 +0,0 @@
#!/bin/bash
# Get modules organized by directory depth (deepest first)
# Usage: get_modules_by_depth.sh [format]
#   format: list|grouped|json (default: list)

# Parse .gitignore patterns and build exclusion filters
build_exclusion_filters() {
    local filters=""

    # Always exclude these system/cache directories and common web dev packages
    local system_excludes=(
        # Version control and IDE
        ".git" ".gitignore" ".gitmodules" ".gitattributes"
        ".svn" ".hg" ".bzr"
        ".history" ".vscode" ".idea" ".vs" ".vscode-test"
        ".sublime-text" ".atom"

        # Python
        "__pycache__" ".pytest_cache" ".mypy_cache" ".tox"
        ".coverage" "htmlcov" ".nox" ".venv" "venv" "env"
        ".egg-info" "*.egg-info" ".eggs" ".wheel"
        "site-packages" ".python-version" ".pyc"

        # Node.js/JavaScript
        "node_modules" ".npm" ".yarn" ".pnpm" "yarn-error.log"
        ".nyc_output" "coverage" ".next" ".nuxt"
        ".cache" ".parcel-cache" ".vite" "dist" "build"
        ".turbo" ".vercel" ".netlify"

        # Package managers
        ".pnpm-store" "pnpm-lock.yaml" "yarn.lock" "package-lock.json"
        ".bundle" "vendor/bundle" "Gemfile.lock"
        ".gradle" "gradle" "gradlew" "gradlew.bat"
        ".mvn" "target" ".m2"

        # Build/compile outputs
        "dist" "build" "out" "output" "_site" "public"
        ".output" ".generated" "generated" "gen"
        "bin" "obj" "Debug" "Release"

        # Testing
        ".pytest_cache" ".coverage" "htmlcov" "test-results"
        ".nyc_output" "junit.xml" "test_results"
        "cypress/screenshots" "cypress/videos"
        "playwright-report" ".playwright"

        # Logs and temp files
        "logs" "*.log" "log" "tmp" "temp" ".tmp" ".temp"
        ".env" ".env.local" ".env.*.local"
        ".DS_Store" "Thumbs.db" "*.tmp" "*.swp" "*.swo"

        # Documentation build outputs
        "_book" "_site" "docs/_build" "site" "gh-pages"
        ".docusaurus" ".vuepress" ".gitbook"

        # Database files
        "*.sqlite" "*.sqlite3" "*.db" "data.db"

        # OS and editor files
        ".DS_Store" "Thumbs.db" "desktop.ini"
        "*.stackdump" "*.core"

        # Cloud and deployment
        ".serverless" ".terraform" "terraform.tfstate"
        ".aws" ".azure" ".gcp"

        # Mobile development
        ".gradle" "build" ".expo" ".metro"
        "android/app/build" "ios/build" "DerivedData"

        # Game development
        "Library" "Temp" "ProjectSettings"
        "Logs" "MemoryCaptures" "UserSettings"
    )

    for exclude in "${system_excludes[@]}"; do
        filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
    done

    # Parse .gitignore if it exists
    if [ -f ".gitignore" ]; then
        while IFS= read -r line; do
            # Skip empty lines and comments
            [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue

            # Remove trailing slash and whitespace
            line=$(echo "$line" | sed 's|/$||' | xargs)

            # Add to filters
            filters+=" -not -path '*/$line' -not -path '*/$line/*'"
        done < .gitignore
    fi

    echo "$filters"
}

get_modules_by_depth() {
    local format="${1:-list}"
    local exclusion_filters=$(build_exclusion_filters)
    local max_depth=$(eval "find . -type d $exclusion_filters 2>/dev/null" | awk -F/ '{print NF-1}' | sort -n | tail -1)

    case "$format" in
        "grouped")
            echo "📊 Modules by depth (deepest first):"
            for depth in $(seq $max_depth -1 0); do
                local dirs=$(eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
                    while read dir; do
                        if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
                            local claude_indicator=""
                            [ -f "$dir/CLAUDE.md" ] && claude_indicator=" [✓]"
                            echo "$dir$claude_indicator"
                        fi
                    done)
                if [ -n "$dirs" ]; then
                    echo "  📁 Depth $depth:"
                    echo "$dirs" | sed 's/^/    - /'
                fi
            done
            ;;

        "json")
            echo "{"
            echo "  \"max_depth\": $max_depth,"
            echo "  \"modules\": {"
            local first_group=true
            for depth in $(seq $max_depth -1 0); do
                local dirs=$(eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
                    while read dir; do
                        if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
                            local has_claude="false"
                            [ -f "$dir/CLAUDE.md" ] && has_claude="true"
                            echo "{\"path\":\"$dir\",\"has_claude\":$has_claude}"
                        fi
                    done | tr '\n' ',')
                if [ -n "$dirs" ]; then
                    dirs=${dirs%,}  # Remove trailing comma
                    # Emit the separating comma before every group except the
                    # first, so the object never ends with a dangling comma
                    # (the old depth-based check could produce invalid JSON)
                    [ "$first_group" = false ] && echo ","
                    printf '    "%s": [%s]' "$depth" "$dirs"
                    first_group=false
                fi
            done
            echo ""
            echo "  }"
            echo "}"
            ;;

        "list"|*)
            # Simple list format (deepest first)
            for depth in $(seq $max_depth -1 0); do
                eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
                    while read dir; do
                        if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
                            local file_count=$(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l)
                            local types=$(find "$dir" -maxdepth 1 -type f -name "*.*" 2>/dev/null | \
                                grep -E '\.[^/]*$' | sed 's/.*\.//' | sort -u | tr '\n' ',' | sed 's/,$//')
                            local has_claude="no"
                            [ -f "$dir/CLAUDE.md" ] && has_claude="yes"
                            echo "depth:$depth|path:$dir|files:$file_count|types:[$types]|has_claude:$has_claude"
                        fi
                    done
            done
            ;;
    esac
}

# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    get_modules_by_depth "$@"
fi
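
# Example output (illustrative; paths and counts hypothetical): the default
# list format emits one pipe-delimited record per module directory, deepest
# first, e.g.:
#
#   depth:2|path:./src/auth|files:5|types:[ts,md]|has_claude:yes
#   depth:1|path:./src|files:2|types:[ts]|has_claude:no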
@@ -1,391 +0,0 @@
#!/bin/bash
#
# UI Generate Preview v2.0 - Template-Based Preview Generation
# Purpose: Generate compare.html and index.html using template substitution
# Template: ~/.claude/workflows/_template-compare-matrix.html
#
# Usage: ui-generate-preview.sh <prototypes_dir> [--template <path>]
#

set -e

# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Default template path
TEMPLATE_PATH="$HOME/.claude/workflows/_template-compare-matrix.html"

# Parse arguments
prototypes_dir="${1:-.}"
shift || true

while [[ $# -gt 0 ]]; do
    case $1 in
        --template)
            TEMPLATE_PATH="$2"
            shift 2
            ;;
        *)
            echo -e "${RED}Unknown option: $1${NC}"
            exit 1
            ;;
    esac
done

if [[ ! -d "$prototypes_dir" ]]; then
    echo -e "${RED}Error: Directory not found: $prototypes_dir${NC}"
    exit 1
fi

cd "$prototypes_dir" || exit 1

echo -e "${GREEN}📊 Auto-detecting matrix dimensions...${NC}"

# Auto-detect styles, layouts, targets from file patterns
# Pattern: {target}-style-{s}-layout-{l}.html
styles=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
    sed 's/.*-style-\([0-9]\+\)-.*/\1/' | sort -un)
layouts=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
    sed 's/.*-layout-\([0-9]\+\)\.html/\1/' | sort -un)
targets=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
    sed 's/\.\///; s/-style-.*//' | sort -u)

# Bail out early if nothing matched; `wc -l` on an empty string still reports
# one line, so the variables must be tested directly
if [[ -z "$styles" ]] || [[ -z "$layouts" ]] || [[ -z "$targets" ]]; then
    echo -e "${RED}Error: No prototype files found matching pattern {target}-style-{s}-layout-{l}.html${NC}"
    exit 1
fi

S=$(echo "$styles" | wc -l)
L=$(echo "$layouts" | wc -l)
T=$(echo "$targets" | wc -l)

echo -e "  Detected: ${GREEN}${S}${NC} styles × ${GREEN}${L}${NC} layouts × ${GREEN}${T}${NC} targets"

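# Illustrative check (hypothetical filename): given dashboard-style-2-layout-3.html,
# the three sed expressions above would extract:
#
#   echo "./dashboard-style-2-layout-3.html" | sed 's/.*-style-\([0-9]\+\)-.*/\1/'      # -> 2
#   echo "./dashboard-style-2-layout-3.html" | sed 's/.*-layout-\([0-9]\+\)\.html/\1/'  # -> 3
#   echo "./dashboard-style-2-layout-3.html" | sed 's/\.\///; s/-style-.*//'            # -> dashboard
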
# ============================================================================
# Generate compare.html from template
# ============================================================================

echo -e "${YELLOW}🎨 Generating compare.html from template...${NC}"

if [[ ! -f "$TEMPLATE_PATH" ]]; then
    echo -e "${RED}Error: Template not found: $TEMPLATE_PATH${NC}"
    exit 1
fi

# Build pages/targets JSON array
PAGES_JSON="["
first=true
for target in $targets; do
    if [[ "$first" == true ]]; then
        first=false
    else
        PAGES_JSON+=", "
    fi
    PAGES_JSON+="\"$target\""
done
PAGES_JSON+="]"

# Generate metadata
RUN_ID="run-$(date +%Y%m%d-%H%M%S)"
SESSION_ID="standalone"
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +"%Y-%m-%d")

# Replace placeholders in template
sed "s|{{run_id}}|${RUN_ID}|g" "$TEMPLATE_PATH" | \
    sed "s|{{session_id}}|${SESSION_ID}|g" | \
    sed "s|{{timestamp}}|${TIMESTAMP}|g" | \
    sed "s|{{style_variants}}|${S}|g" | \
    sed "s|{{layout_variants}}|${L}|g" | \
    sed "s|{{pages_json}}|${PAGES_JSON}|g" \
    > compare.html

echo -e "${GREEN}  ✓ Generated compare.html from template${NC}"

# ============================================================================
# Generate index.html
# ============================================================================

echo -e "${YELLOW}📋 Generating index.html...${NC}"

cat > index.html << 'EOF'
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>UI Prototypes Index</title>
    <style>
        * { margin: 0; padding: 0; box-sizing: border-box; }
        body {
            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
            max-width: 1200px;
            margin: 0 auto;
            padding: 40px 20px;
            background: #f5f5f5;
        }
        h1 { margin-bottom: 10px; color: #333; }
        .subtitle { color: #666; margin-bottom: 30px; }
        .cta {
            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
            color: white;
            padding: 20px;
            border-radius: 8px;
            margin-bottom: 30px;
            box-shadow: 0 4px 6px rgba(0,0,0,0.1);
        }
        .cta h2 { margin-bottom: 10px; }
        .cta a {
            display: inline-block;
            background: white;
            color: #667eea;
            padding: 10px 20px;
            border-radius: 6px;
            text-decoration: none;
            font-weight: 600;
            margin-top: 10px;
        }
        .cta a:hover { background: #f8f9fa; }
        .style-section {
            background: white;
            padding: 20px;
            border-radius: 8px;
            margin-bottom: 20px;
            box-shadow: 0 2px 4px rgba(0,0,0,0.1);
        }
        .style-section h2 {
            color: #495057;
            margin-bottom: 15px;
            padding-bottom: 10px;
            border-bottom: 2px solid #e9ecef;
        }
        .target-group {
            margin-bottom: 20px;
        }
        .target-group h3 {
            color: #6c757d;
            font-size: 16px;
            margin-bottom: 10px;
        }
        .link-grid {
            display: grid;
            grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
            gap: 10px;
        }
        .prototype-link {
            padding: 12px 16px;
            background: #f8f9fa;
            border: 1px solid #dee2e6;
            border-radius: 6px;
            text-decoration: none;
            color: #495057;
            display: flex;
            justify-content: space-between;
            align-items: center;
            transition: all 0.2s;
        }
        .prototype-link:hover {
            background: #e9ecef;
            border-color: #667eea;
            transform: translateX(2px);
        }
        .prototype-link .label { font-weight: 500; }
        .prototype-link .icon { color: #667eea; }
    </style>
</head>
<body>
    <h1>🎨 UI Prototypes Index</h1>
    <p class="subtitle">Generated __S__×__L__×__T__ = __TOTAL__ prototypes</p>

    <div class="cta">
        <h2>📊 Interactive Comparison</h2>
        <p>View all styles and layouts side-by-side in an interactive matrix</p>
        <a href="compare.html">Open Matrix View →</a>
    </div>

    <h2>📂 All Prototypes</h2>
__CONTENT__
</body>
</html>
EOF

# Build content HTML
CONTENT=""
for style in $styles; do
    CONTENT+="<div class='style-section'>"$'\n'
    CONTENT+="<h2>Style ${style}</h2>"$'\n'

    for target in $targets; do
        target_capitalized="$(echo ${target:0:1} | tr '[:lower:]' '[:upper:]')${target:1}"
        CONTENT+="<div class='target-group'>"$'\n'
        CONTENT+="<h3>${target_capitalized}</h3>"$'\n'
        CONTENT+="<div class='link-grid'>"$'\n'

        for layout in $layouts; do
            html_file="${target}-style-${style}-layout-${layout}.html"
            if [[ -f "$html_file" ]]; then
                CONTENT+="<a href='${html_file}' class='prototype-link' target='_blank'>"$'\n'
                CONTENT+="<span class='label'>Layout ${layout}</span>"$'\n'
                CONTENT+="<span class='icon'>↗</span>"$'\n'
                CONTENT+="</a>"$'\n'
            fi
        done

        CONTENT+="</div></div>"$'\n'
    done

    CONTENT+="</div>"$'\n'
done

# Calculate total
TOTAL_PROTOTYPES=$((S * L * T))

# Replace placeholders (temp files are needed for the multi-line content
# injection; mktemp avoids clobbering between concurrent runs)
content_tmp=$(mktemp)
index_tmp=$(mktemp)
echo "$CONTENT" > "$content_tmp"
sed "s|__S__|${S}|g" index.html | \
    sed "s|__L__|${L}|g" | \
    sed "s|__T__|${T}|g" | \
    sed "s|__TOTAL__|${TOTAL_PROTOTYPES}|g" | \
    sed -e "/__CONTENT__/r $content_tmp" -e "/__CONTENT__/d" > "$index_tmp"
mv "$index_tmp" index.html
rm -f "$content_tmp"

echo -e "${GREEN}  ✓ Generated index.html${NC}"

# ============================================================================
# Generate PREVIEW.md
# ============================================================================

echo -e "${YELLOW}📝 Generating PREVIEW.md...${NC}"

cat > PREVIEW.md << EOF
# UI Prototypes Preview Guide

Generated: $(date +"%Y-%m-%d %H:%M:%S")

## 📊 Matrix Dimensions

- **Styles**: ${S}
- **Layouts**: ${L}
- **Targets**: ${T}
- **Total Prototypes**: $((S*L*T))

## 🌐 How to View

### Option 1: Interactive Matrix (Recommended)

Open \`compare.html\` in your browser to see all prototypes in an interactive matrix view.

**Features**:
- Side-by-side comparison of all styles and layouts
- Switch between targets using the dropdown
- Adjust grid columns for better viewing
- Direct links to full-page views
- Selection system with export to JSON
- Fullscreen mode for detailed inspection

### Option 2: Simple Index

Open \`index.html\` for a simple list of all prototypes with direct links.

### Option 3: Direct File Access

Each prototype can be opened directly:
- Pattern: \`{target}-style-{s}-layout-{l}.html\`
- Example: \`dashboard-style-1-layout-1.html\`

## 📁 File Structure

\`\`\`
prototypes/
├── compare.html       # Interactive matrix view
├── index.html         # Simple navigation index
├── PREVIEW.md         # This file
EOF

for style in $styles; do
    for target in $targets; do
        for layout in $layouts; do
            echo "├── ${target}-style-${style}-layout-${layout}.html" >> PREVIEW.md
            echo "├── ${target}-style-${style}-layout-${layout}.css" >> PREVIEW.md
        done
    done
done

cat >> PREVIEW.md << 'EOF2'
```

## 🎨 Style Variants

EOF2

for style in $styles; do
    cat >> PREVIEW.md << EOF3
### Style ${style}

EOF3
    style_guide="../style-extraction/style-${style}/style-guide.md"
    if [[ -f "$style_guide" ]]; then
        head -n 10 "$style_guide" | tail -n +2 >> PREVIEW.md 2>/dev/null || echo "Design philosophy and tokens" >> PREVIEW.md
    else
        echo "Design system ${style}" >> PREVIEW.md
    fi
    echo "" >> PREVIEW.md
done

cat >> PREVIEW.md << 'EOF4'

## 🎯 Targets

EOF4

for target in $targets; do
    target_capitalized="$(echo ${target:0:1} | tr '[:lower:]' '[:upper:]')${target:1}"
    echo "- **${target_capitalized}**: ${L} layouts × ${S} styles = $((L*S)) variations" >> PREVIEW.md
done

cat >> PREVIEW.md << 'EOF5'

## 💡 Tips

1. **Comparison**: Use compare.html to see how different styles affect the same layout
2. **Navigation**: Use index.html for quick access to specific prototypes
3. **Selection**: Mark favorites in compare.html using star icons
4. **Export**: Download selection JSON for implementation planning
5. **Inspection**: Open browser DevTools to inspect HTML structure and CSS
6. **Sharing**: All files are standalone - can be shared or deployed directly

## 📝 Next Steps

1. Review prototypes in compare.html
2. Select preferred style × layout combinations
3. Export selections as JSON
4. Provide feedback for refinement
5. Use selected designs for implementation

---

Generated by /workflow:ui-design:generate-v2 (Style-Centric Architecture)
EOF5

echo -e "${GREEN}  ✓ Generated PREVIEW.md${NC}"

# ============================================================================
# Completion Summary
# ============================================================================

echo ""
echo -e "${GREEN}✅ Preview generation complete!${NC}"
echo -e "   Files created: compare.html, index.html, PREVIEW.md"
echo -e "   Matrix: ${S} styles × ${L} layouts × ${T} targets = $((S*L*T)) prototypes"
echo ""
echo -e "${YELLOW}🌐 Next Steps:${NC}"
echo -e "   1. Open compare.html for interactive matrix view"
echo -e "   2. Open index.html for simple navigation"
echo -e "   3. Read PREVIEW.md for detailed usage guide"
echo ""
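
# Local preview hint (not part of the original script): since the generated
# files are static, one way to browse them is a throwaway HTTP server from
# the prototypes directory, e.g.:
#
#   cd "$prototypes_dir" && python3 -m http.server 8000
#   # then open http://localhost:8000/compare.html
#
# The port is arbitrary; opening compare.html directly from disk also works.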
@@ -1,811 +0,0 @@
#!/bin/bash

# UI Prototype Instantiation Script with Preview Generation (v3.0 - Auto-detect)
# Purpose: Generate S × L × P final prototypes from templates + interactive preview files
# Usage:
#   Simple: ui-instantiate-prototypes.sh <prototypes_dir>
#   Full:   ui-instantiate-prototypes.sh <base_path> <pages> <style_variants> <layout_variants> [options]

# Use safer error handling
set -o pipefail

# ============================================================================
# Helper Functions
# ============================================================================

log_info() {
    echo "$1"
}

log_success() {
    echo "✅ $1"
}

log_error() {
    echo "❌ $1"
}

log_warning() {
    echo "⚠️ $1"
}

# Auto-detect pages from templates directory
auto_detect_pages() {
    local templates_dir="$1/_templates"

    if [ ! -d "$templates_dir" ]; then
        # Send the error to stderr so the caller's command substitution
        # stays empty and the emptiness check actually fires
        log_error "Templates directory not found: $templates_dir" >&2
        return 1
    fi

    # Find unique page names from template files (e.g., login-layout-1.html -> login)
    local pages=$(find "$templates_dir" -name "*-layout-*.html" -type f | \
        sed 's|.*/||' | \
        sed 's|-layout-[0-9]*\.html||' | \
        sort -u | \
        tr '\n' ',' | \
        sed 's/,$//')

    echo "$pages"
}

# Auto-detect style variants count
auto_detect_style_variants() {
    local base_path="$1"
    local style_dir="$base_path/../style-extraction"

    if [ ! -d "$style_dir" ]; then
        log_warning "Style consolidation directory not found: $style_dir" >&2
        echo "3"  # Default
        return
    fi

    # Count style-* directories
    local count=$(find "$style_dir" -maxdepth 1 -type d -name "style-*" | wc -l)

    if [ "$count" -eq 0 ]; then
        echo "3"  # Default
    else
        echo "$count"
    fi
}

# Auto-detect layout variants count
auto_detect_layout_variants() {
    local templates_dir="$1/_templates"

    if [ ! -d "$templates_dir" ]; then
        echo "3"  # Default
        return
    fi

    # Find the first page and count its layouts
    local first_page=$(find "$templates_dir" -name "*-layout-1.html" -type f | head -1 | sed 's|.*/||' | sed 's|-layout-1\.html||')

    if [ -z "$first_page" ]; then
        echo "3"  # Default
        return
    fi

    # Count layout files for this page
    local count=$(find "$templates_dir" -name "${first_page}-layout-*.html" -type f | wc -l)

    if [ "$count" -eq 0 ]; then
        echo "3"  # Default
    else
        echo "$count"
    fi
}

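# Illustrative run (hypothetical _templates contents): with
# _templates/login-layout-1.html, login-layout-2.html, and
# dashboard-layout-1.html on disk, the detectors would report:
#
#   auto_detect_pages ./prototypes   # -> "dashboard,login"
#   auto_detect_layout_variants ./prototypes
#   # -> the layout count of whichever page `find ... | head -1` returns
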
# ============================================================================
# Parse Arguments
# ============================================================================

show_usage() {
    cat <<'EOF'
Usage:
  Simple (auto-detect): ui-instantiate-prototypes.sh <prototypes_dir> [options]
  Full:                 ui-instantiate-prototypes.sh <base_path> <pages> <style_variants> <layout_variants> [options]

Simple Mode (Recommended):
  prototypes_dir           Path to prototypes directory (auto-detects everything)

Full Mode:
  base_path                Base path to prototypes directory
  pages                    Comma-separated list of pages/components
  style_variants           Number of style variants (1-5)
  layout_variants          Number of layout variants (1-5)

Options:
  --run-id <id>            Run ID (default: auto-generated)
  --session-id <id>        Session ID (default: standalone)
  --mode <page|component>  Exploration mode (default: page)
  --template <path>        Path to compare.html template (default: ~/.claude/workflows/_template-compare-matrix.html)
  --no-preview             Skip preview file generation
  --help                   Show this help message

Examples:
  # Simple usage (auto-detect everything)
  ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes

  # With options
  ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes --session-id WFS-auth

  # Full manual mode
  ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes "login,dashboard" 3 3 --session-id WFS-auth
EOF
}

# Default values
BASE_PATH=""
PAGES=""
STYLE_VARIANTS=""
LAYOUT_VARIANTS=""
RUN_ID="run-$(date +%Y%m%d-%H%M%S)"
SESSION_ID="standalone"
MODE="page"
TEMPLATE_PATH="$HOME/.claude/workflows/_template-compare-matrix.html"
GENERATE_PREVIEW=true
AUTO_DETECT=false

# Parse arguments
if [ $# -lt 1 ]; then
    log_error "Missing required arguments"
    show_usage
    exit 1
fi

# Check if using simple mode (only 1 positional arg before options)
if [ $# -eq 1 ] || [[ "$2" == --* ]]; then
    # Simple mode - auto-detect
    AUTO_DETECT=true
    BASE_PATH="$1"
    shift 1
else
    # Full mode - manual parameters
    if [ $# -lt 4 ]; then
        log_error "Full mode requires 4 positional arguments"
        show_usage
        exit 1
    fi

    BASE_PATH="$1"
    PAGES="$2"
    STYLE_VARIANTS="$3"
    LAYOUT_VARIANTS="$4"
    shift 4
fi

# Parse optional arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        --run-id)
            RUN_ID="$2"
            shift 2
            ;;
        --session-id)
            SESSION_ID="$2"
            shift 2
            ;;
        --mode)
            MODE="$2"
            shift 2
            ;;
        --template)
            TEMPLATE_PATH="$2"
            shift 2
            ;;
        --no-preview)
            GENERATE_PREVIEW=false
            shift
            ;;
        --help)
            show_usage
            exit 0
            ;;
        *)
            log_error "Unknown option: $1"
            show_usage
            exit 1
            ;;
    esac
done

# ============================================================================
# Auto-detection (if enabled)
# ============================================================================

if [ "$AUTO_DETECT" = true ]; then
    log_info "🔍 Auto-detecting configuration from directory..."

    # Detect pages
    PAGES=$(auto_detect_pages "$BASE_PATH")
    if [ -z "$PAGES" ]; then
        log_error "Could not auto-detect pages from templates"
        exit 1
    fi
    log_info "  Pages: $PAGES"

    # Detect style variants
    STYLE_VARIANTS=$(auto_detect_style_variants "$BASE_PATH")
    log_info "  Style variants: $STYLE_VARIANTS"

    # Detect layout variants
    LAYOUT_VARIANTS=$(auto_detect_layout_variants "$BASE_PATH")
    log_info "  Layout variants: $LAYOUT_VARIANTS"

    echo ""
fi

# ============================================================================
# Validation
# ============================================================================

# Validate base path
if [ ! -d "$BASE_PATH" ]; then
    log_error "Base path not found: $BASE_PATH"
    exit 1
fi

# Validate style and layout variants
if [ "$STYLE_VARIANTS" -lt 1 ] || [ "$STYLE_VARIANTS" -gt 5 ]; then
    log_error "Style variants must be between 1 and 5 (got: $STYLE_VARIANTS)"
    exit 1
fi

if [ "$LAYOUT_VARIANTS" -lt 1 ] || [ "$LAYOUT_VARIANTS" -gt 5 ]; then
    log_error "Layout variants must be between 1 and 5 (got: $LAYOUT_VARIANTS)"
    exit 1
fi

# Validate STYLE_VARIANTS against actual style directories
if [ "$STYLE_VARIANTS" -gt 0 ]; then
    style_dir="$BASE_PATH/../style-extraction"

    if [ ! -d "$style_dir" ]; then
        log_error "Style consolidation directory not found: $style_dir"
        log_info "Run /workflow:ui-design:consolidate first"
        exit 1
    fi

    actual_styles=$(find "$style_dir" -maxdepth 1 -type d -name "style-*" 2>/dev/null | wc -l)

    if [ "$actual_styles" -eq 0 ]; then
        log_error "No style directories found in: $style_dir"
        log_info "Run /workflow:ui-design:consolidate first to generate style design systems"
        exit 1
    fi

    if [ "$STYLE_VARIANTS" -gt "$actual_styles" ]; then
        log_warning "Requested $STYLE_VARIANTS style variants, but only found $actual_styles directories"
        log_info "Available style directories:"
        find "$style_dir" -maxdepth 1 -type d -name "style-*" 2>/dev/null | sed 's|.*/||' | sort
        log_info "Auto-correcting to $actual_styles style variants"
        STYLE_VARIANTS=$actual_styles
    fi
fi

# Parse pages into array
IFS=',' read -ra PAGE_ARRAY <<< "$PAGES"

if [ ${#PAGE_ARRAY[@]} -eq 0 ]; then
    log_error "No pages found"
    exit 1
fi
# ============================================================================
# Header Output
# ============================================================================

echo "========================================="
echo "UI Prototype Instantiation & Preview"
if [ "$AUTO_DETECT" = true ]; then
    echo "(Auto-detected configuration)"
fi
echo "========================================="
echo "Base Path: $BASE_PATH"
echo "Mode: $MODE"
echo "Pages/Components: $PAGES"
echo "Style Variants: $STYLE_VARIANTS"
echo "Layout Variants: $LAYOUT_VARIANTS"
echo "Run ID: $RUN_ID"
echo "Session ID: $SESSION_ID"
echo "========================================="
echo ""

# Change to base path
cd "$BASE_PATH" || exit 1
# ============================================================================
# Phase 1: Instantiate Prototypes
# ============================================================================

log_info "🚀 Phase 1: Instantiating prototypes from templates..."
echo ""

total_generated=0
total_failed=0

for page in "${PAGE_ARRAY[@]}"; do
    # Trim whitespace
    page=$(echo "$page" | xargs)

    log_info "Processing page/component: $page"

    for s in $(seq 1 "$STYLE_VARIANTS"); do
        for l in $(seq 1 "$LAYOUT_VARIANTS"); do
            # Define file paths
            TEMPLATE_HTML="_templates/${page}-layout-${l}.html"
            STRUCTURAL_CSS="_templates/${page}-layout-${l}.css"
            TOKEN_CSS="../style-extraction/style-${s}/tokens.css"
            OUTPUT_HTML="${page}-style-${s}-layout-${l}.html"

            # Copy template and replace placeholders
            if [ -f "$TEMPLATE_HTML" ]; then
                cp "$TEMPLATE_HTML" "$OUTPUT_HTML" || {
                    log_error "Failed to copy template: $TEMPLATE_HTML"
                    ((total_failed++))
                    continue
                }

                # Replace CSS placeholders (GNU sed -i syntax, as shipped with Git Bash on Windows; failures tolerated)
                sed -i "s|{{STRUCTURAL_CSS}}|${STRUCTURAL_CSS}|g" "$OUTPUT_HTML" || true
                sed -i "s|{{TOKEN_CSS}}|${TOKEN_CSS}|g" "$OUTPUT_HTML" || true

                log_success "Created: $OUTPUT_HTML"
                ((total_generated++))

                # Create implementation notes (simplified)
                NOTES_FILE="${page}-style-${s}-layout-${l}-notes.md"

                # Generate notes with a simple heredoc
                cat > "$NOTES_FILE" <<NOTESEOF
# Implementation Notes: ${page}-style-${s}-layout-${l}

## Generation Details
- **Template**: ${TEMPLATE_HTML}
- **Structural CSS**: ${STRUCTURAL_CSS}
- **Style Tokens**: ${TOKEN_CSS}
- **Layout Strategy**: Layout ${l}
- **Style Variant**: Style ${s}
- **Mode**: ${MODE}

## Template Reuse
This prototype was generated from a shared layout template to ensure consistency
across all style variants. The HTML structure is identical for all ${page}-layout-${l}
prototypes, with only the design tokens (colors, fonts, spacing) varying.

## Design System Reference
Refer to \`../style-extraction/style-${s}/style-guide.md\` for:
- Design philosophy
- Token usage guidelines
- Component patterns
- Accessibility requirements

## Customization
To modify this prototype:
1. Edit the layout template: \`${TEMPLATE_HTML}\` (affects all styles)
2. Edit the structural CSS: \`${STRUCTURAL_CSS}\` (affects all styles)
3. Edit design tokens: \`${TOKEN_CSS}\` (affects only this style variant)

## Run Information
- **Run ID**: ${RUN_ID}
- **Session ID**: ${SESSION_ID}
- **Generated**: $(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +%Y-%m-%d)
NOTESEOF

            else
                log_error "Template not found: $TEMPLATE_HTML"
                ((total_failed++))
            fi
        done
    done
done

echo ""
log_success "Phase 1 complete: Generated ${total_generated} prototypes"
if [ $total_failed -gt 0 ]; then
    log_warning "Failed: ${total_failed} prototypes"
fi
echo ""
# ============================================================================
# Phase 2: Generate Preview Files (if enabled)
# ============================================================================

if [ "$GENERATE_PREVIEW" = false ]; then
    log_info "⏭️ Skipping preview generation (--no-preview flag)"
    exit 0
fi

log_info "🎨 Phase 2: Generating preview files..."
echo ""

# Timestamp is shared by compare.html, index.html, and PREVIEW.md, so set it
# before any preview generator runs (even when the compare template is missing)
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +%Y-%m-%d)

# ============================================================================
# 2a. Generate compare.html from template
# ============================================================================

if [ ! -f "$TEMPLATE_PATH" ]; then
    log_warning "Template not found: $TEMPLATE_PATH"
    log_info "  Skipping compare.html generation"
else
    log_info "📄 Generating compare.html from template..."

    # Convert page array to JSON format
    PAGES_JSON="["
    for i in "${!PAGE_ARRAY[@]}"; do
        page=$(echo "${PAGE_ARRAY[$i]}" | xargs)
        PAGES_JSON+="\"$page\""
        if [ $i -lt $((${#PAGE_ARRAY[@]} - 1)) ]; then
            PAGES_JSON+=", "
        fi
    done
    PAGES_JSON+="]"

    # Read template and replace placeholders
    cat "$TEMPLATE_PATH" | \
        sed "s|{{run_id}}|${RUN_ID}|g" | \
        sed "s|{{session_id}}|${SESSION_ID}|g" | \
        sed "s|{{timestamp}}|${TIMESTAMP}|g" | \
        sed "s|{{style_variants}}|${STYLE_VARIANTS}|g" | \
        sed "s|{{layout_variants}}|${LAYOUT_VARIANTS}|g" | \
        sed "s|{{pages_json}}|${PAGES_JSON}|g" \
        > compare.html

    log_success "Generated: compare.html"
fi
# ============================================================================
# 2b. Generate index.html
# ============================================================================

log_info "📄 Generating index.html..."

# Calculate total prototypes
TOTAL_PROTOTYPES=$((STYLE_VARIANTS * LAYOUT_VARIANTS * ${#PAGE_ARRAY[@]}))

# Generate index.html with a simple heredoc
cat > index.html <<'INDEXEOF'
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>UI Prototypes - __MODE__ Mode - __RUN_ID__</title>
  <style>
    body {
      font-family: system-ui, -apple-system, sans-serif;
      max-width: 900px;
      margin: 2rem auto;
      padding: 0 2rem;
      background: #f9fafb;
    }
    .header {
      background: white;
      padding: 2rem;
      border-radius: 0.75rem;
      box-shadow: 0 1px 3px rgba(0,0,0,0.1);
      margin-bottom: 2rem;
    }
    h1 {
      color: #2563eb;
      margin-bottom: 0.5rem;
      font-size: 2rem;
    }
    .meta {
      color: #6b7280;
      font-size: 0.875rem;
      margin-top: 0.5rem;
    }
    .info {
      background: #f3f4f6;
      padding: 1.5rem;
      border-radius: 0.5rem;
      margin: 1.5rem 0;
      border-left: 4px solid #2563eb;
    }
    .cta {
      display: inline-block;
      background: #2563eb;
      color: white;
      padding: 1rem 2rem;
      border-radius: 0.5rem;
      text-decoration: none;
      font-weight: 600;
      margin: 1rem 0;
      transition: background 0.2s;
    }
    .cta:hover {
      background: #1d4ed8;
    }
    .stats {
      display: grid;
      grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
      gap: 1rem;
      margin: 1.5rem 0;
    }
    .stat {
      background: white;
      border: 1px solid #e5e7eb;
      padding: 1.5rem;
      border-radius: 0.5rem;
      text-align: center;
      box-shadow: 0 1px 2px rgba(0,0,0,0.05);
    }
    .stat-value {
      font-size: 2.5rem;
      font-weight: bold;
      color: #2563eb;
      margin-bottom: 0.25rem;
    }
    .stat-label {
      color: #6b7280;
      font-size: 0.875rem;
    }
    .section {
      background: white;
      padding: 2rem;
      border-radius: 0.75rem;
      margin-bottom: 2rem;
      box-shadow: 0 1px 3px rgba(0,0,0,0.1);
    }
    h2 {
      color: #1f2937;
      margin-bottom: 1rem;
      font-size: 1.5rem;
    }
    ul {
      line-height: 1.8;
      color: #374151;
    }
    .pages-list {
      list-style: none;
      padding: 0;
    }
    .pages-list li {
      background: #f9fafb;
      padding: 0.75rem 1rem;
      margin: 0.5rem 0;
      border-radius: 0.375rem;
      border-left: 3px solid #2563eb;
    }
    .badge {
      display: inline-block;
      background: #dbeafe;
      color: #1e40af;
      padding: 0.25rem 0.75rem;
      border-radius: 0.25rem;
      font-size: 0.75rem;
      font-weight: 600;
      margin-left: 0.5rem;
    }
  </style>
</head>
<body>
  <div class="header">
    <h1>🎨 UI Prototype __MODE__ Mode</h1>
    <div class="meta">
      <strong>Run ID:</strong> __RUN_ID__ |
      <strong>Session:</strong> __SESSION_ID__ |
      <strong>Generated:</strong> __TIMESTAMP__
    </div>
  </div>

  <div class="info">
    <p><strong>Matrix Configuration:</strong> __STYLE_VARIANTS__ styles × __LAYOUT_VARIANTS__ layouts × __PAGE_COUNT__ __MODE__s</p>
    <p><strong>Total Prototypes:</strong> __TOTAL_PROTOTYPES__ interactive HTML files</p>
  </div>

  <a href="compare.html" class="cta">🔍 Open Interactive Matrix Comparison →</a>

  <div class="stats">
    <div class="stat">
      <div class="stat-value">__STYLE_VARIANTS__</div>
      <div class="stat-label">Style Variants</div>
    </div>
    <div class="stat">
      <div class="stat-value">__LAYOUT_VARIANTS__</div>
      <div class="stat-label">Layout Options</div>
    </div>
    <div class="stat">
      <div class="stat-value">__PAGE_COUNT__</div>
      <div class="stat-label">__MODE__s</div>
    </div>
    <div class="stat">
      <div class="stat-value">__TOTAL_PROTOTYPES__</div>
      <div class="stat-label">Total Prototypes</div>
    </div>
  </div>

  <div class="section">
    <h2>🌟 Features</h2>
    <ul>
      <li><strong>Interactive Matrix View:</strong> __STYLE_VARIANTS__×__LAYOUT_VARIANTS__ grid with synchronized scrolling</li>
      <li><strong>Flexible Zoom:</strong> 25%, 50%, 75%, 100% viewport scaling</li>
      <li><strong>Fullscreen Mode:</strong> Detailed view for individual prototypes</li>
      <li><strong>Selection System:</strong> Mark favorites with export to JSON</li>
      <li><strong>__MODE__ Switcher:</strong> Compare different __MODE__s side-by-side</li>
      <li><strong>Persistent State:</strong> Selections saved in localStorage</li>
    </ul>
  </div>

  <div class="section">
    <h2>📄 Generated __MODE__s</h2>
    <ul class="pages-list">
__PAGES_LIST__
    </ul>
  </div>

  <div class="section">
    <h2>📚 Next Steps</h2>
    <ol>
      <li>Open <code>compare.html</code> to explore all variants in matrix view</li>
      <li>Use zoom and sync scroll controls to compare details</li>
      <li>Select your preferred style×layout combinations</li>
      <li>Export selections as JSON for implementation planning</li>
      <li>Review implementation notes in <code>*-notes.md</code> files</li>
    </ol>
  </div>
</body>
</html>
INDEXEOF

# Build pages list HTML
PAGES_LIST_HTML=""
for page in "${PAGE_ARRAY[@]}"; do
    page=$(echo "$page" | xargs)
    VARIANT_COUNT=$((STYLE_VARIANTS * LAYOUT_VARIANTS))
    PAGES_LIST_HTML+="      <li>\n"
    PAGES_LIST_HTML+="        <strong>${page}</strong>\n"
    PAGES_LIST_HTML+="        <span class=\"badge\">${STYLE_VARIANTS}×${LAYOUT_VARIANTS} = ${VARIANT_COUNT} variants</span>\n"
    PAGES_LIST_HTML+="      </li>\n"
done

# Replace all placeholders in index.html
MODE_UPPER=$(echo "$MODE" | awk '{print toupper(substr($0,1,1)) tolower(substr($0,2))}')
sed -i "s|__RUN_ID__|${RUN_ID}|g" index.html
sed -i "s|__SESSION_ID__|${SESSION_ID}|g" index.html
sed -i "s|__TIMESTAMP__|${TIMESTAMP}|g" index.html
sed -i "s|__MODE__|${MODE_UPPER}|g" index.html
sed -i "s|__STYLE_VARIANTS__|${STYLE_VARIANTS}|g" index.html
sed -i "s|__LAYOUT_VARIANTS__|${LAYOUT_VARIANTS}|g" index.html
sed -i "s|__PAGE_COUNT__|${#PAGE_ARRAY[@]}|g" index.html
sed -i "s|__TOTAL_PROTOTYPES__|${TOTAL_PROTOTYPES}|g" index.html
sed -i "s|__PAGES_LIST__|${PAGES_LIST_HTML}|g" index.html

log_success "Generated: index.html"
# ============================================================================
# 2c. Generate PREVIEW.md
# ============================================================================

log_info "📄 Generating PREVIEW.md..."

cat > PREVIEW.md <<PREVIEWEOF
# UI Prototype Preview Guide

## Quick Start
1. Open \`index.html\` for overview and navigation
2. Open \`compare.html\` for interactive matrix comparison
3. Use browser developer tools to inspect responsive behavior

## Configuration

- **Exploration Mode:** ${MODE_UPPER}
- **Run ID:** ${RUN_ID}
- **Session ID:** ${SESSION_ID}
- **Style Variants:** ${STYLE_VARIANTS}
- **Layout Options:** ${LAYOUT_VARIANTS}
- **${MODE_UPPER}s:** ${PAGES}
- **Total Prototypes:** ${TOTAL_PROTOTYPES}
- **Generated:** ${TIMESTAMP}

## File Naming Convention

\`\`\`
{${MODE}}-style-{s}-layout-{l}.html
\`\`\`

**Example:** \`dashboard-style-1-layout-2.html\`
- ${MODE_UPPER}: dashboard
- Style: Design system 1
- Layout: Layout variant 2

## Interactive Features (compare.html)

### Matrix View
- **Grid Layout:** ${STYLE_VARIANTS}×${LAYOUT_VARIANTS} table with all prototypes visible
- **Synchronized Scroll:** All iframes scroll together (toggle with button)
- **Zoom Controls:** Adjust viewport scale (25%, 50%, 75%, 100%)
- **${MODE_UPPER} Selector:** Switch between different ${MODE}s instantly

### Prototype Actions
- **⭐ Selection:** Click star icon to mark favorites
- **⛶ Fullscreen:** View prototype in fullscreen overlay
- **↗ New Tab:** Open prototype in dedicated browser tab

### Selection Export
1. Select preferred prototypes using star icons
2. Click "Export Selection" button
3. Downloads JSON file: \`selection-${RUN_ID}.json\`
4. Use exported file for implementation planning

## Design System References

Each prototype references a specific style design system:
PREVIEWEOF

# Add style references
for s in $(seq 1 "$STYLE_VARIANTS"); do
    cat >> PREVIEW.md <<STYLEEOF

### Style ${s}
- **Tokens:** \`../style-extraction/style-${s}/design-tokens.json\`
- **CSS Variables:** \`../style-extraction/style-${s}/tokens.css\`
- **Style Guide:** \`../style-extraction/style-${s}/style-guide.md\`
STYLEEOF
done

cat >> PREVIEW.md <<'FOOTEREOF'

## Responsive Testing

All prototypes are mobile-first responsive. Test at these breakpoints:

- **Mobile:** 375px - 767px
- **Tablet:** 768px - 1023px
- **Desktop:** 1024px+

Use browser DevTools responsive mode for testing.

## Accessibility Features

- Semantic HTML5 structure
- ARIA attributes for screen readers
- Keyboard navigation support
- Proper heading hierarchy
- Focus indicators

## Next Steps

1. **Review:** Open `compare.html` and explore all variants
2. **Select:** Mark preferred prototypes using star icons
3. **Export:** Download selection JSON for implementation
4. **Implement:** Use `/workflow:ui-design:update` to integrate selected designs
5. **Plan:** Run `/workflow:plan` to generate implementation tasks

---

**Generated by:** `ui-instantiate-prototypes.sh`
**Version:** 3.0 (auto-detect mode)
FOOTEREOF

log_success "Generated: PREVIEW.md"

# ============================================================================
# Completion Summary
# ============================================================================

echo ""
echo "========================================="
echo "✅ Generation Complete!"
echo "========================================="
echo ""
echo "📊 Summary:"
echo "  Prototypes: ${total_generated} generated"
if [ $total_failed -gt 0 ]; then
    echo "  Failed: ${total_failed}"
fi
echo "  Preview Files: compare.html, index.html, PREVIEW.md"
echo "  Matrix: ${STYLE_VARIANTS}×${LAYOUT_VARIANTS} (${#PAGE_ARRAY[@]} ${MODE}s)"
echo "  Total Files: ${TOTAL_PROTOTYPES} prototypes + preview files"
echo ""
echo "🌐 Next Steps:"
echo "  1. Open: ${BASE_PATH}/index.html"
echo "  2. Explore: ${BASE_PATH}/compare.html"
echo "  3. Review: ${BASE_PATH}/PREVIEW.md"
echo ""
echo "Performance: Template-based approach with ${STYLE_VARIANTS}× speedup"
echo "========================================="
@@ -1,333 +0,0 @@
#!/bin/bash
# Update CLAUDE.md for modules with two strategies
# Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]
#   strategy:    single-layer|multi-layer
#   module_path: Path to the module directory
#   tool:        gemini|qwen|codex (default: gemini)
#   model:       Model name (optional, uses tool defaults)
#
# Default Models:
#   gemini: gemini-2.5-flash
#   qwen:   coder-model
#   codex:  gpt5-codex
#
# Strategies:
#   single-layer: Upward aggregation
#     - Read: Current directory code + child CLAUDE.md files
#     - Generate: Single ./CLAUDE.md in current directory
#     - Use: Large projects, incremental bottom-up updates
#
#   multi-layer: Downward distribution
#     - Read: All files in current and subdirectories
#     - Generate: CLAUDE.md for each directory containing files
#     - Use: Small projects, full documentation generation
#
# Features:
#   - Minimal prompts based on unified template
#   - Respects .gitignore patterns
#   - Path-focused processing (script only cares about paths)
#   - Template-driven generation

# Build exclusion filters from .gitignore
build_exclusion_filters() {
    local filters=""

    # Common system/cache directories to exclude
    local system_excludes=(
        ".git" "__pycache__" "node_modules" ".venv" "venv" "env"
        "dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
        "coverage" ".nyc_output" "logs" "tmp" "temp"
    )

    for exclude in "${system_excludes[@]}"; do
        filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
    done

    # Find and parse .gitignore (current dir first, then git root)
    local gitignore_file=""

    # Check current directory first
    if [ -f ".gitignore" ]; then
        gitignore_file=".gitignore"
    else
        # Try to find git root and check for .gitignore there
        local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
        if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
            gitignore_file="$git_root/.gitignore"
        fi
    fi

    # Parse .gitignore if found
    if [ -n "$gitignore_file" ]; then
        while IFS= read -r line; do
            # Skip empty lines and comments
            [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue

            # Remove trailing slash and whitespace
            line=$(echo "$line" | sed 's|/$||' | xargs)

            # Skip wildcard patterns (too complex for simple find)
            [[ "$line" =~ \* ]] && continue

            # Add to filters
            filters+=" -not -path '*/$line' -not -path '*/$line/*'"
        done < "$gitignore_file"
    fi

    echo "$filters"
}
# Scan directory structure and generate structured information
scan_directory_structure() {
    local target_path="$1"
    local strategy="$2"

    if [ ! -d "$target_path" ]; then
        echo "Directory not found: $target_path"
        return 1
    fi

    local exclusion_filters=$(build_exclusion_filters)
    local structure_info=""

    # Get basic directory info
    local dir_name=$(basename "$target_path")
    local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
    local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)

    structure_info+="Directory: $dir_name\n"
    structure_info+="Total files: $total_files\n"
    structure_info+="Total directories: $total_dirs\n\n"

    if [ "$strategy" = "multi-layer" ]; then
        # For multi-layer: show all subdirectories with file counts
        structure_info+="Subdirectories with files:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
                local rel_path=${dir#$target_path/}
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                if [ $file_count -gt 0 ]; then
                    structure_info+="  - $rel_path/ ($file_count files)\n"
                fi
            fi
        done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
    else
        # For single-layer: show direct children only
        structure_info+="Direct subdirectories:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ]; then
                local dir_name=$(basename "$dir")
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                local has_claude=$([ -f "$dir/CLAUDE.md" ] && echo " [has CLAUDE.md]" || echo "")
                structure_info+="  - $dir_name/ ($file_count files)$has_claude\n"
            fi
        done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
    fi

    # Show main file types in current directory
    structure_info+="\nCurrent directory files:\n"
    local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)

    structure_info+="  - Code files: $code_files\n"
    structure_info+="  - Config files: $config_files\n"
    structure_info+="  - Documentation: $doc_files\n"

    printf "%b" "$structure_info"
}
update_module_claude() {
    local strategy="$1"
    local module_path="$2"
    local tool="${3:-gemini}"
    local model="$4"

    # Validate parameters
    if [ -z "$strategy" ] || [ -z "$module_path" ]; then
        echo "❌ Error: Strategy and module path are required"
        echo "Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]"
        echo "Strategies: single-layer|multi-layer"
        return 1
    fi

    # Validate strategy
    if [ "$strategy" != "single-layer" ] && [ "$strategy" != "multi-layer" ]; then
        echo "❌ Error: Invalid strategy '$strategy'"
        echo "Valid strategies: single-layer, multi-layer"
        return 1
    fi

    if [ ! -d "$module_path" ]; then
        echo "❌ Error: Directory '$module_path' does not exist"
        return 1
    fi

    # Set default models if not specified
    if [ -z "$model" ]; then
        case "$tool" in
            gemini)
                model="gemini-2.5-flash"
                ;;
            qwen)
                model="coder-model"
                ;;
            codex)
                model="gpt5-codex"
                ;;
            *)
                model=""
                ;;
        esac
    fi

    # Build exclusion filters from .gitignore
    local exclusion_filters=$(build_exclusion_filters)

    # Check if directory has files (excluding gitignored paths)
    local file_count=$(eval "find \"$module_path\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
    if [ $file_count -eq 0 ]; then
        echo "⚠️ Skipping '$module_path' - no files found (after .gitignore filtering)"
        return 0
    fi

    # Use unified template for all modules
    local template_path="$HOME/.claude/workflows/cli-templates/prompts/memory/02-document-module-structure.txt"

    # Read template content directly
    local template_content=""
    if [ -f "$template_path" ]; then
        template_content=$(cat "$template_path")
        echo "  📋 Loaded template: $(wc -l < "$template_path") lines"
    else
        echo "  ⚠️ Template not found: $template_path"
        echo "  Using fallback template..."
        template_content="Create comprehensive CLAUDE.md documentation following standard structure with Purpose, Structure, Components, Dependencies, Integration, and Implementation sections."
    fi

    # Scan directory structure first
    echo "  🔍 Scanning directory structure..."
    local structure_info=$(scan_directory_structure "$module_path" "$strategy")

    # Prepare logging info
    local module_name=$(basename "$module_path")

    echo "⚡ Updating: $module_path"
    echo "  Strategy: $strategy | Tool: $tool | Model: $model | Files: $file_count"
    echo "  Template: $(basename "$template_path") ($(echo "$template_content" | wc -l) lines)"
    echo "  Structure: Scanned $(echo "$structure_info" | wc -l) lines of structure info"

    # Build minimal strategy-specific prompt with explicit paths and structure info
    local final_prompt=""

    if [ "$strategy" = "multi-layer" ]; then
        # multi-layer strategy: read all, generate for each directory
        final_prompt="Directory Structure Analysis:
$structure_info

Read: @**/*

Generate CLAUDE.md files:
- Primary: ./CLAUDE.md (current directory)
- Additional: CLAUDE.md in each subdirectory containing files

Template Guidelines:
$template_content

Instructions:
- Work bottom-up: deepest directories first
- Parent directories reference children
- Each CLAUDE.md file must be in its respective directory
- Follow the template guidelines above for consistent structure
- Use the structure analysis to understand directory hierarchy"
    else
        # single-layer strategy: read current + child CLAUDE.md, generate current only
        final_prompt="Directory Structure Analysis:
$structure_info

Read: @*/CLAUDE.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.md @*.json @*.yaml @*.yml

Generate single file: ./CLAUDE.md

Template Guidelines:
$template_content

Instructions:
- Create exactly one CLAUDE.md file in the current directory
- Reference child CLAUDE.md files, do not duplicate their content
- Follow the template guidelines above for consistent structure
- Use the structure analysis to understand the current directory context"
    fi

    # Execute update
    local start_time=$(date +%s)
    echo "  🔄 Starting update..."

    if cd "$module_path" 2>/dev/null; then
        local tool_result=0

        # Execute with selected tool
        # NOTE: Model parameter (-m) is placed AFTER the prompt
        case "$tool" in
            qwen)
                if [ "$model" = "coder-model" ]; then
                    # coder-model is the default, so -m is optional
                    qwen -p "$final_prompt" --yolo 2>&1
                else
                    qwen -p "$final_prompt" -m "$model" --yolo 2>&1
                fi
                tool_result=$?
                ;;
            codex)
                codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
                tool_result=$?
                ;;
            gemini)
                gemini -p "$final_prompt" -m "$model" --yolo 2>&1
                tool_result=$?
                ;;
            *)
                echo "  ⚠️ Unknown tool: $tool, defaulting to gemini"
                gemini -p "$final_prompt" -m "$model" --yolo 2>&1
                tool_result=$?
                ;;
        esac

        if [ $tool_result -eq 0 ]; then
            local end_time=$(date +%s)
            local duration=$((end_time - start_time))
            echo "  ✅ Completed in ${duration}s"
            cd - > /dev/null
            return 0
        else
            echo "  ❌ Update failed for $module_path"
            cd - > /dev/null
            return 1
        fi
    else
        echo "  ❌ Cannot access directory: $module_path"
        return 1
    fi
}

# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    # Show help if no arguments or help requested
    if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
        echo "Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]"
        echo ""
        echo "Strategies:"
        echo "  single-layer - Read current dir code + child CLAUDE.md, generate ./CLAUDE.md"
        echo "  multi-layer  - Read all files, generate CLAUDE.md for each directory"
        echo ""
        echo "Tools: gemini (default), qwen, codex"
        echo "Models: Use tool defaults if not specified"
        echo ""
        echo "Examples:"
        echo "  ./update_module_claude.sh single-layer ./src/auth"
        echo "  ./update_module_claude.sh multi-layer ./components gemini gemini-2.5-flash"
        exit 0
    fi

    update_module_claude "$@"
fi
@@ -1,13 +1,13 @@
 ---
 name: command-guide
-description: Workflow command guide for Claude DMS3 (78 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
+description: Workflow command guide for Claude Code Workflow (78 commands). Search/browse commands, get next-step recommendations, view documentation, report issues. Triggers "CCW-help", "CCW-issue", "ccw-help", "ccw-issue", "ccw"
 allowed-tools: Read, Grep, Glob, AskUserQuestion
 version: 5.8.0
 ---
 
 # Command Guide Skill
 
-Comprehensive command guide for Claude DMS3 workflow system covering 78 commands across 5 categories (workflow, cli, memory, task, general).
+Comprehensive command guide for Claude Code Workflow (CCW) system covering 78 commands across 5 categories (workflow, cli, memory, task, general).
 
 ## 🆕 What's New in v5.8.0
 
@@ -200,21 +200,21 @@ Comprehensive command guide for Claude DMS3 workflow system covering 78 commands
 
 **Complex Query** (CLI-assisted analysis):
 1. **Detect complexity indicators** (multi-command comparison, workflow analysis, best practices)
-2. **Design targeted analysis prompt** for gemini/qwen:
+2. **Design targeted analysis prompt** for gemini/qwen via CCW:
    - Frame user's question precisely
    - Specify required analysis depth
    - Request structured comparison/synthesis
 ```bash
-gemini -p "
+ccw cli -p "
 PURPOSE: Analyze command documentation to answer user query
-TASK: [extracted user question with context]
+TASK: • [extracted user question with context]
 MODE: analysis
 CONTEXT: @**/*
 EXPECTED: Comprehensive answer with examples and recommendations
 RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on practical usage | analysis=READ-ONLY
-" -m gemini-3-pro-preview-11-2025 --include-directories ~/.claude/skills/command-guide/reference
+" --tool gemini --cd ~/.claude/skills/command-guide/reference
 ```
-Note: Use absolute path `~/.claude/skills/command-guide/reference` for reference documentation access
+Note: Use `--cd` with absolute path `~/.claude/skills/command-guide/reference` for reference documentation access
 3. **Process and integrate CLI analysis**:
    - Extract key insights from CLI output
    - Add context-specific examples
@@ -385,4 +385,4 @@ This SKILL documentation is kept in sync with command implementations through a
 - 4 issue templates for standardized problem reporting
 - CLI-assisted complex query analysis with gemini/qwen integration
 
-**Maintainer**: Claude DMS3 Team
+**Maintainer**: CCW Team
@@ -112,7 +112,7 @@
 {
   "name": "tech-research",
   "command": "/memory:tech-research",
-  "description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
+  "description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
   "arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
   "category": "memory",
   "subcategory": null,
@@ -509,24 +509,13 @@
   "name": "start",
   "command": "/workflow:session:start",
   "description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
-  "arguments": "[--auto|--new] [optional: task description for new session]",
+  "arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
   "category": "workflow",
   "subcategory": "session",
   "usage_scenario": "general",
   "difficulty": "Intermediate",
   "file_path": "workflow/session/start.md"
 },
-{
-  "name": "workflow:status",
-  "command": "/workflow:status",
-  "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
-  "arguments": "[optional: --project|task-id|--validate|--dashboard]",
-  "category": "workflow",
-  "subcategory": null,
-  "usage_scenario": "session-management",
-  "difficulty": "Beginner",
-  "file_path": "workflow/status.md"
-},
 {
   "name": "tdd-plan",
   "command": "/workflow:tdd-plan",

@@ -133,7 +133,7 @@
 {
   "name": "tech-research",
   "command": "/memory:tech-research",
-  "description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
+  "description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
   "arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
   "category": "memory",
   "subcategory": null,
@@ -358,17 +358,6 @@
   "difficulty": "Intermediate",
   "file_path": "workflow/review.md"
 },
-{
-  "name": "workflow:status",
-  "command": "/workflow:status",
-  "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
-  "arguments": "[optional: --project|task-id|--validate|--dashboard]",
-  "category": "workflow",
-  "subcategory": null,
-  "usage_scenario": "session-management",
-  "difficulty": "Beginner",
-  "file_path": "workflow/status.md"
-},
 {
   "name": "tdd-plan",
   "command": "/workflow:tdd-plan",
@@ -597,7 +586,7 @@
   "name": "start",
   "command": "/workflow:session:start",
   "description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
-  "arguments": "[--auto|--new] [optional: task description for new session]",
+  "arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
   "category": "workflow",
   "subcategory": "session",
   "usage_scenario": "general",

@@ -36,7 +36,7 @@
 {
   "name": "tech-research",
   "command": "/memory:tech-research",
-  "description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
+  "description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
   "arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
   "category": "memory",
   "subcategory": null,
@@ -224,7 +224,7 @@
   "name": "start",
   "command": "/workflow:session:start",
   "description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
-  "arguments": "[--auto|--new] [optional: task description for new session]",
+  "arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
   "category": "workflow",
   "subcategory": "session",
   "usage_scenario": "general",
@@ -713,17 +713,6 @@
   "usage_scenario": "session-management",
   "difficulty": "Intermediate",
   "file_path": "workflow/session/resume.md"
 },
-{
-  "name": "workflow:status",
-  "command": "/workflow:status",
-  "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
-  "arguments": "[optional: --project|task-id|--validate|--dashboard]",
-  "category": "workflow",
-  "subcategory": null,
-  "usage_scenario": "session-management",
-  "difficulty": "Beginner",
-  "file_path": "workflow/status.md"
-}
 ],
 "testing": [

@@ -43,22 +43,11 @@
   "difficulty": "Intermediate",
   "file_path": "workflow/execute.md"
 },
-{
-  "name": "workflow:status",
-  "command": "/workflow:status",
-  "description": "Generate on-demand views for project overview and workflow tasks with optional task-id filtering for detailed view",
-  "arguments": "[optional: --project|task-id|--validate|--dashboard]",
-  "category": "workflow",
-  "subcategory": null,
-  "usage_scenario": "session-management",
-  "difficulty": "Beginner",
-  "file_path": "workflow/status.md"
-},
 {
   "name": "start",
   "command": "/workflow:session:start",
   "description": "Discover existing sessions or start new workflow session with intelligent session management and conflict detection",
-  "arguments": "[--auto|--new] [optional: task description for new session]",
+  "arguments": "[--type <workflow|review|tdd|test|docs>] [--auto|--new] [optional: task description for new session]",
   "category": "workflow",
   "subcategory": "session",
   "usage_scenario": "general",
@@ -16,11 +16,9 @@ description: |
 color: yellow
 ---
 
-You are a pure execution agent specialized in creating actionable implementation plans. You receive requirements and control flags from the command layer and execute planning tasks without complex decision-making logic.
-
 ## Overview
 
-**Agent Role**: Transform user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria.
+**Agent Role**: Pure execution agent that transforms user requirements and brainstorming artifacts into structured, executable implementation plans with quantified deliverables and measurable acceptance criteria. Receives requirements and control flags from the command layer and executes planning tasks without complex decision-making logic.
 
 **Core Capabilities**:
 - Load and synthesize context from multiple sources (session metadata, context packages, brainstorming artifacts)
@@ -33,7 +31,7 @@ You are a pure execution agent specialized in creating actionable implementation
 
 ---
 
-## 1. Execution Process
+## 1. Input & Execution
 
 ### 1.1 Input Processing
 
@@ -50,7 +48,7 @@ You are a pure execution agent specialized in creating actionable implementation
 - **Control flags**: DEEP_ANALYSIS_REQUIRED, etc.
 - **Task requirements**: Direct task description
 
-### 1.2 Two-Phase Execution Flow
+### 1.2 Execution Flow
 
 #### Phase 1: Context Loading & Assembly
 
@@ -88,6 +86,27 @@ You are a pure execution agent specialized in creating actionable implementation
 6. Assess task complexity (simple/medium/complex)
 ```
 
+**MCP Integration** (when `mcp_capabilities` available):
+
+```javascript
+// Exa Code Context (mcp_capabilities.exa_code = true)
+mcp__exa__get_code_context_exa(
+  query="TypeScript OAuth2 JWT authentication patterns",
+  tokensNum="dynamic"
+)
+
+// Integration in flow_control.pre_analysis
+{
+  "step": "local_codebase_exploration",
+  "action": "Explore codebase structure",
+  "commands": [
+    "bash(rg '^(function|class|interface).*[task_keyword]' --type ts -n --max-count 15)",
+    "bash(find . -name '*[task_keyword]*' -type f | grep -v node_modules | head -10)"
+  ],
+  "output_to": "codebase_structure"
+}
+```
+
 **Context Package Structure** (fields defined by context-search-agent):
 
 **Always Present**:
@@ -169,30 +188,6 @@ if (contextPackage.brainstorm_artifacts?.role_analyses?.length > 0) {
 5. Update session state for execution readiness
 ```
 
-### 1.3 MCP Integration Guidelines
-
-**Exa Code Context** (`mcp_capabilities.exa_code = true`):
-```javascript
-// Get best practices and examples
-mcp__exa__get_code_context_exa(
-  query="TypeScript OAuth2 JWT authentication patterns",
-  tokensNum="dynamic"
-)
-```
-
-**Integration in flow_control.pre_analysis**:
-```json
-{
-  "step": "local_codebase_exploration",
-  "action": "Explore codebase structure",
-  "commands": [
-    "bash(rg '^(function|class|interface).*[task_keyword]' --type ts -n --max-count 15)",
-    "bash(find . -name '*[task_keyword]*' -type f | grep -v node_modules | head -10)"
-  ],
-  "output_to": "codebase_structure"
-}
-```
-
 ---
 
 ## 2. Output Specifications
@@ -208,15 +203,69 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
   "id": "IMPL-N",
   "title": "Descriptive task name",
   "status": "pending|active|completed|blocked",
-  "context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
+  "context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json",
+  "cli_execution_id": "WFS-{session}-IMPL-N",
+  "cli_execution": {
+    "strategy": "new|resume|fork|merge_fork",
+    "resume_from": "parent-cli-id",
+    "merge_from": ["id1", "id2"]
+  }
 }
 ```
 
 **Field Descriptions**:
-- `id`: Task identifier (format: `IMPL-N`)
+- `id`: Task identifier
+  - Single module format: `IMPL-N` (e.g., IMPL-001, IMPL-002)
+  - Multi-module format: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1, IMPL-C1)
+    - Prefix: A, B, C... (assigned by module detection order)
+    - Sequence: 1, 2, 3... (per-module increment)
 - `title`: Descriptive task name summarizing the work
 - `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
 - `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
+- `cli_execution_id`: Unique CLI conversation ID (format: `{session_id}-{task_id}`)
+- `cli_execution`: CLI execution strategy based on task dependencies
+  - `strategy`: Execution pattern (`new`, `resume`, `fork`, `merge_fork`)
+  - `resume_from`: Parent task's cli_execution_id (for resume/fork)
+  - `merge_from`: Array of parent cli_execution_ids (for merge_fork)
+
+**CLI Execution Strategy Rules** (MANDATORY - apply to all tasks):
+
+| Dependency Pattern | Strategy | CLI Command Pattern |
+|--------------------|----------|---------------------|
+| No `depends_on` | `new` | `--id {cli_execution_id}` |
+| 1 parent, parent has 1 child | `resume` | `--resume {resume_from}` |
+| 1 parent, parent has N children | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
+| N parents | `merge_fork` | `--resume {merge_from.join(',')} --id {cli_execution_id}` |
+
+**Strategy Selection Algorithm**:
+```javascript
+function computeCliStrategy(task, allTasks) {
+  const deps = task.context?.depends_on || []
+  const childCount = allTasks.filter(t =>
+    t.context?.depends_on?.includes(task.id)
+  ).length
+
+  if (deps.length === 0) {
+    return { strategy: "new" }
+  } else if (deps.length === 1) {
+    const parentTask = allTasks.find(t => t.id === deps[0])
+    const parentChildCount = allTasks.filter(t =>
+      t.context?.depends_on?.includes(deps[0])
+    ).length
+
+    if (parentChildCount === 1) {
+      return { strategy: "resume", resume_from: parentTask.cli_execution_id }
+    } else {
+      return { strategy: "fork", resume_from: parentTask.cli_execution_id }
+    }
+  } else {
+    const mergeFrom = deps.map(depId =>
+      allTasks.find(t => t.id === depId).cli_execution_id
+    )
+    return { strategy: "merge_fork", merge_from: mergeFrom }
+  }
+}
+```
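
To make the table concrete, here is a minimal sketch of how the computed strategy could be rendered into the CLI flags from the table. `buildCliCommand` is a hypothetical helper, not part of the documented agent spec, and the quoting is deliberately naive for illustration.

```javascript
// Hypothetical helper: renders a task's computed cli_execution strategy
// into the ccw CLI flag patterns listed in the table above.
function buildCliCommand(task, prompt) {
  const { strategy, resume_from, merge_from } = task.cli_execution
  const id = task.cli_execution_id
  switch (strategy) {
    case "new":
      // Root task: start a fresh CLI conversation
      return `ccw cli -p '${prompt}' --id ${id}`
    case "resume":
      // Sole child: continue the parent conversation in place
      return `ccw cli -p '${prompt}' --resume ${resume_from}`
    case "fork":
      // Parent has several children: each fork gets its own ID
      return `ccw cli -p '${prompt}' --resume ${resume_from} --id ${id}`
    case "merge_fork":
      // Multiple parents merged into one new conversation
      return `ccw cli -p '${prompt}' --resume ${merge_from.join(",")} --id ${id}`
    default:
      throw new Error(`Unknown CLI execution strategy: ${strategy}`)
  }
}
```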
 
 #### Meta Object
 
@@ -225,7 +274,14 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
 "meta": {
   "type": "feature|bugfix|refactor|test-gen|test-fix|docs",
   "agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
-  "execution_group": "parallel-abc123|null"
+  "execution_group": "parallel-abc123|null",
+  "module": "frontend|backend|shared|null",
+  "execution_config": {
+    "method": "agent|hybrid|cli",
+    "cli_tool": "codex|gemini|qwen|auto",
+    "enable_resume": true,
+    "previous_cli_id": "string|null"
+  }
 }
 }
 ```
@@ -234,6 +290,12 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
 - `type`: Task category - `feature` (new functionality), `bugfix` (fix defects), `refactor` (restructure code), `test-gen` (generate tests), `test-fix` (fix failing tests), `docs` (documentation)
 - `agent`: Assigned agent for execution
 - `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
+- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
+- `execution_config`: CLI execution settings (from userConfig in task-generate-agent)
+  - `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
+  - `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, or `auto`
+  - `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
+  - `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)
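
For illustration, a fully populated meta object for a hypothetical backend task might look like the sketch below; every value is a made-up example, not output from a real run.

```javascript
// Illustrative only - all values are invented examples
const meta = {
  type: "feature",
  agent: "@code-developer",
  execution_group: null,            // sequential task, no parallel group
  module: "backend",                // multi-module project
  execution_config: {
    method: "hybrid",               // agent orchestrates, CLI runs heavy steps
    cli_tool: "codex",
    enable_resume: true,
    previous_cli_id: null           // filled in at runtime once a parent step ran
  }
}
```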
 
 **Test Task Extensions** (for type="test-gen" or type="test-fix"):
 
@@ -391,7 +453,7 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
 // Pattern: Project structure analysis
 {
   "step": "analyze_project_architecture",
-  "commands": ["bash(~/.claude/scripts/get_modules_by_depth.sh)"],
+  "commands": ["bash(ccw tool exec get_modules_by_depth '{}')"],
   "output_to": "project_architecture"
 },
 
@@ -408,14 +470,14 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
 // Pattern: Gemini CLI deep analysis
 {
   "step": "gemini_analyze_[aspect]",
-  "command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
+  "command": "ccw cli -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY' --tool gemini --mode analysis --cd [path]",
   "output_to": "analysis_result"
 },
 
 // Pattern: Qwen CLI analysis (fallback/alternative)
 {
   "step": "qwen_analyze_[aspect]",
-  "command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
+  "command": "ccw cli -p '[similar to gemini pattern]' --tool qwen --mode analysis --cd [path]",
   "output_to": "analysis_result"
 },
 
@@ -456,7 +518,7 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
 4. **Command Composition Patterns**:
    - **Single command**: `bash([simple_search])`
    - **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
-   - **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
+   - **CLI analysis**: `ccw cli -p '[prompt]' --tool gemini --mode analysis --cd [path]`
    - **MCP integration**: `mcp__[tool]__[function]([params])`
 
 **Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
@@ -478,11 +540,12 @@ The `implementation_approach` supports **two execution modes** based on the pres
 - Specified command executes the step directly
 - Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
 - **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
-- **Required fields**: Same as default mode **PLUS** `command`
-- **Command patterns**:
-  - `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
-  - `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
-  - `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
+- **Required fields**: Same as default mode **PLUS** `command`, `resume_from` (optional)
+- **Command patterns** (with resume support):
+  - `ccw cli -p '[prompt]' --tool codex --mode write --cd [path]`
+  - `ccw cli -p '[prompt]' --resume ${previousCliId} --tool codex --mode write` (resume from previous)
+  - `ccw cli -p '[prompt]' --tool gemini --mode write --cd [path]` (write mode)
+- **Resume mechanism**: When step depends on previous CLI execution, include `--resume` with previous execution ID
 
 **Semantic CLI Tool Selection**:
 
@@ -499,12 +562,12 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
 **Task-Based Selection** (when no explicit user preference):
 - **Implementation/coding**: Codex preferred for autonomous development
 - **Analysis/exploration**: Gemini preferred for large context analysis
-- **Documentation**: Gemini/Qwen with write mode (`--approval-mode yolo`)
+- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
 - **Testing**: Depends on complexity - simple=agent, complex=Codex
 
 **Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
 - Agent orchestrates task execution
-- When step has `command` field, agent executes it via Bash
+- When step has `command` field, agent executes it via CCW CLI
 - When step has no `command` field, agent implements directly
 - This maintains agent control while leveraging CLI tool power
|
||||
|
||||
@@ -558,11 +621,26 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
|
||||
"step": 3,
|
||||
"title": "Execute implementation using CLI tool",
|
||||
"description": "Use Codex/Gemini for complex autonomous execution",
|
||||
"command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
|
||||
"command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
|
||||
"modification_points": ["[Same as default mode]"],
|
||||
"logic_flow": ["[Same as default mode]"],
|
||||
"depends_on": [1, 2],
|
||||
"output": "cli_implementation"
|
||||
"output": "cli_implementation",
|
||||
"cli_output_id": "step3_cli_id" // Store execution ID for resume
|
||||
},
|
||||
|
||||
// === CLI MODE with Resume: Continue from previous CLI execution ===
|
||||
{
|
||||
"step": 4,
|
||||
"title": "Continue implementation with context",
|
||||
"description": "Resume from previous step with accumulated context",
|
||||
"command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
|
||||
"resume_from": "step3_cli_id", // Reference previous step's CLI ID
|
||||
"modification_points": ["[Continue from step 3]"],
|
||||
"logic_flow": ["[Build on previous output]"],
|
||||
"depends_on": [3],
|
||||
"output": "continued_implementation",
|
||||
"cli_output_id": "step4_cli_id"
|
||||
}
|
||||
]
|
||||
```
|
||||
@@ -604,10 +682,42 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
|
||||
- Analysis results (technical approach, architecture decisions)
|
||||
- Brainstorming artifacts (role analyses, guidance specifications)
|
||||
|
||||
**Multi-Module Format** (when modules detected):
|
||||
|
||||
When multiple modules are detected (frontend/backend, etc.), organize IMPL_PLAN.md by module:
|
||||
|
||||
```markdown
|
||||
# Implementation Plan
|
||||
|
||||
## Module A: Frontend (N tasks)
|
||||
### IMPL-A1: [Task Title]
|
||||
[Task details...]
|
||||
|
||||
### IMPL-A2: [Task Title]
|
||||
[Task details...]
|
||||
|
||||
## Module B: Backend (N tasks)
|
||||
### IMPL-B1: [Task Title]
|
||||
[Task details...]
|
||||
|
||||
### IMPL-B2: [Task Title]
|
||||
[Task details...]
|
||||
|
||||
## Cross-Module Dependencies
|
||||
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
|
||||
- IMPL-A2 → IMPL-B2 (UI state depends on Backend service)
|
||||
```
|
||||
|
||||
**Cross-Module Dependency Notation**:
|
||||
- During parallel planning, use `CROSS::{module}::{pattern}` format
|
||||
- Example: `depends_on: ["CROSS::B::api-endpoint"]`
|
||||
- Integration phase resolves to actual task IDs: `CROSS::B::api → IMPL-B1`
|
||||
|
||||
### 2.3 TODO_LIST.md Structure
|
||||
|
||||
Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
|
||||
|
||||
**Single Module Format**:
|
||||
```markdown
|
||||
# Tasks: {Session Topic}
|
||||
|
||||
@@ -621,30 +731,54 @@ Generate at `.workflow/active/{session_id}/TODO_LIST.md`:
|
||||
- `- [x]` = Completed task
|
||||
```
|
||||
|
||||
**Multi-Module Format** (hierarchical by module):
|
||||
```markdown
|
||||
# Tasks: {Session Topic}
|
||||
|
||||
## Module A (Frontend)
|
||||
- [ ] **IMPL-A1**: [Task Title] → [📋](./.task/IMPL-A1.json)
|
||||
- [ ] **IMPL-A2**: [Task Title] → [📋](./.task/IMPL-A2.json)
|
||||
|
||||
## Module B (Backend)
|
||||
- [ ] **IMPL-B1**: [Task Title] → [📋](./.task/IMPL-B1.json)
|
||||
- [ ] **IMPL-B2**: [Task Title] → [📋](./.task/IMPL-B2.json)
|
||||
|
||||
## Cross-Module Dependencies
|
||||
- IMPL-A1 → IMPL-B1 (Frontend depends on Backend API)
|
||||
|
||||
## Status Legend
|
||||
- `- [ ]` = Pending task
|
||||
- `- [x]` = Completed task
|
||||
```
|
||||
|
||||
**Linking Rules**:
|
||||
- Todo items → task JSON: `[📋](./.task/IMPL-XXX.json)`
|
||||
- Completed tasks → summaries: `[✅](./.summaries/IMPL-XXX-summary.md)`
|
||||
- Consistent ID schemes: IMPL-XXX
|
||||
- Consistent ID schemes: `IMPL-N` (single) or `IMPL-{prefix}{seq}` (multi-module)
|
||||
|
||||
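For example, a completed multi-module entry that follows the linking rules above would carry both links (task ID and title are illustrative):

```markdown
- [x] **IMPL-A1**: Build login form → [📋](./.task/IMPL-A1.json) [✅](./.summaries/IMPL-A1-summary.md)
```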
### 2.4 Complexity-Based Structure Selection
### 2.4 Complexity & Structure Selection

Use `analysis_results.complexity` or task count to determine structure:

**Simple Tasks** (≤5 tasks):
- Flat structure: IMPL_PLAN.md + TODO_LIST.md + task JSONs
- All tasks at same level
**Single Module Mode**:
- **Simple Tasks** (≤5 tasks): Flat structure
- **Medium Tasks** (6-12 tasks): Flat structure
- **Complex Tasks** (>12 tasks): Re-scope required (maximum 12 tasks hard limit)

**Medium Tasks** (6-12 tasks):
- Flat structure: IMPL_PLAN.md + TODO_LIST.md + task JSONs
- All tasks at same level
**Multi-Module Mode** (N+1 parallel planning):
- **Per-module limit**: ≤9 tasks per module
- **Total limit**: Sum of all module tasks ≤27 (3 modules × 9 tasks)
- **Task ID format**: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- **Structure**: Hierarchical by module in IMPL_PLAN.md and TODO_LIST.md

**Complex Tasks** (>12 tasks):
- **Re-scope required**: Maximum 12 tasks hard limit
- If analysis_results contains >12 tasks, consolidate or request re-scoping
**Multi-Module Detection Triggers** (sketched below):
- Explicit frontend/backend separation (`src/frontend`, `src/backend`)
- Monorepo structure (`packages/*`, `apps/*`)
- Context-package dependency clustering (2+ distinct module groups)

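A minimal sketch of those triggers as a check (the `glob` helper and exact patterns are assumptions for illustration):

```javascript
// Returns true when any multi-module trigger above fires; `glob` is a hypothetical
// helper that returns the paths matching a pattern.
function isMultiModule(glob) {
  const splitRepo = glob("src/frontend").length > 0 && glob("src/backend").length > 0;
  const monorepo = glob("packages/*").length > 0 || glob("apps/*").length > 0;
  return splitRepo || monorepo; // dependency clustering (2+ module groups) would be a third signal
}
```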
---

## 3. Quality & Standards
## 3. Quality Standards

### 3.1 Quantification Requirements (MANDATORY)

@@ -670,47 +804,48 @@ Use `analysis_results.complexity` or task count to determine structure:
- [ ] Each implementation step has its own acceptance criteria

**Examples**:
- ✅ GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- ❌ BAD: `"Implement new commands"`
- ✅ GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- ❌ BAD: `"All commands implemented successfully"`
- GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
- BAD: `"Implement new commands"`
- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
- BAD: `"All commands implemented successfully"`

### 3.2 Planning Principles
### 3.2 Planning & Organization Standards

**Planning Principles**:
- Each stage produces working, testable code
- Clear success criteria for each deliverable
- Dependencies clearly identified between stages
- Incremental progress over big bangs

### 3.3 File Organization

**File Organization**:
- Session naming: `WFS-[topic-slug]`
- Task IDs: IMPL-XXX (flat structure only)
- Directory structure: flat task organization

### 3.4 Document Standards
- Task IDs:
- Single module: `IMPL-N` (e.g., IMPL-001, IMPL-002)
- Multi-module: `IMPL-{prefix}{seq}` (e.g., IMPL-A1, IMPL-B1)
- Directory structure: flat task organization (all tasks in `.task/`)

**Document Standards**:
- Proper linking between documents
- Consistent navigation and references

---

## 4. Key Reminders
### 3.3 Guidelines Checklist

**ALWAYS:**
- **Apply Quantification Requirements**: All requirements, acceptance criteria, and modification points MUST include explicit counts and enumerations
- **Load IMPL_PLAN template**: Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt) before generating IMPL_PLAN.md
- **Use provided context package**: Extract all information from structured context
- **Respect memory-first rule**: Use provided content (already loaded from memory/file)
- **Follow 6-field schema**: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Map artifacts**: Use artifacts_inventory to populate task.context.artifacts array
- **Add MCP integration**: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- **Validate task count**: Maximum 12 tasks hard limit, request re-scope if exceeded
- **Use session paths**: Construct all paths using provided session_id
- **Link documents properly**: Use correct linking format (📋 for JSON, ✅ for summaries)
- **Run validation checklist**: Verify all quantification requirements before finalizing task JSONs
- **Apply 举一反三 principle** (extrapolate from one example to many): Adapt pre-analysis patterns to task-specific needs dynamically
- **Follow template validation**: Complete IMPL_PLAN.md template validation checklist before finalization
- Apply Quantification Requirements to all requirements, acceptance criteria, and modification points
- Load IMPL_PLAN template: `Read(~/.claude/workflows/cli-templates/prompts/workflow/impl-plan-template.txt)` before generating IMPL_PLAN.md
- Use provided context package: Extract all information from structured context
- Respect memory-first rule: Use provided content (already loaded from memory/file)
- Follow 6-field schema: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
- **Assign CLI execution IDs**: Every task MUST have `cli_execution_id` (format: `{session_id}-{task_id}`)
- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
- Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
- Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
- Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
- Use session paths: Construct all paths using provided session_id
- Link documents properly: Use correct linking format (📋 for JSON, ✅ for summaries)
- Run validation checklist: Verify all quantification requirements before finalizing task JSONs
- Apply 举一反三 principle (extrapolate from one example to many): Adapt pre-analysis patterns to task-specific needs dynamically
- Follow template validation: Complete IMPL_PLAN.md template validation checklist before finalization

**NEVER:**
- Load files directly (use provided context package instead)

@@ -67,7 +67,7 @@ Score = 0

**1. Project Structure**:
```bash
~/.claude/scripts/get_modules_by_depth.sh
ccw tool exec get_modules_by_depth '{}'
```

**2. Content Search**:
@@ -100,7 +100,7 @@ CONTEXT: @**/*
# Specific patterns
CONTEXT: @CLAUDE.md @src/**/* @*.ts

# Cross-directory (requires --include-directories)
# Cross-directory (requires --includeDirs)
CONTEXT: @**/* @../shared/**/* @../types/**/*
```

@@ -134,7 +134,7 @@ RULES: $(cat {selected_template}) | {constraints}
```
analyze|plan → gemini (qwen fallback) + mode=analysis
execute (simple|medium) → gemini (qwen fallback) + mode=write
execute (complex) → codex + mode=auto
execute (complex) → codex + mode=write
discuss → multi (gemini + codex parallel)
```

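Expressed as code, that mapping might look like the following sketch (function and field names are illustrative, not part of the CCW API):

```javascript
// Intent → tool/mode mapping from the table above (names are illustrative)
function selectCli(intent, complexity) {
  if (intent === "analyze" || intent === "plan") return { tool: "gemini", fallback: "qwen", mode: "analysis" };
  if (intent === "execute") {
    return complexity === "complex"
      ? { tool: "codex", mode: "write" }
      : { tool: "gemini", fallback: "qwen", mode: "write" };
  }
  if (intent === "discuss") return { tools: ["gemini", "codex"], parallel: true }; // multi-tool
  return { tool: "gemini", mode: "analysis" }; // conservative default (assumption)
}
```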
@@ -144,43 +144,40 @@ discuss → multi (gemini + codex parallel)
- Codex: `gpt-5` (default), `gpt5-codex` (large context)
- **Position**: `-m` after prompt, before flags

### Command Templates
### Command Templates (CCW Unified CLI)

**Gemini/Qwen (Analysis)**:
```bash
cd {dir} && gemini -p "
ccw cli -p "
PURPOSE: {goal}
TASK: {task}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {output}
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
" -m gemini-2.5-pro
" --tool gemini --mode analysis --cd {dir}

# Qwen fallback: Replace 'gemini' with 'qwen'
# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
```

**Gemini/Qwen (Write)**:
```bash
cd {dir} && gemini -p "..." --approval-mode yolo
ccw cli -p "..." --tool gemini --mode write --cd {dir}
```

**Codex (Auto)**:
**Codex (Write)**:
```bash
codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access

# Resume: Add 'resume --last' after prompt
codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
ccw cli -p "..." --tool codex --mode write --cd {dir}
```

**Cross-Directory** (Gemini/Qwen):
```bash
cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared
ccw cli -p "CONTEXT: @**/* @../shared/**/*" --tool gemini --mode analysis --cd src/auth --includeDirs ../shared
```

**Directory Scope**:
- `@` only references current directory + subdirectories
- External dirs: MUST use `--include-directories` + explicit CONTEXT reference
- External dirs: MUST use `--includeDirs` + explicit CONTEXT reference

**Timeout**: Simple 20min | Medium 40min | Complex 60min (Codex ×1.5)

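The timeout rule reads as a small helper (a sketch; minute values taken straight from the line above):

```javascript
// Simple 20min | Medium 40min | Complex 60min, with the ×1.5 Codex factor
function timeoutMinutes(complexity, tool) {
  const base = { simple: 20, medium: 40, complex: 60 }[complexity];
  return tool === "codex" ? Math.round(base * 1.5) : base;
}
```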
@@ -67,7 +67,7 @@ Phase 4: Output Generation

```bash
# Project structure
~/.claude/scripts/get_modules_by_depth.sh
ccw tool exec get_modules_by_depth '{}'

# Pattern discovery (adapt based on language)
rg "^export (class|interface|function) " --type ts -n
@@ -78,14 +78,14 @@ rg "^import .* from " -n | head -30
### Gemini Semantic Analysis (deep-scan, dependency-map)

```bash
cd {dir} && gemini -p "
ccw cli -p "
PURPOSE: {from prompt}
TASK: {from prompt}
MODE: analysis
CONTEXT: @**/*
EXPECTED: {from prompt}
RULES: {from prompt, if template specified} | analysis=READ-ONLY
"
" --tool gemini --mode analysis --cd {dir}
```

**Fallback Chain**: Gemini → Qwen → Codex → Bash-only

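A minimal sketch of that fallback chain (the `runTool` callback is a stand-in for the actual CLI invocation):

```javascript
// Try each tool in order; fall back to Bash-only mode if all CLIs fail
async function runWithFallback(prompt, runTool) {
  for (const tool of ["gemini", "qwen", "codex"]) {
    try { return await runTool(tool, prompt); } catch (err) { /* try the next tool */ }
  }
  return { degraded: true, note: "Bash-only mode" }; // last resort per the chain above
}
```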
@@ -1,140 +1,117 @@
---
name: cli-lite-planning-agent
description: |
Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans. Used by lite-plan workflow for Medium/High complexity tasks.
Generic planning agent for lite-plan and lite-fix workflows. Generates structured plan JSON based on provided schema reference.

Core capabilities:
- Task decomposition (1-10 tasks with IDs: T1, T2...)
- Dependency analysis (depends_on references)
- Flow control (parallel/sequential phases)
- Multi-angle exploration context integration
- Schema-driven output (plan-json-schema or fix-plan-json-schema)
- Task decomposition with dependency analysis
- CLI execution ID assignment for fork/merge strategies
- Multi-angle context integration (explorations or diagnoses)
color: cyan
---

You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate planObject for downstream execution.
You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.

## Output Schema

**Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`

**planObject Structure**:
```javascript
{
summary: string, // 2-3 sentence overview
approach: string, // High-level strategy
tasks: [TaskObject], // 1-10 structured tasks
flow_control: { // Execution phases
execution_order: [{ phase, tasks, type }],
exit_conditions: { success, failure }
},
focus_paths: string[], // Affected files (aggregated)
estimated_time: string,
recommended_execution: "Agent" | "Codex",
complexity: "Low" | "Medium" | "High",
_metadata: { timestamp, source, planning_mode, exploration_angles, duration_seconds }
}
```

**TaskObject Structure**:
```javascript
{
id: string, // T1, T2, T3...
title: string, // Action verb + target
file: string, // Target file path
action: string, // Create|Update|Implement|Refactor|Add|Delete|Configure|Test|Fix
description: string, // What to implement (1-2 sentences)
modification_points: [{ // Precise changes (optional)
file: string,
target: string, // function:lineRange
change: string
}],
implementation: string[], // 2-7 actionable steps
reference: { // Pattern guidance (optional)
pattern: string,
files: string[],
examples: string
},
acceptance: string[], // 1-4 quantified criteria
depends_on: string[] // Task IDs: ["T1", "T2"]
}
```

## Input Context

```javascript
{
task_description: string,
explorationsContext: { [angle]: ExplorationResult } | null,
explorationAngles: string[],
// Required
task_description: string, // Task or bug description
schema_path: string, // Schema reference path (plan-json-schema or fix-plan-json-schema)
session: { id, folder, artifacts },

// Context (one of these based on workflow)
explorationsContext: { [angle]: ExplorationResult } | null, // From lite-plan
diagnosesContext: { [angle]: DiagnosisResult } | null, // From lite-fix
contextAngles: string[], // Exploration or diagnosis angles

// Optional
clarificationContext: { [question]: answer } | null,
complexity: "Low" | "Medium" | "High",
cli_config: { tool, template, timeout, fallback },
session: { id, folder, artifacts }
complexity: "Low" | "Medium" | "High", // For lite-plan
severity: "Low" | "Medium" | "High" | "Critical", // For lite-fix
cli_config: { tool, template, timeout, fallback }
}
```

## Schema-Driven Output

**CRITICAL**: Read the schema reference first to determine output structure:
- `plan-json-schema.json` → Implementation plan with `approach`, `complexity`
- `fix-plan-json-schema.json` → Fix plan with `root_cause`, `severity`, `risk_level`

```javascript
// Step 1: Always read schema first
const schema = Bash(`cat ${schema_path}`)

// Step 2: Generate plan conforming to schema
const planObject = generatePlanFromSchema(schema, context)
```

## Execution Flow

```
Phase 1: CLI Execution
├─ Aggregate multi-angle exploration findings
Phase 1: Schema & Context Loading
├─ Read schema reference (plan-json-schema or fix-plan-json-schema)
├─ Aggregate multi-angle context (explorations or diagnoses)
└─ Determine output structure from schema

Phase 2: CLI Execution
├─ Construct CLI command with planning template
├─ Execute Gemini (fallback: Qwen → degraded mode)
└─ Timeout: 60 minutes

Phase 2: Parsing & Enhancement
├─ Parse CLI output sections (Summary, Approach, Tasks, Flow Control)
Phase 3: Parsing & Enhancement
├─ Parse CLI output sections
├─ Validate and enhance task objects
└─ Infer missing fields from exploration context
└─ Infer missing fields from context

Phase 3: planObject Generation
├─ Build planObject from parsed results
├─ Generate flow_control from depends_on if not provided
├─ Aggregate focus_paths from all tasks
└─ Return to orchestrator (lite-plan)
Phase 4: planObject Generation
├─ Build planObject conforming to schema
├─ Assign CLI execution IDs and strategies
├─ Generate flow_control from depends_on
└─ Return to orchestrator
```

## CLI Command Template

```bash
cd {project_root} && {cli_tool} -p "
PURPOSE: Generate implementation plan for {complexity} task
ccw cli -p "
PURPOSE: Generate plan for {task_description}
TASK:
• Analyze: {task_description}
• Break down into 1-10 tasks with: id, title, file, action, description, modification_points, implementation, reference, acceptance, depends_on
• Identify parallel vs sequential execution phases
• Analyze task/bug description and context
• Break down into tasks following schema structure
• Identify dependencies and execution phases
MODE: analysis
CONTEXT: @**/* | Memory: {exploration_summary}
CONTEXT: @**/* | Memory: {context_summary}
EXPECTED:
## Implementation Summary
## Summary
[overview]

## High-Level Approach
[strategy]

## Task Breakdown
### T1: [Title]
**File**: [path]
### T1: [Title] (or FIX1 for fix-plan)
**Scope**: [module/feature path]
**Action**: [type]
**Description**: [what]
**Modification Points**: - [file]: [target] - [change]
**Implementation**: 1. [step]
**Reference**: - Pattern: [name] - Files: [paths] - Examples: [guidance]
**Acceptance**: - [quantified criterion]
**Acceptance/Verification**: - [quantified criterion]
**Depends On**: []

## Flow Control
**Execution Order**: - Phase parallel-1: [T1, T2] (independent)
**Exit Conditions**: - Success: [condition] - Failure: [condition]

## Time Estimate
**Total**: [time]

RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
- Acceptance must be quantified (counts, method names, metrics)
- Dependencies use task IDs (T1, T2)
- Follow schema structure from {schema_path}
- Acceptance/verification must be quantified
- Dependencies use task IDs
- analysis=READ-ONLY
"
" --tool {cli_tool} --mode analysis --cd {project_root}
```

## Core Functions
@@ -279,6 +256,51 @@ function inferFile(task, ctx) {
}
```

### CLI Execution ID Assignment (MANDATORY)

```javascript
function assignCliExecutionIds(tasks, sessionId) {
const taskMap = new Map(tasks.map(t => [t.id, t]))
const childCount = new Map()

// Count children for each task
tasks.forEach(task => {
(task.depends_on || []).forEach(depId => {
childCount.set(depId, (childCount.get(depId) || 0) + 1)
})
})

tasks.forEach(task => {
task.cli_execution_id = `${sessionId}-${task.id}`
const deps = task.depends_on || []

if (deps.length === 0) {
task.cli_execution = { strategy: "new" }
} else if (deps.length === 1) {
const parent = taskMap.get(deps[0])
const parentChildCount = childCount.get(deps[0]) || 0
task.cli_execution = parentChildCount === 1
? { strategy: "resume", resume_from: parent.cli_execution_id }
: { strategy: "fork", resume_from: parent.cli_execution_id }
} else {
task.cli_execution = {
strategy: "merge_fork",
merge_from: deps.map(depId => taskMap.get(depId).cli_execution_id)
}
}
})
return tasks
}
```

**Strategy Rules** (worked example below):
| depends_on | Parent Children | Strategy | CLI Command |
|------------|-----------------|----------|-------------|
| [] | - | `new` | `--id {cli_execution_id}` |
| [T1] | 1 | `resume` | `--resume {resume_from}` |
| [T1] | >1 | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
| [T1,T2] | - | `merge_fork` | `--resume {ids.join(',')} --id {cli_execution_id}` |

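For instance, applying the function above to a small graph (task IDs and session ID are hypothetical) yields one strategy per row of the table:

```javascript
// T2 and T3 both depend on T1 (two children → fork); T4 merges T2 and T3
const tasks = assignCliExecutionIds([
  { id: "T1", depends_on: [] },
  { id: "T2", depends_on: ["T1"] },
  { id: "T3", depends_on: ["T1"] },
  { id: "T4", depends_on: ["T2", "T3"] }
], "WFS-demo")
// T1 → "new"; T2, T3 → "fork" from WFS-demo-T1;
// T4 → "merge_fork" with merge_from: ["WFS-demo-T2", "WFS-demo-T3"]
```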
### Flow Control Inference

```javascript
@@ -303,21 +325,44 @@ function inferFlowControl(tasks) {
### planObject Generation

```javascript
function generatePlanObject(parsed, enrichedContext, input) {
function generatePlanObject(parsed, enrichedContext, input, schemaType) {
const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext)
assignCliExecutionIds(tasks, input.session.id) // MANDATORY: Assign CLI execution IDs
const flow_control = parsed.flow_control?.execution_order?.length > 0 ? parsed.flow_control : inferFlowControl(tasks)
const focus_paths = [...new Set(tasks.flatMap(t => [t.file, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
const focus_paths = [...new Set(tasks.flatMap(t => [t.file || t.scope, ...t.modification_points.map(m => m.file)]).filter(Boolean))]

return {
summary: parsed.summary || `Implementation plan for: ${input.task_description.slice(0, 100)}`,
approach: parsed.approach || "Step-by-step implementation",
// Base fields (common to both schemas)
const base = {
summary: parsed.summary || `Plan for: ${input.task_description.slice(0, 100)}`,
tasks,
flow_control,
focus_paths,
estimated_time: parsed.time_estimate || `${tasks.length * 30} minutes`,
recommended_execution: input.complexity === "Low" ? "Agent" : "Codex",
complexity: input.complexity,
_metadata: { timestamp: new Date().toISOString(), source: "cli-lite-planning-agent", planning_mode: "agent-based", exploration_angles: input.explorationAngles || [], duration_seconds: Math.round((Date.now() - startTime) / 1000) }
recommended_execution: (input.complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
_metadata: {
timestamp: new Date().toISOString(),
source: "cli-lite-planning-agent",
planning_mode: "agent-based",
context_angles: input.contextAngles || [],
duration_seconds: Math.round((Date.now() - startTime) / 1000)
}
}

// Schema-specific fields
if (schemaType === 'fix-plan') {
return {
...base,
root_cause: parsed.root_cause || "Root cause from diagnosis",
strategy: parsed.strategy || "comprehensive_fix",
severity: input.severity || "Medium",
risk_level: parsed.risk_level || "medium"
}
} else {
return {
...base,
approach: parsed.approach || "Step-by-step implementation",
complexity: input.complexity || "Medium"
}
}
}
```
@@ -383,9 +428,12 @@ function validateTask(task) {
## Key Reminders

**ALWAYS**:
- Generate task IDs (T1, T2, T3...)
- **Read schema first** to determine output structure
- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
- Include depends_on (even if empty [])
- Quantify acceptance criteria
- **Assign cli_execution_id** (`{sessionId}-{taskId}`)
- **Compute cli_execution strategy** based on depends_on
- Quantify acceptance/verification criteria
- Generate flow_control from dependencies
- Handle CLI errors with fallback chain

@@ -394,3 +442,5 @@ function validateTask(task) {
- Use vague acceptance criteria
- Create circular dependencies
- Skip task validation
- **Skip CLI execution ID assignment**
- **Ignore schema structure**

@@ -107,7 +107,7 @@ Phase 3: Task JSON Generation

**Template-Based Command Construction with Test Layer Awareness**:
```bash
cd {project_root} && {cli_tool} -p "
ccw cli -p "
PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
TASK:
• Review {failed_tests.length} {test_type} test failures: [{test_names}]
@@ -134,7 +134,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
- Consider previous iteration failures
- Validate fix doesn't introduce new vulnerabilities
- analysis=READ-ONLY
" {timeout_flag}
" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
```

**Layer-Specific Guidance Injection**:
@@ -527,9 +527,9 @@ See: `.process/iteration-{iteration}-cli-output.txt`
1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
2. **Execute CLI**:
```bash
gemini -p "PURPOSE: Analyze integration test failure...
ccw cli -p "PURPOSE: Analyze integration test failure...
TASK: Examine component interactions, data flow, interface contracts...
RULES: Analyze full call stack and data flow across components"
RULES: Analyze full call stack and data flow across components" --tool gemini --mode analysis
```
3. **Parse Output**: Extract the RCA, fix-suggestion (修复建议), and verification-suggestion (验证建议) sections
4. **Generate Task JSON** (IMPL-fix-1.json):

@@ -34,10 +34,11 @@ You are a code execution specialist focused on implementing high-quality, produc
- **context-package.json** (when available in workflow tasks)

**Context Package**:
`context-package.json` provides artifact paths - extract dynamically using `jq`:
`context-package.json` provides artifact paths - read using Read tool or ccw session:
```bash
# Get role analysis paths from context package
jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
# Get context package content from session using Read tool
Read(.workflow/active/${SESSION_ID}/.process/context-package.json)
# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
```

**Pre-Analysis: Smart Tech Stack Loading**:
@@ -121,9 +122,9 @@ When task JSON contains `flow_control.implementation_approach` array:
- If `command` field present, execute it; otherwise use agent capabilities

**CLI Command Execution (CLI Execute Mode)**:
When step contains `command` field with Codex CLI, execute via Bash tool. For Codex resume:
- First task (`depends_on: []`): `codex -C [path] --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
- Subsequent tasks (has `depends_on`): Add `resume --last` flag to maintain session context
When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session

**Test-Driven Development**:
- Write tests first (red → green → refactor)

@@ -109,7 +109,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm

3. **load_session_metadata**
- Action: Load session metadata
- Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
- Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
- Output: session_metadata
```

@@ -119,17 +119,6 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
- No dependency management
- Used for temporary context preparation

### NOT Handled by This Agent

**JSON format** (used by code-developer, test-fix-agent):
```json
"flow_control": {
"pre_analysis": [...],
"implementation_approach": [...]
}
```

This complete JSON format is stored in `.task/IMPL-*.json` files and handled by implementation agents, not conceptual-planning-agent.

### Role-Specific Analysis Dimensions

@@ -146,14 +135,14 @@ This complete JSON format is stored in `.task/IMPL-*.json` files and handled by

### Output Integration

**Gemini Analysis Integration**: Pattern-based analysis results are integrated into the single role's output:
- Enhanced `analysis.md` with codebase insights and architectural patterns
**Gemini Analysis Integration**: Pattern-based analysis results are integrated into role output documents:
- Enhanced analysis documents with codebase insights and architectural patterns
- Role-specific technical recommendations based on existing conventions
- Pattern-based best practices from actual code examination
- Realistic feasibility assessments based on current implementation

**Codex Analysis Integration**: Autonomous analysis results provide comprehensive insights:
- Enhanced `analysis.md` with autonomous development recommendations
- Enhanced analysis documents with autonomous development recommendations
- Role-specific strategy based on intelligent system understanding
- Autonomous development approaches and implementation guidance
- Self-guided optimization and integration recommendations
@@ -166,7 +155,7 @@ When called, you receive:
- **User Context**: Specific requirements, constraints, and expectations from user discussion
- **Output Location**: Directory path for generated analysis files
- **Role Hint** (optional): Suggested role or role selection guidance
- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- **context-package.json** (CCW Workflow): Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- **ASSIGNED_ROLE** (optional): Specific role assignment
- **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions

@@ -229,26 +218,23 @@ Generate documents according to loaded role template specifications:

**Output Location**: `.workflow/WFS-[session]/.brainstorming/[assigned-role]/`

**Required Files**:
- **analysis.md**: Main role perspective analysis incorporating user context and role template
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
**Output Files**:
- **analysis.md**: Index document with overview (optionally with `@` references to sub-documents)
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
- **Auto-split if large**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files: analysis.md, analysis-1.md, analysis-2.md)
- **Content**: Includes both analysis AND recommendations sections within analysis files
- **[role-deliverables]/**: Directory for specialized role outputs as defined in planning role template (optional)
- **analysis-{slug}.md**: Section content documents (slug from section heading: lowercase, hyphens)
- Maximum 5 sub-documents (merge related sections if needed)
- **Content**: Analysis AND recommendations sections

**File Structure Example**:
```
.workflow/WFS-[session]/.brainstorming/system-architect/
├── analysis.md # Main system architecture analysis with recommendations
├── analysis-1.md # (Optional) Continuation if content >800 lines
└── deliverables/ # (Optional) Additional role-specific outputs
├── technical-architecture.md # System design specifications
├── technology-stack.md # Technology selection rationale
└── scalability-plan.md # Scaling strategy
├── analysis.md # Index with overview + @references
├── analysis-architecture-assessment.md # Section content
├── analysis-technology-evaluation.md # Section content
├── analysis-integration-strategy.md # Section content
└── analysis-recommendations.md # Section content (max 5 sub-docs total)

NOTE: ALL brainstorming output files MUST start with 'analysis' prefix
FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefixed files
NOTE: ALL files MUST start with 'analysis' prefix. Max 5 sub-documents.
```

## Role-Specific Planning Process
@@ -268,14 +254,10 @@ FORBIDDEN: recommendations.md, recommendations-*.md, or any non-'analysis' prefi
- **Validate Against Template**: Ensure analysis meets role template requirements and standards

### 3. Brainstorming Documentation Phase
- **Create analysis.md**: Generate comprehensive role perspective analysis in designated output directory
- **File Naming**: MUST start with `analysis` prefix (e.g., `analysis.md`, `analysis-1.md`, `analysis-2.md`)
- **FORBIDDEN**: Never create `recommendations.md` or any file not starting with `analysis` prefix
- **Content**: Include both analysis AND recommendations sections within analysis files
- **Auto-split**: If content >800 lines, split to `analysis-1.md`, `analysis-2.md` (max 3 files total)
- **Generate Role Deliverables**: Create specialized outputs as defined in planning role template (optional)
- **Create analysis.md**: Main document with overview (optionally with `@` references)
- **Create sub-documents**: `analysis-{slug}.md` for major sections (max 5)
- **Validate Output Structure**: Ensure all files saved to correct `.brainstorming/[role]/` directory
- **Naming Validation**: Verify NO files with `recommendations` prefix exist
- **Naming Validation**: Verify ALL files start with `analysis` prefix
- **Quality Review**: Ensure outputs meet role template standards and user requirements

## Role-Specific Analysis Framework
@@ -324,5 +306,3 @@ When analysis is complete, ensure:
- **Relevance**: Directly addresses user's specified requirements
- **Actionability**: Provides concrete next steps and recommendations

### Windows Path Format Guidelines
- **Quick Ref**: `C:\Users` → MCP: `C:\\Users` | Bash: `/c/Users` or `C:/Users`

@@ -31,7 +31,7 @@ You are a context discovery specialist focused on gathering relevant project inf
### 1. Reference Documentation (Project Standards)
**Tools**:
- `Read()` - Load CLAUDE.md, README.md, architecture docs
- `Bash(~/.claude/scripts/get_modules_by_depth.sh)` - Project structure
- `Bash(ccw tool exec get_modules_by_depth '{}')` - Project structure
- `Glob()` - Find documentation files

**Use**: Phase 0 foundation setup
@@ -44,19 +44,19 @@ You are a context discovery specialist focused on gathering relevant project inf
**Use**: Unfamiliar APIs/libraries/patterns

### 3. Existing Code Discovery
**Primary (Code-Index MCP)**:
- `mcp__code-index__set_project_path()` - Initialize index
- `mcp__code-index__find_files(pattern)` - File pattern matching
- `mcp__code-index__search_code_advanced()` - Content search
- `mcp__code-index__get_file_summary()` - File structure analysis
- `mcp__code-index__refresh_index()` - Update index
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="init", path=".")` - Initialize index for directory
- `mcp__ccw-tools__codex_lens(action="search", query="pattern", path=".")` - Content search (requires query)
- `mcp__ccw-tools__codex_lens(action="search_files", query="pattern")` - File name search, returns paths only (requires query)
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Extract all symbols from file (no query, returns functions/classes/variables)
- `mcp__ccw-tools__codex_lens(action="update", files=[...])` - Update index for specific files

**Fallback (CLI)**:
- `rg` (ripgrep) - Fast content search
- `find` - File discovery
- `Grep` - Pattern matching

**Priority**: Code-Index MCP > ripgrep > find > grep
**Priority**: CodexLens MCP > ripgrep > find > grep

## Simplified Execution Process (3 Phases)

@@ -77,12 +77,11 @@ if (file_exists(contextPackagePath)) {

**1.2 Foundation Setup**:
```javascript
// 1. Initialize Code Index (if available)
mcp__code-index__set_project_path(process.cwd())
mcp__code-index__refresh_index()
// 1. Initialize CodexLens (if available)
mcp__ccw-tools__codex_lens({ action: "init", path: "." })

// 2. Project Structure
bash(~/.claude/scripts/get_modules_by_depth.sh)
bash(ccw tool exec get_modules_by_depth '{}')

// 3. Load Documentation (if not in memory)
if (!memory.has("CLAUDE.md")) Read(CLAUDE.md)
@@ -212,18 +211,18 @@ mcp__exa__web_search_exa({

**Layer 1: File Pattern Discovery**
```javascript
// Primary: Code-Index MCP
const files = mcp__code-index__find_files("*{keyword}*")
// Primary: CodexLens MCP
const files = mcp__ccw-tools__codex_lens({ action: "search_files", query: "*{keyword}*" })
// Fallback: find . -iname "*{keyword}*" -type f
```

**Layer 2: Content Search**
```javascript
// Primary: Code-Index MCP
mcp__code-index__search_code_advanced({
pattern: "{keyword}",
file_pattern: "*.ts",
output_mode: "files_with_matches"
// Primary: CodexLens MCP
mcp__ccw-tools__codex_lens({
action: "search",
query: "{keyword}",
path: "."
})
// Fallback: rg "{keyword}" -t ts --files-with-matches
```
@@ -231,11 +230,10 @@ mcp__code-index__search_code_advanced({
**Layer 3: Semantic Patterns**
```javascript
// Find definitions (class, interface, function)
mcp__code-index__search_code_advanced({
pattern: "^(export )?(class|interface|type|function) .*{keyword}",
regex: true,
output_mode: "content",
context_lines: 2
mcp__ccw-tools__codex_lens({
action: "search",
query: "^(export )?(class|interface|type|function) .*{keyword}",
path: "."
})
```

@@ -243,21 +241,22 @@ mcp__code-index__search_code_advanced({
```javascript
// Get file summaries for imports/exports
for (const file of discovered_files) {
const summary = mcp__code-index__get_file_summary(file)
// summary: {imports, functions, classes, line_count}
const summary = mcp__ccw-tools__codex_lens({ action: "symbol", file: file })
// summary: {symbols: [{name, type, line}]}
}
```

**Layer 5: Config & Tests**
```javascript
// Config files
mcp__code-index__find_files("*.config.*")
mcp__code-index__find_files("package.json")
mcp__ccw-tools__codex_lens({ action: "search_files", query: "*.config.*" })
mcp__ccw-tools__codex_lens({ action: "search_files", query: "package.json" })

// Tests
mcp__code-index__search_code_advanced({
pattern: "(describe|it|test).*{keyword}",
file_pattern: "*.{test,spec}.*"
mcp__ccw-tools__codex_lens({
action: "search",
query: "(describe|it|test).*{keyword}",
path: "."
})
```

@@ -449,7 +448,12 @@ Calculate risk level based on:
{
"path": "system-architect/analysis.md",
"type": "primary",
"content": "# System Architecture Analysis\n\n## Overview\n..."
"content": "# System Architecture Analysis\n\n## Overview\n@analysis-architecture.md\n@analysis-recommendations.md"
},
{
"path": "system-architect/analysis-architecture.md",
"type": "supplementary",
"content": "# Architecture Assessment\n\n..."
}
]
}
@@ -555,14 +559,14 @@ Output: .workflow/session/{session}/.process/context-package.json
- Expose sensitive data (credentials, keys)
- Exceed file limits (50 total)
- Include binaries/generated files
- Use ripgrep if code-index available
- Use ripgrep if CodexLens available

**ALWAYS**:
- Initialize code-index in Phase 0
- Initialize CodexLens in Phase 0
- Execute get_modules_by_depth.sh
- Load CLAUDE.md/README.md (unless in memory)
- Execute all 3 discovery tracks
- Use code-index MCP as primary
- Use CodexLens MCP as primary
- Fallback to ripgrep only when needed
- Use Exa for unfamiliar APIs
- Apply multi-factor scoring

@@ -61,9 +61,9 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut

**Step 2** (CLI execution):
- Agent substitutes [target_folders] into command
- Agent executes CLI command via Bash tool:
- Agent executes CLI command via CCW:
```bash
bash(cd src/modules && gemini --approval-mode yolo -p "
ccw cli -p "
PURPOSE: Generate module documentation
TASK: Create API.md and README.md for each module
MODE: write
@@ -71,7 +71,7 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
./src/modules/api|code|code:3|dirs:0
EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
")
" --tool gemini --mode write --cd src/modules
```

4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
{
"step": "analyze_module_structure",
"action": "Deep analysis of module structure and API",
"command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
"command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
"output_to": "module_analysis",
"on_error": "fail"
}

@@ -8,7 +8,7 @@ You are a documentation update coordinator for complex projects. Orchestrate par

## Core Mission

Execute depth-parallel updates for all modules using `~/.claude/scripts/update_module_claude.sh`. **Every module path must be processed**.
Execute depth-parallel updates for all modules using `ccw tool exec update_module_claude`. **Every module path must be processed**.

## Input Context

@@ -42,12 +42,12 @@ TodoWrite([
# 3. Launch parallel jobs (max 4)

# Depth 5 example (Layer 3 - use multi-layer):
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/analysis" "gemini" &
~/.claude/scripts/update_module_claude.sh "multi-layer" "./.claude/workflows/cli-templates/prompts/development" "gemini" &
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/analysis","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"multi-layer","path":"./.claude/workflows/cli-templates/prompts/development","tool":"gemini"}' &

# Depth 1 example (Layer 2 - use single-layer):
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/auth" "gemini" &
~/.claude/scripts/update_module_claude.sh "single-layer" "./src/api" "gemini" &
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/auth","tool":"gemini"}' &
ccw tool exec update_module_claude '{"strategy":"single-layer","path":"./src/api","tool":"gemini"}' &
# ... up to 4 concurrent jobs

# 4. Wait for all depth jobs to complete

@@ -36,10 +36,10 @@ You are a test context discovery specialist focused on gathering test coverage i
**Use**: Phase 1 source context loading

### 2. Test Coverage Discovery
**Primary (Code-Index MCP)**:
- `mcp__code-index__find_files(pattern)` - Find test files (*.test.*, *.spec.*)
- `mcp__code-index__search_code_advanced()` - Search test patterns
- `mcp__code-index__get_file_summary()` - Analyze test structure
**Primary (CCW CodexLens MCP)**:
- `mcp__ccw-tools__codex_lens(action="search_files", query="*.test.*")` - Find test files
- `mcp__ccw-tools__codex_lens(action="search", query="pattern")` - Search test patterns
- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Analyze test structure

**Fallback (CLI)**:
- `rg` (ripgrep) - Fast test pattern search
@@ -120,9 +120,10 @@ for (const summary_path of summaries) {

**2.1 Existing Test Discovery**:
```javascript
// Method 1: Code-Index MCP (preferred)
const test_files = mcp__code-index__find_files({
patterns: ["*.test.*", "*.spec.*", "*test_*.py", "*_test.go"]
// Method 1: CodexLens MCP (preferred)
const test_files = mcp__ccw-tools__codex_lens({
action: "search_files",
query: "*.test.* OR *.spec.* OR test_*.py OR *_test.go"
});

// Method 2: Fallback CLI

@@ -83,17 +83,18 @@ When task JSON contains implementation_approach array:
- L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
- L2 (Integration): `tests/integration/`, `*.integration.test.*`
- L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
- **context-package.json** (CCW Workflow): Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
- Identify test commands from project configuration

```bash
# Detect test framework and multi-layered commands
if [ -f "package.json" ]; then
# Extract layer-specific test commands
LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
# Extract layer-specific test commands using Read tool or jq
PKG_JSON=$(cat package.json)
LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
LINT_CMD="ruff check . || flake8 ."
UNIT_CMD="pytest tests/unit/"

@@ -191,7 +191,7 @@ target/
### Step 2: Workspace Analysis (MANDATORY FIRST)
```bash
# Analyze workspace structure
bash(~/.claude/scripts/get_modules_by_depth.sh json)
bash(ccw tool exec get_modules_by_depth '{"format":"json"}')
```

### Step 3: Technology Detection

@@ -101,10 +101,10 @@ src/ (depth 1) → SINGLE STRATEGY
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});

// Get module structure with classification
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});

// OR with path parameter
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list | ~/.claude/scripts/classify-folders.sh", run_in_background: false});
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}' | ccw tool exec classify_folders '{}'", run_in_background: false});
```

**Parse output** `depth:N|path:<PATH>|type:<code|navigation>|...` to extract module paths, types, and count.
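A minimal parser for that output line format might look like this sketch (field names taken from the pattern above):

```javascript
// Parse "depth:N|path:<PATH>|type:<code|navigation>|..." into an object
function parseModuleLine(line) {
  const fields = {};
  for (const part of line.split("|")) {
    const idx = part.indexOf(":");
    if (idx > 0) fields[part.slice(0, idx)] = part.slice(idx + 1);
  }
  return { depth: Number(fields.depth), path: fields.path, type: fields.type };
}
```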
@@ -200,7 +200,7 @@ for (let layer of [3, 2, 1]) {
let strategy = module.depth >= 3 ? "full" : "single";
for (let tool of tool_order) {
Bash({
command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "${strategy}" "." "${project_name}" "${tool}"`,
command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"${strategy}","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -263,7 +263,7 @@ MODULES:

TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}

EXECUTION SCRIPT: ~/.claude/scripts/generate_module_docs.sh
EXECUTION SCRIPT: ccw tool exec generate_module_docs
- Accepts strategy parameter: full | single
- Accepts folder type detection: code | navigation
- Tool execution via direct CLI commands (gemini/qwen/codex)
@@ -273,7 +273,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
Bash({
command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "{{strategy}}" "." "{{project_name}}" "${tool}"`,
command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"{{strategy}}","sourcePath":".","projectName":"{{project_name}}","tool":"${tool}"}'`,
run_in_background: false
})
exit_code=$?
@@ -322,7 +322,7 @@ let project_root = get_project_root();
report("Generating project README.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-readme" "." "${project_name}" "${tool}"`,
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-readme","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -335,7 +335,7 @@ for (let tool of tool_order) {
report("Generating ARCHITECTURE.md and EXAMPLES.md...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "project-architecture" "." "${project_name}" "${tool}"`,
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"project-architecture","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {
@@ -350,7 +350,7 @@ if (bash_result.stdout.includes("API_FOUND")) {
report("Generating HTTP API documentation...");
for (let tool of tool_order) {
Bash({
command: `cd ${project_root} && ~/.claude/scripts/generate_module_docs.sh "http-api" "." "${project_name}" "${tool}"`,
command: `cd ${project_root} && ccw tool exec generate_module_docs '{"strategy":"http-api","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
run_in_background: false
});
if (bash_result.exit_code === 0) {

@@ -51,7 +51,7 @@ Orchestrates context-aware documentation generation/update for changed modules u
Bash({command: "pwd && basename \"$(pwd)\" && git rev-parse --show-toplevel 2>/dev/null || pwd", run_in_background: false});

// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});

// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});

@@ -123,7 +123,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
    return async () => {
      for (let tool of tool_order) {
        Bash({
          command: `cd ${module.path} && ~/.claude/scripts/generate_module_docs.sh "single" "." "${project_name}" "${tool}"`,
          command: `cd ${module.path} && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"${project_name}","tool":"${tool}"}'`,
          run_in_background: false
        });
        if (bash_result.exit_code === 0) {

@@ -207,21 +207,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_1}}"`,
     command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_1}}"}'`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} docs generated with {{tool_1}}", proceed to next module
   → Failure: Try tool 2
2. Try tool 2:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_2}}"`,
     command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_2}}"}'`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} docs generated with {{tool_2}}", proceed to next module
   → Failure: Try tool 3
3. Try tool 3:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/generate_module_docs.sh "single" "." "{{project_name}}" "{{tool_3}}"`,
     command: `cd "{{module_path}}" && ccw tool exec generate_module_docs '{"strategy":"single","sourcePath":".","projectName":"{{project_name}}","tool":"{{tool_3}}"}'`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} docs generated with {{tool_3}}", proceed to next module
@@ -64,12 +64,17 @@ Lightweight planner that analyzes project structure, decomposes documentation wo

```bash
# Get target path, project name, and root
bash(pwd && basename "$(pwd)" && git rev-parse --show-toplevel 2>/dev/null || pwd && date +%Y%m%d-%H%M%S)
```

# Create session directories (replace timestamp)
bash(mkdir -p .workflow/active/WFS-docs-{timestamp}/.{task,process,summaries})
```javascript
// Create docs session (type: docs)
SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}-docs-{timestamp}\"")
// Parse output to get sessionId
```
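A sketch of that sessionId extraction, assuming the session-start command echoes an ID of the form `WFS-...` in its output (the output format is an assumption, not a documented contract):

```javascript
// Hypothetical sketch: pull the new session ID out of the SlashCommand output.
const match = slash_output.match(/WFS-[\w-]+/);  // slash_output: captured command output (assumed)
const sessionId = match ? match[0] : null;
if (!sessionId) report("Session creation failed - no session ID found in output.");
```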
# Create workflow-session.json (replace values)
bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentation","status":"planning","timestamp":"2024-01-20T14:30:22+08:00","path":".","target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' | jq '.' > .workflow/active/WFS-docs-{timestamp}/workflow-session.json)
```bash
# Update workflow-session.json with docs-specific fields
ccw session {sessionId} write workflow-session.json '{"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}'
```

### Phase 2: Analyze Structure

@@ -80,10 +85,10 @@ bash(echo '{"session_id":"WFS-docs-{timestamp}","project":"{project} documentati

```bash
# 1. Run folder analysis
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}')

# 2. Get top-level directories (first 2 path levels)
bash(~/.claude/scripts/get_modules_by_depth.sh | ~/.claude/scripts/classify-folders.sh | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)
bash(ccw tool exec get_modules_by_depth '{}' | ccw tool exec classify_folders '{}' | awk -F'|' '{print $1}' | sed 's|^\./||' | awk -F'/' '{if(NF>=2) print $1"/"$2; else if(NF==1) print $1}' | sort -u)

# 3. Find existing docs (if directory exists)
bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${project_name} -type f -name "*.md" ! -path "*/README.md" ! -path "*/ARCHITECTURE.md" ! -path "*/EXAMPLES.md" ! -path "*/api/*" 2>/dev/null; fi)

@@ -131,7 +136,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj

```bash
# Count existing docs from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
ccw session WFS-docs-{timestamp} read .process/doc-planning-data.json --raw | jq '.existing_docs.file_list | length'
# Or read entire process file and parse
```

**Data Processing**: Use the count result, then use the **Edit tool** to update `workflow-session.json`:
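For illustration, a minimal sketch of the equivalent update using the `ccw session write` form shown earlier (the `existing_docs_count` field name is hypothetical):

```javascript
// Hypothetical illustration: persist the existing-docs count into the session file.
// Assumes sessionId and existing_count were captured in the previous steps.
Bash({
  command: `ccw session ${sessionId} write workflow-session.json '{"existing_docs_count":${existing_count}}'`,
  run_in_background: false
});
```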
@@ -185,10 +191,10 @@ Large Projects (single dir >10 docs):

```bash
# 1. Get top-level directories from doc-planning-data.json
bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
ccw session WFS-docs-{timestamp} read .process/doc-planning-data.json --raw | jq -r '.top_level_dirs[]'

# 2. Get mode from workflow-session.json
bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
ccw session WFS-docs-{timestamp} read workflow-session.json --raw | jq -r '.mode // "full"'

# 3. Check for HTTP API
bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")

@@ -217,7 +223,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo

**Task ID Calculation**:
```bash
group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
group_count=$(ccw session WFS-docs-{timestamp} read .process/doc-planning-data.json --raw | jq '.groups.count')
readme_id=$((group_count + 1))  # Next ID after groups
arch_id=$((group_count + 2))
api_id=$((group_count + 3))

@@ -230,12 +236,12 @@ api_id=$((group_count + 3))

| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|------|-------------|-----------|----------|---------------|------------|
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
| **CLI** | true | implementation_approach | write | --approval-mode yolo | Execute CLI commands, validate output |
| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |

**Command Patterns**:
- Gemini/Qwen: `cd dir && gemini -p "..."`
- CLI Mode: `cd dir && gemini --approval-mode yolo -p "..."`
- Codex: `codex -C dir --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`

**Generation Process**:
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json

@@ -280,8 +286,8 @@ api_id=$((group_count + 3))
      "step": "load_precomputed_data",
      "action": "Load Phase 2 analysis and extract group directories",
      "commands": [
        "bash(cat ${session_dir}/.process/doc-planning-data.json)",
        "bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
        "ccw session ${session_id} read .process/doc-planning-data.json",
        "ccw session ${session_id} read .process/doc-planning-data.json --raw | jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories'"
      ],
      "output_to": "phase2_context",
      "note": "Single JSON file contains all Phase 2 analysis results"

@@ -326,7 +332,7 @@ api_id=$((group_count + 3))
    {
      "step": 2,
      "title": "Batch generate documentation via CLI",
      "command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
      "command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
      "depends_on": [1],
      "output": "generated_docs"
    }

@@ -358,7 +364,7 @@ api_id=$((group_count + 3))
    },
    {
      "step": "analyze_project",
      "command": "bash(gemini \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
      "command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
      "output_to": "project_outline"
    }
  ],

@@ -398,7 +404,7 @@ api_id=$((group_count + 3))
  "pre_analysis": [
    {"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
    {"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
    {"step": "analyze_architecture", "command": "bash(gemini \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\")", "output_to": "arch_examples_outline"}
    {"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
  ],
  "implementation_approach": [
    {

@@ -435,7 +441,7 @@ api_id=$((group_count + 3))
  "pre_analysis": [
    {"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
    {"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
    {"step": "analyze_api", "command": "bash(gemini \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\")", "output_to": "api_outline"}
    {"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
  ],
  "implementation_approach": [
    {

@@ -596,7 +602,7 @@ api_id=$((group_count + 3))

| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|------|---------------|----------|---------------|------------|
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
| **CLI (--cli-execute)** | implementation_approach | write | --approval-mode yolo | Executes CLI commands, validates output |
| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |

**Execution Flow**:
- **Phase 2**: Unified analysis once, results in `.process/`
@@ -5,7 +5,7 @@ argument-hint: "[--tool gemini|qwen] \"task context description\""
allowed-tools: Task(*), Bash(*)
examples:
  - /memory:load "Develop user authentication on top of the current frontend"
  - /memory:load --tool qwen -p "Refactor the payment module API"
  - /memory:load --tool qwen "Refactor the payment module API"
---

# Memory Load Command (/memory:load)

@@ -39,7 +39,7 @@ The command fully delegates to **universal-executor agent**, which autonomously:
1. **Analyzes Project Structure**: Executes `get_modules_by_depth.sh` to understand architecture
2. **Loads Documentation**: Reads CLAUDE.md, README.md and other key docs
3. **Extracts Keywords**: Derives core keywords from task description
4. **Discovers Files**: Uses MCP code-index or rg/find to locate relevant files
4. **Discovers Files**: Uses CodexLens MCP or rg/find to locate relevant files
5. **CLI Deep Analysis**: Executes Gemini/Qwen CLI for deep context analysis
6. **Generates Content Package**: Returns structured JSON core content package

@@ -109,7 +109,7 @@ Task(

1. **Project Structure**
   \`\`\`bash
   bash(~/.claude/scripts/get_modules_by_depth.sh)
   bash(ccw tool exec get_modules_by_depth '{}')
   \`\`\`

2. **Core Documentation**

@@ -136,7 +136,7 @@ Task(
Execute Gemini/Qwen CLI for deep analysis (saves main-thread tokens):

\`\`\`bash
cd . && ${tool} -p "
ccw cli -p "
PURPOSE: Extract project core context for task: ${task_description}
TASK: Analyze project architecture, tech stack, key patterns, relevant files
MODE: analysis

@@ -147,7 +147,7 @@ RULES:
- Identify key architecture patterns and technical constraints
- Extract integration points and development standards
- Output concise, structured format
"
" --tool ${tool} --mode analysis
\`\`\`

### Step 4: Generate Core Content Package

@@ -212,7 +212,7 @@ Before returning:
### Example 2: Using Qwen Tool

```bash
/memory:load --tool qwen -p "Refactor the payment module API"
/memory:load --tool qwen "Refactor the payment module API"
```

Agent uses Qwen CLI for analysis, returns the same structured package.
@@ -1,477 +1,314 @@
---
name: tech-research
description: 3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)
description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---

# Tech Stack Research SKILL Generator
# Tech Stack Rules Generator

## Overview

**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates ALL work to agent. Agent produces files directly.
**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.

**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
**Key Difference from SKILL Memory**:
- **SKILL**: Manual loading via `Skill(command: "tech-name")`
- **Rules**: Automatic loading when working with matching file paths

**Execution Paths**:
- **Full Path**: All 3 phases (no existing SKILL OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing SKILL found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated
**Output Structure**:
```
.claude/rules/tech/{tech-stack}/
├── core.md          # paths: **/*.{ext} - Core principles
├── patterns.md      # paths: src/**/*.{ext} - Implementation patterns
├── testing.md       # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md        # paths: *.config.* - Configuration rules
├── api.md           # paths: **/api/**/* - API rules (backend only)
├── components.md    # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json    # Generation metadata
```

**Agent Responsibility**:
- Agent does ALL the work: context reading, Exa research, content synthesis, file writing
- Orchestrator only provides context paths and waits for completion
**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`

---

## Core Rules

1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Context Path Delegation**: Pass session directory or tech stack name to agent, let agent do discovery
3. **Agent Produces Files**: Agent directly writes all module files, orchestrator does NOT parse agent output
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
5. **No User Prompts**: Never ask user questions or wait for input between phases
6. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
7. **Lightweight Index**: Phase 3 only generates SKILL.md index by reading existing files
1. **Start Immediately**: First action is TodoWrite initialization
2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
3. **Template-Driven**: Agent reads templates before generating content
4. **Agent Produces Files**: Agent writes all rule files directly
5. **No Manual Loading**: Rules auto-activate when Claude works with matching files

---

## 3-Phase Execution

### Phase 1: Prepare Context Paths
### Phase 1: Prepare Context & Detect Tech Stack

**Goal**: Detect input mode, prepare context paths for agent, check existing SKILL
**Goal**: Detect input mode, extract tech stack info, determine file extensions

**Input Mode Detection**:
```bash
# Get input parameter
input="$1"

# Detect mode
if [[ "$input" == WFS-* ]]; then
  MODE="session"
  SESSION_ID="$input"
  CONTEXT_PATH=".workflow/${SESSION_ID}"
  # Read workflow-session.json to extract tech stack
else
  MODE="direct"
  TECH_STACK_NAME="$input"
  CONTEXT_PATH="$input"  # Pass tech stack name as context
fi
```

**Check Existing SKILL**:
```bash
# For session mode, peek at session to get tech stack name
if [[ "$MODE" == "session" ]]; then
  bash(test -f ".workflow/${SESSION_ID}/workflow-session.json")
  Read(.workflow/${SESSION_ID}/workflow-session.json)
  # Extract tech_stack_name (minimal extraction)
fi
**Tech Stack Analysis**:
```javascript
// Decompose composite tech stacks
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]

# Normalize and check
const TECH_EXTENSIONS = {
  "typescript": "{ts,tsx}",
  "javascript": "{js,jsx}",
  "python": "py",
  "rust": "rs",
  "go": "go",
  "java": "java",
  "csharp": "cs",
  "ruby": "rb",
  "php": "php"
};

const FRAMEWORK_TYPE = {
  "react": "frontend",
  "vue": "frontend",
  "angular": "frontend",
  "nextjs": "fullstack",
  "nuxt": "fullstack",
  "fastapi": "backend",
  "express": "backend",
  "django": "backend",
  "rails": "backend"
};
```
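The maps above drive the detection step. A minimal sketch of the decomposition, assuming the composite name is hyphen-delimited and its first component is the primary language (the helper name is illustrative):

```javascript
// Illustrative sketch: decompose "typescript-react-nextjs" into analysis variables.
function analyzeTechStack(name) {
  const components = name.toLowerCase().split("-");    // ["typescript", "react", "nextjs"]
  const primaryLang = components[0];                   // assumption: first component is the language
  const fileExt = TECH_EXTENSIONS[primaryLang] || "*"; // fall back to a wildcard if unknown
  const frameworkType = components
    .map(c => FRAMEWORK_TYPE[c])
    .find(t => t !== undefined) || "library";          // default when no framework matches
  return { components, primaryLang, fileExt, frameworkType };
}
```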
**Check Existing Rules**:
```bash
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
bash(test -d ".claude/skills/${normalized_name}" && echo "exists" || echo "not_exists")
bash(find ".claude/skills/${normalized_name}" -name "*.md" 2>/dev/null | wc -l || echo 0)
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```

**Skip Decision**:
```javascript
if (existing_files > 0 && !regenerate_flag) {
  SKIP_GENERATION = true
  message = "Tech stack SKILL already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
  bash(rm -rf ".claude/skills/${normalized_name}")
  SKIP_GENERATION = false
  message = "Regenerating tech stack SKILL from scratch."
} else {
  SKIP_GENERATION = false
  message = "No existing SKILL found, generating new tech stack documentation."
}
```
- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
- If `--regenerate` → Delete existing and regenerate

**Output Variables**:
- `MODE`: `session` or `direct`
- `SESSION_ID`: Session ID (if session mode)
- `CONTEXT_PATH`: Path to session directory OR tech stack name
- `TECH_STACK_NAME`: Extracted or provided tech stack name
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean

**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress
**TodoWrite**: Mark phase 1 completed

---

### Phase 2: Agent Produces All Files
### Phase 2: Agent Produces Path-Conditional Rules

**Skip Condition**: Skipped if `SKIP_GENERATION = true`

**Goal**: Delegate EVERYTHING to agent - context reading, Exa research, content synthesis, and file writing

**Agent Task Specification**:
**Goal**: Delegate to agent for Exa research and rule file generation

**Template Files**:
```
Task(
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt   # Agent instructions
├── rule-core.txt                 # Core principles template
├── rule-patterns.txt             # Implementation patterns template
├── rule-testing.txt              # Testing rules template
├── rule-config.txt               # Configuration rules template
├── rule-api.txt                  # API rules template (backend)
└── rule-components.txt           # Component rules template (frontend)
```

**Agent Task**:

```javascript
Task({
  subagent_type: "general-purpose",
  description: "Generate tech stack SKILL: {CONTEXT_PATH}",
  prompt: "
Generate a complete tech stack SKILL package with Exa research.
  description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
  prompt: `
You are generating path-conditional rules for Claude Code.

**Context Provided**:
- Mode: {MODE}
- Context Path: {CONTEXT_PATH}
## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/

**Templates Available**:
- Module Format: ~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt
- SKILL Index: ~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt
## Instructions

**Your Responsibilities**:
Read the agent prompt template for detailed instructions:
$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)

1. **Extract Tech Stack Information**:
## Execution Steps

   IF MODE == 'session':
   - Read `.workflow/active/{session_id}/workflow-session.json`
   - Read `.workflow/active/{session_id}/.process/context-package.json`
   - Extract tech_stack: {language, frameworks, libraries}
   - Build tech stack name: \"{language}-{framework1}-{framework2}\"
   - Example: \"typescript-react-nextjs\"
1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion

   IF MODE == 'direct':
   - Tech stack name = CONTEXT_PATH
   - Parse composite: split by '-' delimiter
   - Example: \"typescript-react-nextjs\" → [\"typescript\", \"react\", \"nextjs\"]
## Variable Substitutions

2. **Execute Exa Research** (4-6 parallel queries):

   Base Queries (always execute):
   - mcp__exa__get_code_context_exa(query: \"{tech} core principles best practices 2025\", tokensNum: 8000)
   - mcp__exa__get_code_context_exa(query: \"{tech} common patterns architecture examples\", tokensNum: 7000)
   - mcp__exa__web_search_exa(query: \"{tech} configuration setup tooling 2025\", numResults: 5)
   - mcp__exa__get_code_context_exa(query: \"{tech} testing strategies\", tokensNum: 5000)

   Component Queries (if composite):
   - For each additional component:
     mcp__exa__get_code_context_exa(query: \"{main_tech} {component} integration\", tokensNum: 5000)

3. **Read Module Format Template**:

   Read template for structure guidance:
   ```bash
   Read(~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt)
   ```

4. **Synthesize Content into 6 Modules**:

   Follow template structure from tech-module-format.txt:
   - **principles.md** - Core concepts, philosophies (~3K tokens)
   - **patterns.md** - Implementation patterns with code examples (~5K tokens)
   - **practices.md** - Best practices, anti-patterns, pitfalls (~4K tokens)
   - **testing.md** - Testing strategies, frameworks (~3K tokens)
   - **config.md** - Setup, configuration, tooling (~3K tokens)
   - **frameworks.md** - Framework integration (only if composite, ~4K tokens)

   Each module follows template format:
   - Frontmatter (YAML)
   - Main sections with clear headings
   - Code examples from Exa research
   - Best practices sections
   - References to Exa sources

5. **Write Files Directly**:

   ```javascript
   // Create directory
   bash(mkdir -p \".claude/skills/{tech_stack_name}\")

   // Write each module file using Write tool
   Write({ file_path: \".claude/skills/{tech_stack_name}/principles.md\", content: ... })
   Write({ file_path: \".claude/skills/{tech_stack_name}/patterns.md\", content: ... })
   Write({ file_path: \".claude/skills/{tech_stack_name}/practices.md\", content: ... })
   Write({ file_path: \".claude/skills/{tech_stack_name}/testing.md\", content: ... })
   Write({ file_path: \".claude/skills/{tech_stack_name}/config.md\", content: ... })
   // Write frameworks.md only if composite

   // Write metadata.json
   Write({
     file_path: \".claude/skills/{tech_stack_name}/metadata.json\",
     content: JSON.stringify({
       tech_stack_name,
       components,
       is_composite,
       generated_at: timestamp,
       source: \"exa-research\",
       research_summary: { total_queries, total_sources }
     })
   })
   ```

6. **Report Completion**:

   Provide summary:
   - Tech stack name
   - Files created (count)
   - Exa queries executed
   - Sources consulted

**CRITICAL**:
- MUST read external template files before generating content (step 3 for modules, step 4 for index)
- You have FULL autonomy - read files, execute Exa, synthesize content, write files
- Do NOT return JSON or structured data - produce actual .md files
- Handle errors gracefully (Exa failures, missing files, template read failures)
- If tech stack cannot be determined, ask orchestrator to clarify
"
)
Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
```
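A small sketch of that substitution step, assuming the template files use `{PLACEHOLDER}` markers as listed above (the helper name is illustrative):

```javascript
// Illustrative sketch: fill {PLACEHOLDER} markers in a rule template.
function fillTemplate(template, vars) {
  return template.replace(/\{(\w+)\}/g, (m, key) => vars[key] ?? m); // leave unknown markers untouched
}

const rendered = fillTemplate(templateText, {
  TECH_STACK_NAME, PRIMARY_LANG, FILE_EXT, FRAMEWORK_TYPE
});
```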
**Completion Criteria**:
- Agent task executed successfully
- 5-6 modular files written to `.claude/skills/{tech_stack_name}/`
- 4-6 rule files written with proper `paths` frontmatter
- metadata.json written
- Agent reports completion
- Agent reports files created

**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
**TodoWrite**: Mark phase 2 completed

---

### Phase 3: Generate SKILL.md Index
### Phase 3: Verify & Report

**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.

**Goal**: Read generated module files and create SKILL.md index with loading recommendations
**Goal**: Verify generated files and provide usage summary

**Steps**:

1. **Verify Generated Files**:
1. **Verify Files**:
```bash
bash(find ".claude/skills/${TECH_STACK_NAME}" -name "*.md" -type f | sort)
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
```

2. **Read metadata.json**:
2. **Validate Frontmatter**:
```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```

3. **Read Metadata**:
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/metadata.json)
// Extract: tech_stack_name, components, is_composite, research_summary
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
```

3. **Read Module Headers** (optional, first 20 lines):
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/principles.md, limit: 20)
// Repeat for other modules
4. **Generate Summary Report**:
```
Tech Stack Rules Generated

4. **Read SKILL Index Template**:
Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/

```javascript
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt)
Files Created:
├── core.md        → paths: **/*.{ext}
├── patterns.md    → paths: src/**/*.{ext}
├── testing.md     → paths: **/*.{test,spec}.{ext}
├── config.md      → paths: *.config.*
├── api.md         → paths: **/api/**/* (if backend)
└── components.md  → paths: **/components/**/* (if frontend)

Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required

Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```

5. **Generate SKILL.md Index**:

Follow template from tech-skill-index.txt with variable substitutions:
- `{TECH_STACK_NAME}`: From metadata.json
- `{MAIN_TECH}`: Primary technology
- `{ISO_TIMESTAMP}`: Current timestamp
- `{QUERY_COUNT}`: From research_summary
- `{SOURCE_COUNT}`: From research_summary
- Conditional sections for composite tech stacks

Template provides structure for:
- Frontmatter with metadata
- Overview and tech stack description
- Module organization (Core/Practical/Config sections)
- Loading recommendations (Quick/Implementation/Complete)
- Usage guidelines and auto-trigger keywords
- Research metadata and version history

6. **Write SKILL.md**:
```javascript
Write({
  file_path: `.claude/skills/${TECH_STACK_NAME}/SKILL.md`,
  content: generatedIndexMarkdown
})
```

**Completion Criteria**:
- SKILL.md index written
- All module files verified
- Loading recommendations included

**TodoWrite**: Mark phase 3 completed

**Final Report**:
```
Tech Stack SKILL Package Complete

Tech Stack: {TECH_STACK_NAME}
Location: .claude/skills/{TECH_STACK_NAME}/

Files: SKILL.md + 5-6 modules + metadata.json
Exa Research: {queries} queries, {sources} sources

Usage: Skill(command: "{TECH_STACK_NAME}")
```

---

## Implementation Details
## Path Pattern Reference

### TodoWrite Patterns
| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |
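To make the `paths` frontmatter concrete, here is a hypothetical sketch of how the agent might write one rule file, using the document's own `Write` convention (the body text is illustrative only):

```javascript
// Illustrative sketch: write a testing rules file with path-conditional frontmatter.
Write({
  file_path: ".claude/rules/tech/typescript/testing.md",
  content: [
    "---",
    "paths: **/*.{test,spec}.{ts,tsx}",  // rule auto-loads for matching files
    "---",
    "",
    "# TypeScript Testing Rules",
    "- Prefer one behavior per test case",
    "- Keep fixtures next to the specs that use them"
  ].join("\n")
})
```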
**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "in_progress", "activeForm": "Preparing context paths"},
  {"content": "Agent produces all module files", "status": "pending", "activeForm": "Agent producing files"},
  {"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```

**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "in_progress", ...},
  {"content": "Generate SKILL.md index", "status": "pending", ...}
]})

// After Phase 2
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "completed", ...},
  {"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})

// After Phase 3
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "completed", ...},
  {"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```

**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "completed", ...},  // Skipped
  {"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```

### Execution Flow

**Full Path**:
```
User → TodoWrite Init → Phase 1 (prepare) → Phase 2 (agent writes files) → Phase 3 (write index) → Report
```

**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```

### Error Handling

**Phase 1 Errors**:
- Invalid session ID: Report error, verify session exists
- Missing context-package: Warn, fall back to direct mode
- No tech stack detected: Ask user to specify tech stack name

**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if fails again
- Exa API failures: Agent handles internally with retries
- Incomplete results: Warn user, proceed with partial data if minimum sections available

**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration
| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |

---

## Parameters

```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate] [--tool <gemini|qwen>]
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
```

**Arguments**:
- **session-id | tech-stack-name**: Input source (auto-detected by WFS- prefix)
  - Session mode: `WFS-user-auth-v2` - Extract tech stack from workflow
  - Direct mode: `"typescript"`, `"typescript-react-nextjs"` - User specifies
- **--regenerate**: Force regenerate existing SKILL (deletes and recreates)
- **--tool**: Reserved for future CLI integration (default: gemini)
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules

---

## Examples

**Generated File Structure** (for all examples):
```
.claude/skills/{tech-stack}/
├── SKILL.md         # Index (Phase 3)
├── principles.md    # Agent (Phase 2)
├── patterns.md      # Agent
├── practices.md     # Agent
├── testing.md       # Agent
├── config.md        # Agent
├── frameworks.md    # Agent (if composite)
└── metadata.json    # Agent
```

### Direct Mode - Single Stack
### Single Language

```bash
/memory:tech-research "typescript"
```

**Workflow**:
1. Phase 1: Detects direct mode, checks existing SKILL
2. Phase 2: Agent executes 4 Exa queries, writes 5 modules
3. Phase 3: Generates SKILL.md index
**Output**: `.claude/rules/tech/typescript/` with 4 rule files

### Direct Mode - Composite Stack
### Frontend Stack

```bash
/memory:tech-research "typescript-react-nextjs"
/memory:tech-research "typescript-react"
```

**Workflow**:
1. Phase 1: Decomposes into ["typescript", "react", "nextjs"]
2. Phase 2: Agent executes 6 Exa queries (4 base + 2 components), writes 6 modules (adds frameworks.md)
3. Phase 3: Generates SKILL.md index with framework integration
**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)

### Session Mode - Extract from Workflow
### Backend Stack

```bash
/memory:tech-research "python-fastapi"
```

**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)

### From Session

```bash
/memory:tech-research WFS-user-auth-20251104
```

**Workflow**:
1. Phase 1: Reads session, extracts tech stack: `python-fastapi-sqlalchemy`
2. Phase 2: Agent researches Python + FastAPI + SQLAlchemy, writes 6 modules
3. Phase 3: Generates SKILL.md index
**Workflow**: Extract tech stack from session → Generate rules

### Regenerate Existing
---

```bash
/memory:tech-research "react" --regenerate
```

**Workflow**:
1. Phase 1: Deletes existing SKILL due to --regenerate
2. Phase 2: Agent executes fresh Exa research (latest 2025 practices)
3. Phase 3: Generates updated SKILL.md

### Skip Path - Fast Update

```bash
/memory:tech-research "python"
```

**Scenario**: SKILL already exists with 7 files

**Workflow**:
1. Phase 1: Detects existing SKILL, sets SKIP_GENERATION = true
2. Phase 2: **SKIPPED**
3. Phase 3: Updates SKILL.md index only (5-10x faster)
## Comparison: Rules vs SKILL

| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |

**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup
@@ -99,10 +99,10 @@ src/ (depth 1) → SINGLE-LAYER STRATEGY
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});

// Get module structure
Bash({command: "~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
Bash({command: "ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});

// OR with --path
Bash({command: "cd <target-path> && ~/.claude/scripts/get_modules_by_depth.sh list", run_in_background: false});
Bash({command: "cd <target-path> && ccw tool exec get_modules_by_depth '{\"format\":\"list\"}'", run_in_background: false});
```

**Parse output** `depth:N|path:<PATH>|...` to extract module paths and count.

@@ -185,7 +185,7 @@ for (let layer of [3, 2, 1]) {
    let strategy = module.depth >= 3 ? "multi-layer" : "single-layer";
    for (let tool of tool_order) {
      Bash({
        command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "${strategy}" "." "${tool}"`,
        command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"${strategy}","path":".","tool":"${tool}"}'`,
        run_in_background: false
      });
      if (bash_result.exit_code === 0) {

@@ -244,7 +244,7 @@ MODULES:

TOOLS (try in order): {{tool_1}}, {{tool_2}}, {{tool_3}}

EXECUTION SCRIPT: ~/.claude/scripts/update_module_claude.sh
EXECUTION SCRIPT: ccw tool exec update_module_claude
- Accepts strategy parameter: multi-layer | single-layer
- Tool execution via direct CLI commands (gemini/qwen/codex)

@@ -252,7 +252,7 @@ EXECUTION FLOW (for each module):
1. Tool fallback loop (exit on first success):
   for tool in {{tool_1}} {{tool_2}} {{tool_3}}; do
     Bash({
       command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "{{strategy}}" "." "${tool}"`,
       command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"{{strategy}}","path":".","tool":"${tool}"}'`,
       run_in_background: false
     })
     exit_code=$?

@@ -41,7 +41,7 @@ Orchestrates context-aware CLAUDE.md updates for changed modules using batched a

```javascript
// Detect changed modules
Bash({command: "~/.claude/scripts/detect_changed_modules.sh list", run_in_background: false});
Bash({command: "ccw tool exec detect_changed_modules '{\"format\":\"list\"}'", run_in_background: false});

// Cache git changes
Bash({command: "git add -A 2>/dev/null || true", run_in_background: false});

@@ -102,7 +102,7 @@ for (let depth of sorted_depths.reverse()) { // N → 0
    return async () => {
      for (let tool of tool_order) {
        Bash({
          command: `cd ${module.path} && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "${tool}"`,
          command: `cd ${module.path} && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"${tool}"}'`,
          run_in_background: false
        });
        if (bash_result.exit_code === 0) {

@@ -184,21 +184,21 @@ EXECUTION:
For each module above:
1. Try tool 1:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_1}}"`,
     command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_1}}"}'`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} updated with {{tool_1}}", proceed to next module
   → Failure: Try tool 2
2. Try tool 2:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_2}}"`,
     command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_2}}"}'`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} updated with {{tool_2}}", proceed to next module
   → Failure: Try tool 3
3. Try tool 3:
   Bash({
     command: `cd "{{module_path}}" && ~/.claude/scripts/update_module_claude.sh "single-layer" "." "{{tool_3}}"`,
     command: `cd "{{module_path}}" && ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"{{tool_3}}"}'`,
     run_in_background: false
   })
   → Success: Report "✅ {{module_path}} updated with {{tool_3}}", proceed to next module
@@ -187,7 +187,7 @@ Objectives:

3. Use Gemini for aggregation (optional):
   Command pattern:
   cd .workflow/.archives/{session_id} && gemini -p "
   ccw cli -p "
   PURPOSE: Extract lessons and conflicts from workflow session
   TASK:
   • Analyze IMPL_PLAN and lessons from manifest

@@ -198,7 +198,7 @@ Objectives:
   CONTEXT: @IMPL_PLAN.md @workflow-session.json
   EXPECTED: Structured lessons and conflicts in JSON format
   RULES: Template reference from skill-aggregation.txt
   "
   " --tool gemini --mode analysis --cd .workflow/.archives/{session_id}

3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):

@@ -334,7 +334,7 @@ Objectives:
   - Sort sessions by date

2. Use Gemini for final aggregation:
   gemini -p "
   ccw cli -p "
   PURPOSE: Aggregate lessons and conflicts from all workflow sessions
   TASK:
   • Group successes by functional domain

@@ -345,7 +345,7 @@ Objectives:
   CONTEXT: [Provide aggregated JSON data]
   EXPECTED: Final aggregated structure for SKILL documents
   RULES: Template reference from skill-aggregation.txt
   "
   " --tool gemini --mode analysis

3. Read templates for formatting (same 4 templates as single mode)
@@ -2,452 +2,359 @@
name: artifacts
description: Interactive clarification generating confirmed guidance specification through role-based analysis and synthesis
argument-hint: "topic or challenge description [--count N]"
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*)
allowed-tools: TodoWrite(*), Read(*), Write(*), Glob(*), AskUserQuestion(*)
---

## Overview

Six-phase workflow: **Automatic project context collection** → Extract topic challenges → Select roles → Generate task-specific questions → Detect conflicts → Generate confirmed guidance (declarative statements only).
Seven-phase workflow: **Context collection** → **Topic analysis** → **Role selection** → **Role questions** → **Conflict resolution** → **Final check** → **Generate specification**

All user interactions use the AskUserQuestion tool (max 4 questions per call, multi-round).

**Input**: `"GOAL: [objective] SCOPE: [boundaries] CONTEXT: [background]" [--count N]`
**Output**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md` (CONFIRMED/SELECTED format)
**Core Principle**: Questions dynamically generated from project context + topic keywords/challenges, NOT from generic templates
**Output**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`
**Core Principle**: Questions dynamically generated from project context + topic keywords, NOT generic templates

**Parameters**:
- `topic` (required): Topic or challenge description (structured format recommended)
- `--count N` (optional): Number of roles the user WANTS to select (system will recommend N+2 options for the user to choose from; default: 3)
- `--count N` (optional): Number of roles to select (system recommends N+2 options, default: 3)

---

## Quick Reference

### Phase Summary

| Phase | Goal | AskUserQuestion | Storage |
|-------|------|-----------------|---------|
| 0 | Context collection | - | context-package.json |
| 1 | Topic analysis | 2-4 questions | intent_context |
| 2 | Role selection | 1 multi-select | selected_roles |
| 3 | Role questions | 3-4 per role | role_decisions[role] |
| 4 | Conflict resolution | max 4 per round | cross_role_decisions |
| 4.5 | Final check | progressive rounds | additional_decisions |
| 5 | Generate spec | - | guidance-specification.md |

### AskUserQuestion Pattern

```javascript
// Single-select (Phase 1, 3, 4)
AskUserQuestion({
  questions: [
    {
      question: "{question text}",
      header: "{short label}",  // max 12 chars
      multiSelect: false,
      options: [
        { label: "{option}", description: "{explanation and impact}" },
        { label: "{option}", description: "{explanation and impact}" },
        { label: "{option}", description: "{explanation and impact}" }
      ]
    }
    // ... max 4 questions per call
  ]
})

// Multi-select (Phase 2)
AskUserQuestion({
  questions: [{
    question: "Select {count} roles",
    header: "Role Selection",
    multiSelect: true,
    options: [/* max 4 options per call */]
  }]
})
```

### Multi-Round Execution

```javascript
const BATCH_SIZE = 4;
for (let i = 0; i < allQuestions.length; i += BATCH_SIZE) {
  const batch = allQuestions.slice(i, i + BATCH_SIZE);
  AskUserQuestion({ questions: batch });
  // Store responses before next round
}
```

---

## Task Tracking

**⚠️ TodoWrite Rule**: EXTEND auto-parallel's task list (NOT replace/overwrite)
**TodoWrite Rule**: EXTEND auto-parallel's task list (NOT replace/overwrite)

**When called from auto-parallel**:
- Find the artifacts parent task: "Execute artifacts command for interactive framework generation"
- Mark parent task as "in_progress"
- APPEND artifacts sub-tasks AFTER the parent task (Phase 0-5)
- Mark each sub-task as it completes
- When Phase 5 completes, mark parent task as "completed"
- **PRESERVE all other auto-parallel tasks** (role agents, synthesis)
- Find artifacts parent task → Mark "in_progress"
- APPEND sub-tasks (Phase 0-5) → Mark each as it completes
- When Phase 5 completes → Mark parent "completed"
- PRESERVE all other auto-parallel tasks

**Standalone Mode**:
```json
[
  {"content": "Initialize session (.workflow/active/ session check, parse --count parameter)", "status": "pending", "activeForm": "Initializing"},
  {"content": "Phase 0: Automatic project context collection (call context-gather)", "status": "pending", "activeForm": "Phase 0 context collection"},
  {"content": "Phase 1: Extract challenges, output 2-4 task-specific questions, wait for user input", "status": "pending", "activeForm": "Phase 1 topic analysis"},
  {"content": "Phase 2: Recommend count+2 roles, output role selection, wait for user input", "status": "pending", "activeForm": "Phase 2 role selection"},
  {"content": "Phase 3: Generate 3-4 questions per role, output and wait for answers (max 10 per round)", "status": "pending", "activeForm": "Phase 3 role questions"},
  {"content": "Phase 4: Detect conflicts, output clarifications, wait for answers (max 10 per round)", "status": "pending", "activeForm": "Phase 4 conflict resolution"},
  {"content": "Phase 5: Transform Q&A to declarative statements, write guidance-specification.md", "status": "pending", "activeForm": "Phase 5 document generation"}
  {"content": "Initialize session", "status": "pending", "activeForm": "Initializing"},
  {"content": "Phase 0: Context collection", "status": "pending", "activeForm": "Phase 0"},
  {"content": "Phase 1: Topic analysis (2-4 questions)", "status": "pending", "activeForm": "Phase 1"},
  {"content": "Phase 2: Role selection", "status": "pending", "activeForm": "Phase 2"},
  {"content": "Phase 3: Role questions (per role)", "status": "pending", "activeForm": "Phase 3"},
  {"content": "Phase 4: Conflict resolution", "status": "pending", "activeForm": "Phase 4"},
  {"content": "Phase 4.5: Final clarification", "status": "pending", "activeForm": "Phase 4.5"},
  {"content": "Phase 5: Generate specification", "status": "pending", "activeForm": "Phase 5"}
]
```

## User Interaction Protocol

### Question Output Format

All questions are output as structured text (detailed format with descriptions):

```markdown
【Question {N} - {short label}】{question text}
a) {option label}
   Note: {option description and impact}
b) {option label}
   Note: {option description and impact}
c) {option label}
   Note: {option description and impact}

Answer: {N}a or {N}b or {N}c
```

**Multi-select format** (Phase 2 role selection):
```markdown
【Role Selection】Select {count} roles to take part in the brainstorming analysis

a) {role-name} ({Chinese name})
   Why recommended: {relevance of this role to the topic}
b) {role-name} ({Chinese name})
   Why recommended: {relevance of this role to the topic}
...

Supported formats:
- Separate picks: 2a 2c 2d (question 2, options a, c, d)
- Combined syntax: 2acd (options a, c, d)
- Comma-separated: 2a,c,d

Enter your selection:
```

### Input Parsing Rules

**Supported formats** (intelligent parsing):

1. **Space-separated**: `1a 2b 3c` → Q1:a, Q2:b, Q3:c
2. **Comma-separated**: `1a,2b,3c` → Q1:a, Q2:b, Q3:c
3. **Multi-select combined**: `2abc` → Q2: options a,b,c
4. **Multi-select spaces**: `2 a b c` → Q2: options a,b,c
5. **Multi-select comma**: `2a,b,c` → Q2: options a,b,c
6. **Natural language**: `问题1选a` ("question 1, option a") → 1a (fallback parsing)

**Parsing algorithm**:
- Extract question numbers and option letters
- Validate question numbers match output
- Validate option letters exist for each question
- If ambiguous/invalid, output example format and request re-input (see the sketch below)
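A compact sketch of such a parser, assuming answers arrive as a single line like `1a 2b` or `2acd` (the regex and helper name are illustrative):

```javascript
// Illustrative sketch: parse answer strings like "1a 2b", "2acd", or "2a,c,d".
function parseAnswers(input) {
  const answers = {};
  // Each token starts with a question number followed by one or more option letters.
  const tokenRe = /(\d+)\s*[:.]?\s*([a-z](?:\s*,?\s*[a-z])*)/gi;
  for (const [, num, letters] of input.matchAll(tokenRe)) {
    answers[num] = letters.replace(/[\s,]+/g, "").split("");
  }
  return answers; // e.g. {"1": ["a"], "2": ["a", "c", "d"]}
}
```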
|
||||
**Error handling** (lenient):
|
||||
- Recognize common variations automatically
|
||||
- If parsing fails, show example and wait for clarification
|
||||
- Support re-input without penalty
|
||||
|
||||

### Batching Strategy

**Batch limits**:
- **Default**: Maximum 10 questions per round
- **Phase 2 (role selection)**: Display all recommended roles at once (count+2 roles)
- **Auto-split**: If questions > 10, split into multiple rounds with clear round indicators

**Round indicators**:

```markdown
===== 第 1 轮问题 (共2轮) =====
【问题1 - ...】...
【问题2 - ...】...
...
【问题10 - ...】...

请回答 (格式: 1a 2b ... 10c):
```

A splitting sketch follows.
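
A minimal sketch of the auto-split rule; the constant and function name are assumptions.

```javascript
// Split a question list into rounds of at most 10, per the batch limit above.
const MAX_PER_ROUND = 10;
function splitIntoRounds(questions) {
  const rounds = [];
  for (let i = 0; i < questions.length; i += MAX_PER_ROUND) {
    rounds.push(questions.slice(i, i + MAX_PER_ROUND));
  }
  return rounds; // e.g., 13 questions → two rounds of 10 and 3
}
```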

### Interaction Flow

**Standard flow**:
1. Output questions as formatted text
2. Output an example of the expected input format
3. Wait for user input
4. Parse input with intelligent matching
5. If parsing succeeds → Store answers and continue
6. If parsing fails → Show the error and an example, then wait for re-input

**No question/option limits**: Text-based interaction removes the previous 4-question and 4-option restrictions

---

## Execution Phases

### Session Management

- Check `.workflow/active/` for existing sessions
- Multiple sessions → Prompt selection | Single → Use it | None → Create `WFS-[topic-slug]`
- Parse the `--count N` parameter from user input (default: 3 if not specified)
- Store decisions in `workflow-session.json`, including the count parameter

A session-detection sketch follows.
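
A sketch of session detection under these rules; Glob/Read/Write are the workflow's pseudo-tools, and the helper functions are hypothetical.

```javascript
// Detect or create a session, then parse --count (default 3).
const sessions = Glob(".workflow/active/WFS-*");
let session_id;
if (sessions.length > 1) {
  session_id = promptUserToSelect(sessions);   // hypothetical helper
} else if (sessions.length === 1) {
  session_id = sessions[0];
} else {
  session_id = `WFS-${slugify(topic)}`;        // hypothetical helper
}
const countMatch = userInput.match(/--count\s+(\d+)/);
const count = countMatch ? Number(countMatch[1]) : 3;
// Persist decisions, including the count parameter
Write(`${session_id}/workflow-session.json`, JSON.stringify({ session_id, count }, null, 2));
```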

### Phase 0: Automatic Project Context Collection

**Goal**: Gather project architecture, documentation, and relevant code context BEFORE user interaction

**Steps**:
1. Check if `context-package.json` exists → Skip if valid
2. Invoke `context-search-agent` (BRAINSTORM MODE - lightweight)
3. Output: `.workflow/active/WFS-{session-id}/.process/context-package.json`

**Detection Mechanism** (execute first):

```javascript
// Check if a context-package already exists
const contextPackagePath = `.workflow/active/WFS-{session-id}/.process/context-package.json`;

if (file_exists(contextPackagePath)) {
  // Validate that the package belongs to this session
  // ("pkg" rather than "package", which is reserved in strict-mode JavaScript)
  const pkg = Read(contextPackagePath);
  if (pkg.metadata.session_id === session_id) {
    console.log("✅ Valid context-package found, skipping Phase 0");
    return; // Skip to Phase 1
  }
}
```

**Implementation**: Invoke `context-search-agent` only if the package doesn't exist

```javascript
Task(
  subagent_type="context-search-agent",
  description="Gather project context for brainstorm",
  prompt=`
You are executing as context-search-agent (.claude/agents/context-search-agent.md).

## Execution Mode
**BRAINSTORM MODE** (Lightweight) - Phase 1-2 only (skip deep analysis)

## Session Information
- **Session ID**: ${session_id}
- **Task Description**: ${task_description}
- **Output Path**: .workflow/${session_id}/.process/context-package.json

## Mission
Execute the complete context-search-agent workflow for implementation planning:

### Phase 1: Initialization & Pre-Analysis
1. **Detection**: Check for an existing context-package (early exit if valid)
2. **Foundation**: Initialize code-index, get project structure, load docs
3. **Analysis**: Extract keywords, determine scope, classify complexity

### Phase 2: Multi-Source Context Discovery
Execute all 3 discovery tracks:
- **Track 1**: Reference documentation (CLAUDE.md, architecture docs)
- **Track 2**: Web examples (use Exa MCP for unfamiliar tech/APIs)
- **Track 3**: Codebase analysis (5-layer discovery: files, content, patterns, deps, config/tests)

### Phase 3: Synthesis, Assessment & Packaging
1. Apply relevance scoring and build the dependency graph
2. Synthesize 3-source data (docs > code > web)
3. Integrate brainstorm artifacts (if .brainstorming/ exists, read content)
4. Perform conflict detection with risk assessment
5. Generate and validate context-package.json

## Output Requirements
Complete context-package.json with:
- **metadata**: task_description, keywords, complexity, tech_stack, session_id
- **project_context**: architecture_patterns, coding_conventions, tech_stack
- **assets**: {documentation[], source_code[], config[], tests[]} with relevance scores
- **dependencies**: {internal[], external[]} with dependency graph
- **brainstorm_artifacts**: {guidance_specification, role_analyses[], synthesis_output} with content
- **conflict_detection**: {risk_level, risk_factors, affected_modules[], mitigation_strategy}

## Quality Validation
Before completion verify:
- [ ] Valid JSON format with all required fields
- [ ] File relevance accuracy >80%
- [ ] Dependency graph complete (max 2 transitive levels)
- [ ] Conflict risk level calculated correctly
- [ ] No sensitive data exposed
- [ ] Total files ≤50 (prioritize high-relevance)

Execute autonomously following the agent documentation.
Report completion with statistics.
`
)
```

**Graceful Degradation**:
- If the agent fails: Log a warning, continue to Phase 1 without project context
- If the package is invalid: Re-run context-search-agent

### Phase 1: Topic Analysis & Intent Classification

**Goal**: Extract keywords/challenges to drive all subsequent question generation, enriched by Phase 0 project context

**Steps**:
1. **Load Phase 0 context** (if available):
   - Read `.workflow/active/WFS-{session-id}/.process/context-package.json`
   - Extract: tech_stack, existing modules, conflict_risk, relevant files
2. **Deep topic analysis** (context-aware):
   - Extract technical entities from the topic + existing codebase
   - Identify core challenges considering the existing architecture
   - Consider constraints (timeline/budget/compliance)
   - Define success metrics based on the current project state
3. **Generate 2-4 context-aware probing questions**:
   - Reference the existing tech stack in questions
   - Consider integration with existing modules
   - Address the conflict risks identified in Phase 0
   - Target root challenges and trade-off priorities
4. **User interaction**: Output questions in the text format (see User Interaction Protocol), wait for user input
5. **Parse user answers**: Use intelligent parsing to extract answers from user input (supports multiple formats)
6. **Storage**: Store answers to `session.intent_context` as `{extracted_keywords, identified_challenges, user_answers, project_context_used}`

**Example Output**:

```markdown
===== Phase 1: 项目意图分析 =====

【问题1 - 核心挑战】实时协作平台的主要技术挑战?
a) 实时数据同步
   说明:100+用户同时在线,状态同步复杂度高
b) 可扩展性架构
   说明:用户规模增长时的系统扩展能力
c) 冲突解决机制
   说明:多用户同时编辑的冲突处理策略

【问题2 - 优先级】MVP阶段最关注的指标?
a) 功能完整性
   说明:实现所有核心功能
b) 用户体验
   说明:流畅的交互体验和响应速度
c) 系统稳定性
   说明:高可用性和数据一致性

请回答 (格式: 1a 2b):
```

**User input examples**:
- `1a 2c` → Q1:a, Q2:c
- `1a,2c` → Q1:a, Q2:c

**⚠️ CRITICAL**: Questions MUST reference topic keywords. A generic "Project type?" violates dynamic generation.

### Phase 2: Role Selection

**Goal**: User selects roles from intelligent recommendations

**⚠️ CRITICAL**: User MUST interact to select roles. NEVER auto-select without user confirmation.

**Available Roles**:
- data-architect (数据架构师)
- product-manager (产品经理)
- product-owner (产品负责人)
- scrum-master (敏捷教练)
- subject-matter-expert (领域专家)
- system-architect (系统架构师)
- test-strategist (测试策略师)
- ui-designer (UI 设计师)
- ux-expert (UX 专家)

**Steps**:
1. **Intelligent role recommendation** (AI analysis):
   - Analyze the Phase 1 extracted keywords and challenges
   - Use AI reasoning to determine the most relevant roles for the specific topic
   - Recommend count+2 roles (e.g., if the user wants 3 roles, recommend 5 options)
   - Provide a clear rationale for each recommended role based on topic context
2. **User selection** (text interaction):
   - Output all recommended roles at once (no batching needed for count+2 roles)
   - Display roles with labels and relevance rationale
   - Wait for user input in multi-select format
   - Parse user input (supports multiple formats)
   - **Storage**: Store selections to `session.selected_roles`

**Example Output**:

```markdown
===== Phase 2: 角色选择 =====

【角色选择】请选择 3 个角色参与头脑风暴分析

a) system-architect (系统架构师)
   推荐理由:实时同步架构设计和技术选型的核心角色
b) ui-designer (UI设计师)
   推荐理由:协作界面用户体验和实时状态展示
c) product-manager (产品经理)
   推荐理由:功能优先级和MVP范围决策
d) data-architect (数据架构师)
   推荐理由:数据同步模型和存储方案设计
e) ux-expert (UX专家)
   推荐理由:多用户协作交互流程优化

支持格式:
- 分别选择:2a 2c 2d (选择a、c、d)
- 合并语法:2acd (选择a、c、d)
- 逗号分隔:2a,c,d (选择a、c、d)

请输入选择:
```

**User input examples**:
- `2acd` → Roles: a, c, d (system-architect, product-manager, data-architect)
- `2a 2c 2d` → Same result
- `2a,c,d` → Same result

**Role Recommendation Rules**:
- NO hardcoded keyword-to-role mappings
- Use intelligent analysis of the topic, challenges, and requirements
- Consider role synergies and coverage gaps
- Explain WHY each role is relevant to THIS specific topic
- Default recommendation: count+2 roles for the user to choose from

### Phase 3: Role-Specific Questions (Dynamic Generation)

**Goal**: Generate deep questions mapping role expertise to Phase 1 challenges

**Algorithm**:

```
FOR each selected role:
  1. Map Phase 1 challenges to the role domain:
     - "real-time sync" + system-architect → State management pattern
     - "100 users" + system-architect → Communication protocol
     - "low latency" + system-architect → Conflict resolution
  2. Generate 3-4 questions per role probing implementation depth, trade-offs, edge cases:
     Q: "How handle real-time state sync for 100+ users?" (explores approach)
     Q: "How resolve conflicts when 2 users edit simultaneously?" (explores edge case)
     Options: [Event Sourcing/Centralized/CRDT] (concrete, explain trade-offs for THIS use case)
  3. Output questions in text format per role:
     - Display all questions for the current role (3-4 questions, well under the 10-question limit)
     - Questions in Chinese (用中文提问)
     - Wait for user input
     - Parse answers using intelligent parsing
     - Store answers to session.role_decisions[role]
```

**Example Output** (system-architect):

```markdown
【问题1 - 状态同步】100+ 用户实时状态同步方案?
a) Event Sourcing
   说明:完整事件历史,支持回溯,存储成本高
b) 集中式状态管理
   说明:实现简单,单点瓶颈风险
c) CRDT
   说明:去中心化,自动合并,学习曲线陡

【问题2 - 冲突解决】两个用户同时编辑冲突如何解决?
a) 自动合并
   说明:用户无感知,可能产生意外结果
b) 手动解决
   说明:用户控制,增加交互复杂度
c) 版本控制
   说明:保留历史,需要分支管理

请回答 (格式: 1a 2b):
```

**Batching Strategy**:
- Each role outputs all of its questions at once (typically 3-4 questions)
- No need to split per role (within the 10-question batch limit)
- Multiple roles are processed sequentially (one role at a time for clarity)

**Output Format**: Follow the standard format from the "User Interaction Protocol" section (single-choice question format)

**Example Topic-Specific Questions** (system-architect role for "real-time collaboration platform"):
- "100+ 用户实时状态同步方案?" → Options: Event Sourcing / 集中式状态管理 / CRDT
- "两个用户同时编辑冲突如何解决?" → Options: 自动合并 / 手动解决 / 版本控制
- "低延迟通信协议选择?" → Options: WebSocket / SSE / 轮询
- "系统扩展性架构方案?" → Options: 微服务 / 单体+缓存 / Serverless

**Quality Requirements**: See the "Question Generation Guidelines" section for detailed rules

### Phase 4: Cross-Role Clarification (Conflict Detection)

**Goal**: Resolve ACTUAL conflicts from Phase 3 answers, not pre-defined relationships

**Algorithm**:

```
1. Analyze Phase 3 answers for conflicts:
   - Contradictory choices: product-manager "fast iteration" vs system-architect "complex Event Sourcing"
   - Missing integration: ui-designer "Optimistic updates" but system-architect didn't address conflict handling
   - Implicit dependencies: ui-designer "Live cursors" but no auth approach defined
2. FOR each detected conflict:
   Generate clarification questions referencing SPECIFIC Phase 3 choices
3. Output clarification questions in text format:
   - Batch conflicts into rounds (max 10 questions per round)
   - Display questions with context from Phase 3 answers
   - Questions in Chinese (用中文提问)
   - Wait for user input
   - Parse answers using intelligent parsing
   - Store answers to session.cross_role_decisions
4. If NO conflicts: Skip Phase 4 (inform user: "未检测到跨角色冲突,跳过Phase 4")
```

A detection sketch follows.
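
An illustrative sketch of step 1; the hard-coded rule is purely for demonstration - actual detection is AI reasoning over the stored answers, not a lookup table.

```javascript
// Hypothetical conflict scan over session.role_decisions
function detectConflicts(roleDecisions) {
  const conflicts = [];
  const pm = roleDecisions["product-manager"];
  const sa = roleDecisions["system-architect"];
  // Example: "fast iteration" contradicts a heavyweight Event Sourcing choice
  if (pm && sa && pm.iterationPace === "fast" && sa.syncModel === "Event Sourcing") {
    conflicts.push({
      type: "contradictory-choice",
      roles: ["product-manager", "system-architect"],
      detail: "fast iteration vs complex Event Sourcing"
    });
  }
  return conflicts; // empty array → skip Phase 4
}
```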

**Example Output**:

```markdown
【问题1 - 架构冲突】CRDT 与 UI 回滚期望冲突,如何解决?
背景:system-architect选择CRDT,ui-designer期望回滚UI
a) 采用 CRDT
   说明:保持去中心化,调整UI期望
b) 显示合并界面
   说明:增加用户交互,展示冲突详情
c) 切换到 OT
   说明:支持回滚,增加服务器复杂度

请回答 (格式: 1a 或 1b 或 1c):
```

**Batching Strategy**:
- Maximum 10 clarification questions per round
- If conflicts > 10, split into multiple rounds
- Prioritize the most critical conflicts first

**Output Format**: Follow the standard format from the "User Interaction Protocol" section (single-choice question format with background context)

**Example Conflict Detection** (from Phase 3 answers):
- **Architecture Conflict**: "CRDT 与 UI 回滚期望冲突,如何解决?"
  - Background: system-architect chose CRDT, ui-designer expects rollback UI
  - Options: 采用 CRDT / 显示合并界面 / 切换到 OT
- **Integration Gap**: "实时光标功能缺少身份认证方案"
  - Background: ui-designer chose live cursors, no auth defined
  - Options: OAuth 2.0 / JWT Token / Session-based

**Quality Requirements**: See the "Question Generation Guidelines" section for conflict-specific rules

### Phase 4.5: Final Clarification

**Purpose**: Ensure no important points are missed before generating the specification

**Steps**:
1. Ask an initial check:

   ```markdown
   【问题1 - 补充确认】在生成最终规范之前,是否有前面未澄清的重点需要补充?
   a) 无需补充
      说明:前面的讨论已经足够完整
   b) 需要补充
      说明:还有重要内容需要澄清

   请回答 (格式: 1a 或 1b):
   ```

2. If "需要补充":
   - Analyze the user's additional points
   - Generate progressive questions (not role-bound, interconnected)
   - Ask in batched rounds → Store answers to `session.additional_decisions`
   - Repeat until the user confirms completion
3. If "无需补充": Proceed to Phase 5

**Progressive Pattern**: Questions are interconnected; each round informs the next; continue until resolved.

### Phase 5: Generate Guidance Specification

**Steps**:
1. Load all decisions: `intent_context` + `selected_roles` + `role_decisions` + `cross_role_decisions` + `additional_decisions`
2. Transform Q&A pairs to declarative statements: Questions → Headers, Answers → CONFIRMED/SELECTED statements
3. Generate guidance-specification.md (template below) - **PRIMARY OUTPUT FILE**
4. Update workflow-session.json with **METADATA ONLY**:
   - session_id (e.g., "WFS-topic-slug")
   - selected_roles[] (array of role names, e.g., ["system-architect", "ui-designer", "product-manager"])
   - topic (original user input string)
   - timestamp (ISO-8601 format)
   - phase_completed: "artifacts"
   - count_parameter (number from the --count flag)
5. Validate: No interrogative sentences in the .md file, all decisions traceable, no content duplication in the .json

**⚠️ CRITICAL OUTPUT SEPARATION**:
- **guidance-specification.md**: Full guidance content (decisions, rationale, integration points)
- **workflow-session.json**: Session metadata ONLY (no guidance content, no decisions, no Q&A pairs)
- **NO content duplication**: Guidance stays in .md, metadata stays in .json

A sketch of the Q&A-to-declarative transformation in step 2 follows.
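
A minimal sketch of step 2; the helper and field names are assumptions, not part of the spec.

```javascript
// Turn a stored decision {question, label, description} into a declarative block:
// "### 状态同步方案" + "SELECTED: CRDT (去中心化,自动合并)"
function toDeclarative(decisions) {
  return decisions
    .map(d => `### ${d.question.replace(/[??]\s*$/, "")}\n` +
              `SELECTED: ${d.label}${d.description ? ` (${d.description})` : ""}`)
    .join("\n\n");
}
```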

---

## Output & Governance

### Output Template

**File**: `.workflow/active/WFS-{topic}/.brainstorming/guidance-specification.md`

```markdown
## Next Steps
**⚠️ Automatic Continuation** (when called from auto-parallel):
- auto-parallel will assign agents to generate role-specific analysis documents
- Each selected role gets a dedicated conceptual-planning-agent
- Agents read this guidance-specification.md for framework context

## Appendix: Decision Tracking
| Decision ID | Category | Question | Selected | Phase | Rationale |
|-------------|----------|----------|----------|-------|-----------|
| D-003+ | [Role] | [Q] | [A] | 3 | [Why] |
```

## Question Generation Guidelines

### Core Principle: Developer-Facing Questions with User Context

**Target Audience**: 开发者(理解技术但需要从用户需求出发)

**Generation Philosophy**:
1. **Phase 1**: 用户场景、业务约束、优先级(建立上下文)
2. **Phase 2**: 基于话题分析的智能角色推荐(非关键词映射)
3. **Phase 3**: 业务需求 + 技术选型(需求驱动的技术决策)
4. **Phase 4**: 技术冲突的业务权衡(帮助开发者理解影响)

### Universal Quality Rules

**Question Structure** (all phases):
```
[业务场景/需求前提] + [技术关注点]
```

**Option Structure** (all phases):
```
标签:[技术方案简称] + (业务特征)
说明:[业务影响] + [技术权衡]
```

**MUST Include** (all phases):
- ✅ All questions in Chinese (用中文提问)
- ✅ 业务场景作为问题前提
- ✅ 技术选项的业务影响说明
- ✅ 量化指标和约束条件

**MUST Avoid** (all phases):
- ❌ 纯技术选型无业务上下文
- ❌ 过度抽象的用户体验问题
- ❌ 脱离话题的通用架构问题

### Phase-Specific Requirements

**Phase 1 Requirements**:
- Questions MUST reference topic keywords (NOT a generic "Project type?")
- Focus: 用户使用场景(谁用?怎么用?多频繁?)、业务约束(预算、时间、团队、合规)
- Success metrics: 性能指标、用户体验目标
- Priority ranking: MVP vs 长期规划

**Phase 3 Requirements**:
- Questions MUST reference Phase 1 keywords (e.g., "real-time", "100 users")
- Options MUST be concrete approaches relevant to the topic
- Each option includes trade-offs specific to this use case
- Include 业务需求驱动的技术问题、量化指标(并发数、延迟、可用性)

**Phase 4 Requirements**:
- Questions MUST reference SPECIFIC Phase 3 choices in the background context
- Options address the detected conflict directly
- Each option explains its impact on both conflicting roles
- NEVER use a static "Cross-Role Matrix" - ALWAYS analyze actual Phase 3 answers
- Focus: 技术冲突的业务权衡、帮助开发者理解不同选择的影响

## Validation Checklist

Generated guidance-specification.md MUST:
- ✅ No interrogative sentences (use CONFIRMED/SELECTED)
- ✅ Every decision traceable to a user answer
- ✅ Cross-role conflicts resolved or documented
- ✅ Next steps concrete and specific
- ✅ All Phase 1-4 decisions in session metadata
- ✅ No content duplication between .json and .md

## Update Mechanism

```
IF guidance-specification.md EXISTS:
  Prompt: "Regenerate completely / Update sections / Cancel"
ELSE:
  Run full Phase 0-5 flow
```

## File Structure

```
.workflow/active/WFS-[topic]/
├── workflow-session.json            # Session metadata ONLY
├── .process/
│   └── context-package.json         # Phase 0 output
└── .brainstorming/
    └── guidance-specification.md    # Full guidance content
```

## Governance Rules

**Output Requirements**:
- All decisions MUST use CONFIRMED/SELECTED (NO "?" in decision sections)
- Every decision MUST trace to a user answer
- Conflicts MUST be resolved (not marked "TBD")
- Next steps MUST be actionable
- Topic preserved as the authoritative reference in the session

**CRITICAL**: Guidance is the single source of truth for downstream phases. Ambiguity violates governance.

## Storage Validation

**workflow-session.json** (metadata only):

```json
{
  "session_id": "WFS-{topic-slug}",
  "selected_roles": ["system-architect", "ui-designer", "product-manager"],
  "topic": "original user input",
  "timestamp": "ISO-8601",
  "phase_completed": "artifacts",
  "count_parameter": 3
}
```

**⚠️ Rule**: Session JSON stores ONLY metadata (session_id, selected_roles[], topic, timestamps). All guidance content goes to guidance-specification.md. A minimal write sketch follows.
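
A minimal sketch of the metadata-only write enforced by the rule above; the values and variable names are illustrative.

```javascript
// workflow-session.json: metadata only, no guidance content
const metadata = {
  session_id: `WFS-${topicSlug}`,
  selected_roles: session.selected_roles,
  topic: session.topic,
  timestamp: new Date().toISOString(),
  phase_completed: "artifacts",
  count_parameter: count
};
Write(`.workflow/active/${metadata.session_id}/workflow-session.json`,
      JSON.stringify(metadata, null, 2));
```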

---

OUTPUT_LOCATION: .workflow/active/WFS-{session}/.brainstorming/{role}/
TOPIC: {user-provided-topic}

## Flow Control Steps
1. **load_topic_framework**
   - Action: Load the structured topic discussion framework
   - Command: Read(.workflow/active/WFS-{session}/.brainstorming/guidance-specification.md)
   - Output: topic_framework_content

2. **load_role_template**
   - Action: Load the {role-name} planning template
   - Command: Read(~/.claude/workflows/cli-templates/planning-roles/{role}.md)
   - Output: role_template_guidelines

3. **load_session_metadata**
   - Action: Load session metadata and original user intent
   - Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
   - Output: session_context (contains the original user prompt as PRIMARY reference)

4. **load_style_skill** (ONLY for the ui-designer role when style_skill_package exists)
   - Action: Load the style SKILL package for design-system reference
   - Command: Read(.claude/skills/style-{style_skill_package}/SKILL.md) AND Read(.workflow/reference_style/{style_skill_package}/design-tokens.json)
   - Output: style_skill_content, design_tokens
   - Usage: Apply design tokens in ui-designer analysis and artifacts

## Analysis Requirements
**Primary Reference**: The original user prompt from workflow-session.json is authoritative
**Template Integration**: Apply role template guidelines within the framework structure

## Expected Deliverables
1. **analysis.md**: Comprehensive {role-name} analysis addressing all framework discussion points
   - **File Naming**: MUST start with the `analysis` prefix (e.g., `analysis.md`, `analysis-{slug}.md`)
   - **FORBIDDEN**: Never use `recommendations.md` or any filename not starting with `analysis`
   - **Sub-documents**: Large content may be split into `analysis-{slug}.md` sub-documents (max 5)
   - **Content**: Include both analysis AND recommendations sections within the analysis files
2. **Framework Reference**: Include the @../guidance-specification.md reference in the analysis
3. **User Intent Alignment**: Validate the analysis against the original user objectives in session_context

## Completion Criteria
- Address each discussion point from guidance-specification.md with {role-name} expertise
- guidance-specification.md path

**Validation**:
- Each role creates `.workflow/active/WFS-{topic}/.brainstorming/{role}/analysis.md`
- Optionally with `analysis-{slug}.md` sub-documents (max 5)
- **File pattern**: `analysis*.md` for globbing
- **FORBIDDEN**: `recommendations.md` or any non-`analysis` prefixed files
- All N role analyses completed

A naming-check sketch follows.
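
A small sketch of the naming validation; Glob is the workflow pseudo-tool and the regex is an assumption.

```javascript
// Enforce the analysis*.md naming rule for a role directory
const files = Glob(`.workflow/active/WFS-${topic}/.brainstorming/${role}/*.md`);
const invalid = files.filter(f => !/\/analysis[^/]*\.md$/.test(f));
if (invalid.length > 0) {
  console.warn(`FORBIDDEN filenames detected: ${invalid.join(", ")}`);
}
```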

**TodoWrite Update (Phase 2 agents dispatched - tasks attached in parallel)**:

```
.workflow/active/WFS-[topic]/
├── workflow-session.json              # Session metadata ONLY
└── .brainstorming/
    ├── guidance-specification.md      # Framework (Phase 1)
    ├── {role}/
    │   ├── analysis.md                # Main document (with optional @references)
    │   └── analysis-{slug}.md         # Section documents (max 5)
    └── synthesis-specification.md     # Integration (Phase 3)
```

---
name: synthesis
description: Clarify and refine role analyses through intelligent Q&A and targeted updates with synthesis agent
argument-hint: "[optional: --session session-id]"
allowed-tools: Task(conceptual-planning-agent), TodoWrite(*), Read(*), Write(*), Edit(*), Glob(*), AskUserQuestion(*)
---

## Overview

Six-phase workflow to eliminate ambiguities and enhance conceptual depth in role analyses:

**Phase 1-2**: Session detection → File discovery → Path preparation
**Phase 3A**: Cross-role analysis agent → Generate recommendations
**Phase 4**: User selects enhancements → User answers clarifications (via AskUserQuestion)
**Phase 5**: Parallel update agents (one per role)
**Phase 6**: Context package update → Metadata update → Completion report

**Key Features**:
- Multi-agent architecture (analysis agent + parallel update agents)
- Clear separation: Agent analysis vs main-flow interaction
- Parallel document updates (one agent per role)
- User intent alignment validation

All user interactions use the AskUserQuestion tool (max 4 questions per call, multi-round).

**Document Flow**:
- Input: `[role]/analysis*.md`, `guidance-specification.md`, session metadata
- Output: Updated `[role]/analysis*.md` with Enhancements + Clarifications sections

---

## Quick Reference

### Phase Summary

| Phase | Goal | Executor | Output |
|-------|------|----------|--------|
| 1 | Session detection | Main flow | session_id, brainstorm_dir |
| 2 | File discovery | Main flow | role_analysis_paths |
| 3A | Cross-role analysis | Agent | enhancement_recommendations |
| 4 | User interaction | Main flow + AskUserQuestion | update_plan |
| 5 | Document updates | Parallel agents | Updated analysis*.md |
| 6 | Finalization | Main flow | context-package.json, report |

### AskUserQuestion Pattern

```javascript
// Enhancement selection (multi-select)
AskUserQuestion({
  questions: [{
    question: "请选择要应用的改进建议",
    header: "改进选择",
    multiSelect: true,
    options: [
      { label: "EP-001: API Contract", description: "添加详细的请求/响应 schema 定义" },
      { label: "EP-002: User Intent", description: "明确用户需求优先级和验收标准" }
    ]
  }]
})

// Clarification questions (single-select, multi-round)
AskUserQuestion({
  questions: [
    {
      question: "MVP 阶段的核心目标是什么?",
      header: "用户意图",
      multiSelect: false,
      options: [
        { label: "快速验证", description: "最小功能集,快速上线获取反馈" },
        { label: "技术壁垒", description: "完善架构,为长期发展打基础" },
        { label: "功能完整", description: "覆盖所有规划功能,延迟上线" }
      ]
    }
  ]
})
```

---

## Task Tracking

```json
[
  {"content": "Detect session and validate analyses", "status": "pending", "activeForm": "Detecting session"},
  {"content": "Discover role analysis file paths", "status": "pending", "activeForm": "Discovering paths"},
  {"content": "Execute analysis agent (cross-role analysis)", "status": "pending", "activeForm": "Executing analysis"},
  {"content": "Present enhancements via AskUserQuestion", "status": "pending", "activeForm": "Selecting enhancements"},
  {"content": "Clarification questions via AskUserQuestion", "status": "pending", "activeForm": "Clarifying"},
  {"content": "Execute parallel update agents", "status": "pending", "activeForm": "Updating documents"},
  {"content": "Update context package and metadata", "status": "pending", "activeForm": "Finalizing"}
]
```

---

## Execution Phases

### Phase 1: Discovery & Validation

1. **Detect Session**: Use the `--session` parameter or find `.workflow/active/WFS-*` directories
2. **Validate Files**:
   - `guidance-specification.md` (optional, warn if missing)
   - `*/analysis*.md` (required, error if empty)
3. **Load User Intent**: Extract from `workflow-session.json` (project/description field)

### Phase 2: Role Discovery & Path Preparation

**Main flow prepares file paths for the agent**:

1. **Discover Analysis Files**:
   - Glob: `.workflow/active/WFS-{session}/.brainstorming/*/analysis*.md`
   - Supports: analysis.md + analysis-{slug}.md (max 5)
   - Validate: At least one file exists (error if empty)

2. **Extract Role Information**:
   - `role_analysis_paths`: Relative paths from brainstorm_dir
   - `participating_roles`: Role names extracted from directory paths

3. **Pass to Agent** (Phase 3):
   - `session_id`
   - `brainstorm_dir`: .workflow/active/WFS-{session}/.brainstorming/
   - `role_analysis_paths`: ["product-manager/analysis.md", "system-architect/analysis-1.md", ...]
   - `participating_roles`: ["product-manager", "system-architect", ...]

**Main Flow Responsibility**: File discovery and path preparation only (NO file content reading). A path-derivation sketch follows.
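
A sketch of steps 1-2; the path handling is an assumption.

```javascript
// Discover analysis files and derive role names from their directories
const absolute = Glob(`.workflow/active/WFS-${session}/.brainstorming/*/analysis*.md`);
// "<brainstorm_dir>/product-manager/analysis.md" → "product-manager/analysis.md"
const role_analysis_paths = absolute.map(p => p.split("/.brainstorming/")[1]);
const participating_roles = [...new Set(role_analysis_paths.map(p => p.split("/")[0]))];
```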

### Phase 3A: Analysis & Enhancement Agent

**Agent executes cross-role analysis and generates enhancement recommendations**:

```javascript
Task(conceptual-planning-agent, `
## Agent Mission
Analyze role documents, identify conflicts/gaps, generate enhancement recommendations

## Input
- brainstorm_dir: ${brainstorm_dir}
- role_analysis_paths: ${role_analysis_paths}
- participating_roles: ${participating_roles}

## Flow Control Steps
1. load_session_metadata → Read workflow-session.json (original user intent from the project/description field)
2. load_role_analyses → Read all analysis files
3. cross_role_analysis → Identify consensus, conflicts, gaps, ambiguities
4. generate_recommendations → Format as EP-001, EP-002, ...
   - Fields: id, title, affected_roles, category, current_state, enhancement, rationale, priority
   - Taxonomy: Map to 9 categories (User Intent, Requirements, Architecture, UX, Feasibility, Risk, Process, Decisions, Terminology)

## Output Format
[
  {
    "id": "EP-001",
    "title": "API Contract Specification",
    "affected_roles": ["system-architect", "api-designer"],
    "category": "Architecture",
    "current_state": "High-level API descriptions",
    "enhancement": "Add detailed contract definitions",
    "rationale": "Enables precise implementation",
    "priority": "High"
  }
]
`)
```

### Phase 4: User Interaction

**All interactions via AskUserQuestion (Chinese questions)**

**⚠️ CRITICAL**: ALL questions MUST use Chinese (所有问题必须用中文) for better user understanding

#### Step 1: Enhancement Selection

```javascript
// If enhancements > 4, split into multiple rounds
const enhancements = [...]; // from Phase 3A
const BATCH_SIZE = 4;

for (let i = 0; i < enhancements.length; i += BATCH_SIZE) {
  const batch = enhancements.slice(i, i + BATCH_SIZE);

  AskUserQuestion({
    questions: [{
      question: `请选择要应用的改进建议 (第${Math.floor(i / BATCH_SIZE) + 1}轮)`,
      header: "改进选择",
      multiSelect: true,
      options: batch.map(ep => ({
        label: `${ep.id}: ${ep.title}`,
        description: `影响: ${ep.affected_roles.join(', ')} | ${ep.enhancement}`
      }))
    }]
  })

  // Store selections before the next round
}

// The user can also skip: provide a "跳过" option
```

#### Step 2: Clarification Questions

```javascript
// Generate questions based on the 9-category taxonomy scan
// Categories: User Intent, Requirements, Architecture, UX, Feasibility, Risk, Process, Decisions, Terminology
// ALL questions in Chinese, each with 2-4 options + descriptions; prioritize the most critical

const clarifications = [...]; // from analysis
const BATCH_SIZE = 4;

for (let i = 0; i < clarifications.length; i += BATCH_SIZE) {
  const batch = clarifications.slice(i, i + BATCH_SIZE);
  const currentRound = Math.floor(i / BATCH_SIZE) + 1;
  const totalRounds = Math.ceil(clarifications.length / BATCH_SIZE);
  // currentRound/totalRounds can be surfaced in the question text, e.g. "第 1/2 轮"

  AskUserQuestion({
    questions: batch.map(q => ({
      question: q.question,
      header: q.category.substring(0, 12),
      multiSelect: false,
      options: q.options.map(opt => ({
        label: opt.label,
        description: opt.description
      }))
    }))
  })

  // Store answers before the next round
}
```

### Question Guidelines

**Target**: 开发者(理解技术但需要从用户需求出发)

**Question Structure**: `[跨角色分析发现] + [需要澄清的决策点]`
**Option Structure**: `标签:[具体方案] + 说明:[业务影响] + [技术权衡]`

**9-Category Taxonomy**:

| Category | Focus | Example Question Pattern |
|----------|-------|--------------------------|
| User Intent | 用户目标 | "MVP阶段核心目标?" + 验证/壁垒/完整性 |
| Requirements | 需求细化 | "功能优先级如何排序?" + 核心/增强/可选 |
| Architecture | 架构决策 | "技术栈选择考量?" + 熟悉度/先进性/成熟度 |
| UX | 用户体验 | "交互复杂度取舍?" + 简洁/丰富/渐进 |
| Feasibility | 可行性 | "资源约束下的范围?" + 最小/标准/完整 |
| Risk | 风险管理 | "风险容忍度?" + 保守/平衡/激进 |
| Process | 流程规范 | "迭代节奏?" + 快速/稳定/灵活 |
| Decisions | 决策确认 | "冲突解决方案?" + 方案A/方案B/折中 |
| Terminology | 术语统一 | "统一使用哪个术语?" + 术语A/术语B |

**Quality Rules**:

**MUST Include**:
- ✅ All questions in Chinese (用中文提问)
- ✅ 基于跨角色分析的具体发现
- ✅ 选项包含业务影响说明
- ✅ 解决实际的模糊点或冲突

**MUST Avoid**:
- ❌ 与角色分析无关的通用问题
- ❌ 重复已在 artifacts 阶段确认的内容
- ❌ 过于细节的实现级问题

#### Step 3: Build Update Plan

```javascript
update_plan = {
  "role1": {
    "enhancements": ["EP-001", "EP-003"],
    "clarifications": [
      {"question": "...", "answer": "...", "category": "..."}
    ]
  },
  "role2": {
    "enhancements": ["EP-002"],
    "clarifications": [...]
  }
}
```

An assembly sketch follows.
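
An illustrative assembly of update_plan from the Phase 4 selections; the variable names are assumptions.

```javascript
// Group user-confirmed items by affected role
const update_plan = {};
const ensure = role =>
  (update_plan[role] = update_plan[role] || { enhancements: [], clarifications: [] });

for (const ep of selectedEnhancements) {
  for (const role of ep.affected_roles) ensure(role).enhancements.push(ep.id);
}
for (const c of answeredClarifications) {
  for (const role of c.affected_roles) {
    ensure(role).clarifications.push({ question: c.question, answer: c.answer, category: c.category });
  }
}
```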

### Phase 5: Parallel Document Update Agents

**Execute in parallel** (one agent per role):

```javascript
// Single message with multiple Task calls for parallelism
Task(conceptual-planning-agent, `
## Agent Mission
Apply user-confirmed enhancements and clarifications to ${role} analysis

## Agent Intent
- **Goal**: Integrate synthesis results into role-specific analysis
- **Scope**: Update ONLY ${role}/analysis.md (isolated, no cross-role dependencies)
- **Constraints**: Preserve original insights, add refinements without deletion

## Input
- role: ${role}
- analysis_path: ${brainstorm_dir}/${role}/analysis.md
- enhancements: ${role_enhancements}
- clarifications: ${role_clarifications}
- original_user_intent: ${intent}

## Flow Control Steps
1. load_current_analysis → Read the analysis file
2. add_clarifications_section → Insert a Q&A section
   (Format: "## Clarifications\n### Session {date}\n- **Q**: {question} (Category: {category})\n  **A**: {answer}")
3. apply_enhancements → Integrate into relevant sections (locate by category, e.g., Architecture → Architecture section)
4. resolve_contradictions → Remove conflicts between original content and clarifications/enhancements
5. enforce_terminology → Align terminology with user-confirmed choices
6. validate_intent → Verify alignment with original_user_intent
7. write_updated_file → Save changes

## Output
Updated ${role}/analysis.md with a Clarifications section + enhanced content
`)
// ... repeat for each role in update_plan
```

**Agent Characteristics**:
- **Intent**: Integrate user-confirmed synthesis results (NOT generate new analysis)
- **Isolation**: Each agent updates exactly ONE role (parallel execution safe)
- **Context**: Minimal - receives only role-specific enhancements + clarifications
- **Dependencies**: Zero cross-agent dependencies (full parallelism)
- **Validation**: All updates must align with original_user_intent

A dispatch sketch follows.
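
A dispatch sketch for the parallel update agents; buildUpdatePrompt is a hypothetical helper, and Task is the document's pseudo-call syntax.

```javascript
// One Task call per role, emitted together in a single message for parallelism
for (const [role, plan] of Object.entries(update_plan)) {
  Task(conceptual-planning-agent, buildUpdatePrompt(role, plan)); // hypothetical helper
}
```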

### Phase 6: Finalization

**Main flow finalizes**:

#### Step 1: Update Context Package

**Purpose**: Sync updated role analyses to context-package.json to avoid a stale cache

**Operations**:

```bash
context_pkg_path = ".workflow/active/WFS-{session}/.process/context-package.json"

# 1. Read existing package
context_pkg = Read(context_pkg_path)

# 2. Re-read brainstorm artifacts (now with synthesis enhancements)
brainstorm_dir = ".workflow/active/WFS-{session}/.brainstorming"

# 2.1 Update guidance-specification if it exists
IF exists({brainstorm_dir}/guidance-specification.md):
    context_pkg.brainstorm_artifacts.guidance_specification.content = Read({brainstorm_dir}/guidance-specification.md)
    context_pkg.brainstorm_artifacts.guidance_specification.updated_at = NOW()

# 2.2 Update synthesis-specification if it exists
IF exists({brainstorm_dir}/synthesis-specification.md):
    IF context_pkg.brainstorm_artifacts.synthesis_output:
        context_pkg.brainstorm_artifacts.synthesis_output.content = Read({brainstorm_dir}/synthesis-specification.md)
        context_pkg.brainstorm_artifacts.synthesis_output.updated_at = NOW()

# 2.3 Re-read all role analysis files
role_analysis_files = Glob({brainstorm_dir}/*/analysis*.md)
context_pkg.brainstorm_artifacts.role_analyses = []

FOR file IN role_analysis_files:
    role_name = extract_role_from_path(file)   # e.g., "ui-designer"
    relative_path = file.replace({brainstorm_dir}/, "")

    context_pkg.brainstorm_artifacts.role_analyses.push({
      "role": role_name,
      "files": [{
        "path": relative_path,
        "type": "primary",
        "content": Read(file),
        "updated_at": NOW()
      }]
    })

# 3. Update metadata
context_pkg.metadata.updated_at = NOW()
context_pkg.metadata.synthesis_timestamp = NOW()

# 4. Write back
Write(context_pkg_path, JSON.stringify(context_pkg, indent=2))

REPORT: "✅ Updated context-package.json with synthesis results"
```

**TodoWrite Update**:

```json
{"content": "Update context package with synthesis results", "status": "completed", "activeForm": "Updating context package"}
```

#### Step 2: Update Session Metadata

1. Wait for all parallel agents to complete
2. Update workflow-session.json:

```json
{
  "phases": {
    "BRAINSTORM": {
      "status": "clarification_completed",
      "clarification_completed": true,
      "completed_at": "timestamp",
      "participating_roles": ["product-manager", "system-architect", ...],
      "clarification_results": {
        "enhancements_applied": ["EP-001", "EP-002"],
        "questions_asked": 3,
        "categories_clarified": ["Architecture", "UX"],
        "roles_updated": ["role1", "role2"],
        "outstanding_items": []
      },
      "quality_metrics": {
        "user_intent_alignment": "validated",
        "requirement_coverage": "comprehensive",
        "ambiguity_resolution": "complete",
        "terminology_consistency": "enforced",
        "decision_transparency": "documented"
      }
    }
  }
}
```

#### Step 3: Completion Report

```markdown
## ✅ Clarification Complete

✅ PROCEED: `/workflow:plan --session WFS-{session-id}`
```

---

## Output

**Location**: `.workflow/active/WFS-{session}/.brainstorming/[role]/analysis*.md` (in-place updates)

**Updated Structure** (excerpt):

```markdown
## Clarifications
### Session {date}
- **Q**: {question} (Category: {category})
  **A**: {answer}
```

- Ambiguities resolved, placeholders removed
- Consistent terminology
### Phase 6: Update Context Package

**Purpose**: Sync updated role analyses to context-package.json to avoid stale cache

**Operations**:
```bash
context_pkg_path = ".workflow/active/WFS-{session}/.process/context-package.json"

# 1. Read existing package
context_pkg = Read(context_pkg_path)

# 2. Re-read brainstorm artifacts (now with synthesis enhancements)
brainstorm_dir = ".workflow/active/WFS-{session}/.brainstorming"

# 2.1 Update guidance-specification if exists
IF exists({brainstorm_dir}/guidance-specification.md):
    context_pkg.brainstorm_artifacts.guidance_specification.content = Read({brainstorm_dir}/guidance-specification.md)
    context_pkg.brainstorm_artifacts.guidance_specification.updated_at = NOW()

# 2.2 Update synthesis-specification if exists
IF exists({brainstorm_dir}/synthesis-specification.md):
    IF context_pkg.brainstorm_artifacts.synthesis_output:
        context_pkg.brainstorm_artifacts.synthesis_output.content = Read({brainstorm_dir}/synthesis-specification.md)
        context_pkg.brainstorm_artifacts.synthesis_output.updated_at = NOW()

# 2.3 Re-read all role analysis files
role_analysis_files = Glob({brainstorm_dir}/*/analysis*.md)
context_pkg.brainstorm_artifacts.role_analyses = []

FOR file IN role_analysis_files:
    role_name = extract_role_from_path(file)  # e.g., "ui-designer"
    relative_path = file.replace({brainstorm_dir}/, "")

    context_pkg.brainstorm_artifacts.role_analyses.push({
        "role": role_name,
        "files": [{
            "path": relative_path,
            "type": "primary",
            "content": Read(file),
            "updated_at": NOW()
        }]
    })

# 3. Update metadata
context_pkg.metadata.updated_at = NOW()
context_pkg.metadata.synthesis_timestamp = NOW()

# 4. Write back
Write(context_pkg_path, JSON.stringify(context_pkg, indent=2))

REPORT: "✅ Updated context-package.json with synthesis results"
```
**TodoWrite Update**:
```json
{"content": "Update context package with synthesis results", "status": "completed", "activeForm": "Updating context package"}
```
## Session Metadata

Update `workflow-session.json`:

```json
{
  "phases": {
    "BRAINSTORM": {
      "status": "clarification_completed",
      "clarification_completed": true,
      "completed_at": "timestamp",
      "participating_roles": ["product-manager", "system-architect", ...],
      "clarification_results": {
        "questions_asked": 3,
        "categories_clarified": ["Architecture & Design", ...],
        "roles_updated": ["system-architect", "ui-designer", ...],
        "outstanding_items": []
      },
      "quality_metrics": {
        "user_intent_alignment": "validated",
        "requirement_coverage": "comprehensive",
        "ambiguity_resolution": "complete",
        "terminology_consistency": "enforced",
        "decision_transparency": "documented"
      }
    }
  }
}
```

---
## Quality Checklist

**Content**:
- ✅ All role analyses loaded/analyzed
- ✅ Cross-role analysis (consensus, conflicts, gaps)
- ✅ 9-category ambiguity scan
- ✅ Questions prioritized

**Analysis**:
- ✅ User intent validated
- ✅ Cross-role synthesis complete
- ✅ Ambiguities resolved
- ✅ Terminology consistent

**Documents**:
- ✅ Clarifications section formatted
- ✅ Sections reflect answers
- ✅ No placeholders (TODO/TBD)
- ✅ Valid Markdown
---

```
Phase 4: Execution Strategy & Task Execution
├─ Get next in_progress task from TodoWrite
├─ Lazy load task JSON
├─ Launch agent with task context
├─ Mark task completed (update IMPL-*.json status)
│    # Update task status with ccw session (auto-tracks status_history):
│    # ccw session task ${sessionId} IMPL-X completed
└─ Advance to next task

Phase 5: Completion

Resume Mode (--resume-session):
...
```
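For illustration, the status update in the tree above maps to a concrete call like this (session and task IDs are examples):

```javascript
// Mark IMPL-2 completed; ccw appends to status_history automatically
Bash(`ccw session task WFS-auth-system IMPL-2 completed`)
```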
**Process**:

#### Step 1.1: List Active Sessions
```bash
ccw session list --location active
# Returns: {"success":true,"result":{"active":[...],"total":N}}
```
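A sketch of consuming that JSON, assuming only the `result.active`/`result.total` shape shown in the Returns comment above:

```javascript
// Parse the list output and branch on session count (shape per Returns comment)
const listOutput = JSON.parse(Bash(`ccw session list --location active`))
const { active, total } = listOutput.result
// total === 0 → Case A, total === 1 → Case B (active[0].session_id), total > 1 → Case C
```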
#### Step 1.2: Handle Session Selection

**Case A: No Sessions** (total = 0)
```
ERROR: No active workflow sessions found
Run /workflow:plan "task description" to create a session
```

**Case B: Single Session** (total = 1)
Auto-select the single session from result.active[0].session_id and continue to Phase 2.
**Case C: Multiple Sessions** (total > 1)

List sessions with metadata using ccw session:
```bash
# Get session list with metadata
ccw session list --location active

# For each session, get stats
ccw session stats WFS-session-name
```
Use AskUserQuestion to present formatted options (max 4 options shown), for example as sketched below.

Parse user input (supports: number "1", full ID "WFS-auth-system", or partial "auth").
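A minimal sketch of that prompt, assuming each entry from `ccw session list` carries a `session_id` plus stats fields; the `project`/`completed`/`total` field names are illustrative:

```javascript
// Present up to 4 active sessions as selectable options
AskUserQuestion({
  questions: [{
    question: "Multiple active sessions found. Which one should be executed?",
    header: "Session",
    multiSelect: false,
    options: active.slice(0, 4).map(s => ({
      label: s.session_id,
      description: `${s.project || 'Unknown'} | ${s.completed || 0}/${s.total || 0} tasks`  // illustrative fields
    }))
  }]
})
```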
#### Step 1.3: Load Session Metadata
```bash
ccw session ${sessionId} read workflow-session.json
```

**Output**: Store session metadata in memory
**DO NOT read task JSONs yet** - defer until execution phase (lazy loading)
#### Step 1.4: Update Session Status to Active
**Purpose**: Update workflow-session.json status from "planning" to "active" for dashboard monitoring.

```bash
# Update status atomically using ccw session
ccw session status ${sessionId} active
```

**Resume Mode**: This entire phase is skipped when `--resume-session="session-id"` flag is provided.

### Phase 2: Planning Document Validation
---

Analyze project for workflow initialization and generate .workflow/project.json.

## MANDATORY FIRST STEPS
1. Execute: cat ~/.claude/workflows/cli-templates/schemas/project-json-schema.json (get schema reference)
2. Execute: ccw tool exec get_modules_by_depth '{}' (get project structure)

## Task
Generate complete project.json with:
...

1. Structural scan: get_modules_by_depth.sh, find, wc -l
2. Semantic analysis: Gemini for patterns/architecture
3. Synthesis: Merge findings
4. ${regenerate ? 'Merge with preserved features/development_index/statistics from .workflow/project.json.backup' : ''}
5. Write JSON: Write('.workflow/project.json', jsonContent)
6. Report: Return brief completion summary
---

```javascript
// ...
Detailed plan: ${executionContext.session.artifacts.plan}`)
  return prompt
}

ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write
```
**Execution with fixed IDs** (predictable ID pattern):
```javascript
// Launch CLI in foreground (NOT background)
// Timeout based on complexity: Low=40min, Medium=60min, High=100min
const timeoutByComplexity = {
  "Low": 2400000,    // 40 minutes
  "Medium": 3600000, // 60 minutes
  "High": 6000000    // 100 minutes
}

// Generate fixed execution ID: ${sessionId}-${groupId}
// This enables predictable ID lookup without relying on resume context chains
const sessionId = executionContext?.session?.id || 'standalone'
const fixedExecutionId = `${sessionId}-${batch.groupId}` // e.g., "implement-auth-2025-12-13-P1"

// Check if resuming from previous failed execution
const previousCliId = batch.resumeFromCliId || null

// Build command with fixed ID (and optional resume for continuation)
const cli_command = previousCliId
  ? `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId} --resume ${previousCliId}`
  : `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId}`

bash_result = Bash(
  command=cli_command,
  timeout=timeoutByComplexity[planObject.complexity] || 3600000
)

// Execution ID is now predictable: ${fixedExecutionId}
// Can also extract from output: "ID: implement-auth-2025-12-13-P1"
const cliExecutionId = fixedExecutionId

// Update TodoWrite when execution completes
```
**Resume on Failure** (with fixed ID):
```javascript
// If execution failed or timed out, offer resume option
if (bash_result.status === 'failed' || bash_result.status === 'timeout') {
  console.log(`
⚠️ Execution incomplete. Resume available:
  Fixed ID: ${fixedExecutionId}
  Lookup: ccw cli detail ${fixedExecutionId}
  Resume: ccw cli -p "Continue tasks" --resume ${fixedExecutionId} --tool codex --mode write --id ${fixedExecutionId}-retry
`)

  // Store for potential retry in same session
  batch.resumeFromCliId = fixedExecutionId
}
```

**Result Collection**: After completion, analyze output and collect result following `executionResult` structure (include `cliExecutionId` for resume capability)
### Step 4: Progress Tracking
```bash
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-q
# - Report findings directly

# Method 2: Gemini Review (recommended)
ccw cli -p "[Shared Prompt Template with artifacts]" --tool gemini --mode analysis
# CONTEXT includes: @**/* @${plan.json} [@${exploration.json}]

# Method 3: Qwen Review (alternative)
ccw cli -p "[Shared Prompt Template with artifacts]" --tool qwen --mode analysis
# Same prompt as Gemini, different execution engine

# Method 4: Codex Review (autonomous)
ccw cli -p "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode write
```
**Multi-Round Review with Fixed IDs**:
```javascript
// Generate fixed review ID
const reviewId = `${sessionId}-review`

// First review pass with fixed ID
const reviewResult = Bash(`ccw cli -p "[Review prompt]" --tool gemini --mode analysis --id ${reviewId}`)

// If issues found, continue review dialog with fixed ID chain
if (hasUnresolvedIssues(reviewResult)) {
  // Resume with follow-up questions
  Bash(`ccw cli -p "Clarify the security concerns you mentioned" --resume ${reviewId} --tool gemini --mode analysis --id ${reviewId}-followup`)
}
```
**Implementation Note**: Replace `[Shared Prompt Template with artifacts]` placeholder with actual template content, substituting:
- `@{plan.json}` → `@${executionContext.session.artifacts.plan}`
- `[@{exploration.json}]` → exploration files from artifacts (if exists)
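For illustration, one substituted invocation might look like this; the plan path and prompt text are examples, not actual session artifacts:

```javascript
// Example substitution of the shared prompt placeholder (path illustrative)
const planPath = executionContext.session.artifacts.plan  // e.g., ".workflow/.lite-plan/implement-auth-2025-12-13/plan.json"
Bash(`ccw cli -p "Review code quality against acceptance criteria in @${planPath}" --tool gemini --mode analysis --id ${sessionId}-review`)
```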
### Step 6: Update Development Index

**Trigger**: After all executions complete (regardless of code review)

**Skip Condition**: Skip if `.workflow/project.json` does not exist

**Operations**:
```javascript
const projectJsonPath = '.workflow/project.json'
if (!fileExists(projectJsonPath)) return // Silent skip

const projectJson = JSON.parse(Read(projectJsonPath))

// Initialize if needed
if (!projectJson.development_index) {
  projectJson.development_index = { feature: [], enhancement: [], bugfix: [], refactor: [], docs: [] }
}

// Detect category from keywords
function detectCategory(text) {
  text = text.toLowerCase()
  if (/\b(fix|bug|error|issue|crash)\b/.test(text)) return 'bugfix'
  if (/\b(refactor|cleanup|reorganize)\b/.test(text)) return 'refactor'
  if (/\b(doc|readme|comment)\b/.test(text)) return 'docs'
  if (/\b(add|new|create|implement)\b/.test(text)) return 'feature'
  return 'enhancement'
}

// Detect sub_feature from task file paths
function detectSubFeature(tasks) {
  const dirs = tasks.map(t => t.file?.split('/').slice(-2, -1)[0]).filter(Boolean)
  const counts = dirs.reduce((a, d) => { a[d] = (a[d] || 0) + 1; return a }, {})
  return Object.entries(counts).sort((a, b) => b[1] - a[1])[0]?.[0] || 'general'
}

const category = detectCategory(`${planObject.summary} ${planObject.approach}`)
const entry = {
  title: planObject.summary.slice(0, 60),
  sub_feature: detectSubFeature(planObject.tasks),
  date: new Date().toISOString().split('T')[0],
  description: planObject.approach.slice(0, 100),
  status: previousExecutionResults.every(r => r.status === 'completed') ? 'completed' : 'partial',
  session_id: executionContext?.session?.id || null
}

projectJson.development_index[category].push(entry)
projectJson.statistics.last_updated = new Date().toISOString()
Write(projectJsonPath, JSON.stringify(projectJson, null, 2))

console.log(`✓ Development index: [${category}] ${entry.title}`)
```
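As a quick sanity check, the keyword heuristic above classifies sample plan text like this (inputs illustrative):

```javascript
detectCategory("Fix login crash on empty password")   // → 'bugfix'   (matches fix/crash)
detectCategory("Implement JWT refresh flow")          // → 'feature'  (matches implement)
detectCategory("Reorganize session manager modules")  // → 'refactor' (matches reorganize)
detectCategory("Improve dashboard rendering speed")   // → 'enhancement' (no keyword match)
```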
## Best Practices

**Input Modes**: In-memory (lite-plan), prompt (standalone), file (JSON/text)
| Scenario | Condition | Handling |
|----------|-----------|----------|
| Empty file | File exists but no content | Error: "File is empty: {path}. Provide task description." |
| Invalid Enhanced Task JSON | JSON missing required fields | Warning: "Missing required fields. Treating as plain text." |
| Malformed JSON | JSON parsing fails | Treat as plain text (expected for non-JSON files) |
| Execution failure | Agent/Codex crashes | Display error, use fixed ID `${sessionId}-${groupId}` for resume: `ccw cli -p "Continue" --resume <fixed-id> --id <fixed-id>-retry` |
| Execution timeout | CLI exceeded timeout | Use fixed ID for resume with extended timeout |
| Codex unavailable | Codex not installed | Show installation instructions, offer Agent execution |
| Fixed ID not found | Custom ID lookup failed | Check `ccw cli history`, verify date directories |
## Data Structures
Collected after each execution call completes:

```javascript
executionResult = {
  ...
  tasksSummary: string,      // Brief description of tasks handled
  completionSummary: string, // What was completed
  keyOutputs: string,        // Files created/modified, key changes
  notes: string,             // Important context for next execution
  fixedCliId: string | null  // Fixed CLI execution ID (e.g., "implement-auth-2025-12-13-P1")
}
```

Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.

**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.

**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
```bash
# Lookup previous execution
ccw cli detail ${fixedCliId}

# Resume with new fixed ID for retry
ccw cli -p "Continue from where we left off" --resume ${fixedCliId} --tool codex --mode write --id ${fixedCliId}-retry
```
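A minimal sketch of assembling one entry after a run; the status mapping and summary strings are simplified assumptions:

```javascript
// Assemble an executionResult entry after a CLI run (simplified)
previousExecutionResults.push({
  status: bash_result.status === 'failed' || bash_result.status === 'timeout' ? 'partial' : 'completed',
  tasksSummary: `Group ${batch.groupId}`,          // illustrative
  completionSummary: 'See CLI output for detail',  // illustrative
  keyOutputs: 'Files listed in CLI output',        // illustrative
  notes: '',
  fixedCliId: fixedExecutionId
})
```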
---

```
Phase 1: Bug Analysis & Diagnosis
|- needsDiagnosis=true -> Launch parallel cli-explore-agents (1-4 based on severity)
+- needsDiagnosis=false (hotfix) -> Skip directly to Phase 3 (Fix Planning)

Phase 2: Clarification (optional, multi-round)
|- Aggregate clarification_needs from all diagnosis angles
|- Deduplicate similar questions
+- Decision:
   |- Has clarifications -> AskUserQuestion (max 4 questions per round, multiple rounds allowed)
   +- No clarifications -> Skip to Phase 3

Phase 3: Fix Planning (NO CODE EXECUTION - planning only)
...
Phase 5: Dispatch
```
### Phase 1: Intelligent Multi-Angle Diagnosis

**Session Setup** (MANDATORY - follow exactly):
**Option 1: Using CLI Command** (Recommended for simplicity):
```bash
# Generate session ID
bug_slug=$(echo "${bug_description}" | tr '[:upper:]' '[:lower:]' | tr -cs '[:alnum:]' '-' | cut -c1-40)
date_str=$(date -u '+%Y-%m-%d')
session_id="${bug_slug}-${date_str}"

# Initialize lite-fix session (location auto-inferred from type)
ccw session init "${session_id}" \
  --type lite-fix \
  --content "{\"description\":\"${bug_description}\",\"severity\":\"${severity}\"}"

# Get session folder
session_folder=".workflow/.lite-fix/${session_id}"
echo "Session initialized: ${session_id} at ${session_folder}"
```
**Option 2: Using session_manager Tool** (For programmatic access):
```javascript
// Helper: Get UTC+8 (China Standard Time) ISO string
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10) // Format: 2025-12-17
const sessionId = `${bugSlug}-${dateStr}` // e.g., "user-avatar-upload-fails-2025-12-17"

// ... session_manager init returns initResult ...
const sessionFolder = initResult.result.path
console.log(`Session initialized: ${sessionId} at ${sessionFolder}`)
```
**Session File Structure**:
- `session-metadata.json` - Session metadata (created at init, contains description, severity, status)
- `fix-plan.json` - Actual fix planning content (created later in Phase 3, contains fix tasks, diagnosis results)

**Metadata Field Usage** (illustrated below):
- `description`: Displayed in dashboard session list (replaces session ID as title)
- `severity`: Used for fix planning strategy selection (Low/Medium → Direct Claude, High/Critical → Agent)
- `created_at`: Displayed in dashboard timeline
- `status`: Updated through workflow (diagnosing → fixing → completed)
- Custom fields: Any additional fields in metadata are saved and accessible programmatically
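As an illustration of the fields above, the metadata recorded at init might look like this; all values are examples, and `reporter` stands in for an arbitrary custom field:

```javascript
// Illustrative session-metadata.json content for a lite-fix session
const exampleMetadata = {
  description: "User avatar upload fails with 413 for files over 2MB",  // dashboard title
  severity: "High",                        // High/Critical → Agent-based fix planning
  status: "diagnosing",                    // later "fixing", then "completed"
  created_at: "2025-12-17T09:30:00.000Z",  // dashboard timeline
  reporter: "qa-team"                      // custom field, preserved as-is
}
```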
**Accessing Session Data**:
```bash
# Read session metadata
ccw session ${session_id} read session-metadata.json

# Read fix plan content (after Phase 3 completion)
ccw session ${session_id} read fix-plan.json
```
**Diagnosis Decision Logic**:

...

## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{error_keyword_from_bug}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/diagnosis-json-schema.json (get output schema reference)
...

- fix_hints: Suggested fix approaches from ${angle} viewpoint
- dependencies: Dependencies relevant to ${angle} diagnosis
- constraints: ${angle}-specific limitations affecting fix
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
- _metadata.diagnosis_angle: "${angle}"
- _metadata.diagnosis_index: ${index + 1}
...

- [ ] Fix hints are actionable (specific code changes, not generic advice)
- [ ] Reproduction steps are verifiable
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended

## Output
Write: ${sessionFolder}/diagnosis-${angle}.json
---

### Phase 2: Clarification (Optional, Multi-Round)

**Skip if**: No diagnosis or `clarification_needs` is empty across all diagnoses

**⚠️ CRITICAL**: AskUserQuestion tool limits max 4 questions per call. **MUST execute multiple rounds** to exhaust all clarification needs - do NOT stop at round 1.
**Aggregate clarification needs from all diagnosis angles**:
```javascript
// Load manifest and all diagnosis files
...

const uniqueClarifications = deduplicateClarifications(allClarifications)

// Multi-round clarification: batch questions (max 4 per round)
// ⚠️ MUST execute ALL rounds until uniqueClarifications exhausted
if (uniqueClarifications.length > 0) {
  const BATCH_SIZE = 4
  const totalRounds = Math.ceil(uniqueClarifications.length / BATCH_SIZE)

  for (let i = 0; i < uniqueClarifications.length; i += BATCH_SIZE) {
    const batch = uniqueClarifications.slice(i, i + BATCH_SIZE)
    const currentRound = Math.floor(i / BATCH_SIZE) + 1

    console.log(`### Clarification Round ${currentRound}/${totalRounds}`)

    AskUserQuestion({
      questions: batch.map(need => ({
        question: `[${need.source_angle}] ${need.question}\n\nContext: ${need.context}`,
        header: need.source_angle,
        multiSelect: false,
        options: need.options.map((opt, index) => {
          const isRecommended = need.recommended === index
          return {
            label: isRecommended ? `${opt} ★` : opt,
            description: isRecommended ? `Use ${opt} approach (Recommended)` : `Use ${opt} approach`
          }
        })
      }))
    })

    // Store batch responses in clarificationContext before next round
  }
}
```
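The final comment in the loop leaves response handling open; a minimal sketch, assuming AskUserQuestion returns one answer per submitted question, in order:

```javascript
// Accumulate one round of answers into clarificationContext (return shape assumed)
function recordRound(batch, answers, clarificationContext) {
  batch.forEach((need, idx) => {
    clarificationContext.push({
      question: need.question,
      answer: answers?.[idx] ?? null,
      source_angle: need.source_angle
    })
  })
  return clarificationContext
}
```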
## Session Folder Structure

```
.workflow/.lite-fix/{bug-slug}-{YYYY-MM-DD}/
|- diagnosis-{angle1}.json   # Diagnosis angle 1
|- diagnosis-{angle2}.json   # Diagnosis angle 2
|- diagnosis-{angle3}.json   # Diagnosis angle 3 (if applicable)
...
```
---

```
Phase 1: Task Analysis & Exploration
├─ needsExploration=true → Launch parallel cli-explore-agents (1-4 based on complexity)
└─ needsExploration=false → Skip to Phase 2/3

Phase 2: Clarification (optional, multi-round)
├─ Aggregate clarification_needs from all exploration angles
├─ Deduplicate similar questions
└─ Decision:
   ├─ Has clarifications → AskUserQuestion (max 4 questions per round, multiple rounds allowed)
   └─ No clarifications → Skip to Phase 3

Phase 3: Planning (NO CODE EXECUTION - planning only)
...
Phase 5: Dispatch
```
### Phase 1: Intelligent Multi-Angle Exploration

**Session Setup** (MANDATORY - follow exactly):
**Option 1: Using CLI Command** (Recommended for simplicity):
```bash
# Generate session ID
task_slug=$(echo "${task_description}" | tr '[:upper:]' '[:lower:]' | tr -cs '[:alnum:]' '-' | cut -c1-40)
date_str=$(date -u '+%Y-%m-%d')
session_id="${task_slug}-${date_str}"

# Initialize lite-plan session (location auto-inferred from type)
ccw session init "${session_id}" \
  --type lite-plan \
  --content "{\"description\":\"${task_description}\",\"complexity\":\"${complexity}\"}"

# Get session folder
session_folder=".workflow/.lite-plan/${session_id}"
echo "Session initialized: ${session_id} at ${session_folder}"
```
**Option 2: Using session_manager Tool** (For programmatic access):
```javascript
// Helper: Get UTC+8 (China Standard Time) ISO string
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const taskSlug = task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10) // Format: 2025-12-17
const sessionId = `${taskSlug}-${dateStr}` // e.g., "implement-jwt-refresh-2025-12-17"

// ... session_manager init returns initResult ...
const sessionFolder = initResult.result.path
console.log(`Session initialized: ${sessionId} at ${sessionFolder}`)
```
**Session File Structure**:
- `session-metadata.json` - Session metadata (created at init, contains description, complexity, status)
- `plan.json` - Actual planning content (created later in Phase 3, contains tasks, steps, dependencies)

**Metadata Field Usage**:
- `description`: Displayed in dashboard session list (replaces session ID as title)
- `complexity`: Used for planning strategy selection (Low → Direct Claude, Medium/High → Agent)
- `created_at`: Displayed in dashboard timeline
- Custom fields: Any additional fields in metadata are saved and accessible programmatically
**Accessing Session Data**:
```bash
# Read session metadata
ccw session ${session_id} read session-metadata.json

# Read plan content (after Phase 3 completion)
ccw session ${session_id} read plan.json
```
**Exploration Decision Logic**:

...

**Launch Parallel Explorations** - Orchestrator assigns angle to each agent:

**⚠️ CRITICAL - NO BACKGROUND EXECUTION**:
- **MUST NOT use `run_in_background: true`** - exploration results are REQUIRED before planning
```javascript
// Launch agents with pre-assigned angles
const explorationTasks = selectedAngles.map((angle, index) =>
  Task(
    subagent_type="cli-explore-agent",
    run_in_background=false, // ⚠️ MANDATORY: Must wait for results
    description=`Explore: ${angle}`,
    prompt=`
## Task Objective
...

## MANDATORY FIRST STEPS (Execute by Agent)
**You (cli-explore-agent) MUST execute these steps in order:**
1. Run: ccw tool exec get_modules_by_depth '{}' (project structure)
2. Run: rg -l "{keyword_from_task}" --type ts (locate relevant files)
3. Execute: cat ~/.claude/workflows/cli-templates/schemas/explore-json-schema.json (get output schema reference)
...

- dependencies: Dependencies relevant to ${angle}
- integration_points: Where to integrate from ${angle} viewpoint (include file:line locations)
- constraints: ${angle}-specific limitations/conventions
- clarification_needs: ${angle}-related ambiguities (options array + recommended index)
- _metadata.exploration_angle: "${angle}"
## Success Criteria
...
- [ ] Integration points include file:line locations
- [ ] Constraints are project-specific to ${angle}
- [ ] JSON output follows schema exactly
- [ ] clarification_needs includes options + recommended

## Output
Write: ${sessionFolder}/exploration-${angle}.json
...
```
---

### Phase 2: Clarification (Optional, Multi-Round)

**Skip if**: No exploration or `clarification_needs` is empty across all explorations

**⚠️ CRITICAL**: AskUserQuestion tool limits max 4 questions per call. **MUST execute multiple rounds** to exhaust all clarification needs - do NOT stop at round 1.
**Aggregate clarification needs from all exploration angles**:
```javascript
// Load manifest and all exploration files
...
explorations.forEach(exp => {
  ...
})

// Deduplicate exact same questions only
const seen = new Set()
const dedupedClarifications = allClarifications.filter(c => {
  const key = c.question.toLowerCase()
  if (seen.has(key)) return false
  seen.add(key)
  return true
})

// Multi-round clarification: batch questions (max 4 per round)
if (dedupedClarifications.length > 0) {
  const BATCH_SIZE = 4
  const totalRounds = Math.ceil(dedupedClarifications.length / BATCH_SIZE)

  for (let i = 0; i < dedupedClarifications.length; i += BATCH_SIZE) {
    const batch = dedupedClarifications.slice(i, i + BATCH_SIZE)
    const currentRound = Math.floor(i / BATCH_SIZE) + 1

    console.log(`### Clarification Round ${currentRound}/${totalRounds}`)

    AskUserQuestion({
      questions: batch.map(need => ({
        question: `[${need.source_angle}] ${need.question}\n\nContext: ${need.context}`,
        header: need.source_angle.substring(0, 12),
        multiSelect: false,
        options: need.options.map((opt, index) => ({
          label: need.recommended === index ? `${opt} ★` : opt,
          description: need.recommended === index ? `Recommended` : `Use ${opt}`
        }))
      }))
    })

    // Store batch responses in clarificationContext before next round
  }
}
```
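For instance, the exact-match dedup above collapses case-insensitive duplicates across angles (sample data illustrative):

```javascript
// Two angles raise the same question; only the first survives the filter
const allClarifications = [
  { question: "Which auth provider?", source_angle: "architecture", options: ["JWT", "OAuth"], recommended: 0 },
  { question: "which auth provider?", source_angle: "security",     options: ["JWT", "OAuth"], recommended: 0 }
]
// After filtering: 1 entry remains (keys compared via question.toLowerCase())
```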
**Low Complexity** - Claude plans directly (no agent):
```javascript
// Step 1: Read schema
const schema = Bash(`cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`)

// Step 2: ⚠️ MANDATORY - Read and review ALL exploration files
const manifest = JSON.parse(Read(`${sessionFolder}/explorations-manifest.json`))
manifest.explorations.forEach(exp => {
  const explorationData = Read(exp.path)
  console.log(`\n### Exploration: ${exp.angle}\n${explorationData}`)
})

// Step 3: Generate plan following schema (Claude directly, no agent)
// ⚠️ Plan MUST incorporate insights from exploration files read in Step 2
const plan = {
  summary: "...",
  approach: "...",
  ...
  _metadata: { timestamp: getUtc8ISOString(), source: "direct-planning", planning_mode: "direct" }
}

// Step 4: Write plan to session folder
Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))

// Step 5: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
```
**Medium/High Complexity** - Invoke cli-lite-planning-agent:

...
## Session Folder Structure

```
.workflow/.lite-plan/{task-slug}-{YYYY-MM-DD}/
├── exploration-{angle1}.json   # Exploration angle 1
├── exploration-{angle2}.json   # Exploration angle 2
├── exploration-{angle3}.json   # Exploration angle 3 (if applicable)
...
```
||||
---

```
Phase 2: Context Gathering
├─ Tasks attached: Analyze structure → Identify integration → Generate package
└─ Output: contextPath + conflict_risk

Phase 3: Conflict Resolution
└─ Decision (conflict_risk check):
   ├─ conflict_risk ≥ medium → Execute /workflow:tools:conflict-resolution
   │  ├─ Tasks attached: Detect conflicts → Present to user → Apply strategies
   ...
```
---

### Phase 3: Conflict Resolution

**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"
**Parse Output**:
- Extract: Execution status (success/skipped/failed)
- Verify: conflict-resolution.json file path (if executed)

**Validation**:
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)

**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 3.5
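A sketch of this gate as the orchestrator applies it; the `conflict_risk` field access on the context package is assumed from the description above:

```javascript
// Gate Phase 3 on conflict_risk from context-package.json
const pkg = JSON.parse(Read(`.workflow/active/${sessionId}/.process/context-package.json`))
if (pkg.conflict_risk === 'medium' || pkg.conflict_risk === 'high') {
  SlashCommand(command=`/workflow:tools:conflict-resolution --session ${sessionId}`)
  // then verify .process/conflict-resolution.json exists
} else {
  // conflict_risk none/low → skip directly to Phase 3.5
}
```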
- Parse context path from Phase 2 output, store in memory
- **Extract conflict_risk from context-package.json**: Determine Phase 3 execution
- **If conflict_risk ≥ medium**: Launch Phase 3 conflict-resolution with sessionId and contextPath
- Wait for Phase 3 to finish executing (if executed), verify conflict-resolution.json created
- **If conflict_risk is none/low**: Skip Phase 3, proceed directly to Phase 4
- **Build Phase 4 command**: `/workflow:tools:task-generate-agent --session [sessionId]`
- Pass session ID to Phase 4 command