Mirror of https://github.com/catlog22/Claude-Code-Workflow.git, synced 2026-02-06 01:54:11 +08:00
Compare commits (232 commits)
.claude/CLAUDE.md (new file, 33 lines)

@@ -0,0 +1,33 @@
# Claude Instructions

- **CLI Tools Usage**: @~/.claude/workflows/cli-tools-usage.md
- **Coding Philosophy**: @~/.claude/workflows/coding-philosophy.md
- **Context Requirements**: @~/.claude/workflows/context-tools-ace.md
- **File Modification**: @~/.claude/workflows/file-modification.md
- **CLI Endpoints Config**: @.claude/cli-tools.json

## CLI Endpoints

**Strictly follow the @.claude/cli-tools.json configuration**

Available CLI endpoints are dynamically defined by the config file:
- Built-in tools and their enable/disable status
- Custom API endpoints registered via the Dashboard
- Managed through the CCW Dashboard Status page

## Tool Execution

### Agent Calls
- **Always use `run_in_background: false`** for Task tool agent calls: `Task({ subagent_type: "xxx", prompt: "...", run_in_background: false })` to ensure synchronous execution and immediate result visibility
- **TaskOutput usage**: Only use `TaskOutput({ task_id: "xxx", block: false })` + sleep loop to poll completion status. NEVER read intermediate output during agent/CLI execution - wait for final result only

### CLI Tool Calls (ccw cli)
- **Always use `run_in_background: true`** for Bash tool when calling ccw cli:

```
Bash({ command: "ccw cli -p '...' --tool gemini", run_in_background: true })
```

- **After CLI call**: If no other tasks, stop immediately - let CLI execute in background, do NOT poll with TaskOutput

## Code Diagnostics

- **Prefer `mcp__ide__getDiagnostics`** for code error checking over shell-based TypeScript compilation
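The agent-call rules above imply a simple polling shape; a minimal sketch in the same pseudo-call notation used in this file (the `status` field and the `"running"` value are assumptions for illustration, not a documented API):

```javascript
// Synchronous agent call: result is visible immediately.
Task({ subagent_type: "code-developer", prompt: "...", run_in_background: false })

// If a task is already running in the background, poll only for completion;
// never read intermediate output while it is still executing.
let result = TaskOutput({ task_id: "task-1", block: false })
while (result.status === "running") {            // assumed status field
  Bash({ command: "sleep 10" })                  // wait between polls
  result = TaskOutput({ task_id: "task-1", block: false })
}
// Use result only once it reaches a terminal state.
```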
.claude/active_memory_config.json (new file, 4 lines)

@@ -0,0 +1,4 @@
```json
{
  "interval": "manual",
  "tool": "gemini"
}
```
@@ -203,7 +203,13 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
   "id": "IMPL-N",
   "title": "Descriptive task name",
   "status": "pending|active|completed|blocked",
-  "context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
+  "context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json",
+  "cli_execution_id": "WFS-{session}-IMPL-N",
+  "cli_execution": {
+    "strategy": "new|resume|fork|merge_fork",
+    "resume_from": "parent-cli-id",
+    "merge_from": ["id1", "id2"]
+  }
 }
 ```
 
@@ -216,6 +222,50 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
 - `title`: Descriptive task name summarizing the work
 - `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
 - `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
+- `cli_execution_id`: Unique CLI conversation ID (format: `{session_id}-{task_id}`)
+- `cli_execution`: CLI execution strategy based on task dependencies
+  - `strategy`: Execution pattern (`new`, `resume`, `fork`, `merge_fork`)
+  - `resume_from`: Parent task's cli_execution_id (for resume/fork)
+  - `merge_from`: Array of parent cli_execution_ids (for merge_fork)
+
+**CLI Execution Strategy Rules** (MANDATORY - apply to all tasks):
+
+| Dependency Pattern | Strategy | CLI Command Pattern |
+|--------------------|----------|---------------------|
+| No `depends_on` | `new` | `--id {cli_execution_id}` |
+| 1 parent, parent has 1 child | `resume` | `--resume {resume_from}` |
+| 1 parent, parent has N children | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
+| N parents | `merge_fork` | `--resume {merge_from.join(',')} --id {cli_execution_id}` |
+
+**Strategy Selection Algorithm**:
+
+```javascript
+function computeCliStrategy(task, allTasks) {
+  const deps = task.context?.depends_on || []
+  const childCount = allTasks.filter(t =>
+    t.context?.depends_on?.includes(task.id)
+  ).length
+
+  if (deps.length === 0) {
+    return { strategy: "new" }
+  } else if (deps.length === 1) {
+    const parentTask = allTasks.find(t => t.id === deps[0])
+    const parentChildCount = allTasks.filter(t =>
+      t.context?.depends_on?.includes(deps[0])
+    ).length
+
+    if (parentChildCount === 1) {
+      return { strategy: "resume", resume_from: parentTask.cli_execution_id }
+    } else {
+      return { strategy: "fork", resume_from: parentTask.cli_execution_id }
+    }
+  } else {
+    const mergeFrom = deps.map(depId =>
+      allTasks.find(t => t.id === depId).cli_execution_id
+    )
+    return { strategy: "merge_fork", merge_from: mergeFrom }
+  }
+}
+```
+
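To make the strategy rules concrete, here is a small worked example (hypothetical session ID and task graph, shaped by the schema above):

```javascript
// IMPL-1 has two children, so its dependents fork;
// IMPL-4 has two parents, so it merge_forks.
const allTasks = [
  { id: "IMPL-1", cli_execution_id: "WFS-demo-IMPL-1", context: { depends_on: [] } },
  { id: "IMPL-2", cli_execution_id: "WFS-demo-IMPL-2", context: { depends_on: ["IMPL-1"] } },
  { id: "IMPL-3", cli_execution_id: "WFS-demo-IMPL-3", context: { depends_on: ["IMPL-1"] } },
  { id: "IMPL-4", cli_execution_id: "WFS-demo-IMPL-4", context: { depends_on: ["IMPL-2", "IMPL-3"] } },
]

computeCliStrategy(allTasks[0], allTasks) // { strategy: "new" }
computeCliStrategy(allTasks[1], allTasks) // { strategy: "fork", resume_from: "WFS-demo-IMPL-1" }
computeCliStrategy(allTasks[3], allTasks) // { strategy: "merge_fork", merge_from: ["WFS-demo-IMPL-2", "WFS-demo-IMPL-3"] }
```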
 #### Meta Object
 
@@ -225,7 +275,13 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
   "type": "feature|bugfix|refactor|test-gen|test-fix|docs",
   "agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
   "execution_group": "parallel-abc123|null",
-  "module": "frontend|backend|shared|null"
+  "module": "frontend|backend|shared|null",
+  "execution_config": {
+    "method": "agent|hybrid|cli",
+    "cli_tool": "codex|gemini|qwen|auto",
+    "enable_resume": true,
+    "previous_cli_id": "string|null"
+  }
 }
 }
 ```

@@ -235,6 +291,11 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
 - `agent`: Assigned agent for execution
 - `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
 - `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
+- `execution_config`: CLI execution settings (from userConfig in task-generate-agent)
+  - `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
+  - `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, or `auto`
+  - `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
+  - `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)
+
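For illustration, a filled-in meta object might look like this (all values are hypothetical, shaped only by the field docs above):

```javascript
// Example meta block for a backend feature task executed in hybrid mode.
const meta = {
  type: "feature",
  agent: "@code-developer",
  execution_group: null,            // runs sequentially
  module: "backend",
  execution_config: {
    method: "hybrid",               // agent orchestrates, CLI does heavy lifting
    cli_tool: "codex",
    enable_resume: true,
    previous_cli_id: null           // filled at runtime from the parent task
  }
}
```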
 **Test Task Extensions** (for type="test-gen" or type="test-fix"):
 
@@ -409,14 +470,14 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
 // Pattern: Gemini CLI deep analysis
 {
   "step": "gemini_analyze_[aspect]",
-  "command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
+  "command": "ccw cli -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY' --tool gemini --mode analysis --cd [path]",
   "output_to": "analysis_result"
 },
 
 // Pattern: Qwen CLI analysis (fallback/alternative)
 {
   "step": "qwen_analyze_[aspect]",
-  "command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
+  "command": "ccw cli -p '[similar to gemini pattern]' --tool qwen --mode analysis --cd [path]",
   "output_to": "analysis_result"
 },
 

@@ -457,7 +518,7 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
 4. **Command Composition Patterns**:
    - **Single command**: `bash([simple_search])`
    - **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
-   - **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
+   - **CLI analysis**: `ccw cli -p '[prompt]' --tool gemini --mode analysis --cd [path]`
    - **MCP integration**: `mcp__[tool]__[function]([params])`
 
 **Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
@@ -479,11 +540,12 @@ The `implementation_approach` supports **two execution modes** based on the presence of the `command` field:
 - Specified command executes the step directly
 - Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
 - **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
-- **Required fields**: Same as default mode **PLUS** `command`
-- **Command patterns**:
-  - `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
-  - `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
-  - `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
+- **Required fields**: Same as default mode **PLUS** `command`, `resume_from` (optional)
+- **Command patterns** (with resume support):
+  - `ccw cli -p '[prompt]' --tool codex --mode write --cd [path]`
+  - `ccw cli -p '[prompt]' --resume ${previousCliId} --tool codex --mode write` (resume from previous)
+  - `ccw cli -p '[prompt]' --tool gemini --mode write --cd [path]` (write mode)
+- **Resume mechanism**: When step depends on previous CLI execution, include `--resume` with previous execution ID
 
 **Semantic CLI Tool Selection**:
 

@@ -500,12 +562,12 @@ Agent determines CLI tool usage per-step based on user semantics and task nature:
 **Task-Based Selection** (when no explicit user preference):
 - **Implementation/coding**: Codex preferred for autonomous development
 - **Analysis/exploration**: Gemini preferred for large context analysis
-- **Documentation**: Gemini/Qwen with write mode (`--approval-mode yolo`)
+- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
 - **Testing**: Depends on complexity - simple=agent, complex=Codex
 
 **Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
 - Agent orchestrates task execution
-- When step has `command` field, agent executes it via Bash
+- When step has `command` field, agent executes it via CCW CLI
 - When step has no `command` field, agent implements directly
 - This maintains agent control while leveraging CLI tool power
 
@@ -559,11 +621,26 @@ Agent determines CLI tool usage per-step based on user semantics and task nature:
   "step": 3,
   "title": "Execute implementation using CLI tool",
   "description": "Use Codex/Gemini for complex autonomous execution",
-  "command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
+  "command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
   "modification_points": ["[Same as default mode]"],
   "logic_flow": ["[Same as default mode]"],
   "depends_on": [1, 2],
-  "output": "cli_implementation"
+  "output": "cli_implementation",
+  "cli_output_id": "step3_cli_id" // Store execution ID for resume
+},
+
+// === CLI MODE with Resume: Continue from previous CLI execution ===
+{
+  "step": 4,
+  "title": "Continue implementation with context",
+  "description": "Resume from previous step with accumulated context",
+  "command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
+  "resume_from": "step3_cli_id", // Reference previous step's CLI ID
+  "modification_points": ["[Continue from step 3]"],
+  "logic_flow": ["[Build on previous output]"],
+  "depends_on": [3],
+  "output": "continued_implementation",
+  "cli_output_id": "step4_cli_id"
 }
 ]
 ```
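Tying this to the execution rules in CLAUDE.md, a sketch of how an agent might launch such a chained pair (call notation as documented there; the path and the stored ID value are hypothetical placeholders):

```javascript
// Step 3: fresh CLI conversation, backgrounded per the ccw cli rule.
Bash({
  command: "ccw cli -p '[prompt]' --tool codex --mode write --cd src/feature",
  run_in_background: true
})

// Step 4: after step 3 completes, resume its conversation; step3CliId holds
// the execution ID captured from step 3's cli_output_id.
const step3CliId = "WFS-demo-IMPL-3"  // hypothetical value
Bash({
  command: `ccw cli -p '[continuation prompt]' --resume ${step3CliId} --tool codex --mode write`,
  run_in_background: true
})
```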
@@ -759,6 +836,8 @@ Use `analysis_results.complexity` or task count to determine structure:
 - Use provided context package: Extract all information from structured context
 - Respect memory-first rule: Use provided content (already loaded from memory/file)
 - Follow 6-field schema: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
+- **Assign CLI execution IDs**: Every task MUST have `cli_execution_id` (format: `{session_id}-{task_id}`)
+- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
 - Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
 - Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
 - Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
@@ -100,7 +100,7 @@ CONTEXT: @**/*
 # Specific patterns
 CONTEXT: @CLAUDE.md @src/**/* @*.ts
 
-# Cross-directory (requires --include-directories)
+# Cross-directory (requires --includeDirs)
 CONTEXT: @**/* @../shared/**/* @../types/**/*
 ```
 

@@ -134,7 +134,7 @@ RULES: $(cat {selected_template}) | {constraints}
 ```
 analyze|plan → gemini (qwen fallback) + mode=analysis
 execute (simple|medium) → gemini (qwen fallback) + mode=write
-execute (complex) → codex + mode=auto
+execute (complex) → codex + mode=write
 discuss → multi (gemini + codex parallel)
 ```
 
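The routing table above is easy to express in code; a minimal sketch (function and return-shape names are illustrative, not part of the spec):

```javascript
// Map an intent plus complexity onto a CLI tool and mode, per the table above.
function routeCliCall(intent, complexity) {
  if (intent === "analyze" || intent === "plan")
    return { tool: "gemini", fallback: "qwen", mode: "analysis" }
  if (intent === "execute")
    return complexity === "complex"
      ? { tool: "codex", mode: "write" }
      : { tool: "gemini", fallback: "qwen", mode: "write" }
  if (intent === "discuss")
    return { tools: ["gemini", "codex"], parallel: true }
  throw new Error(`unknown intent: ${intent}`)
}
```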
@@ -144,43 +144,40 @@ discuss → multi (gemini + codex parallel)
 - Codex: `gpt-5` (default), `gpt5-codex` (large context)
 - **Position**: `-m` after prompt, before flags
 
-### Command Templates
+### Command Templates (CCW Unified CLI)
 
 **Gemini/Qwen (Analysis)**:
 ```bash
-cd {dir} && gemini -p "
+ccw cli -p "
 PURPOSE: {goal}
 TASK: {task}
 MODE: analysis
 CONTEXT: @**/*
 EXPECTED: {output}
 RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
-" -m gemini-2.5-pro
+" --tool gemini --mode analysis --cd {dir}
 
-# Qwen fallback: Replace 'gemini' with 'qwen'
+# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
 ```
 
 **Gemini/Qwen (Write)**:
 ```bash
-cd {dir} && gemini -p "..." --approval-mode yolo
+ccw cli -p "..." --tool gemini --mode write --cd {dir}
 ```
 
-**Codex (Auto)**:
+**Codex (Write)**:
 ```bash
-codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access
+ccw cli -p "..." --tool codex --mode write --cd {dir}
 
-# Resume: Add 'resume --last' after prompt
-codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
 ```
 
 **Cross-Directory** (Gemini/Qwen):
 ```bash
-cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared
+ccw cli -p "CONTEXT: @**/* @../shared/**/*" --tool gemini --mode analysis --cd src/auth --includeDirs ../shared
 ```
 
 **Directory Scope**:
 - `@` only references current directory + subdirectories
-- External dirs: MUST use `--include-directories` + explicit CONTEXT reference
+- External dirs: MUST use `--includeDirs` + explicit CONTEXT reference
 
 **Timeout**: Simple 20min | Medium 40min | Complex 60min (Codex ×1.5)
 
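A filled-in instance of the analysis template, for orientation (the PURPOSE/TASK/EXPECTED text and the `src/session` path are invented placeholders; flags follow the template above):

```javascript
// Backgrounded per CLAUDE.md's ccw cli rule; {dir} resolved to a real path.
Bash({
  command: `ccw cli -p "
PURPOSE: Understand how sessions are persisted
TASK: Trace session save/load paths and list the modules involved
MODE: analysis
CONTEXT: @**/*
EXPECTED: Ordered list of files with one-line roles
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
" --tool gemini --mode analysis --cd src/session`,
  run_in_background: true
})
```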
@@ -78,14 +78,14 @@ rg "^import .* from " -n | head -30
 ### Gemini Semantic Analysis (deep-scan, dependency-map)
 
 ```bash
-cd {dir} && gemini -p "
+ccw cli -p "
 PURPOSE: {from prompt}
 TASK: {from prompt}
 MODE: analysis
 CONTEXT: @**/*
 EXPECTED: {from prompt}
 RULES: {from prompt, if template specified} | analysis=READ-ONLY
-"
+" --tool gemini --mode analysis --cd {dir}
 ```
 
 **Fallback Chain**: Gemini → Qwen → Codex → Bash-only
@@ -1,140 +1,117 @@
 ---
 name: cli-lite-planning-agent
 description: |
-  Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans. Used by lite-plan workflow for Medium/High complexity tasks.
+  Generic planning agent for lite-plan and lite-fix workflows. Generates structured plan JSON based on provided schema reference.
 
   Core capabilities:
-  - Task decomposition (1-10 tasks with IDs: T1, T2...)
-  - Dependency analysis (depends_on references)
-  - Flow control (parallel/sequential phases)
-  - Multi-angle exploration context integration
+  - Schema-driven output (plan-json-schema or fix-plan-json-schema)
+  - Task decomposition with dependency analysis
+  - CLI execution ID assignment for fork/merge strategies
+  - Multi-angle context integration (explorations or diagnoses)
 color: cyan
 ---
 
-You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate planObject for downstream execution.
+You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.
 
-## Output Schema
-
-**Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`
-
-**planObject Structure**:
-```javascript
-{
-  summary: string,              // 2-3 sentence overview
-  approach: string,             // High-level strategy
-  tasks: [TaskObject],          // 1-10 structured tasks
-  flow_control: {               // Execution phases
-    execution_order: [{ phase, tasks, type }],
-    exit_conditions: { success, failure }
-  },
-  focus_paths: string[],        // Affected files (aggregated)
-  estimated_time: string,
-  recommended_execution: "Agent" | "Codex",
-  complexity: "Low" | "Medium" | "High",
-  _metadata: { timestamp, source, planning_mode, exploration_angles, duration_seconds }
-}
-```
-
-**TaskObject Structure**:
-```javascript
-{
-  id: string,                   // T1, T2, T3...
-  title: string,                // Action verb + target
-  file: string,                 // Target file path
-  action: string,               // Create|Update|Implement|Refactor|Add|Delete|Configure|Test|Fix
-  description: string,          // What to implement (1-2 sentences)
-  modification_points: [{       // Precise changes (optional)
-    file: string,
-    target: string,             // function:lineRange
-    change: string
-  }],
-  implementation: string[],     // 2-7 actionable steps
-  reference: {                  // Pattern guidance (optional)
-    pattern: string,
-    files: string[],
-    examples: string
-  },
-  acceptance: string[],         // 1-4 quantified criteria
-  depends_on: string[]          // Task IDs: ["T1", "T2"]
-}
-```
-
 ## Input Context
 
 ```javascript
 {
-  task_description: string,
-  explorationsContext: { [angle]: ExplorationResult } | null,
-  explorationAngles: string[],
+  // Required
+  task_description: string,     // Task or bug description
+  schema_path: string,          // Schema reference path (plan-json-schema or fix-plan-json-schema)
+  session: { id, folder, artifacts },
+
+  // Context (one of these based on workflow)
+  explorationsContext: { [angle]: ExplorationResult } | null,   // From lite-plan
+  diagnosesContext: { [angle]: DiagnosisResult } | null,        // From lite-fix
+  contextAngles: string[],      // Exploration or diagnosis angles
+
+  // Optional
   clarificationContext: { [question]: answer } | null,
-  complexity: "Low" | "Medium" | "High",
-  cli_config: { tool, template, timeout, fallback },
-  session: { id, folder, artifacts }
+  complexity: "Low" | "Medium" | "High",                        // For lite-plan
+  severity: "Low" | "Medium" | "High" | "Critical",             // For lite-fix
+  cli_config: { tool, template, timeout, fallback }
 }
 ```
 
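An example of what the orchestrator might pass in (all values hypothetical; the DiagnosisResult shape here is an assumption for illustration):

```javascript
const input = {
  task_description: "Login form silently drops auth errors",
  schema_path: "~/.claude/workflows/cli-templates/schemas/fix-plan-json-schema.json",
  session: { id: "WFS-demo", folder: ".workflow/active/WFS-demo", artifacts: [] },
  explorationsContext: null,
  diagnosesContext: { "error-handling": { findings: ["catch block swallows AuthError"] } },
  contextAngles: ["error-handling"],
  clarificationContext: null,
  severity: "Medium",
  cli_config: { tool: "gemini", template: "02-breakdown-task-steps.txt", timeout: 3600, fallback: "qwen" }
}
```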
+## Schema-Driven Output
+
+**CRITICAL**: Read the schema reference first to determine output structure:
+- `plan-json-schema.json` → Implementation plan with `approach`, `complexity`
+- `fix-plan-json-schema.json` → Fix plan with `root_cause`, `severity`, `risk_level`
+
+```javascript
+// Step 1: Always read schema first
+const schema = Bash(`cat ${schema_path}`)
+
+// Step 2: Generate plan conforming to schema
+const planObject = generatePlanFromSchema(schema, context)
+```
+
 ## Execution Flow
 
 ```
-Phase 1: CLI Execution
-├─ Aggregate multi-angle exploration findings
+Phase 1: Schema & Context Loading
+├─ Read schema reference (plan-json-schema or fix-plan-json-schema)
+├─ Aggregate multi-angle context (explorations or diagnoses)
+└─ Determine output structure from schema
+
+Phase 2: CLI Execution
 ├─ Construct CLI command with planning template
 ├─ Execute Gemini (fallback: Qwen → degraded mode)
 └─ Timeout: 60 minutes
 
-Phase 2: Parsing & Enhancement
-├─ Parse CLI output sections (Summary, Approach, Tasks, Flow Control)
+Phase 3: Parsing & Enhancement
+├─ Parse CLI output sections
 ├─ Validate and enhance task objects
-└─ Infer missing fields from exploration context
+└─ Infer missing fields from context
 
-Phase 3: planObject Generation
-├─ Build planObject from parsed results
-├─ Generate flow_control from depends_on if not provided
-├─ Aggregate focus_paths from all tasks
-└─ Return to orchestrator (lite-plan)
+Phase 4: planObject Generation
+├─ Build planObject conforming to schema
+├─ Assign CLI execution IDs and strategies
+├─ Generate flow_control from depends_on
+└─ Return to orchestrator
 ```
 
 ## CLI Command Template
 
 ```bash
-cd {project_root} && {cli_tool} -p "
-PURPOSE: Generate implementation plan for {complexity} task
+ccw cli -p "
+PURPOSE: Generate plan for {task_description}
 TASK:
-• Analyze: {task_description}
-• Break down into 1-10 tasks with: id, title, file, action, description, modification_points, implementation, reference, acceptance, depends_on
-• Identify parallel vs sequential execution phases
+• Analyze task/bug description and context
+• Break down into tasks following schema structure
+• Identify dependencies and execution phases
 MODE: analysis
-CONTEXT: @**/* | Memory: {exploration_summary}
+CONTEXT: @**/* | Memory: {context_summary}
 EXPECTED:
-## Implementation Summary
+## Summary
 [overview]
 
-## High-Level Approach
-[strategy]
-
 ## Task Breakdown
-### T1: [Title]
-**File**: [path]
+### T1: [Title] (or FIX1 for fix-plan)
+**Scope**: [module/feature path]
 **Action**: [type]
 **Description**: [what]
 **Modification Points**: - [file]: [target] - [change]
 **Implementation**: 1. [step]
-**Reference**: - Pattern: [name] - Files: [paths] - Examples: [guidance]
-**Acceptance**: - [quantified criterion]
+**Acceptance/Verification**: - [quantified criterion]
 **Depends On**: []
 
 ## Flow Control
 **Execution Order**: - Phase parallel-1: [T1, T2] (independent)
-**Exit Conditions**: - Success: [condition] - Failure: [condition]
 
 ## Time Estimate
 **Total**: [time]
 
 RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
-- Acceptance must be quantified (counts, method names, metrics)
-- Dependencies use task IDs (T1, T2)
+- Follow schema structure from {schema_path}
+- Acceptance/verification must be quantified
+- Dependencies use task IDs
 - analysis=READ-ONLY
-"
+" --tool {cli_tool} --mode analysis --cd {project_root}
 ```
 
 ## Core Functions
@@ -279,6 +256,51 @@ function inferFile(task, ctx) {
 }
 ```
 
+### CLI Execution ID Assignment (MANDATORY)
+
+```javascript
+function assignCliExecutionIds(tasks, sessionId) {
+  const taskMap = new Map(tasks.map(t => [t.id, t]))
+  const childCount = new Map()
+
+  // Count children for each task
+  tasks.forEach(task => {
+    (task.depends_on || []).forEach(depId => {
+      childCount.set(depId, (childCount.get(depId) || 0) + 1)
+    })
+  })
+
+  tasks.forEach(task => {
+    task.cli_execution_id = `${sessionId}-${task.id}`
+    const deps = task.depends_on || []
+
+    if (deps.length === 0) {
+      task.cli_execution = { strategy: "new" }
+    } else if (deps.length === 1) {
+      const parent = taskMap.get(deps[0])
+      const parentChildCount = childCount.get(deps[0]) || 0
+      task.cli_execution = parentChildCount === 1
+        ? { strategy: "resume", resume_from: parent.cli_execution_id }
+        : { strategy: "fork", resume_from: parent.cli_execution_id }
+    } else {
+      task.cli_execution = {
+        strategy: "merge_fork",
+        merge_from: deps.map(depId => taskMap.get(depId).cli_execution_id)
+      }
+    }
+  })
+  return tasks
+}
+```
+
+**Strategy Rules**:
+
+| depends_on | Parent Children | Strategy | CLI Command |
+|------------|-----------------|----------|-------------|
+| [] | - | `new` | `--id {cli_execution_id}` |
+| [T1] | 1 | `resume` | `--resume {resume_from}` |
+| [T1] | >1 | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
+| [T1,T2] | - | `merge_fork` | `--resume {ids.join(',')} --id {cli_execution_id}` |
+
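For example, calling the function on a two-task chain (hypothetical IDs, listed in dependency order) yields:

```javascript
const tasks = assignCliExecutionIds(
  [
    { id: "T1", depends_on: [] },
    { id: "T2", depends_on: ["T1"] }   // T1's only child, so it resumes
  ],
  "WFS-demo"
)
// tasks[0].cli_execution → { strategy: "new" }
// tasks[1].cli_execution → { strategy: "resume", resume_from: "WFS-demo-T1" }
```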
 ### Flow Control Inference
 
 ```javascript
@@ -303,21 +325,44 @@ function inferFlowControl(tasks) {
 ### planObject Generation
 
 ```javascript
-function generatePlanObject(parsed, enrichedContext, input) {
+function generatePlanObject(parsed, enrichedContext, input, schemaType) {
   const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext)
+  assignCliExecutionIds(tasks, input.session.id)  // MANDATORY: Assign CLI execution IDs
   const flow_control = parsed.flow_control?.execution_order?.length > 0 ? parsed.flow_control : inferFlowControl(tasks)
-  const focus_paths = [...new Set(tasks.flatMap(t => [t.file, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
+  const focus_paths = [...new Set(tasks.flatMap(t => [t.file || t.scope, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
 
-  return {
-    summary: parsed.summary || `Implementation plan for: ${input.task_description.slice(0, 100)}`,
-    approach: parsed.approach || "Step-by-step implementation",
+  // Base fields (common to both schemas)
+  const base = {
+    summary: parsed.summary || `Plan for: ${input.task_description.slice(0, 100)}`,
     tasks,
     flow_control,
     focus_paths,
     estimated_time: parsed.time_estimate || `${tasks.length * 30} minutes`,
-    recommended_execution: input.complexity === "Low" ? "Agent" : "Codex",
-    complexity: input.complexity,
-    _metadata: { timestamp: new Date().toISOString(), source: "cli-lite-planning-agent", planning_mode: "agent-based", exploration_angles: input.explorationAngles || [], duration_seconds: Math.round((Date.now() - startTime) / 1000) }
+    recommended_execution: (input.complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
+    _metadata: {
+      timestamp: new Date().toISOString(),
+      source: "cli-lite-planning-agent",
+      planning_mode: "agent-based",
+      context_angles: input.contextAngles || [],
+      duration_seconds: Math.round((Date.now() - startTime) / 1000)
+    }
+  }
+
+  // Schema-specific fields
+  if (schemaType === 'fix-plan') {
+    return {
+      ...base,
+      root_cause: parsed.root_cause || "Root cause from diagnosis",
+      strategy: parsed.strategy || "comprehensive_fix",
+      severity: input.severity || "Medium",
+      risk_level: parsed.risk_level || "medium"
+    }
+  } else {
+    return {
+      ...base,
+      approach: parsed.approach || "Step-by-step implementation",
+      complexity: input.complexity || "Medium"
+    }
   }
 }
 ```
@@ -383,9 +428,12 @@ function validateTask(task) {
 ## Key Reminders
 
 **ALWAYS**:
-- Generate task IDs (T1, T2, T3...)
+- **Read schema first** to determine output structure
+- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
 - Include depends_on (even if empty [])
-- Quantify acceptance criteria
+- **Assign cli_execution_id** (`{sessionId}-{taskId}`)
+- **Compute cli_execution strategy** based on depends_on
+- Quantify acceptance/verification criteria
 - Generate flow_control from dependencies
 - Handle CLI errors with fallback chain
 

@@ -394,3 +442,5 @@ function validateTask(task) {
 - Use vague acceptance criteria
 - Create circular dependencies
 - Skip task validation
+- **Skip CLI execution ID assignment**
+- **Ignore schema structure**
@@ -107,7 +107,7 @@ Phase 3: Task JSON Generation
 
 **Template-Based Command Construction with Test Layer Awareness**:
 ```bash
-cd {project_root} && {cli_tool} -p "
+ccw cli -p "
 PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
 TASK:
 • Review {failed_tests.length} {test_type} test failures: [{test_names}]

@@ -134,7 +134,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
 - Consider previous iteration failures
 - Validate fix doesn't introduce new vulnerabilities
 - analysis=READ-ONLY
-" {timeout_flag}
+" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
 ```
 
 **Layer-Specific Guidance Injection**:

@@ -527,9 +527,9 @@ See: `.process/iteration-{iteration}-cli-output.txt`
 1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
 2. **Execute CLI**:
    ```bash
-   gemini -p "PURPOSE: Analyze integration test failure...
+   ccw cli -p "PURPOSE: Analyze integration test failure...
    TASK: Examine component interactions, data flow, interface contracts...
-   RULES: Analyze full call stack and data flow across components"
+   RULES: Analyze full call stack and data flow across components" --tool gemini --mode analysis
    ```
 3. **Parse Output**: Extract the RCA, fix recommendation (修复建议), and verification recommendation (验证建议) sections
 4. **Generate Task JSON** (IMPL-fix-1.json):
@@ -34,10 +34,11 @@ You are a code execution specialist focused on implementing high-quality, production-ready code
 - **context-package.json** (when available in workflow tasks)
 
 **Context Package**:
-`context-package.json` provides artifact paths - extract dynamically using `jq`:
+`context-package.json` provides artifact paths - read using Read tool or ccw session:
 ```bash
-# Get role analysis paths from context package
-jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
+# Get context package content from session using Read tool
+Read(.workflow/active/${SESSION_ID}/.process/context-package.json)
+# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
 ```
 
 **Pre-Analysis: Smart Tech Stack Loading**:

@@ -121,9 +122,9 @@ When task JSON contains `flow_control.implementation_approach` array:
 - If `command` field present, execute it; otherwise use agent capabilities
 
 **CLI Command Execution (CLI Execute Mode)**:
-When step contains `command` field with Codex CLI, execute via Bash tool. For Codex resume:
-- First task (`depends_on: []`): `codex -C [path] --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
-- Subsequent tasks (has `depends_on`): Add `resume --last` flag to maintain session context
+When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
+- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
+- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session
 
 **Test-Driven Development**:
 - Write tests first (red → green → refactor)
@@ -109,7 +109,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm
 
 3. **load_session_metadata**
    - Action: Load session metadata
-   - Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
+   - Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
    - Output: session_metadata
 ```
 

@@ -155,7 +155,7 @@ When called, you receive:
 - **User Context**: Specific requirements, constraints, and expectations from user discussion
 - **Output Location**: Directory path for generated analysis files
 - **Role Hint** (optional): Suggested role or role selection guidance
-- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
+- **context-package.json** (CCW Workflow): Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
 - **ASSIGNED_ROLE** (optional): Specific role assignment
 - **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions
 
@@ -44,19 +44,19 @@ You are a context discovery specialist focused on gathering relevant project information
 **Use**: Unfamiliar APIs/libraries/patterns
 
 ### 3. Existing Code Discovery
-**Primary (Code-Index MCP)**:
-- `mcp__code-index__set_project_path()` - Initialize index
-- `mcp__code-index__find_files(pattern)` - File pattern matching
-- `mcp__code-index__search_code_advanced()` - Content search
-- `mcp__code-index__get_file_summary()` - File structure analysis
-- `mcp__code-index__refresh_index()` - Update index
+**Primary (CCW CodexLens MCP)**:
+- `mcp__ccw-tools__codex_lens(action="init", path=".")` - Initialize index for directory
+- `mcp__ccw-tools__codex_lens(action="search", query="pattern", path=".")` - Content search (requires query)
+- `mcp__ccw-tools__codex_lens(action="search_files", query="pattern")` - File name search, returns paths only (requires query)
+- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Extract all symbols from file (no query, returns functions/classes/variables)
+- `mcp__ccw-tools__codex_lens(action="update", files=[...])` - Update index for specific files
 
 **Fallback (CLI)**:
 - `rg` (ripgrep) - Fast content search
 - `find` - File discovery
 - `Grep` - Pattern matching
 
-**Priority**: Code-Index MCP > ripgrep > find > grep
+**Priority**: CodexLens MCP > ripgrep > find > grep
 
 ## Simplified Execution Process (3 Phases)
 
@@ -77,9 +77,8 @@ if (file_exists(contextPackagePath)) {
 
 **1.2 Foundation Setup**:
 ```javascript
-// 1. Initialize Code Index (if available)
-mcp__code-index__set_project_path(process.cwd())
-mcp__code-index__refresh_index()
+// 1. Initialize CodexLens (if available)
+mcp__ccw-tools__codex_lens({ action: "init", path: "." })
 
 // 2. Project Structure
 bash(ccw tool exec get_modules_by_depth '{}')
@@ -212,18 +211,18 @@ mcp__exa__web_search_exa({
 
 **Layer 1: File Pattern Discovery**
 ```javascript
-// Primary: Code-Index MCP
-const files = mcp__code-index__find_files("*{keyword}*")
+// Primary: CodexLens MCP
+const files = mcp__ccw-tools__codex_lens({ action: "search_files", query: "*{keyword}*" })
 // Fallback: find . -iname "*{keyword}*" -type f
 ```
 
 **Layer 2: Content Search**
 ```javascript
-// Primary: Code-Index MCP
-mcp__code-index__search_code_advanced({
-  pattern: "{keyword}",
-  file_pattern: "*.ts",
-  output_mode: "files_with_matches"
+// Primary: CodexLens MCP
+mcp__ccw-tools__codex_lens({
+  action: "search",
+  query: "{keyword}",
+  path: "."
 })
 // Fallback: rg "{keyword}" -t ts --files-with-matches
 ```
@@ -231,11 +230,10 @@ mcp__code-index__search_code_advanced({
 **Layer 3: Semantic Patterns**
 ```javascript
 // Find definitions (class, interface, function)
-mcp__code-index__search_code_advanced({
-  pattern: "^(export )?(class|interface|type|function) .*{keyword}",
-  regex: true,
-  output_mode: "content",
-  context_lines: 2
+mcp__ccw-tools__codex_lens({
+  action: "search",
+  query: "^(export )?(class|interface|type|function) .*{keyword}",
+  path: "."
 })
 ```
 
@@ -243,21 +241,22 @@ mcp__code-index__search_code_advanced({
 ```javascript
 // Get file summaries for imports/exports
 for (const file of discovered_files) {
-  const summary = mcp__code-index__get_file_summary(file)
-  // summary: {imports, functions, classes, line_count}
+  const summary = mcp__ccw-tools__codex_lens({ action: "symbol", file: file })
+  // summary: {symbols: [{name, type, line}]}
 }
 ```
 
 **Layer 5: Config & Tests**
 ```javascript
 // Config files
-mcp__code-index__find_files("*.config.*")
-mcp__code-index__find_files("package.json")
+mcp__ccw-tools__codex_lens({ action: "search_files", query: "*.config.*" })
+mcp__ccw-tools__codex_lens({ action: "search_files", query: "package.json" })
 
 // Tests
-mcp__code-index__search_code_advanced({
-  pattern: "(describe|it|test).*{keyword}",
-  file_pattern: "*.{test,spec}.*"
+mcp__ccw-tools__codex_lens({
+  action: "search",
+  query: "(describe|it|test).*{keyword}",
+  path: "."
 })
 ```
 
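The "CodexLens first, ripgrep as fallback" rule these layers repeat can be captured in one helper; a sketch in the same pseudo-call notation used above (the try/catch shape and the rg invocation are illustrative assumptions):

```javascript
// Try CodexLens content search; fall back to ripgrep when the MCP tool
// is unavailable or errors out.
function searchContent(keyword) {
  try {
    return mcp__ccw-tools__codex_lens({ action: "search", query: keyword, path: "." })
  } catch (e) {
    return bash(`rg "${keyword}" --files-with-matches`)
  }
}
```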
@@ -560,14 +559,14 @@ Output: .workflow/session/{session}/.process/context-package.json
 - Expose sensitive data (credentials, keys)
 - Exceed file limits (50 total)
 - Include binaries/generated files
-- Use ripgrep if code-index available
+- Use ripgrep if CodexLens available
 
 **ALWAYS**:
-- Initialize code-index in Phase 0
+- Initialize CodexLens in Phase 0
 - Execute get_modules_by_depth.sh
 - Load CLAUDE.md/README.md (unless in memory)
 - Execute all 3 discovery tracks
-- Use code-index MCP as primary
+- Use CodexLens MCP as primary
 - Fallback to ripgrep only when needed
 - Use Exa for unfamiliar APIs
 - Apply multi-factor scoring
@@ -61,9 +61,9 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execution`:
 
 **Step 2** (CLI execution):
 - Agent substitutes [target_folders] into command
-- Agent executes CLI command via Bash tool:
+- Agent executes CLI command via CCW:
 ```bash
-bash(cd src/modules && gemini --approval-mode yolo -p "
+ccw cli -p "
 PURPOSE: Generate module documentation
 TASK: Create API.md and README.md for each module
 MODE: write
@@ -71,7 +71,7 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execution`:
 ./src/modules/api|code|code:3|dirs:0
 EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
 RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
-")
+" --tool gemini --mode write --cd src/modules
 ```
 
 4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
 {
   "step": "analyze_module_structure",
   "action": "Deep analysis of module structure and API",
-  "command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
+  "command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
   "output_to": "module_analysis",
   "on_error": "fail"
 }
270 .claude/agents/issue-plan-agent.md Normal file
@@ -0,0 +1,270 @@
---
name: issue-plan-agent
description: |
  Closed-loop issue planning agent combining ACE exploration and solution generation.
  Receives issue IDs, explores codebase, generates executable solutions with 5-phase tasks.

  Examples:
  - Context: Single issue planning
    user: "Plan GH-123"
    assistant: "I'll fetch issue details, explore codebase, and generate solution"
  - Context: Batch planning
    user: "Plan GH-123,GH-124,GH-125"
    assistant: "I'll plan 3 issues, detect conflicts, and register solutions"
color: green
---

## Overview

**Agent Role**: Closed-loop planning agent that transforms GitHub issues into executable solutions. Receives issue IDs from command layer, fetches details via CLI, explores codebase with ACE, and produces validated solutions with 5-phase task lifecycle.

**Core Capabilities**:
- ACE semantic search for intelligent code discovery
- Batch processing (1-3 issues per invocation)
- 5-phase task lifecycle (analyze → implement → test → optimize → commit)
- Cross-issue conflict detection
- Dependency DAG validation
- Auto-bind for single solution, return for selection on multiple

**Key Principle**: Generate tasks conforming to schema with quantified acceptance criteria.

---

## 1. Input & Execution

### 1.1 Input Context

```javascript
{
  issue_ids: string[],   // Issue IDs only (e.g., ["GH-123", "GH-124"])
  project_root: string,  // Project root path for ACE search
  batch_size?: number    // Max issues per batch (default: 3)
}
```

**Note**: Agent receives IDs only. Fetch details via `ccw issue status <id> --json`.

### 1.2 Execution Flow

```
Phase 1: Issue Understanding (5%)
  ↓ Fetch details, extract requirements, determine complexity
Phase 2: ACE Exploration (30%)
  ↓ Semantic search, pattern discovery, dependency mapping
Phase 3: Solution Planning (50%)
  ↓ Task decomposition, 5-phase lifecycle, acceptance criteria
Phase 4: Validation & Output (15%)
  ↓ DAG validation, conflict detection, solution registration
```

#### Phase 1: Issue Understanding

**Step 1**: Fetch issue details via CLI
```bash
ccw issue status <issue-id> --json
```

**Step 2**: Analyze and classify
```javascript
function analyzeIssue(issue) {
  return {
    issue_id: issue.id,
    requirements: extractRequirements(issue.description),
    scope: inferScope(issue.title, issue.description),
    complexity: determineComplexity(issue),  // Low | Medium | High
    lifecycle: issue.lifecycle_requirements  // User preferences for test/commit
  }
}
```

**Step 3**: Apply lifecycle requirements to tasks
- `lifecycle.test_strategy` → Configure `test.unit`, `test.commands`
- `lifecycle.commit_strategy` → Configure `commit.type`, `commit.scope`
- `lifecycle.regression_scope` → Configure `regression` array

**Complexity Rules**:
| Complexity | Files | Tasks |
|------------|-------|-------|
| Low | 1-2 | 1-3 |
| Medium | 3-5 | 3-6 |
| High | 6+ | 5-10 |

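The table above reduces to a simple threshold check on the number of affected files. A minimal sketch only; `countAffectedFiles` is a hypothetical helper, not part of the documented API:

```javascript
// Sketch only: maps the file-count thresholds from the Complexity Rules
// table to a label. countAffectedFiles is a hypothetical helper.
function determineComplexity(issue) {
  const files = countAffectedFiles(issue);
  if (files <= 2) return 'Low';     // 1-2 files → 1-3 tasks
  if (files <= 5) return 'Medium';  // 3-5 files → 3-6 tasks
  return 'High';                    // 6+ files → 5-10 tasks
}
```
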
#### Phase 2: ACE Exploration

**Primary**: ACE semantic search
```javascript
mcp__ace-tool__search_context({
  project_root_path: project_root,
  query: `Find code related to: ${issue.title}. Keywords: ${extractKeywords(issue)}`
})
```

**Exploration Checklist**:
- [ ] Identify relevant files (direct matches)
- [ ] Find related patterns (similar implementations)
- [ ] Map integration points
- [ ] Discover dependencies
- [ ] Locate test patterns

**Fallback Chain**: ACE → smart_search → Grep → rg → Glob

| Tool | When to Use |
|------|-------------|
| `mcp__ace-tool__search_context` | Semantic search (primary) |
| `mcp__ccw-tools__smart_search` | Symbol/pattern search |
| `Grep` | Exact regex matching |
| `rg` / `grep` | CLI fallback |
| `Glob` | File path discovery |

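One way to realize the fallback chain is to try each tool in order and stop at the first that returns hits. A minimal sketch, assuming each entry wraps one of the tool calls from the table above as `{ name, search }` and returns an array of matches:

```javascript
// Sketch only: walks the fallback chain ACE → smart_search → Grep → rg → Glob.
// Each tool entry is an assumed wrapper { name, search } around one tool call.
function searchWithFallback(query, tools) {
  for (const tool of tools) {
    const results = tool.search(query);
    if (results && results.length > 0) {
      return { tool: tool.name, results };  // stop at the first tool with hits
    }
  }
  return { tool: null, results: [] };       // nothing found anywhere
}
```
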
#### Phase 3: Solution Planning

**Multi-Solution Generation**:

Generate multiple candidate solutions when:
- Issue complexity is HIGH
- Multiple valid implementation approaches exist
- Trade-offs exist between approaches (performance vs simplicity, etc.)

| Condition | Solutions |
|-----------|-----------|
| Low complexity, single approach | 1 solution, auto-bind |
| Medium complexity, clear path | 1-2 solutions |
| High complexity, multiple approaches | 2-3 solutions, user selection |

**Solution Evaluation** (for each candidate):
```javascript
{
  analysis: {
    risk: "low|medium|high",       // Implementation risk
    impact: "low|medium|high",     // Scope of changes
    complexity: "low|medium|high"  // Technical complexity
  },
  score: 0.0-1.0  // Overall quality score (higher = recommended)
}
```

**Selection Flow**:
1. Generate all candidate solutions
2. Evaluate and score each
3. Single solution → auto-bind
4. Multiple solutions → return `pending_selection` for user choice

**Task Decomposition** following schema:
```javascript
function decomposeTasks(issue, exploration) {
  return groups.map(group => ({
    id: `T${taskId++}`,          // Pattern: ^T[0-9]+$
    title: group.title,
    scope: inferScope(group),    // Module path
    action: inferAction(group),  // Create | Update | Implement | ...
    description: group.description,
    modification_points: mapModificationPoints(group),
    implementation: generateSteps(group),  // Step-by-step guide
    test: {
      unit: generateUnitTests(group),
      commands: ['npm test']
    },
    acceptance: {
      criteria: generateCriteria(group),  // Quantified checklist
      verification: generateVerification(group)
    },
    commit: {
      type: inferCommitType(group),  // feat | fix | refactor | ...
      scope: inferScope(group),
      message_template: generateCommitMsg(group)
    },
    depends_on: inferDependencies(group, tasks),
    executor: inferExecutor(group),
    priority: calculatePriority(group)  // 1-5 (1=highest)
  }))
}
```

#### Phase 4: Validation & Output

**Validation**:
- DAG validation (no circular dependencies)
- Task validation (all 5 phases present)
- Conflict detection (cross-issue file modifications)

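DAG validation can be done with Kahn's algorithm: repeatedly remove zero-in-degree tasks and check that every task was visited. A minimal sketch, assuming tasks carry `id` and `depends_on` as in the decomposition above:

```javascript
// Sketch only: cycle check via Kahn's algorithm over task.depends_on edges.
function hasCircularDependency(tasks) {
  const ids = new Set(tasks.map(t => t.id));
  const inDegree = new Map(tasks.map(t => [t.id, 0]));
  const dependents = new Map(tasks.map(t => [t.id, []]));

  for (const t of tasks) {
    for (const dep of t.depends_on || []) {
      if (!ids.has(dep)) continue;  // unknown refs are reported separately
      inDegree.set(t.id, inDegree.get(t.id) + 1);
      dependents.get(dep).push(t.id);
    }
  }

  const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id);
  let visited = 0;
  while (queue.length > 0) {
    const id = queue.shift();
    visited++;
    for (const next of dependents.get(id)) {
      inDegree.set(next, inDegree.get(next) - 1);
      if (inDegree.get(next) === 0) queue.push(next);
    }
  }
  return visited !== tasks.length;  // any unvisited task sits on a cycle
}
```
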
**Solution Registration**:
```bash
# Write solution and register via CLI
ccw issue bind <issue-id> --solution /tmp/sol.json
```

---

## 2. Output Requirements

### 2.1 Generate Files (Primary)

**Solution file per issue**:
```
.workflow/issues/solutions/{issue-id}.jsonl
```

Each line is a solution JSON containing tasks. Schema: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`

### 2.2 Binding

| Scenario | Action |
|----------|--------|
| Single solution | `ccw issue bind <id> --solution <file>` (auto) |
| Multiple solutions | Register only, return for selection |

### 2.3 Return Summary

```json
{
  "bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
  "pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "SOL-001", "description": "...", "task_count": N }] }],
  "conflicts": [{ "file": "...", "issues": [...] }]
}
```

---

## 3. Quality Standards

### 3.1 Acceptance Criteria

| Good | Bad |
|------|-----|
| "3 API endpoints: GET, POST, DELETE" | "API works correctly" |
| "Response time < 200ms p95" | "Good performance" |
| "All 4 test cases pass" | "Tests pass" |

### 3.2 Validation Checklist

- [ ] ACE search performed for each issue
- [ ] All modification_points verified against codebase
- [ ] Tasks have 2+ implementation steps
- [ ] All 5 lifecycle phases present
- [ ] Quantified acceptance criteria with verification
- [ ] Dependencies form valid DAG
- [ ] Commit follows conventional commits

### 3.3 Guidelines

**ALWAYS**:
1. Read schema first: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
2. Use ACE semantic search as PRIMARY exploration tool
3. Fetch issue details via `ccw issue status <id> --json`
4. Quantify acceptance.criteria with testable conditions
5. Validate DAG before output
6. Evaluate each solution with `analysis` and `score`
7. Single solution → auto-bind; Multiple → return `pending_selection`
8. For HIGH complexity: generate 2-3 candidate solutions

**NEVER**:
1. Execute implementation (return plan only)
2. Use vague criteria ("works correctly", "good performance")
3. Create circular dependencies
4. Generate more than 10 tasks per issue
5. Bind when multiple solutions exist

**OUTPUT**:
1. Register solutions via `ccw issue bind <id> --solution <file>`
2. Return JSON with `bound`, `pending_selection`, `conflicts`
3. Solutions written to `.workflow/issues/solutions/{issue-id}.jsonl`

227 .claude/agents/issue-queue-agent.md Normal file
@@ -0,0 +1,227 @@
---
name: issue-queue-agent
description: |
  Task ordering agent for queue formation with dependency analysis and conflict resolution.
  Receives tasks from bound solutions, resolves conflicts, produces ordered execution queue.

  Examples:
  - Context: Single issue queue
    user: "Order tasks for GH-123"
    assistant: "I'll analyze dependencies and generate execution queue"
  - Context: Multi-issue queue with conflicts
    user: "Order tasks for GH-123, GH-124"
    assistant: "I'll detect conflicts, resolve ordering, and assign groups"
color: orange
---

## Overview

**Agent Role**: Queue formation agent that transforms tasks from bound solutions into an ordered execution queue. Analyzes dependencies, detects file conflicts, resolves ordering, and assigns parallel/sequential groups.

**Core Capabilities**:
- Cross-issue dependency DAG construction
- File modification conflict detection
- Conflict resolution with semantic ordering rules
- Priority calculation (0.0-1.0)
- Parallel/Sequential group assignment

**Key Principle**: Produce valid DAG with no circular dependencies and optimal parallel execution.

---

## 1. Input & Execution

### 1.1 Input Context

```javascript
{
  tasks: [{
    key: string,          // e.g., "GH-123:TASK-001"
    issue_id: string,     // e.g., "GH-123"
    solution_id: string,  // e.g., "SOL-001"
    task_id: string,      // e.g., "TASK-001"
    type: string,         // feature | bug | refactor | test | chore | docs
    file_context: string[],
    depends_on: string[]  // composite keys, e.g., ["GH-123:TASK-001"]
  }],
  project_root?: string,
  rebuild?: boolean
}
```

**Note**: Agent generates unique `item_id` (pattern: `T-{N}`) for queue output.

### 1.2 Execution Flow

```
Phase 1: Dependency Analysis (20%)
  ↓ Parse depends_on, build DAG, detect cycles
Phase 2: Conflict Detection (30%)
  ↓ Identify file conflicts across issues
Phase 3: Conflict Resolution (25%)
  ↓ Apply ordering rules, update DAG
Phase 4: Ordering & Grouping (25%)
  ↓ Topological sort, assign groups
```

## 2. Processing Logic

### 2.1 Dependency Graph

```javascript
function buildDependencyGraph(tasks) {
  const graph = new Map()
  const fileModifications = new Map()

  for (const item of tasks) {
    graph.set(item.key, { ...item, inDegree: 0, outEdges: [] })

    for (const file of item.file_context || []) {
      if (!fileModifications.has(file)) fileModifications.set(file, [])
      fileModifications.get(file).push(item.key)
    }
  }

  // Add dependency edges
  for (const [key, node] of graph) {
    for (const depKey of node.depends_on || []) {
      if (graph.has(depKey)) {
        graph.get(depKey).outEdges.push(key)
        node.inDegree++
      }
    }
  }

  return { graph, fileModifications }
}
```

### 2.2 Conflict Detection

A conflict arises when multiple tasks modify the same file:
```javascript
function detectConflicts(fileModifications, graph) {
  return [...fileModifications.entries()]
    .filter(([_, tasks]) => tasks.length > 1)
    .map(([file, tasks]) => ({
      type: 'file_conflict',
      file,
      tasks,
      resolved: false
    }))
}
```

### 2.3 Resolution Rules

| Priority | Rule | Example |
|----------|------|---------|
| 1 | Create before Update | T1:Create → T2:Update |
| 2 | Foundation before integration | config/ → src/ |
| 3 | Types before implementation | types/ → components/ |
| 4 | Core before tests | src/ → __tests__/ |
| 5 | Delete last | T1:Update → T2:Delete |

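Rules 1 and 5 are action-based and can be applied with a simple rank lookup; the scope-based rules (2-4) need path inspection and are omitted here. A minimal sketch, not the full resolver:

```javascript
// Sketch only: orders a conflicting pair by action rank (Create first, Delete last).
// Scope rules (config/ → src/, types/ → components/, src/ → __tests__/) are omitted.
function orderByActionRule(a, b) {
  const rank = { Create: 0, Delete: 2 };  // everything else ranks 1
  const ra = rank[a.action] ?? 1;
  const rb = rank[b.action] ?? 1;
  return ra <= rb ? [a, b] : [b, a];      // returns [runs-first, runs-second]
}
```
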
### 2.4 Semantic Priority

**Base Priority Mapping** (task.priority 1-5 → base score):

| task.priority | Base Score | Meaning |
|---------------|------------|---------|
| 1 | 0.8 | Highest |
| 2 | 0.65 | High |
| 3 | 0.5 | Medium |
| 4 | 0.35 | Low |
| 5 | 0.2 | Lowest |

**Action-based Boost** (applied to base score):

| Factor | Boost |
|--------|-------|
| Create action | +0.2 |
| Configure action | +0.15 |
| Implement action | +0.1 |
| Fix action | +0.05 |
| Foundation scope | +0.1 |
| Types scope | +0.05 |
| Refactor action | -0.05 |
| Test action | -0.1 |
| Delete action | -0.15 |

**Formula**: `semantic_priority = clamp(baseScore + sum(boosts), 0.0, 1.0)`

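A direct transcription of the two tables and the clamp formula, as a hedged sketch (the substring-based scope checks are an assumption about how scope is encoded):

```javascript
// Sketch only: base score from task.priority, plus action/scope boosts, clamped.
const BASE_SCORE = { 1: 0.8, 2: 0.65, 3: 0.5, 4: 0.35, 5: 0.2 };
const ACTION_BOOST = {
  Create: 0.2, Configure: 0.15, Implement: 0.1, Fix: 0.05,
  Refactor: -0.05, Test: -0.1, Delete: -0.15
};

function semanticPriority(task) {
  let score = BASE_SCORE[task.priority] ?? 0.5;
  score += ACTION_BOOST[task.action] ?? 0;
  if (task.scope?.includes('config')) score += 0.1;  // Foundation scope (assumed match)
  if (task.scope?.includes('types')) score += 0.05;  // Types scope (assumed match)
  return Math.min(1.0, Math.max(0.0, score));        // clamp(…, 0.0, 1.0)
}
```
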
### 2.5 Group Assignment

- **Parallel (P*)**: Tasks with no dependencies or conflicts between them
- **Sequential (S*)**: Tasks that must run in order due to dependencies or conflicts

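Group assignment follows from the topological levels of the resolved DAG: every zero-in-degree wave can run in parallel, and single-task waves degenerate to sequential steps. A minimal sketch over the `graph` built in 2.1, assuming file conflicts were already turned into dependency edges during resolution:

```javascript
// Sketch only: peels the DAG level by level; each level becomes one group.
// Mutates inDegree on the shared node objects, so run on a copy if needed.
function assignGroups(graph) {
  const groups = [];
  const remaining = new Map(graph);
  let wave = 1;
  while (remaining.size > 0) {
    const ready = [...remaining.entries()].filter(([, n]) => n.inDegree === 0);
    if (ready.length === 0) throw new Error('Cycle detected');  // should not happen post-validation
    for (const [key, node] of ready) {
      for (const out of node.outEdges) {
        const next = remaining.get(out);
        if (next) next.inDegree--;
      }
      remaining.delete(key);
    }
    const type = ready.length > 1 ? 'parallel' : 'sequential';
    groups.push({
      id: `${type === 'parallel' ? 'P' : 'S'}${wave++}`,
      type,
      tasks: ready.map(([key]) => key)
    });
  }
  return groups;
}
```
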
---

## 3. Output Requirements

### 3.1 Generate Files (Primary)

**Queue files**:
```
.workflow/issues/queues/{queue-id}.json  # Full queue with tasks, conflicts, groups
.workflow/issues/queues/index.json       # Update with new queue entry
```

Queue ID format: `QUE-YYYYMMDD-HHMMSS` (UTC timestamp)

Schema: `cat .claude/workflows/cli-templates/schemas/queue-schema.json`

### 3.2 Return Summary

```json
{
  "queue_id": "QUE-20251227-143000",
  "total_tasks": N,
  "execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
  "conflicts_resolved": N,
  "issues_queued": ["GH-123", "GH-124"]
}
```

---

## 4. Quality Standards

### 4.1 Validation Checklist

- [ ] No circular dependencies
- [ ] All conflicts resolved
- [ ] Dependencies ordered correctly
- [ ] Parallel groups have no conflicts
- [ ] Semantic priority calculated

### 4.2 Error Handling

| Scenario | Action |
|----------|--------|
| Circular dependency | Abort, report cycles |
| Resolution creates cycle | Flag for manual resolution |
| Missing task reference | Skip and warn |
| Empty task list | Return empty queue |

### 4.3 Guidelines

**ALWAYS**:
1. Build dependency graph before ordering
2. Detect cycles before and after resolution
3. Apply resolution rules consistently
4. Calculate semantic priority for all tasks
5. Include rationale for conflict resolutions
6. Validate ordering before output

**NEVER**:
1. Execute tasks (ordering only)
2. Ignore circular dependencies
3. Skip conflict detection
4. Output invalid DAG
5. Merge conflicting tasks in parallel group

**OUTPUT**:
1. Write `.workflow/issues/queues/{queue-id}.json`
2. Update `.workflow/issues/queues/index.json`
3. Return summary with `queue_id`, `total_tasks`, `execution_groups`, `conflicts_resolved`, `issues_queued`

@@ -36,10 +36,10 @@ You are a test context discovery specialist focused on gathering test coverage i
 **Use**: Phase 1 source context loading

 ### 2. Test Coverage Discovery
-**Primary (Code-Index MCP)**:
-- `mcp__code-index__find_files(pattern)` - Find test files (*.test.*, *.spec.*)
-- `mcp__code-index__search_code_advanced()` - Search test patterns
-- `mcp__code-index__get_file_summary()` - Analyze test structure
+**Primary (CCW CodexLens MCP)**:
+- `mcp__ccw-tools__codex_lens(action="search_files", query="*.test.*")` - Find test files
+- `mcp__ccw-tools__codex_lens(action="search", query="pattern")` - Search test patterns
+- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Analyze test structure

 **Fallback (CLI)**:
 - `rg` (ripgrep) - Fast test pattern search

@@ -120,9 +120,10 @@ for (const summary_path of summaries) {

 **2.1 Existing Test Discovery**:
 ```javascript
-// Method 1: Code-Index MCP (preferred)
-const test_files = mcp__code-index__find_files({
-  patterns: ["*.test.*", "*.spec.*", "*test_*.py", "*_test.go"]
+// Method 1: CodexLens MCP (preferred)
+const test_files = mcp__ccw-tools__codex_lens({
+  action: "search_files",
+  query: "*.test.* OR *.spec.* OR test_*.py OR *_test.go"
 });

 // Method 2: Fallback CLI

@@ -83,17 +83,18 @@ When task JSON contains implementation_approach array:
 - L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
 - L2 (Integration): `tests/integration/`, `*.integration.test.*`
 - L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
-- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
+- **context-package.json** (CCW Workflow): Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
 - Identify test commands from project configuration

 ```bash
 # Detect test framework and multi-layered commands
 if [ -f "package.json" ]; then
-  # Extract layer-specific test commands
-  LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
-  UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
-  INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
-  E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
+  # Extract layer-specific test commands using Read tool or jq
+  PKG_JSON=$(cat package.json)
+  LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
+  UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
+  INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
+  E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
 elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
   LINT_CMD="ruff check . || flake8 ."
   UNIT_CMD="pytest tests/unit/"

47 .claude/cli-tools.json Normal file
@@ -0,0 +1,47 @@
{
  "version": "1.0.0",
  "tools": {
    "gemini": {
      "enabled": true,
      "isBuiltin": true,
      "command": "gemini",
      "description": "Google AI for code analysis"
    },
    "qwen": {
      "enabled": true,
      "isBuiltin": true,
      "command": "qwen",
      "description": "Alibaba AI assistant"
    },
    "codex": {
      "enabled": true,
      "isBuiltin": true,
      "command": "codex",
      "description": "OpenAI code generation"
    },
    "claude": {
      "enabled": true,
      "isBuiltin": true,
      "command": "claude",
      "description": "Anthropic AI assistant"
    }
  },
  "customEndpoints": [],
  "defaultTool": "gemini",
  "settings": {
    "promptFormat": "plain",
    "smartContext": {
      "enabled": false,
      "maxFiles": 10
    },
    "nativeResume": true,
    "recursiveQuery": true,
    "cache": {
      "injectionMode": "auto",
      "defaultPrefix": "",
      "defaultSuffix": ""
    },
    "codeIndexMcp": "ace"
  },
  "$schema": "./cli-tools.schema.json"
}

462 .claude/commands/issue/execute.md Normal file
@@ -0,0 +1,462 @@
---
name: execute
description: Execute queue with codex using endpoint-driven task fetching (single task per codex instance)
argument-hint: "[--parallel <n>] [--executor codex|gemini]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), AskUserQuestion(*)
---

# Issue Execute Command (/issue:execute)

## Overview

Execution orchestrator that coordinates codex instances. Each task is executed by an independent codex instance that fetches its task via CLI endpoint. **Codex does NOT read task files** - it calls `ccw issue next` to get task data dynamically.

**Core design:**
- Single task per codex instance (not loop mode)
- Endpoint-driven: `ccw issue next` → execute → `ccw issue complete`
- No file reading in codex
- Orchestrator manages parallelism

## Storage Structure (Queue History)

```
.workflow/issues/
├── issues.jsonl             # All issues (one per line)
├── queues/                  # Queue history directory
│   ├── index.json           # Queue index (active + history)
│   └── {queue-id}.json      # Individual queue files
└── solutions/
    ├── {issue-id}.jsonl     # Solutions for issue
    └── ...
```

## Usage

```bash
/issue:execute [FLAGS]

# Examples
/issue:execute                   # Execute all ready tasks
/issue:execute --parallel 3      # Execute up to 3 tasks in parallel
/issue:execute --executor codex  # Force codex executor

# Flags
--parallel <n>      Max parallel codex instances (default: 1)
--executor <type>   Force executor: codex|gemini|agent
--dry-run           Show what would execute without running
```

## Execution Process

```
Phase 1: Queue Loading
├─ Load queue.json
├─ Count pending/ready tasks
└─ Initialize TodoWrite tracking

Phase 2: Ready Task Detection
├─ Find tasks with satisfied dependencies
├─ Group by execution_group (parallel batches)
└─ Determine execution order

Phase 3: Codex Coordination
├─ For each ready task:
│  ├─ Launch independent codex instance
│  ├─ Codex calls: ccw issue next
│  ├─ Codex receives task data (NOT file)
│  ├─ Codex executes task
│  ├─ Codex calls: ccw issue complete <queue-id>
│  └─ Update TodoWrite
└─ Parallel execution based on --parallel flag

Phase 4: Completion
├─ Generate execution summary
├─ Update issue statuses in issues.jsonl
└─ Display results
```

## Implementation

### Phase 1: Queue Loading

```javascript
// Load active queue via CLI endpoint
const queueJson = Bash(`ccw issue status --json 2>/dev/null || echo '{}'`);
const queue = JSON.parse(queueJson);

if (!queue.id || queue.tasks?.length === 0) {
  console.log('No active queue found. Run /issue:queue first.');
  return;
}

// Count by status
const pending = queue.tasks.filter(q => q.status === 'pending');
const executing = queue.tasks.filter(q => q.status === 'executing');
const completed = queue.tasks.filter(q => q.status === 'completed');

console.log(`
## Execution Queue Status

- Pending: ${pending.length}
- Executing: ${executing.length}
- Completed: ${completed.length}
- Total: ${queue.tasks.length}
`);

if (pending.length === 0 && executing.length === 0) {
  console.log('All tasks completed!');
  return;
}
```

### Phase 2: Ready Task Detection

```javascript
// Find ready tasks (dependencies satisfied)
function getReadyTasks() {
  const completedIds = new Set(
    queue.tasks.filter(q => q.status === 'completed').map(q => q.item_id)
  );

  return queue.tasks.filter(item => {
    if (item.status !== 'pending') return false;
    return item.depends_on.every(depId => completedIds.has(depId));
  });
}

const readyTasks = getReadyTasks();

if (readyTasks.length === 0) {
  if (executing.length > 0) {
    console.log('Tasks are currently executing. Wait for completion.');
  } else {
    console.log('No ready tasks. Check for blocked dependencies.');
  }
  return;
}

console.log(`Found ${readyTasks.length} ready tasks`);

// Sort by execution order
readyTasks.sort((a, b) => a.execution_order - b.execution_order);

// Initialize TodoWrite
TodoWrite({
  todos: readyTasks.slice(0, parallelLimit).map(t => ({
    content: `[${t.item_id}] ${t.issue_id}:${t.task_id}`,
    status: 'pending',
    activeForm: `Executing ${t.item_id}`
  }))
});
```

### Phase 3: Codex Coordination (Single Task Mode - Full Lifecycle)

```javascript
// Execute tasks - single codex instance per task with full lifecycle
async function executeTask(queueItem) {
  const codexPrompt = `
## Single Task Execution - CLOSED-LOOP LIFECYCLE

You are executing ONE task from the issue queue. Each task has 5 phases that MUST ALL complete successfully.

### Step 1: Fetch Task
Run this command to get your task:
\`\`\`bash
ccw issue next
\`\`\`

This returns JSON with full lifecycle definition:
- task.implementation: Implementation steps
- task.test: Test requirements and commands
- task.regression: Regression check commands
- task.acceptance: Acceptance criteria and verification
- task.commit: Commit specification

### Step 2: Execute Full Lifecycle

**Phase 1: IMPLEMENT**
1. Follow task.implementation steps in order
2. Modify files specified in modification_points
3. Use context.relevant_files for reference
4. Use context.patterns for code style

**Phase 2: TEST**
1. Run test commands from task.test.commands
2. Ensure all unit tests pass (task.test.unit)
3. Run integration tests if specified (task.test.integration)
4. Verify coverage meets task.test.coverage_target if specified
5. If tests fail → fix code and re-run, do NOT proceed until tests pass

**Phase 3: REGRESSION**
1. Run all commands in task.regression
2. Ensure no existing tests are broken
3. If regression fails → fix and re-run

**Phase 4: ACCEPTANCE**
1. Verify each criterion in task.acceptance.criteria
2. Execute verification steps in task.acceptance.verification
3. Complete any manual_checks if specified
4. All criteria MUST pass before proceeding

**Phase 5: COMMIT**
1. Stage all modified files
2. Use task.commit.message_template as commit message
3. Commit with: git commit -m "$(cat <<'EOF'\n<message>\nEOF\n)"
4. If commit_strategy is 'per-task', commit now
5. If commit_strategy is 'atomic' or 'squash', stage but don't commit

### Step 3: Report Completion
When ALL phases complete successfully:
\`\`\`bash
ccw issue complete <item_id> --result '{
  "files_modified": ["path1", "path2"],
  "tests_passed": true,
  "regression_passed": true,
  "acceptance_passed": true,
  "committed": true,
  "commit_hash": "<hash>",
  "summary": "What was done"
}'
\`\`\`

If any phase fails and cannot be fixed:
\`\`\`bash
ccw issue fail <item_id> --reason "Phase X failed: <details>"
\`\`\`

### Rules
- NEVER skip any lifecycle phase
- Tests MUST pass before proceeding to acceptance
- Regression MUST pass before commit
- ALL acceptance criteria MUST be verified
- Report accurate lifecycle status in result

### Start Now
Begin by running: ccw issue next
`;

  // Execute codex
  const executor = queueItem.assigned_executor || flags.executor || 'codex';

  if (executor === 'codex') {
    Bash(
      `ccw cli -p "${escapePrompt(codexPrompt)}" --tool codex --mode write --id exec-${queueItem.item_id}`,
      timeout=3600000  // 1 hour timeout
    );
  } else if (executor === 'gemini') {
    Bash(
      `ccw cli -p "${escapePrompt(codexPrompt)}" --tool gemini --mode write --id exec-${queueItem.item_id}`,
      timeout=1800000  // 30 min timeout
    );
  } else {
    // Agent execution
    Task(
      subagent_type="code-developer",
      run_in_background=false,
      description=`Execute ${queueItem.item_id}`,
      prompt=codexPrompt
    );
  }
}

// Execute with parallelism
const parallelLimit = flags.parallel || 1;

for (let i = 0; i < readyTasks.length; i += parallelLimit) {
  const batch = readyTasks.slice(i, i + parallelLimit);

  console.log(`\n### Executing Batch ${Math.floor(i / parallelLimit) + 1}`);
  console.log(batch.map(t => `- ${t.item_id}: ${t.issue_id}:${t.task_id}`).join('\n'));

  if (parallelLimit === 1) {
    // Sequential execution
    for (const task of batch) {
      updateTodo(task.item_id, 'in_progress');
      await executeTask(task);
      updateTodo(task.item_id, 'completed');
    }
  } else {
    // Parallel execution - launch all at once
    const executions = batch.map(task => {
      updateTodo(task.item_id, 'in_progress');
      return executeTask(task);
    });
    await Promise.all(executions);
    batch.forEach(task => updateTodo(task.item_id, 'completed'));
  }

  // Refresh ready tasks after batch
  const newReady = getReadyTasks();
  if (newReady.length > 0) {
    console.log(`${newReady.length} more tasks now ready`);
  }
}
```

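`escapePrompt` is referenced above but not defined in this command file. One plausible sketch that makes the prompt safe inside the double-quoted `-p "..."` argument (a real implementation might instead pass the prompt via a temp file or stdin):

```javascript
// Sketch only: escapes characters that would break a double-quoted shell argument.
function escapePrompt(prompt) {
  return prompt
    .replace(/\\/g, '\\\\')  // backslashes first, so later escapes survive
    .replace(/"/g, '\\"')    // double quotes
    .replace(/\$/g, '\\$')   // block variable expansion
    .replace(/`/g, '\\`');   // block command substitution
}
```
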
### Codex Task Fetch Response

When codex calls `ccw issue next`, it receives:

```json
{
  "item_id": "T-1",
  "issue_id": "GH-123",
  "solution_id": "SOL-001",
  "task": {
    "id": "T1",
    "title": "Create auth middleware",
    "scope": "src/middleware/",
    "action": "Create",
    "description": "Create JWT validation middleware",
    "modification_points": [
      { "file": "src/middleware/auth.ts", "target": "new file", "change": "Create middleware" }
    ],
    "implementation": [
      "Create auth.ts file in src/middleware/",
      "Implement JWT token validation using jsonwebtoken",
      "Add error handling for invalid/expired tokens",
      "Export middleware function"
    ],
    "acceptance": [
      "Middleware validates JWT tokens successfully",
      "Returns 401 for invalid or missing tokens",
      "Passes token payload to request context"
    ]
  },
  "context": {
    "relevant_files": ["src/config/auth.ts", "src/types/auth.d.ts"],
    "patterns": "Follow existing middleware pattern in src/middleware/logger.ts"
  },
  "execution_hints": {
    "executor": "codex",
    "estimated_minutes": 30
  }
}
```

### Phase 4: Completion Summary

```javascript
// Reload queue for final status via CLI
const finalQueueJson = Bash(`ccw issue status --json 2>/dev/null || echo '{}'`);
const finalQueue = JSON.parse(finalQueueJson);

// Use queue._metadata for summary (already calculated by CLI)
const summary = finalQueue._metadata || {
  completed_count: 0,
  failed_count: 0,
  pending_count: 0,
  total_tasks: 0
};

console.log(`
## Execution Complete

**Completed**: ${summary.completed_count}/${summary.total_tasks}
**Failed**: ${summary.failed_count}
**Pending**: ${summary.pending_count}

### Task Results
${(finalQueue.tasks || []).map(q => {
  const icon = q.status === 'completed' ? '✓' :
               q.status === 'failed' ? '✗' :
               q.status === 'executing' ? '⟳' : '○';
  return `${icon} ${q.item_id} [${q.issue_id}:${q.task_id}] - ${q.status}`;
}).join('\n')}
`);

// Issue status updates are handled by ccw issue complete/fail endpoints
// No need to manually update issues.jsonl here

if (summary.pending_count > 0) {
  console.log(`
### Continue Execution
Run \`/issue:execute\` again to execute remaining tasks.
`);
}
```

## Dry Run Mode

```javascript
if (flags.dryRun) {
  console.log(`
## Dry Run - Would Execute

${readyTasks.map((t, i) => `
${i + 1}. ${t.item_id}
   Issue: ${t.issue_id}
   Task: ${t.task_id}
   Executor: ${t.assigned_executor}
   Group: ${t.execution_group}
`).join('')}

No changes made. Remove --dry-run to execute.
`);
  return;
}
```

## Error Handling

| Error | Resolution |
|-------|------------|
| Queue not found | Display message, suggest /issue:queue |
| No ready tasks | Check dependencies, show blocked tasks |
| Codex timeout | Mark as failed, allow retry |
| ccw issue next empty | All tasks done or blocked |
| Task execution failure | Marked via ccw issue fail, use `ccw issue retry` to reset |

## Troubleshooting

### Interrupted Tasks

If execution was interrupted (crashed/stopped), `ccw issue next` will automatically resume:

```bash
# Automatically returns the executing task for resumption
ccw issue next
```

Tasks in `executing` status are prioritized and returned first, no manual reset needed.

### Failed Tasks

If a task failed and you want to retry:

```bash
# Reset all failed tasks to pending
ccw issue retry

# Reset failed tasks for specific issue
ccw issue retry <issue-id>
```

## Endpoint Contract

### `ccw issue next`
- Returns next ready task as JSON
- Marks task as 'executing'
- Returns `{ status: 'empty' }` when no tasks

### `ccw issue complete <item-id>`
- Marks task as 'completed'
- Updates queue.json
- Checks if issue is fully complete

### `ccw issue fail <item-id>`
- Marks task as 'failed'
- Records failure reason
- Allows retry via /issue:execute

### `ccw issue retry [issue-id]`
- Resets failed tasks to 'pending'
- Allows re-execution via `ccw issue next`

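Taken together, the four endpoints support a simple drain loop. A hedged orchestration sketch in the same pseudo-JavaScript style as the phases above (the `--result` payload shape follows Step 3 of the codex prompt):

```javascript
// Sketch only: fetch → execute → report, until the queue is empty.
while (true) {
  const next = JSON.parse(Bash(`ccw issue next`));
  if (next.status === 'empty') break;  // all tasks done or blocked
  try {
    // ...run the full 5-phase lifecycle for next.task here...
    Bash(`ccw issue complete ${next.item_id} --result '{"summary": "done"}'`);
  } catch (err) {
    Bash(`ccw issue fail ${next.item_id} --reason "${err.message}"`);
  }
}
```
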
## Related Commands

- `/issue:plan` - Plan issues with solutions
- `/issue:queue` - Form execution queue
- `ccw issue queue list` - View queue status
- `ccw issue retry` - Retry failed tasks

113 .claude/commands/issue/manage.md Normal file
@@ -0,0 +1,113 @@
---
name: manage
description: Interactive issue management (CRUD) via ccw cli endpoints with menu-driven interface
argument-hint: "[issue-id] [--action list|view|edit|delete|bulk]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), AskUserQuestion(*), Task(*)
---

# Issue Manage Command (/issue:manage)

## Overview

Interactive menu-driven interface for issue management using `ccw issue` CLI endpoints:
- **List**: Browse and filter issues
- **View**: Detailed issue inspection
- **Edit**: Modify issue fields
- **Delete**: Remove issues
- **Bulk**: Batch operations on multiple issues

## CLI Endpoints Reference

```bash
# Core endpoints (ccw issue)
ccw issue list                      # List all issues
ccw issue list <id> --json          # Get issue details
ccw issue status <id>               # Detailed status
ccw issue init <id> --title "..."   # Create issue
ccw issue task <id> --title "..."   # Add task
ccw issue bind <id> <solution-id>   # Bind solution

# Queue management
ccw issue queue                     # List current queue
ccw issue queue add <id>            # Add to queue
ccw issue queue list                # Queue history
ccw issue queue switch <queue-id>   # Switch queue
ccw issue queue archive             # Archive queue
ccw issue queue delete <queue-id>   # Delete queue
ccw issue next                      # Get next task
ccw issue done <queue-id>           # Mark completed
ccw issue complete <item-id>        # (legacy alias for done)
```

## Usage

```bash
# Interactive mode (menu-driven)
/issue:manage

# Direct to specific issue
/issue:manage GH-123

# Direct action
/issue:manage --action list
/issue:manage GH-123 --action edit
```

## Implementation

This command delegates to the `issue-manage` skill for detailed implementation.

### Entry Point

```javascript
const issueId = parseIssueId(userInput);
const action = flags.action;

// Show main menu if no action specified
if (!action) {
  await showMainMenu(issueId);
} else {
  await executeAction(action, issueId);
}
```

### Main Menu Flow

1. **Dashboard**: Fetch issues summary via `ccw issue list --json`
2. **Menu**: Present action options via AskUserQuestion
3. **Route**: Execute selected action (List/View/Edit/Delete/Bulk)
4. **Loop**: Return to menu after each action

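A minimal sketch of `showMainMenu` wiring the four steps together; the option labels mirror the Available Actions table below, and the exact AskUserQuestion answer shape (`answer.action`) plus the JSON list output being an array are assumptions:

```javascript
// Sketch only: Dashboard → Menu → Route → Loop.
async function showMainMenu(issueId) {
  while (true) {
    const issues = JSON.parse(Bash(`ccw issue list --json`));  // 1. Dashboard
    console.log(`${issues.length} issues registered`);

    const answer = AskUserQuestion({                           // 2. Menu
      questions: [{
        question: 'What would you like to do?',
        header: 'Action',
        multiSelect: false,
        options: [
          { label: 'list', description: 'Browse and filter issues' },
          { label: 'view', description: 'Detailed issue inspection' },
          { label: 'edit', description: 'Modify issue fields' },
          { label: 'delete', description: 'Remove issues' },
          { label: 'bulk', description: 'Batch operations' },
          { label: 'quit', description: 'Exit the menu' }
        ]
      }]
    });
    if (answer.action === 'quit') break;

    await executeAction(answer.action, issueId);               // 3. Route
  }                                                            // 4. Loop back
}
```
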
### Available Actions

| Action | Description | CLI Command |
|--------|-------------|-------------|
| List | Browse with filters | `ccw issue list --json` |
| View | Detail view | `ccw issue status <id> --json` |
| Edit | Modify fields | Update `issues.jsonl` |
| Delete | Remove issue | Clean up all related files |
| Bulk | Batch operations | Multi-select + batch update |

## Data Files

| File | Purpose |
|------|---------|
| `.workflow/issues/issues.jsonl` | Issue records |
| `.workflow/issues/solutions/<id>.jsonl` | Solutions per issue |
| `.workflow/issues/queue.json` | Execution queue |

## Error Handling

| Error | Resolution |
|-------|------------|
| No issues found | Suggest creating with /issue:new |
| Issue not found | Show available issues, ask for correction |
| Invalid selection | Show error, re-prompt |
| Write failure | Check permissions, show error |

## Related Commands

- `/issue:new` - Create structured issue
- `/issue:plan` - Plan solution for issue
- `/issue:queue` - Form execution queue
- `/issue:execute` - Execute queued tasks

451 .claude/commands/issue/new.md Normal file
@@ -0,0 +1,451 @@
---
name: new
description: Create structured issue from GitHub URL or text description, extracting key elements into issues.jsonl
argument-hint: "<github-url | text-description> [--priority 1-5] [--labels label1,label2]"
allowed-tools: TodoWrite(*), Bash(*), Read(*), Write(*), WebFetch(*), AskUserQuestion(*)
---

# Issue New Command (/issue:new)

## Overview

Creates a new structured issue from either:
1. **GitHub Issue URL** - Fetches and parses issue content via `gh` CLI
2. **Text Description** - Parses natural language into structured fields

Outputs a well-formed issue entry to `.workflow/issues/issues.jsonl`.

## Issue Structure (Closed-Loop)

```typescript
interface Issue {
  id: string;                    // GH-123 or ISS-YYYYMMDD-HHMMSS
  title: string;                 // Issue title (clear, concise)
  status: 'registered';          // Initial status
  priority: number;              // 1 (critical) to 5 (low)
  context: string;               // Problem description
  source: 'github' | 'text';     // Input source type
  source_url?: string;           // GitHub URL if applicable
  labels?: string[];             // Categorization labels

  // Structured extraction
  problem_statement: string;     // What is the problem?
  expected_behavior?: string;    // What should happen?
  actual_behavior?: string;      // What actually happens?
  affected_components?: string[];// Files/modules affected
  reproduction_steps?: string[]; // Steps to reproduce

  // Closed-loop requirements (guide plan generation)
  lifecycle_requirements: {
    test_strategy: 'unit' | 'integration' | 'e2e' | 'manual' | 'auto';
    regression_scope: 'affected' | 'related' | 'full';  // Which tests to run
    acceptance_type: 'automated' | 'manual' | 'both';   // How to verify
    commit_strategy: 'per-task' | 'squash' | 'atomic';  // Commit granularity
  };

  // Metadata
  bound_solution_id: null;
  solution_count: 0;
  created_at: string;
  updated_at: string;
}
```

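For illustration, a hypothetical instance of this structure, built from the text-description example in the Usage section below (all field values are invented for the example):

```json
{
  "id": "ISS-20251227-143000",
  "title": "Login fails when password contains special characters",
  "status": "registered",
  "priority": 2,
  "context": "Login fails when password contains special characters...",
  "source": "text",
  "labels": ["bug", "auth"],
  "problem_statement": "Login fails when password contains special characters",
  "expected_behavior": "successful login",
  "actual_behavior": "500 error",
  "affected_components": ["src/auth/"],
  "lifecycle_requirements": {
    "test_strategy": "auto",
    "regression_scope": "affected",
    "acceptance_type": "automated",
    "commit_strategy": "per-task"
  },
  "bound_solution_id": null,
  "solution_count": 0,
  "created_at": "2025-12-27T14:30:00Z",
  "updated_at": "2025-12-27T14:30:00Z"
}
```
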
## Lifecycle Requirements
|
||||||
|
|
||||||
|
The `lifecycle_requirements` field guides downstream commands (`/issue:plan`, `/issue:execute`):
|
||||||
|
|
||||||
|
| Field | Options | Purpose |
|
||||||
|
|-------|---------|---------|
|
||||||
|
| `test_strategy` | `unit`, `integration`, `e2e`, `manual`, `auto` | Which test types to generate |
|
||||||
|
| `regression_scope` | `affected`, `related`, `full` | Which tests to run for regression |
|
||||||
|
| `acceptance_type` | `automated`, `manual`, `both` | How to verify completion |
|
||||||
|
| `commit_strategy` | `per-task`, `squash`, `atomic` | Commit granularity |
|
||||||
|
|
||||||
|
> **Note**: Task structure (SolutionTask) is defined in `/issue:plan` - see `.claude/commands/issue/plan.md`
|
||||||
|
|
||||||
|
## Usage
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# From GitHub URL
|
||||||
|
/issue:new https://github.com/owner/repo/issues/123
|
||||||
|
|
||||||
|
# From text description
|
||||||
|
/issue:new "Login fails when password contains special characters. Expected: successful login. Actual: 500 error. Affects src/auth/*"
|
||||||
|
|
||||||
|
# With options
|
||||||
|
/issue:new <url-or-text> --priority 2 --labels "bug,auth"
|
||||||
|
```
|
||||||
|
|
||||||
|
## Implementation
|
||||||
|
|
||||||
|
### Phase 1: Input Detection
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
const input = userInput.trim();
|
||||||
|
const flags = parseFlags(userInput); // --priority, --labels
|
||||||
|
|
||||||
|
// Detect input type
|
||||||
|
const isGitHubUrl = input.match(/github\.com\/[\w-]+\/[\w-]+\/issues\/\d+/);
|
||||||
|
const isGitHubShort = input.match(/^#(\d+)$/); // #123 format
|
||||||
|
|
||||||
|
let issueData = {};
|
||||||
|
|
||||||
|
if (isGitHubUrl || isGitHubShort) {
|
||||||
|
// GitHub issue - fetch via gh CLI
|
||||||
|
issueData = await fetchGitHubIssue(input);
|
||||||
|
} else {
|
||||||
|
// Text description - parse structure
|
||||||
|
issueData = await parseTextDescription(input);
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Phase 2: GitHub Issue Fetching
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
async function fetchGitHubIssue(urlOrNumber) {
|
||||||
|
let issueRef;
|
||||||
|
|
||||||
|
if (urlOrNumber.startsWith('http')) {
|
||||||
|
// Extract owner/repo/number from URL
|
||||||
|
const match = urlOrNumber.match(/github\.com\/([\w-]+)\/([\w-]+)\/issues\/(\d+)/);
|
||||||
|
if (!match) throw new Error('Invalid GitHub URL');
|
||||||
|
issueRef = `${match[1]}/${match[2]}#${match[3]}`;
|
||||||
|
} else {
|
||||||
|
// #123 format - use current repo
|
||||||
|
issueRef = urlOrNumber.replace('#', '');
|
||||||
|
}
|
||||||
|
|
||||||
|
// Fetch via gh CLI
|
||||||
|
const result = Bash(`gh issue view ${issueRef} --json number,title,body,labels,state,url`);
|
||||||
|
const ghIssue = JSON.parse(result);
|
||||||
|
|
||||||
|
// Parse body for structure
|
||||||
|
const parsed = parseIssueBody(ghIssue.body);
|
||||||
|
|
||||||
|
return {
|
||||||
|
id: `GH-${ghIssue.number}`,
|
||||||
|
title: ghIssue.title,
|
||||||
|
source: 'github',
|
||||||
|
source_url: ghIssue.url,
|
||||||
|
labels: ghIssue.labels.map(l => l.name),
|
||||||
|
context: ghIssue.body,
|
||||||
|
...parsed
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
function parseIssueBody(body) {
|
||||||
|
// Extract structured sections from markdown body
|
||||||
|
const sections = {};
|
||||||
|
|
||||||
|
// Problem/Description
|
||||||
|
const problemMatch = body.match(/##?\s*(problem|description|issue)[:\s]*([\s\S]*?)(?=##|$)/i);
|
||||||
|
if (problemMatch) sections.problem_statement = problemMatch[2].trim();
|
||||||
|
|
||||||
|
// Expected behavior
|
||||||
|
const expectedMatch = body.match(/##?\s*(expected|should)[:\s]*([\s\S]*?)(?=##|$)/i);
|
||||||
|
if (expectedMatch) sections.expected_behavior = expectedMatch[2].trim();
|
||||||
|
|
||||||
|
// Actual behavior
|
||||||
|
const actualMatch = body.match(/##?\s*(actual|current)[:\s]*([\s\S]*?)(?=##|$)/i);
|
||||||
|
if (actualMatch) sections.actual_behavior = actualMatch[2].trim();
|
||||||
|
|
||||||
|
// Steps to reproduce
|
||||||
|
const stepsMatch = body.match(/##?\s*(steps|reproduce)[:\s]*([\s\S]*?)(?=##|$)/i);
|
||||||
|
if (stepsMatch) {
|
||||||
|
const stepsText = stepsMatch[2].trim();
|
||||||
|
sections.reproduction_steps = stepsText
|
||||||
|
.split('\n')
|
||||||
|
.filter(line => line.match(/^\s*[\d\-\*]/))
|
||||||
|
.map(line => line.replace(/^\s*[\d\.\-\*]\s*/, '').trim());
|
||||||
|
}
|
||||||
|
|
||||||
|
// Affected components (from file references)
|
||||||
|
const fileMatches = body.match(/`[^`]*\.(ts|js|tsx|jsx|py|go|rs)[^`]*`/g);
|
||||||
|
if (fileMatches) {
|
||||||
|
sections.affected_components = [...new Set(fileMatches.map(f => f.replace(/`/g, '')))];
|
||||||
|
}
|
||||||
|
|
||||||
|
// Fallback: use entire body as problem statement
|
||||||
|
if (!sections.problem_statement) {
|
||||||
|
sections.problem_statement = body.substring(0, 500);
|
||||||
|
}
|
||||||
|
|
||||||
|
return sections;
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
### Phase 3: Text Description Parsing
|
||||||
|
|
||||||
|
```javascript
|
||||||
|
async function parseTextDescription(text) {
  // Generate unique ID (ISS-YYYYMMDD-HHMMSS, matching the queue/memory ID convention)
  const id = `ISS-${new Date().toISOString().replace(/[-:]/g, '').replace('T', '-').slice(0, 15)}`;

  // Extract structured elements using patterns
  const result = {
    id,
    source: 'text',
    title: '',
    problem_statement: '',
    expected_behavior: null,
    actual_behavior: null,
    affected_components: [],
    reproduction_steps: []
  };

  // Pattern: "Title. Description. Expected: X. Actual: Y. Affects: files"
  const sentences = text.split(/\.(?=\s|$)/);

  // First sentence as title
  result.title = sentences[0]?.trim() || 'Untitled Issue';

  // Look for keywords (skip the title sentence so it doesn't double as the description)
  for (const sentence of sentences.slice(1)) {
    const s = sentence.trim();

    if (s.match(/^expected:?\s*/i)) {
      result.expected_behavior = s.replace(/^expected:?\s*/i, '');
    } else if (s.match(/^actual:?\s*/i)) {
      result.actual_behavior = s.replace(/^actual:?\s*/i, '');
    } else if (s.match(/^affects?:?\s*/i)) {
      const components = s.replace(/^affects?:?\s*/i, '').split(/[,\s]+/);
      result.affected_components = components.filter(c => c.includes('/') || c.includes('.'));
    } else if (s.match(/^steps?:?\s*/i)) {
      result.reproduction_steps = s.replace(/^steps?:?\s*/i, '').split(/[,;]/);
    } else if (!result.problem_statement && s.length > 10) {
      result.problem_statement = s;
    }
  }

  // Fallback problem statement
  if (!result.problem_statement) {
    result.problem_statement = text.substring(0, 300);
  }

  return result;
}
```
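
Run against the text example from the Examples section below, this yields roughly:

```javascript
const parsed = await parseTextDescription(
  'API rate limiting not working. Expected: 429 after 100 requests. ' +
  'Actual: No limit. Affects src/middleware/rate-limit.ts'
);
// parsed.title               -> 'API rate limiting not working'
// parsed.expected_behavior   -> '429 after 100 requests'
// parsed.actual_behavior     -> 'No limit'
// parsed.affected_components -> ['src/middleware/rate-limit.ts']
```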

### Phase 4: Lifecycle Configuration

```javascript
// Ask for lifecycle requirements (or use smart defaults)
const lifecycleAnswer = AskUserQuestion({
  questions: [
    {
      question: 'Test strategy for this issue?',
      header: 'Test',
      multiSelect: false,
      options: [
        { label: 'auto', description: 'Auto-detect based on affected files (Recommended)' },
        { label: 'unit', description: 'Unit tests only' },
        { label: 'integration', description: 'Integration tests' },
        { label: 'e2e', description: 'End-to-end tests' },
        { label: 'manual', description: 'Manual testing only' }
      ]
    },
    {
      question: 'Regression scope?',
      header: 'Regression',
      multiSelect: false,
      options: [
        { label: 'affected', description: 'Only affected module tests (Recommended)' },
        { label: 'related', description: 'Affected + dependent modules' },
        { label: 'full', description: 'Full test suite' }
      ]
    },
    {
      question: 'Commit strategy?',
      header: 'Commit',
      multiSelect: false,
      options: [
        { label: 'per-task', description: 'One commit per task (Recommended)' },
        { label: 'atomic', description: 'Single commit for entire issue' },
        { label: 'squash', description: 'Squash at the end' }
      ]
    }
  ]
});

const lifecycle = {
  test_strategy: lifecycleAnswer.test || 'auto',
  regression_scope: lifecycleAnswer.regression || 'affected',
  acceptance_type: 'automated',
  commit_strategy: lifecycleAnswer.commit || 'per-task'
};

issueData.lifecycle_requirements = lifecycle;
```

### Phase 5: User Confirmation

```javascript
// Show parsed data and ask for confirmation
console.log(`
## Parsed Issue

**ID**: ${issueData.id}
**Title**: ${issueData.title}
**Source**: ${issueData.source}${issueData.source_url ? ` (${issueData.source_url})` : ''}

### Problem Statement
${issueData.problem_statement}

${issueData.expected_behavior ? `### Expected Behavior\n${issueData.expected_behavior}\n` : ''}
${issueData.actual_behavior ? `### Actual Behavior\n${issueData.actual_behavior}\n` : ''}
${issueData.affected_components?.length ? `### Affected Components\n${issueData.affected_components.map(c => `- ${c}`).join('\n')}\n` : ''}
${issueData.reproduction_steps?.length ? `### Reproduction Steps\n${issueData.reproduction_steps.map((s, i) => `${i+1}. ${s}`).join('\n')}\n` : ''}

### Lifecycle Configuration
- **Test Strategy**: ${lifecycle.test_strategy}
- **Regression Scope**: ${lifecycle.regression_scope}
- **Commit Strategy**: ${lifecycle.commit_strategy}
`);

// Ask user to confirm or edit
const answer = AskUserQuestion({
  questions: [{
    question: 'Create this issue?',
    header: 'Confirm',
    multiSelect: false,
    options: [
      { label: 'Create', description: 'Save issue to issues.jsonl' },
      { label: 'Edit Title', description: 'Modify the issue title' },
      { label: 'Edit Priority', description: 'Change priority (1-5)' },
      { label: 'Cancel', description: 'Discard and exit' }
    ]
  }]
});

if (answer.includes('Cancel')) {
  console.log('Issue creation cancelled.');
  return;
}

if (answer.includes('Edit Title')) {
  const titleAnswer = AskUserQuestion({
    questions: [{
      question: 'Enter new title:',
      header: 'Title',
      multiSelect: false,
      options: [
        { label: issueData.title.substring(0, 40), description: 'Keep current' }
      ]
    }]
  });
  // Handle custom input via "Other"
  if (titleAnswer.customText) {
    issueData.title = titleAnswer.customText;
  }
}
```

### Phase 6: Write to JSONL

```javascript
// Construct final issue object
const priority = flags.priority ? parseInt(flags.priority) : 3;
const labels = flags.labels ? flags.labels.split(',').map(l => l.trim()) : [];

const newIssue = {
  id: issueData.id,
  title: issueData.title,
  status: 'registered',
  priority,
  context: issueData.problem_statement,
  source: issueData.source,
  source_url: issueData.source_url || null,
  labels: [...(issueData.labels || []), ...labels],

  // Structured fields
  problem_statement: issueData.problem_statement,
  expected_behavior: issueData.expected_behavior || null,
  actual_behavior: issueData.actual_behavior || null,
  affected_components: issueData.affected_components || [],
  reproduction_steps: issueData.reproduction_steps || [],

  // Closed-loop lifecycle requirements
  lifecycle_requirements: issueData.lifecycle_requirements || {
    test_strategy: 'auto',
    regression_scope: 'affected',
    acceptance_type: 'automated',
    commit_strategy: 'per-task'
  },

  // Metadata
  bound_solution_id: null,
  solution_count: 0,
  created_at: new Date().toISOString(),
  updated_at: new Date().toISOString()
};

// Ensure directory exists
Bash('mkdir -p .workflow/issues');

// Append to issues.jsonl (escape single quotes so the shell-quoted JSON survives)
const issuesPath = '.workflow/issues/issues.jsonl';
const payload = JSON.stringify(newIssue).replace(/'/g, `'\\''`);
Bash(`echo '${payload}' >> "${issuesPath}"`);

console.log(`
## Issue Created

**ID**: ${newIssue.id}
**Title**: ${newIssue.title}
**Priority**: ${newIssue.priority}
**Labels**: ${newIssue.labels.join(', ') || 'none'}
**Source**: ${newIssue.source}

### Next Steps
1. Plan solution: \`/issue:plan ${newIssue.id}\`
2. View details: \`ccw issue status ${newIssue.id}\`
3. Manage issues: \`/issue:manage\`
`);
```
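
For reference, a registered record in `issues.jsonl` looks roughly like this (one object per line; the values here are illustrative):

```json
{"id":"ISS-20251227-142530","title":"API rate limiting not working","status":"registered","priority":3,"context":"API rate limiting not working. Expected: 429 after 100 requests. Actual: No limit. Affects src/middleware/rate-limit.ts","source":"text","source_url":null,"labels":[],"problem_statement":"API rate limiting not working. Expected: 429 after 100 requests. Actual: No limit. Affects src/middleware/rate-limit.ts","expected_behavior":"429 after 100 requests","actual_behavior":"No limit","affected_components":["src/middleware/rate-limit.ts"],"reproduction_steps":[],"lifecycle_requirements":{"test_strategy":"auto","regression_scope":"affected","acceptance_type":"automated","commit_strategy":"per-task"},"bound_solution_id":null,"solution_count":0,"created_at":"2025-12-27T14:25:30.000Z","updated_at":"2025-12-27T14:25:30.000Z"}
```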

## Examples

### GitHub Issue

```bash
/issue:new https://github.com/myorg/myrepo/issues/42 --priority 2

# Output:
## Issue Created
**ID**: GH-42
**Title**: Fix memory leak in WebSocket handler
**Priority**: 2
**Labels**: bug, performance
**Source**: github (https://github.com/myorg/myrepo/issues/42)
```

### Text Description

```bash
/issue:new "API rate limiting not working. Expected: 429 after 100 requests. Actual: No limit. Affects src/middleware/rate-limit.ts"

# Output:
## Issue Created
**ID**: ISS-20251227-142530
**Title**: API rate limiting not working
**Priority**: 3
**Labels**: none
**Source**: text
```

## Error Handling

| Error | Resolution |
|-------|------------|
| Invalid GitHub URL | Show format hint, ask for correction |
| gh CLI not available | Fall back to WebFetch for public issues |
| Empty description | Prompt user for required fields |
| Duplicate issue ID | Auto-increment or suggest merge |
| Parse failure | Show raw input, ask for manual structuring |

## Related Commands

- `/issue:plan` - Plan solution for issue
- `/issue:manage` - Interactive issue management
- `ccw issue list` - List all issues
- `ccw issue status <id>` - View issue details

304  .claude/commands/issue/plan.md  Normal file
@@ -0,0 +1,304 @@
---
name: plan
description: Batch plan issue resolution using issue-plan-agent (explore + plan closed-loop)
argument-hint: "<issue-id>[,<issue-id>,...] | --all-pending [--batch-size 3]"
allowed-tools: TodoWrite(*), Task(*), SlashCommand(*), AskUserQuestion(*), Bash(*), Read(*), Write(*)
---

# Issue Plan Command (/issue:plan)

## Overview

Unified planning command using **issue-plan-agent** that combines exploration and planning into a single closed-loop workflow.

## Output Requirements

**Generate Files:**
1. `.workflow/issues/solutions/{issue-id}.jsonl` - Solution with tasks for each issue

**Return Summary:**
```json
{
  "bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
  "pending_selection": [{ "issue_id": "...", "solutions": [...] }],
  "conflicts": [{ "file": "...", "issues": [...] }]
}
```

**Completion Criteria:**
- [ ] Solution file generated for each issue
- [ ] Single solution → auto-bound via `ccw issue bind`
- [ ] Multiple solutions → returned for user selection
- [ ] Tasks conform to schema: `cat .claude/workflows/cli-templates/schemas/solution-schema.json`
- [ ] Each task has quantified `acceptance.criteria`

## Core Capabilities

- **Closed-loop agent**: issue-plan-agent combines explore + plan
- Batch processing: 1 agent processes 1-3 issues
- ACE semantic search integrated into planning
- Solution with executable tasks and delivery criteria
- Automatic solution registration and binding

## Storage Structure (Flat JSONL)

```
.workflow/issues/
├── issues.jsonl          # All issues (one per line)
├── queue.json            # Execution queue
└── solutions/
    ├── {issue-id}.jsonl  # Solutions for issue (one per line)
    └── ...
```
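
For orientation, a single solution record in `solutions/{issue-id}.jsonl` might look like the sketch below. The field names follow the return-summary format above, but the ID format and task shape are illustrative; `solution-schema.json` is authoritative:

```json
{"id":"SOL-GH-123-1","issue_id":"GH-123","description":"Close WebSocket handlers on disconnect to stop the leak","tasks":[{"id":"T1","type":"update","file_context":["src/ws/handler.ts"],"depends_on":[],"acceptance":{"criteria":"heap growth < 1MB over 10k connect/disconnect cycles"}}],"created_at":"2025-12-27T14:40:00Z"}
```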

## Usage

```bash
/issue:plan <issue-id>[,<issue-id>,...] [FLAGS]

# Examples
/issue:plan GH-123                  # Single issue
/issue:plan GH-123,GH-124,GH-125    # Batch (up to 3)
/issue:plan --all-pending           # All pending issues

# Flags
--batch-size <n>    Max issues per agent batch (default: 3)
```

## Execution Process

```
Phase 1: Issue Loading
├─ Parse input (single, comma-separated, or --all-pending)
├─ Fetch issue metadata (ID, title, tags)
├─ Validate issues exist (create if needed)
└─ Group by similarity (shared tags or title keywords, max 3 per batch)

Phase 2: Unified Explore + Plan (issue-plan-agent)
├─ Launch issue-plan-agent per batch
├─ Agent performs:
│  ├─ ACE semantic search for each issue
│  ├─ Codebase exploration (files, patterns, dependencies)
│  ├─ Solution generation with task breakdown
│  └─ Conflict detection across issues
└─ Output: solution JSON per issue

Phase 3: Solution Registration & Binding
├─ Append solutions to solutions/{issue-id}.jsonl
├─ Single solution per issue → auto-bind
├─ Multiple candidates → AskUserQuestion to select
└─ Update issues.jsonl with bound_solution_id

Phase 4: Summary
├─ Display bound solutions
├─ Show task counts per issue
└─ Display next steps (/issue:queue)
```

## Implementation

### Phase 1: Issue Loading (ID + Title + Tags)

```javascript
const batchSize = flags.batchSize || 3;
let issues = []; // {id, title, tags}

if (flags.allPending) {
  // Get pending issues with metadata via CLI (JSON output)
  const result = Bash(`ccw issue list --status pending,registered --json`).trim();
  const parsed = result ? JSON.parse(result) : [];
  issues = parsed.map(i => ({ id: i.id, title: i.title || '', tags: i.tags || [] }));

  if (issues.length === 0) {
    console.log('No pending issues found.');
    return;
  }
  console.log(`Found ${issues.length} pending issues`);
} else {
  // Parse comma-separated issue IDs, fetch metadata
  const ids = userInput.includes(',')
    ? userInput.split(',').map(s => s.trim())
    : [userInput.trim()];

  for (const id of ids) {
    Bash(`ccw issue init ${id} --title "Issue ${id}" 2>/dev/null || true`);
    const info = Bash(`ccw issue status ${id} --json`).trim();
    const parsed = info ? JSON.parse(info) : {};
    issues.push({ id, title: parsed.title || '', tags: parsed.tags || [] });
  }
}

// Intelligent grouping by similarity (tags → title keywords)
function groupBySimilarity(issues, maxSize) {
  const batches = [];
  const used = new Set();

  for (const issue of issues) {
    if (used.has(issue.id)) continue;

    const batch = [issue];
    used.add(issue.id);
    const issueTags = new Set(issue.tags);
    const issueWords = new Set(issue.title.toLowerCase().split(/\s+/));

    // Find similar issues
    for (const other of issues) {
      if (used.has(other.id) || batch.length >= maxSize) continue;

      // Similarity: shared tags or shared title keywords
      const sharedTags = other.tags.filter(t => issueTags.has(t)).length;
      const otherWords = other.title.toLowerCase().split(/\s+/);
      const sharedWords = otherWords.filter(w => issueWords.has(w) && w.length > 3).length;

      if (sharedTags > 0 || sharedWords >= 2) {
        batch.push(other);
        used.add(other.id);
      }
    }
    batches.push(batch);
  }
  return batches;
}

const batches = groupBySimilarity(issues, batchSize);
console.log(`Processing ${issues.length} issues in ${batches.length} batch(es) (grouped by similarity)`);

TodoWrite({
  todos: batches.map((_, i) => ({
    content: `Plan batch ${i+1}`,
    status: 'pending',
    activeForm: `Planning batch ${i+1}`
  }))
});
```
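
For intuition, assume three pending issues (illustrative data): GH-123 "Fix WebSocket memory leak" [bug], GH-124 "WebSocket reconnect drops messages" [bug], and GH-125 "Update README badges" with no tags. With the default batch size of 3, the shared `bug` tag pulls GH-123 and GH-124 into one batch, while GH-125 (no shared tags and fewer than two shared long title words) lands in its own batch: `[[GH-123, GH-124], [GH-125]]`.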

### Phase 2: Unified Explore + Plan (issue-plan-agent)

```javascript
Bash(`mkdir -p .workflow/issues/solutions`);
const pendingSelections = []; // Collect multi-solution issues for user selection

for (const [batchIndex, batch] of batches.entries()) {
  updateTodo(`Plan batch ${batchIndex + 1}`, 'in_progress');

  // Build issue list with metadata for agent context
  const issueList = batch.map(i => `- ${i.id}: ${i.title}${i.tags.length ? ` [${i.tags.join(', ')}]` : ''}`).join('\n');

  // Build minimal prompt - agent handles exploration, planning, and binding
  const issuePrompt = `
## Plan Issues

**Issues** (grouped by similarity):
${issueList}

**Project Root**: ${process.cwd()}

### Steps
1. Fetch: \`ccw issue status <id> --json\`
2. Explore (ACE) → Plan solution
3. Register & bind: \`ccw issue bind <id> --solution <file>\`

### Generate Files
\`.workflow/issues/solutions/{issue-id}.jsonl\` - Solution with tasks (schema: cat .claude/workflows/cli-templates/schemas/solution-schema.json)

### Binding Rules
- **Single solution**: Auto-bind via \`ccw issue bind <id> --solution <file>\`
- **Multiple solutions**: Register only, return for user selection

### Return Summary
\`\`\`json
{
  "bound": [{ "issue_id": "...", "solution_id": "...", "task_count": N }],
  "pending_selection": [{ "issue_id": "...", "solutions": [{ "id": "...", "description": "...", "task_count": N }] }],
  "conflicts": [{ "file": "...", "issues": [...] }]
}
\`\`\`
`;

  // Launch issue-plan-agent - agent writes solutions directly
  const batchIds = batch.map(i => i.id);
  const result = Task(
    subagent_type="issue-plan-agent",
    run_in_background=false,
    description=`Explore & plan ${batch.length} issues: ${batchIds.join(', ')}`,
    prompt=issuePrompt
  );

  // Parse summary from agent
  const summary = JSON.parse(result);

  // Display auto-bound solutions
  for (const item of summary.bound || []) {
    console.log(`✓ ${item.issue_id}: ${item.solution_id} (${item.task_count} tasks)`);
  }

  // Collect pending selections for Phase 3
  pendingSelections.push(...(summary.pending_selection || []));

  // Show conflicts
  if (summary.conflicts?.length > 0) {
    console.log(`⚠ Conflicts: ${summary.conflicts.map(c => c.file).join(', ')}`);
  }

  updateTodo(`Plan batch ${batchIndex + 1}`, 'completed');
}
```

### Phase 3: Multi-Solution Selection

```javascript
// Only handle issues where the agent generated multiple solutions
if (pendingSelections.length > 0) {
  const answer = AskUserQuestion({
    questions: pendingSelections.map(({ issue_id, solutions }) => ({
      question: `Select solution for ${issue_id}:`,
      header: issue_id,
      multiSelect: false,
      options: solutions.map(s => ({
        label: `${s.id} (${s.task_count} tasks)`,
        description: s.description
      }))
    }))
  });

  // Bind user-selected solutions
  for (const { issue_id } of pendingSelections) {
    // extractSelectedSolutionId: helper that maps this issue's answer
    // back to the chosen solution id
    const selectedId = extractSelectedSolutionId(answer, issue_id);
    if (selectedId) {
      Bash(`ccw issue bind ${issue_id} ${selectedId}`);
      console.log(`✓ ${issue_id}: ${selectedId} bound`);
    }
  }
}
```

### Phase 4: Summary

```javascript
// Count planned issues via CLI
const plannedIds = Bash(`ccw issue list --status planned --ids`).trim();
const plannedCount = plannedIds ? plannedIds.split('\n').length : 0;

console.log(`
## Done: ${issues.length} issues → ${plannedCount} planned

Next: \`/issue:queue\` → \`/issue:execute\`
`);
```

## Error Handling

| Error | Resolution |
|-------|------------|
| Issue not found | Auto-create in issues.jsonl |
| ACE search fails | Agent falls back to ripgrep |
| No solutions generated | Display error, suggest manual planning |
| User cancels selection | Skip issue, continue with others |
| File conflicts | Agent detects and suggests resolution order |

## Related Commands

- `/issue:queue` - Form execution queue from bound solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue list` - List all issues
- `ccw issue status` - View issue and solution details

294  .claude/commands/issue/queue.md  Normal file
@@ -0,0 +1,294 @@
---
name: queue
description: Form execution queue from bound solutions using issue-queue-agent
argument-hint: "[--issue <id>] [--append <id>] [--list] [--switch <queue-id>] [--archive]"
allowed-tools: TodoWrite(*), Task(*), Bash(*), Read(*), Write(*)
---

# Issue Queue Command (/issue:queue)

## Overview

Queue formation command using **issue-queue-agent** that analyzes all bound solutions, resolves conflicts, and creates an ordered execution queue.

## Output Requirements

**Generate Files:**
1. `.workflow/issues/queues/{queue-id}.json` - Full queue with tasks, conflicts, groups
2. `.workflow/issues/queues/index.json` - Update with new queue entry

**Return Summary:**
```json
{
  "queue_id": "QUE-20251227-143000",
  "total_tasks": N,
  "execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
  "conflicts_resolved": N,
  "issues_queued": ["GH-123", "GH-124"]
}
```

**Completion Criteria:**
- [ ] Queue JSON generated with valid DAG (no cycles)
- [ ] All file conflicts resolved with rationale
- [ ] Semantic priority calculated for all tasks
- [ ] Execution groups assigned (parallel P* / sequential S*)
- [ ] Issue statuses updated to `queued` via `ccw issue update`

## Core Capabilities

- **Agent-driven**: issue-queue-agent handles all ordering logic
- Dependency DAG construction and cycle detection
- File conflict detection and resolution
- Semantic priority calculation (0.0-1.0)
- Parallel/Sequential group assignment

## Storage Structure (Queue History)

```
.workflow/issues/
├── issues.jsonl            # All issues (one per line)
├── queues/                 # Queue history directory
│   ├── index.json          # Queue index (active + history)
│   ├── {queue-id}.json     # Individual queue files
│   └── ...
└── solutions/
    ├── {issue-id}.jsonl    # Solutions for issue
    └── ...
```

### Queue Index Schema

```json
{
  "active_queue_id": "QUE-20251227-143000",
  "queues": [
    {
      "id": "QUE-20251227-143000",
      "status": "active",
      "issue_ids": ["GH-123", "GH-124"],
      "total_tasks": 8,
      "completed_tasks": 3,
      "created_at": "2025-12-27T14:30:00Z"
    },
    {
      "id": "QUE-20251226-100000",
      "status": "completed",
      "issue_ids": ["GH-120"],
      "total_tasks": 5,
      "completed_tasks": 5,
      "created_at": "2025-12-26T10:00:00Z",
      "completed_at": "2025-12-26T12:30:00Z"
    }
  ]
}
```
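
A consumer resolves the active queue through the index like this (minimal sketch, using the paths and fields documented above):

```javascript
// Load the queue index and look up the active entry
const index = JSON.parse(Bash(`cat .workflow/issues/queues/index.json`));
const active = index.queues.find(q => q.id === index.active_queue_id);
// e.g. progress: `${active.completed_tasks}/${active.total_tasks} tasks`

// Load the full queue file (tasks, conflicts, groups) for execution
const queue = JSON.parse(Bash(`cat .workflow/issues/queues/${index.active_queue_id}.json`));
```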

## Usage

```bash
/issue:queue [FLAGS]

# Examples
/issue:queue                    # Form NEW queue from all bound solutions
/issue:queue --issue GH-123     # Form queue for specific issue only
/issue:queue --append GH-124    # Append to active queue
/issue:queue --list             # List all queues (history)
/issue:queue --switch QUE-xxx   # Switch active queue
/issue:queue --archive          # Archive completed active queue

# Flags
--issue <id>     Form queue for specific issue only
--append <id>    Append issue to active queue (don't create new)

# CLI subcommands (ccw issue queue ...)
ccw issue queue list                List all queues with status
ccw issue queue switch <queue-id>   Switch active queue
ccw issue queue archive             Archive current queue
ccw issue queue delete <queue-id>   Delete queue from history
```

## Execution Process

```
Phase 1: Solution Loading
├─ Load issues.jsonl
├─ Filter issues with bound_solution_id
├─ Read solutions/{issue-id}.jsonl for each issue
├─ Find bound solution by ID
└─ Extract tasks from bound solutions

Phase 2-4: Agent-Driven Queue Formation (issue-queue-agent)
├─ Launch issue-queue-agent with all tasks
├─ Agent performs:
│  ├─ Build dependency DAG from depends_on
│  ├─ Detect circular dependencies
│  ├─ Identify file modification conflicts
│  ├─ Resolve conflicts using ordering rules
│  ├─ Calculate semantic priority (0.0-1.0)
│  └─ Assign execution groups (parallel/sequential)
└─ Output: queue JSON with ordered tasks

Phase 5: Queue Output
├─ Write queue.json
├─ Update issue statuses in issues.jsonl
└─ Display queue summary
```

## Implementation

### Phase 1: Solution Loading

```javascript
// Load issues.jsonl
const issuesPath = '.workflow/issues/issues.jsonl';
const allIssues = Bash(`cat "${issuesPath}" 2>/dev/null || echo ''`)
  .split('\n')
  .filter(line => line.trim())
  .map(line => JSON.parse(line));

// Filter issues with bound solutions
const plannedIssues = allIssues.filter(i =>
  i.status === 'planned' && i.bound_solution_id
);

if (plannedIssues.length === 0) {
  console.log('No issues with bound solutions found.');
  console.log('Run /issue:plan first to create and bind solutions.');
  return;
}

// Load all tasks from bound solutions
const allTasks = [];
for (const issue of plannedIssues) {
  const solPath = `.workflow/issues/solutions/${issue.id}.jsonl`;
  const solutions = Bash(`cat "${solPath}" 2>/dev/null || echo ''`)
    .split('\n')
    .filter(line => line.trim())
    .map(line => JSON.parse(line));

  // Find bound solution
  const boundSol = solutions.find(s => s.id === issue.bound_solution_id);

  if (!boundSol) {
    console.log(`⚠ Bound solution ${issue.bound_solution_id} not found for ${issue.id}`);
    continue;
  }

  for (const task of boundSol.tasks || []) {
    allTasks.push({
      issue_id: issue.id,
      solution_id: issue.bound_solution_id,
      task,
      exploration_context: boundSol.exploration_context
    });
  }
}

console.log(`Loaded ${allTasks.length} tasks from ${plannedIssues.length} issues`);
```

### Phase 2-4: Agent-Driven Queue Formation

```javascript
// Build minimal prompt - agent reads schema and handles ordering
const agentPrompt = `
## Order Tasks

**Tasks**: ${allTasks.length} from ${plannedIssues.length} issues
**Project Root**: ${process.cwd()}

### Input
\`\`\`json
${JSON.stringify(allTasks.map(t => ({
  key: `${t.issue_id}:${t.task.id}`,
  type: t.task.type,
  file_context: t.task.file_context,
  depends_on: t.task.depends_on
})), null, 2)}
\`\`\`

### Steps
1. Parse tasks: Extract task keys, types, file contexts, dependencies
2. Build DAG: Construct dependency graph from depends_on references
3. Detect cycles: Verify no circular dependencies exist (abort if found)
4. Detect conflicts: Identify file modification conflicts across issues
5. Resolve conflicts: Apply ordering rules (Create→Update→Delete, config→src→tests)
6. Calculate priority: Compute semantic priority (0.0-1.0) for each task
7. Assign groups: Assign parallel (P*) or sequential (S*) execution groups
8. Generate queue: Write queue JSON with ordered tasks
9. Update index: Update queues/index.json with new queue entry

### Rules
- **DAG Validity**: Output must be a valid DAG with no circular dependencies
- **Conflict Resolution**: All file conflicts must be resolved with rationale
- **Ordering Priority**:
  1. Create before Update (files must exist before modification)
  2. Foundation before integration (config/ → src/)
  3. Types before implementation (types/ → components/)
  4. Core before tests (src/ → __tests__/)
  5. Delete last (preserve dependencies until no longer needed)
- **Parallel Safety**: Tasks in same parallel group must have no file conflicts
- **Queue ID Format**: \`QUE-YYYYMMDD-HHMMSS\` (UTC timestamp)

### Generate Files
1. \`.workflow/issues/queues/\${queueId}.json\` - Full queue (schema: cat .claude/workflows/cli-templates/schemas/queue-schema.json)
2. \`.workflow/issues/queues/index.json\` - Update with new entry

### Return Summary
\`\`\`json
{
  "queue_id": "QUE-YYYYMMDD-HHMMSS",
  "total_tasks": N,
  "execution_groups": [{ "id": "P1", "type": "parallel", "count": N }],
  "conflicts_resolved": N,
  "issues_queued": ["GH-123"]
}
\`\`\`
`;

const result = Task(
  subagent_type="issue-queue-agent",
  run_in_background=false,
  description=`Order ${allTasks.length} tasks`,
  prompt=agentPrompt
);

const summary = JSON.parse(result);
```
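
For reference, a minimal sketch of the cycle check the agent performs in step 3, assuming `{ key, depends_on }` records shaped like the input above (the agent owns the real implementation):

```javascript
// Depth-first search over the dependency graph; returns the first cycle
// found as a list of task keys, or null if the graph is a valid DAG.
function findCycle(tasks) {
  const deps = new Map(tasks.map(t => [t.key, t.depends_on || []]));
  const state = new Map(); // key -> 'visiting' | 'done'

  function visit(key, path) {
    if (state.get(key) === 'done') return null;
    if (state.get(key) === 'visiting') return [...path, key]; // back-edge = cycle
    state.set(key, 'visiting');
    for (const dep of deps.get(key) || []) {
      const cycle = visit(dep, [...path, key]);
      if (cycle) return cycle;
    }
    state.set(key, 'done');
    return null;
  }

  for (const key of deps.keys()) {
    const cycle = visit(key, []);
    if (cycle) return cycle;
  }
  return null;
}
```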

### Phase 5: Summary & Status Update

```javascript
// Agent already generated queue files, use summary
console.log(`
## Queue Formed: ${summary.queue_id}

**Tasks**: ${summary.total_tasks}
**Issues**: ${summary.issues_queued.join(', ')}
**Groups**: ${summary.execution_groups.map(g => `${g.id}(${g.count})`).join(', ')}
**Conflicts Resolved**: ${summary.conflicts_resolved}

Next: \`/issue:execute\`
`);

// Update issue statuses via CLI
for (const issueId of summary.issues_queued) {
  Bash(`ccw issue update ${issueId} --status queued`);
}
```

## Error Handling

| Error | Resolution |
|-------|------------|
| No bound solutions | Display message, suggest /issue:plan |
| Circular dependency | List cycles, abort queue formation |
| Unresolved conflicts | Agent resolves using ordering rules |
| Invalid task reference | Skip and warn |

## Related Commands

- `/issue:plan` - Plan issues and bind solutions
- `/issue:execute` - Execute queue with codex
- `ccw issue queue list` - View current queue

383  .claude/commands/memory/compact.md  Normal file
@@ -0,0 +1,383 @@
---
name: compact
description: Compact current session memory into structured text for session recovery, extracting objective/plan/files/decisions/constraints/state, and save via MCP core_memory tool
argument-hint: "[optional: session description]"
allowed-tools: mcp__ccw-tools__core_memory(*), Read(*)
examples:
  - /memory:compact
  - /memory:compact "completed core-memory module"
---

# Memory Compact Command (/memory:compact)

## 1. Overview

The `memory:compact` command **compresses current session working memory** into structured text optimized for **session recovery**, extracts critical information, and saves it to persistent storage via the MCP `core_memory` tool.

**Core Philosophy**:
- **Session Recovery First**: Capture everything needed to resume work seamlessly
- **Minimize Re-exploration**: Include file paths, decisions, and state to avoid redundant analysis
- **Preserve Train of Thought**: Keep notes and hypotheses for complex debugging
- **Actionable State**: Record last action result and known issues

## 2. Parameters

- `"session description"` (Optional): Session description to supplement the objective
  - Example: "completed core-memory module"
  - Example: "debugging JWT refresh - suspected memory leak"

## 3. Structured Output Format

```markdown
## Session ID
[WFS-ID if workflow session active, otherwise (none)]

## Project Root
[Absolute path to project root, e.g., D:\Claude_dms3]

## Objective
[High-level goal - the "North Star" of this session]

## Execution Plan
[CRITICAL: Embed the LATEST plan in its COMPLETE and DETAILED form]

### Source: [workflow | todo | user-stated | inferred]

<details>
<summary>Full Execution Plan (Click to expand)</summary>

[PRESERVE COMPLETE PLAN VERBATIM - DO NOT SUMMARIZE]
- ALL phases, tasks, subtasks
- ALL file paths (absolute)
- ALL dependencies and prerequisites
- ALL acceptance criteria
- ALL status markers ([x] done, [ ] pending)
- ALL notes and context

Example:
## Phase 1: Setup
- [x] Initialize project structure
  - Created D:\Claude_dms3\src\core\index.ts
  - Added dependencies: lodash, zod
- [ ] Configure TypeScript
  - Update tsconfig.json for strict mode

## Phase 2: Implementation
- [ ] Implement core API
  - Target: D:\Claude_dms3\src\api\handler.ts
  - Dependencies: Phase 1 complete
  - Acceptance: All tests pass

</details>

## Working Files (Modified)
[Absolute paths to actively modified files]
- D:\Claude_dms3\src\file1.ts (role: main implementation)
- D:\Claude_dms3\tests\file1.test.ts (role: unit tests)

## Reference Files (Read-Only)
[Absolute paths to context files - NOT modified but essential for understanding]
- D:\Claude_dms3\.claude\CLAUDE.md (role: project instructions)
- D:\Claude_dms3\src\types\index.ts (role: type definitions)
- D:\Claude_dms3\package.json (role: dependencies)

## Last Action
[Last significant action and its result/status]

## Decisions
- [Decision]: [Reasoning]
- [Decision]: [Reasoning]

## Constraints
- [User-specified limitation or preference]

## Dependencies
- [Added/changed packages or environment requirements]

## Known Issues
- [Deferred bug or edge case]

## Changes Made
- [Completed modification]

## Pending
- [Next step] or (none)

## Notes
[Unstructured thoughts, hypotheses, debugging trails]
```

## 4. Field Definitions

| Field | Purpose | Recovery Value |
|-------|---------|----------------|
| **Session ID** | Workflow session identifier (WFS-*) | Links memory to specific stateful task execution |
| **Project Root** | Absolute path to project directory | Enables correct path resolution in new sessions |
| **Objective** | Ultimate goal of the session | Prevents losing track of the broader feature |
| **Execution Plan** | Complete plan from any source (verbatim) | Preserves full planning context, avoids re-planning |
| **Working Files** | Actively modified files (absolute paths) | Immediately identifies where work was happening |
| **Reference Files** | Read-only context files (absolute paths) | Eliminates re-exploration for critical context |
| **Last Action** | Final tool output/status | Immediate state awareness (success/failure) |
| **Decisions** | Architectural choices + reasoning | Prevents re-litigating settled decisions |
| **Constraints** | User-imposed limitations | Maintains personalized coding style |
| **Dependencies** | Package/environment changes | Prevents missing-dependency errors |
| **Known Issues** | Deferred bugs/edge cases | Ensures issues aren't forgotten |
| **Changes Made** | Completed modifications | Clear record of what was done |
| **Pending** | Next steps | Immediate action items |
| **Notes** | Hypotheses, debugging trails | Preserves "train of thought" |

## 5. Execution Flow

### Step 1: Analyze Current Session

Extract the following from conversation history:

```javascript
const sessionAnalysis = {
  sessionId: "",          // WFS-* if workflow session active, null otherwise
  projectRoot: "",        // Absolute path: D:\Claude_dms3
  objective: "",          // High-level goal (1-2 sentences)
  executionPlan: {
    source: "",           // "workflow" | "todo" | "user-stated" | "inferred"
    content: ""           // Full plan content - ALWAYS preserve COMPLETE and DETAILED form
  },
  workingFiles: [],       // {absolutePath, role} - modified files
  referenceFiles: [],     // {absolutePath, role} - read-only context files
  lastAction: "",         // Last significant action + result
  decisions: [],          // {decision, reasoning}
  constraints: [],        // User-specified limitations
  dependencies: [],       // Added/changed packages
  knownIssues: [],        // Deferred bugs
  changesMade: [],        // Completed modifications
  pending: [],            // Next steps
  notes: ""               // Unstructured thoughts
};
```

### Step 2: Generate Structured Text

```javascript
// Helper: Generate execution plan section
const generateExecutionPlan = (plan) => {
  const sourceLabels = {
    'workflow': 'workflow (IMPL_PLAN.md)',
    'todo': 'todo (TodoWrite)',
    'user-stated': 'user-stated',
    'inferred': 'inferred'
  };

  // CRITICAL: Preserve complete plan content verbatim - DO NOT summarize
  return `### Source: ${sourceLabels[plan.source] || plan.source}

<details>
<summary>Full Execution Plan (Click to expand)</summary>

${plan.content}

</details>`;
};

const structuredText = `## Session ID
${sessionAnalysis.sessionId || '(none)'}

## Project Root
${sessionAnalysis.projectRoot}

## Objective
${sessionAnalysis.objective}

## Execution Plan
${generateExecutionPlan(sessionAnalysis.executionPlan)}

## Working Files (Modified)
${sessionAnalysis.workingFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}

## Reference Files (Read-Only)
${sessionAnalysis.referenceFiles.map(f => `- ${f.absolutePath} (role: ${f.role})`).join('\n') || '(none)'}

## Last Action
${sessionAnalysis.lastAction}

## Decisions
${sessionAnalysis.decisions.map(d => `- ${d.decision}: ${d.reasoning}`).join('\n') || '(none)'}

## Constraints
${sessionAnalysis.constraints.map(c => `- ${c}`).join('\n') || '(none)'}

## Dependencies
${sessionAnalysis.dependencies.map(d => `- ${d}`).join('\n') || '(none)'}

## Known Issues
${sessionAnalysis.knownIssues.map(i => `- ${i}`).join('\n') || '(none)'}

## Changes Made
${sessionAnalysis.changesMade.map(c => `- ${c}`).join('\n') || '(none)'}

## Pending
${sessionAnalysis.pending.length > 0
  ? sessionAnalysis.pending.map(p => `- ${p}`).join('\n')
  : '(none)'}

## Notes
${sessionAnalysis.notes || '(none)'}`;
```

### Step 3: Import to Core Memory via MCP

Use the MCP `core_memory` tool to save the structured text:

```javascript
mcp__ccw-tools__core_memory({
  operation: "import",
  text: structuredText
})
```

Or via CLI (pipe structured text to import):

```bash
# Write structured text to a temp file, then import
echo "$structuredText" | ccw core-memory import

# Or from a file
ccw core-memory import --file /path/to/session-memory.md
```

**Response Format**:
```json
{
  "operation": "import",
  "id": "CMEM-YYYYMMDD-HHMMSS",
  "message": "Created memory: CMEM-YYYYMMDD-HHMMSS"
}
```

### Step 4: Report Recovery ID

After successful import, **clearly display the Recovery ID** to the user:

```
╔═══════════════════════════════════════════════════════════════════╗
║ ✓ Session Memory Saved                                             ║
║                                                                    ║
║ Recovery ID: CMEM-YYYYMMDD-HHMMSS                                  ║
║                                                                    ║
║ To restore: "Please import memory <ID>"                            ║
║ (MCP: core_memory export | CLI: ccw core-memory export --id <ID>)  ║
╚═══════════════════════════════════════════════════════════════════╝
```
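
The matching restore call in a later session is a one-liner (sketch; the `id` parameter name is assumed here from the CLI's `--id` flag):

```javascript
// Export the saved memory back into the new session's context
const memory = mcp__ccw-tools__core_memory({
  operation: "export",
  id: "CMEM-YYYYMMDD-HHMMSS"  // the Recovery ID reported above
});
```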

## 6. Quality Checklist

Before generating:
- [ ] Session ID captured if workflow session active (WFS-*)
- [ ] Project Root is an absolute path (e.g., D:\Claude_dms3)
- [ ] Objective clearly states the "North Star" goal
- [ ] Execution Plan: COMPLETE plan preserved VERBATIM (no summarization)
- [ ] Plan Source: clearly identified (workflow | todo | user-stated | inferred)
- [ ] Plan Details: ALL phases, tasks, file paths, dependencies, status markers included
- [ ] All file paths are ABSOLUTE (not relative)
- [ ] Working Files: 3-8 modified files with roles
- [ ] Reference Files: key context files (CLAUDE.md, types, configs)
- [ ] Last Action captures final state (success/failure)
- [ ] Decisions include reasoning, not just choices
- [ ] Known Issues captures deferred bugs so they aren't forgotten
- [ ] Notes preserve debugging hypotheses if any

## 7. Path Resolution Rules

### Project Root Detection
1. Check current working directory from environment
2. Look for project markers: `.git/`, `package.json`, `.claude/`
3. Use the topmost directory containing these markers

### Absolute Path Conversion
```javascript
const path = require('path');

// Convert relative to absolute
const toAbsolutePath = (relativePath, projectRoot) => {
  if (path.isAbsolute(relativePath)) return relativePath;
  return path.join(projectRoot, relativePath);
};

// Example: "src/api/auth.ts" → "D:\Claude_dms3\src\api\auth.ts"
```

### Reference File Categories

| Category | Examples | Priority |
|----------|----------|----------|
| Project Config | `.claude/CLAUDE.md`, `package.json`, `tsconfig.json` | High |
| Type Definitions | `src/types/*.ts`, `*.d.ts` | High |
| Related Modules | Parent/sibling modules with shared interfaces | Medium |
| Test Files | Corresponding test files for modified code | Medium |
| Documentation | `README.md`, `ARCHITECTURE.md` | Low |
## 8. Plan Detection (Priority Order)

### Priority 1: Workflow Session (IMPL_PLAN.md)
```javascript
// Check for active workflow session
const manifest = await mcp__ccw-tools__session_manager({
  operation: "list",
  location: "active"
});

if (manifest.sessions?.length > 0) {
  const session = manifest.sessions[0];
  const plan = await mcp__ccw-tools__session_manager({
    operation: "read",
    session_id: session.id,
    content_type: "plan"
  });
  sessionAnalysis.sessionId = session.id;
  sessionAnalysis.executionPlan.source = "workflow";
  sessionAnalysis.executionPlan.content = plan.content;
}
```

### Priority 2: TodoWrite (Current Session Todos)
```javascript
// Extract from conversation - look for TodoWrite tool calls
// Preserve COMPLETE todo list with all details
const todos = extractTodosFromConversation();
if (todos.length > 0) {
  sessionAnalysis.executionPlan.source = "todo";
  // Format todos with full context - preserve status markers
  sessionAnalysis.executionPlan.content = todos.map(t =>
    `- [${t.status === 'completed' ? 'x' : t.status === 'in_progress' ? '>' : ' '}] ${t.content}`
  ).join('\n');
}
```

### Priority 3: User-Stated Plan
```javascript
// Look for explicit plan statements in user messages:
// - "Here's my plan: 1. ... 2. ... 3. ..."
// - "I want to: first..., then..., finally..."
// - Numbered or bulleted lists describing steps
const userPlan = extractUserStatedPlan();
if (userPlan) {
  sessionAnalysis.executionPlan.source = "user-stated";
  sessionAnalysis.executionPlan.content = userPlan;
}
```

### Priority 4: Inferred Plan
```javascript
// If no explicit plan, infer from:
// - Task description and breakdown discussion
// - Sequence of actions taken
// - Outstanding work mentioned
const inferredPlan = inferPlanFromDiscussion();
if (inferredPlan) {
  sessionAnalysis.executionPlan.source = "inferred";
  sessionAnalysis.executionPlan.content = inferredPlan;
}
```
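
Taken together, the detection order reads as a single fall-through (sketch only; the `try*` helpers are stand-ins for the conversation-analysis snippets above, not real APIs):

```javascript
// First source that yields plan content wins, in priority order 1 → 4
async function detectExecutionPlan(sessionAnalysis) {
  if (await tryWorkflowPlan(sessionAnalysis)) return;  // Priority 1: IMPL_PLAN.md
  if (tryTodoPlan(sessionAnalysis)) return;            // Priority 2: TodoWrite todos
  if (tryUserStatedPlan(sessionAnalysis)) return;      // Priority 3: explicit user plan
  tryInferredPlan(sessionAnalysis);                    // Priority 4: last resort
}
```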

## 9. Notes

- **Timing**: Execute at task completion or before a context switch
- **Frequency**: Once per independent task or milestone
- **Recovery**: A new session can immediately continue with full context
- **Knowledge Graph**: Entity relationships auto-extracted for visualization
- **Absolute Paths**: Critical for cross-session recovery on different machines
@@ -235,12 +235,12 @@ api_id=$((group_count + 3))
|
|||||||
| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|
| Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
|
||||||
|------|-------------|-----------|----------|---------------|------------|
|
|------|-------------|-----------|----------|---------------|------------|
|
||||||
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
|
| **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
|
||||||
| **CLI** | true | implementation_approach | write | --approval-mode yolo | Execute CLI commands, validate output |
|
| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |
|
||||||
|
|
||||||
**Command Patterns**:
|
**Command Patterns**:
|
||||||
- Gemini/Qwen: `cd dir && gemini -p "..."`
|
- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
|
||||||
- CLI Mode: `cd dir && gemini --approval-mode yolo -p "..."`
|
- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
|
||||||
- Codex: `codex -C dir --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
|
- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`
|
||||||
|
|
||||||
**Generation Process**:
|
**Generation Process**:
|
||||||
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
|
1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
|
||||||
@@ -331,7 +331,7 @@ api_id=$((group_count + 3))
|
|||||||
{
|
{
|
||||||
"step": 2,
|
"step": 2,
|
||||||
"title": "Batch generate documentation via CLI",
|
"title": "Batch generate documentation via CLI",
|
||||||
"command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
|
"command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
|
||||||
"depends_on": [1],
|
"depends_on": [1],
|
||||||
"output": "generated_docs"
|
"output": "generated_docs"
|
||||||
}
|
}
|
||||||
@@ -363,7 +363,7 @@ api_id=$((group_count + 3))
|
|||||||
},
|
},
|
||||||
{
|
{
|
||||||
"step": "analyze_project",
|
"step": "analyze_project",
|
||||||
"command": "bash(gemini \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
|
"command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
|
||||||
"output_to": "project_outline"
|
"output_to": "project_outline"
|
||||||
}
|
}
|
||||||
],
|
],
|
||||||
@@ -403,7 +403,7 @@ api_id=$((group_count + 3))
|
|||||||
"pre_analysis": [
|
"pre_analysis": [
|
||||||
{"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
|
{"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
|
||||||
{"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
|
{"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
|
||||||
{"step": "analyze_architecture", "command": "bash(gemini \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\")", "output_to": "arch_examples_outline"}
|
{"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
|
||||||
],
|
],
|
||||||
"implementation_approach": [
|
"implementation_approach": [
|
||||||
{
|
{
|
||||||
@@ -440,7 +440,7 @@ api_id=$((group_count + 3))
|
|||||||
"pre_analysis": [
|
"pre_analysis": [
|
||||||
{"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
|
{"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
|
||||||
{"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
|
{"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
|
||||||
{"step": "analyze_api", "command": "bash(gemini \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\")", "output_to": "api_outline"}
|
{"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
|
||||||
],
|
],
|
||||||
"implementation_approach": [
|
"implementation_approach": [
|
||||||
{
|
{
|
||||||
@@ -601,7 +601,7 @@ api_id=$((group_count + 3))
|
|||||||
| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|
| Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
|
||||||
|------|---------------|----------|---------------|------------|
|
|------|---------------|----------|---------------|------------|
|
||||||
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
|
| **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
|
||||||
| **CLI (--cli-execute)** | implementation_approach | write | --approval-mode yolo | Executes CLI commands, validates output |
|
| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |
|
||||||
|
|
||||||
**Execution Flow**:
|
**Execution Flow**:
|
||||||
- **Phase 2**: Unified analysis once, results in `.process/`
|
- **Phase 2**: Unified analysis once, results in `.process/`
|
||||||
|
|||||||
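Read together, the table and the hunks above give the two invocation shapes after this migration. A hedged illustration (prompt text abbreviated; the `ccw cli` syntax and flag pairing are taken from the changed lines in this diff):

```bash
# Agent mode: analysis call placed in pre_analysis, no approval flag
ccw cli -p "PURPOSE: ...\nMODE: analysis\n..." --tool gemini --mode analysis

# CLI mode (--cli-execute): write call placed in implementation_approach
ccw cli -p "PURPOSE: ...\nMODE: write\n..." --tool gemini --mode write
```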
@@ -5,7 +5,7 @@ argument-hint: "[--tool gemini|qwen] \"task context description\""
 allowed-tools: Task(*), Bash(*)
 examples:
 - /memory:load "Develop user authentication on top of the current frontend"
-- /memory:load --tool qwen -p "Refactor the payment module API"
+- /memory:load --tool qwen "Refactor the payment module API"
 ---

 # Memory Load Command (/memory:load)

@@ -39,7 +39,7 @@ The command fully delegates to **universal-executor agent**, which autonomously:
 1. **Analyzes Project Structure**: Executes `get_modules_by_depth.sh` to understand architecture
 2. **Loads Documentation**: Reads CLAUDE.md, README.md and other key docs
 3. **Extracts Keywords**: Derives core keywords from task description
-4. **Discovers Files**: Uses MCP code-index or rg/find to locate relevant files
+4. **Discovers Files**: Uses CodexLens MCP or rg/find to locate relevant files
 5. **CLI Deep Analysis**: Executes Gemini/Qwen CLI for deep context analysis
 6. **Generates Content Package**: Returns structured JSON core content package
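For step 4's rg fallback, the discovery call might look like this (illustrative only; the keyword list would come from step 3's extraction):

```bash
rg -l -e "auth" -e "login" -e "session" --type ts src/   # files matching extracted keywords
```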
@@ -136,7 +136,7 @@ Task(
 Execute Gemini/Qwen CLI for deep analysis (saves main-thread tokens):

 \`\`\`bash
-cd . && ${tool} -p "
+ccw cli -p "
 PURPOSE: Extract project core context for task: ${task_description}
 TASK: Analyze project architecture, tech stack, key patterns, relevant files
 MODE: analysis
@@ -147,7 +147,7 @@ RULES:
 - Identify key architecture patterns and technical constraints
 - Extract integration points and development standards
 - Output concise, structured format
-"
+" --tool ${tool} --mode analysis
 \`\`\`

 ### Step 4: Generate Core Content Package

@@ -212,7 +212,7 @@ Before returning:
 ### Example 2: Using Qwen Tool

 ```bash
-/memory:load --tool qwen -p "Refactor the payment module API"
+/memory:load --tool qwen "Refactor the payment module API"
 ```

 Agent uses Qwen CLI for analysis, returns same structured package.
310  .claude/commands/memory/tech-research-rules.md  (new file)
@@ -0,0 +1,310 @@
---
name: tech-research-rules
description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---

# Tech Stack Rules Generator

## Overview

**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.

**Output Structure**:
```
.claude/rules/tech/{tech-stack}/
├── core.md          # paths: **/*.{ext} - Core principles
├── patterns.md      # paths: src/**/*.{ext} - Implementation patterns
├── testing.md       # paths: **/*.{test,spec}.{ext} - Testing rules
├── config.md        # paths: *.config.* - Configuration rules
├── api.md           # paths: **/api/**/* - API rules (backend only)
├── components.md    # paths: **/components/**/* - Component rules (frontend only)
└── metadata.json    # Generation metadata
```

**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`

---

## Core Rules

1. **Start Immediately**: First action is TodoWrite initialization
2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter (illustrated below)
3. **Template-Driven**: Agent reads templates before generating content
4. **Agent Produces Files**: Agent writes all rule files directly
5. **No Manual Loading**: Rules auto-activate when Claude works with matching files
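As a quick illustration of rule 2, the header of a generated file might look like this (TypeScript stack assumed; `paths` is the only frontmatter key the command guarantees):

```bash
head -4 .claude/rules/tech/typescript/core.md
# ---
# paths: **/*.{ts,tsx}
# ---
```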

---

## 3-Phase Execution

### Phase 1: Prepare Context & Detect Tech Stack

**Goal**: Detect input mode, extract tech stack info, determine file extensions

**Input Mode Detection**:
```bash
input="$1"

if [[ "$input" == WFS-* ]]; then
  MODE="session"
  SESSION_ID="$input"
  # Read workflow-session.json to extract tech stack
else
  MODE="direct"
  TECH_STACK_NAME="$input"
fi
```

**Tech Stack Analysis**:
```javascript
// Decompose composite tech stacks
// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]

const TECH_EXTENSIONS = {
  "typescript": "{ts,tsx}",
  "javascript": "{js,jsx}",
  "python": "py",
  "rust": "rs",
  "go": "go",
  "java": "java",
  "csharp": "cs",
  "ruby": "rb",
  "php": "php"
};

const FRAMEWORK_TYPE = {
  "react": "frontend",
  "vue": "frontend",
  "angular": "frontend",
  "nextjs": "fullstack",
  "nuxt": "fullstack",
  "fastapi": "backend",
  "express": "backend",
  "django": "backend",
  "rails": "backend"
};
```

**Check Existing Rules**:
```bash
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
rules_dir=".claude/rules/tech/${normalized_name}"
existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```

**Skip Decision** (sketched below):
- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
- If `--regenerate` → Delete existing and regenerate
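A minimal bash sketch of that decision, assuming the variables from **Check Existing Rules** above and a `REGENERATE` flag parsed from `--regenerate`:

```bash
if [[ "$existing_count" -gt 0 && "$REGENERATE" != "true" ]]; then
  SKIP_GENERATION=true               # reuse existing rules
elif [[ "$REGENERATE" == "true" ]]; then
  rm -rf "$rules_dir"                # delete and regenerate from scratch
  SKIP_GENERATION=false
else
  SKIP_GENERATION=false
fi
```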

**Output Variables**:
- `TECH_STACK_NAME`: Normalized name
- `PRIMARY_LANG`: Primary language
- `FILE_EXT`: File extension pattern
- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
- `COMPONENTS`: Array of tech components
- `SKIP_GENERATION`: Boolean

**TodoWrite**: Mark phase 1 completed

---

### Phase 2: Agent Produces Path-Conditional Rules

**Skip Condition**: Skipped if `SKIP_GENERATION = true`

**Goal**: Delegate to agent for Exa research and rule file generation

**Template Files**:
```
~/.claude/workflows/cli-templates/prompts/rules/
├── tech-rules-agent-prompt.txt   # Agent instructions
├── rule-core.txt                 # Core principles template
├── rule-patterns.txt             # Implementation patterns template
├── rule-testing.txt              # Testing rules template
├── rule-config.txt               # Configuration rules template
├── rule-api.txt                  # API rules template (backend)
└── rule-components.txt           # Component rules template (frontend)
```

**Agent Task**:

```javascript
Task({
  subagent_type: "general-purpose",
  description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
  prompt: `
You are generating path-conditional rules for Claude Code.

## Context
- Tech Stack: ${TECH_STACK_NAME}
- Primary Language: ${PRIMARY_LANG}
- File Extensions: ${FILE_EXT}
- Framework Type: ${FRAMEWORK_TYPE}
- Components: ${JSON.stringify(COMPONENTS)}
- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/

## Instructions

Read the agent prompt template for detailed instructions:
$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)

## Execution Steps

1. Execute Exa research queries (see agent prompt)
2. Read each rule template
3. Generate rule files following template structure
4. Write files to output directory
5. Write metadata.json
6. Report completion

## Variable Substitutions

Replace in templates:
- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
- {PRIMARY_LANG} → ${PRIMARY_LANG}
- {FILE_EXT} → ${FILE_EXT}
- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
`
})
```

**Completion Criteria**:
- 4-6 rule files written with proper `paths` frontmatter
- metadata.json written
- Agent reports files created

**TodoWrite**: Mark phase 2 completed

---

### Phase 3: Verify & Report

**Goal**: Verify generated files and provide usage summary

**Steps**:

1. **Verify Files**:
```bash
find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
```

2. **Validate Frontmatter**:
```bash
head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
```

3. **Read Metadata**:
```javascript
Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
```

4. **Generate Summary Report**:
```
Tech Stack Rules Generated

Tech Stack: {TECH_STACK_NAME}
Location: .claude/rules/tech/{TECH_STACK_NAME}/

Files Created:
├── core.md       → paths: **/*.{ext}
├── patterns.md   → paths: src/**/*.{ext}
├── testing.md    → paths: **/*.{test,spec}.{ext}
├── config.md     → paths: *.config.*
├── api.md        → paths: **/api/**/* (if backend)
└── components.md → paths: **/components/**/* (if frontend)

Auto-Loading:
- Rules apply automatically when editing matching files
- No manual loading required

Example Activation:
- Edit src/components/Button.tsx → core.md + patterns.md + components.md
- Edit tests/api.test.ts → core.md + testing.md
- Edit package.json → config.md
```

**TodoWrite**: Mark phase 3 completed

---

## Path Pattern Reference

| Pattern | Matches |
|---------|---------|
| `**/*.ts` | All .ts files |
| `src/**/*` | All files under src/ |
| `*.config.*` | Config files in root |
| `**/*.{ts,tsx}` | .ts and .tsx files |

| Tech Stack | Core Pattern | Test Pattern |
|------------|--------------|--------------|
| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
| Rust | `**/*.rs` | `**/tests/**/*.rs` |
| Go | `**/*.go` | `**/*_test.go` |

---

## Parameters

```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
```

**Arguments**:
- **session-id**: `WFS-*` format - Extract from workflow session
- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
- **--regenerate**: Force regenerate existing rules

---

## Examples

### Single Language

```bash
/memory:tech-research "typescript"
```

**Output**: `.claude/rules/tech/typescript/` with 4 rule files

### Frontend Stack

```bash
/memory:tech-research "typescript-react"
```

**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)

### Backend Stack

```bash
/memory:tech-research "python-fastapi"
```

**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)

### From Session

```bash
/memory:tech-research WFS-user-auth-20251104
```

**Workflow**: Extract tech stack from session → Generate rules

---

## Comparison: Rules vs SKILL

| Aspect | SKILL Memory | Rules |
|--------|--------------|-------|
| Loading | Manual: `Skill("tech")` | Automatic by path |
| Scope | All files when loaded | Only matching files |
| Granularity | Monolithic packages | Per-file-type |
| Context | Full package | Only relevant rules |

**When to Use**:
- **Rules**: Tech stack conventions per file type
- **SKILL**: Reference docs, APIs, examples for manual lookup
@@ -1,477 +0,0 @@
---
name: tech-research
description: 3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)
argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
---

# Tech Stack Research SKILL Generator

## Overview

**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates ALL work to agent. Agent produces files directly.

**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.

**Execution Paths**:
- **Full Path**: All 3 phases (no existing SKILL OR `--regenerate` specified)
- **Skip Path**: Phase 1 → Phase 3 (existing SKILL found AND no `--regenerate` flag)
- **Phase 3 Always Executes**: SKILL index is always generated or updated

**Agent Responsibility**:
- Agent does ALL the work: context reading, Exa research, content synthesis, file writing
- Orchestrator only provides context paths and waits for completion

## Core Rules

1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
2. **Context Path Delegation**: Pass session directory or tech stack name to agent, let agent do discovery
3. **Agent Produces Files**: Agent directly writes all module files, orchestrator does NOT parse agent output
4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
5. **No User Prompts**: Never ask user questions or wait for input between phases
6. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
7. **Lightweight Index**: Phase 3 only generates SKILL.md index by reading existing files

---

## 3-Phase Execution

### Phase 1: Prepare Context Paths

**Goal**: Detect input mode, prepare context paths for agent, check existing SKILL

**Input Mode Detection**:
```bash
# Get input parameter
input="$1"

# Detect mode
if [[ "$input" == WFS-* ]]; then
  MODE="session"
  SESSION_ID="$input"
  CONTEXT_PATH=".workflow/${SESSION_ID}"
else
  MODE="direct"
  TECH_STACK_NAME="$input"
  CONTEXT_PATH="$input"  # Pass tech stack name as context
fi
```

**Check Existing SKILL**:
```bash
# For session mode, peek at session to get tech stack name
if [[ "$MODE" == "session" ]]; then
  bash(test -f ".workflow/${SESSION_ID}/workflow-session.json")
  Read(.workflow/${SESSION_ID}/workflow-session.json)
  # Extract tech_stack_name (minimal extraction)
fi

# Normalize and check
normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
bash(test -d ".claude/skills/${normalized_name}" && echo "exists" || echo "not_exists")
bash(find ".claude/skills/${normalized_name}" -name "*.md" 2>/dev/null | wc -l || echo 0)
```

**Skip Decision**:
```javascript
if (existing_files > 0 && !regenerate_flag) {
  SKIP_GENERATION = true
  message = "Tech stack SKILL already exists, skipping Phase 2. Use --regenerate to force regeneration."
} else if (regenerate_flag) {
  bash(rm -rf ".claude/skills/${normalized_name}")
  SKIP_GENERATION = false
  message = "Regenerating tech stack SKILL from scratch."
} else {
  SKIP_GENERATION = false
  message = "No existing SKILL found, generating new tech stack documentation."
}
```

**Output Variables**:
- `MODE`: `session` or `direct`
- `SESSION_ID`: Session ID (if session mode)
- `CONTEXT_PATH`: Path to session directory OR tech stack name
- `TECH_STACK_NAME`: Extracted or provided tech stack name
- `SKIP_GENERATION`: Boolean - whether to skip Phase 2

**TodoWrite**:
- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
- If not skipping: Mark phase 1 completed, phase 2 in_progress

---

### Phase 2: Agent Produces All Files

**Skip Condition**: Skipped if `SKIP_GENERATION = true`

**Goal**: Delegate EVERYTHING to agent - context reading, Exa research, content synthesis, and file writing

**Agent Task Specification**:

```
Task(
  subagent_type: "general-purpose",
  description: "Generate tech stack SKILL: {CONTEXT_PATH}",
  prompt: "
  Generate a complete tech stack SKILL package with Exa research.

  **Context Provided**:
  - Mode: {MODE}
  - Context Path: {CONTEXT_PATH}

  **Templates Available**:
  - Module Format: ~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt
  - SKILL Index: ~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt

  **Your Responsibilities**:

  1. **Extract Tech Stack Information**:

     IF MODE == 'session':
     - Read `.workflow/active/{session_id}/workflow-session.json`
     - Read `.workflow/active/{session_id}/.process/context-package.json`
     - Extract tech_stack: {language, frameworks, libraries}
     - Build tech stack name: \"{language}-{framework1}-{framework2}\"
     - Example: \"typescript-react-nextjs\"

     IF MODE == 'direct':
     - Tech stack name = CONTEXT_PATH
     - Parse composite: split by '-' delimiter
     - Example: \"typescript-react-nextjs\" → [\"typescript\", \"react\", \"nextjs\"]

  2. **Execute Exa Research** (4-6 parallel queries):

     Base Queries (always execute):
     - mcp__exa__get_code_context_exa(query: \"{tech} core principles best practices 2025\", tokensNum: 8000)
     - mcp__exa__get_code_context_exa(query: \"{tech} common patterns architecture examples\", tokensNum: 7000)
     - mcp__exa__web_search_exa(query: \"{tech} configuration setup tooling 2025\", numResults: 5)
     - mcp__exa__get_code_context_exa(query: \"{tech} testing strategies\", tokensNum: 5000)

     Component Queries (if composite):
     - For each additional component:
       mcp__exa__get_code_context_exa(query: \"{main_tech} {component} integration\", tokensNum: 5000)

  3. **Read Module Format Template**:

     Read template for structure guidance:
     ```bash
     Read(~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt)
     ```

  4. **Synthesize Content into 6 Modules**:

     Follow template structure from tech-module-format.txt:
     - **principles.md** - Core concepts, philosophies (~3K tokens)
     - **patterns.md** - Implementation patterns with code examples (~5K tokens)
     - **practices.md** - Best practices, anti-patterns, pitfalls (~4K tokens)
     - **testing.md** - Testing strategies, frameworks (~3K tokens)
     - **config.md** - Setup, configuration, tooling (~3K tokens)
     - **frameworks.md** - Framework integration (only if composite, ~4K tokens)

     Each module follows template format:
     - Frontmatter (YAML)
     - Main sections with clear headings
     - Code examples from Exa research
     - Best practices sections
     - References to Exa sources

  5. **Write Files Directly**:

     ```javascript
     // Create directory
     bash(mkdir -p \".claude/skills/{tech_stack_name}\")

     // Write each module file using Write tool
     Write({ file_path: \".claude/skills/{tech_stack_name}/principles.md\", content: ... })
     Write({ file_path: \".claude/skills/{tech_stack_name}/patterns.md\", content: ... })
     Write({ file_path: \".claude/skills/{tech_stack_name}/practices.md\", content: ... })
     Write({ file_path: \".claude/skills/{tech_stack_name}/testing.md\", content: ... })
     Write({ file_path: \".claude/skills/{tech_stack_name}/config.md\", content: ... })
     // Write frameworks.md only if composite

     // Write metadata.json
     Write({
       file_path: \".claude/skills/{tech_stack_name}/metadata.json\",
       content: JSON.stringify({
         tech_stack_name,
         components,
         is_composite,
         generated_at: timestamp,
         source: \"exa-research\",
         research_summary: { total_queries, total_sources }
       })
     })
     ```

  6. **Report Completion**:

     Provide summary:
     - Tech stack name
     - Files created (count)
     - Exa queries executed
     - Sources consulted

  **CRITICAL**:
  - MUST read external template files before generating content (step 3 for modules, step 4 for index)
  - You have FULL autonomy - read files, execute Exa, synthesize content, write files
  - Do NOT return JSON or structured data - produce actual .md files
  - Handle errors gracefully (Exa failures, missing files, template read failures)
  - If tech stack cannot be determined, ask orchestrator to clarify
  "
)
```

**Completion Criteria**:
- Agent task executed successfully
- 5-6 modular files written to `.claude/skills/{tech_stack_name}/`
- metadata.json written
- Agent reports completion

**TodoWrite**: Mark phase 2 completed, phase 3 in_progress

---

### Phase 3: Generate SKILL.md Index

**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.

**Goal**: Read generated module files and create SKILL.md index with loading recommendations

**Steps**:

1. **Verify Generated Files**:
```bash
bash(find ".claude/skills/${TECH_STACK_NAME}" -name "*.md" -type f | sort)
```

2. **Read metadata.json**:
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/metadata.json)
// Extract: tech_stack_name, components, is_composite, research_summary
```

3. **Read Module Headers** (optional, first 20 lines):
```javascript
Read(.claude/skills/${TECH_STACK_NAME}/principles.md, limit: 20)
// Repeat for other modules
```

4. **Read SKILL Index Template**:

```javascript
Read(~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt)
```

5. **Generate SKILL.md Index**:

Follow template from tech-skill-index.txt with variable substitutions:
- `{TECH_STACK_NAME}`: From metadata.json
- `{MAIN_TECH}`: Primary technology
- `{ISO_TIMESTAMP}`: Current timestamp
- `{QUERY_COUNT}`: From research_summary
- `{SOURCE_COUNT}`: From research_summary
- Conditional sections for composite tech stacks

Template provides structure for:
- Frontmatter with metadata
- Overview and tech stack description
- Module organization (Core/Practical/Config sections)
- Loading recommendations (Quick/Implementation/Complete)
- Usage guidelines and auto-trigger keywords
- Research metadata and version history

6. **Write SKILL.md**:
```javascript
Write({
  file_path: `.claude/skills/${TECH_STACK_NAME}/SKILL.md`,
  content: generatedIndexMarkdown
})
```

**Completion Criteria**:
- SKILL.md index written
- All module files verified
- Loading recommendations included

**TodoWrite**: Mark phase 3 completed

**Final Report**:
```
Tech Stack SKILL Package Complete

Tech Stack: {TECH_STACK_NAME}
Location: .claude/skills/{TECH_STACK_NAME}/

Files: SKILL.md + 5-6 modules + metadata.json
Exa Research: {queries} queries, {sources} sources

Usage: Skill(command: "{TECH_STACK_NAME}")
```

---

## Implementation Details

### TodoWrite Patterns

**Initialization** (Before Phase 1):
```javascript
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "in_progress", "activeForm": "Preparing context paths"},
  {"content": "Agent produces all module files", "status": "pending", "activeForm": "Agent producing files"},
  {"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
]})
```

**Full Path** (SKIP_GENERATION = false):
```javascript
// After Phase 1
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "in_progress", ...},
  {"content": "Generate SKILL.md index", "status": "pending", ...}
]})

// After Phase 2
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "completed", ...},
  {"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})

// After Phase 3
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "completed", ...},
  {"content": "Generate SKILL.md index", "status": "completed", ...}
]})
```

**Skip Path** (SKIP_GENERATION = true):
```javascript
// After Phase 1 (skip Phase 2)
TodoWrite({todos: [
  {"content": "Prepare context paths", "status": "completed", ...},
  {"content": "Agent produces all module files", "status": "completed", ...},  // Skipped
  {"content": "Generate SKILL.md index", "status": "in_progress", ...}
]})
```

### Execution Flow

**Full Path**:
```
User → TodoWrite Init → Phase 1 (prepare) → Phase 2 (agent writes files) → Phase 3 (write index) → Report
```

**Skip Path**:
```
User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
```

### Error Handling

**Phase 1 Errors**:
- Invalid session ID: Report error, verify session exists
- Missing context-package: Warn, fall back to direct mode
- No tech stack detected: Ask user to specify tech stack name

**Phase 2 Errors (Agent)**:
- Agent task fails: Retry once, report if fails again
- Exa API failures: Agent handles internally with retries
- Incomplete results: Warn user, proceed with partial data if minimum sections available

**Phase 3 Errors**:
- Write failures: Report which files failed
- Missing files: Note in SKILL.md, suggest regeneration

---

## Parameters

```bash
/memory:tech-research [session-id | "tech-stack-name"] [--regenerate] [--tool <gemini|qwen>]
```

**Arguments**:
- **session-id | tech-stack-name**: Input source (auto-detected by WFS- prefix)
  - Session mode: `WFS-user-auth-v2` - Extract tech stack from workflow
  - Direct mode: `"typescript"`, `"typescript-react-nextjs"` - User specifies
- **--regenerate**: Force regenerate existing SKILL (deletes and recreates)
- **--tool**: Reserved for future CLI integration (default: gemini)

---

## Examples

**Generated File Structure** (for all examples):
```
.claude/skills/{tech-stack}/
├── SKILL.md         # Index (Phase 3)
├── principles.md    # Agent (Phase 2)
├── patterns.md      # Agent
├── practices.md     # Agent
├── testing.md       # Agent
├── config.md        # Agent
├── frameworks.md    # Agent (if composite)
└── metadata.json    # Agent
```

### Direct Mode - Single Stack

```bash
/memory:tech-research "typescript"
```

**Workflow**:
1. Phase 1: Detects direct mode, checks existing SKILL
2. Phase 2: Agent executes 4 Exa queries, writes 5 modules
3. Phase 3: Generates SKILL.md index

### Direct Mode - Composite Stack

```bash
/memory:tech-research "typescript-react-nextjs"
```

**Workflow**:
1. Phase 1: Decomposes into ["typescript", "react", "nextjs"]
2. Phase 2: Agent executes 6 Exa queries (4 base + 2 components), writes 6 modules (adds frameworks.md)
3. Phase 3: Generates SKILL.md index with framework integration

### Session Mode - Extract from Workflow

```bash
/memory:tech-research WFS-user-auth-20251104
```

**Workflow**:
1. Phase 1: Reads session, extracts tech stack: `python-fastapi-sqlalchemy`
2. Phase 2: Agent researches Python + FastAPI + SQLAlchemy, writes 6 modules
3. Phase 3: Generates SKILL.md index

### Regenerate Existing

```bash
/memory:tech-research "react" --regenerate
```

**Workflow**:
1. Phase 1: Deletes existing SKILL due to --regenerate
2. Phase 2: Agent executes fresh Exa research (latest 2025 practices)
3. Phase 3: Generates updated SKILL.md

### Skip Path - Fast Update

```bash
/memory:tech-research "python"
```

**Scenario**: SKILL already exists with 7 files

**Workflow**:
1. Phase 1: Detects existing SKILL, sets SKIP_GENERATION = true
2. Phase 2: **SKIPPED**
3. Phase 3: Updates SKILL.md index only (5-10x faster)
@@ -187,7 +187,7 @@ Objectives:
 3. Use Gemini for aggregation (optional):
    Command pattern:
-   cd .workflow/.archives/{session_id} && gemini -p "
+   ccw cli -p "
    PURPOSE: Extract lessons and conflicts from workflow session
    TASK:
    • Analyze IMPL_PLAN and lessons from manifest
@@ -198,7 +198,7 @@ Objectives:
    CONTEXT: @IMPL_PLAN.md @workflow-session.json
    EXPECTED: Structured lessons and conflicts in JSON format
    RULES: Template reference from skill-aggregation.txt
-   "
+   " --tool gemini --mode analysis --cd .workflow/.archives/{session_id}

 3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):

@@ -334,7 +334,7 @@ Objectives:
    - Sort sessions by date

 2. Use Gemini for final aggregation:
-   gemini -p "
+   ccw cli -p "
    PURPOSE: Aggregate lessons and conflicts from all workflow sessions
    TASK:
    • Group successes by functional domain
@@ -345,7 +345,7 @@ Objectives:
    CONTEXT: [Provide aggregated JSON data]
    EXPECTED: Final aggregated structure for SKILL documents
    RULES: Template reference from skill-aggregation.txt
-   "
+   " --tool gemini --mode analysis

 3. Read templates for formatting (same 4 templates as single mode)
@@ -81,6 +81,7 @@ ELSE:
 **Framework-Based Analysis** (when guidance-specification.md exists):
 ```bash
 Task(subagent_type="conceptual-planning-agent",
+     run_in_background=false,
      prompt="Generate API designer analysis addressing topic framework

 ## Framework Integration Required
@@ -136,6 +137,7 @@ Task(subagent_type="conceptual-planning-agent",
 # For existing analysis updates
 IF update_mode = "incremental":
 Task(subagent_type="conceptual-planning-agent",
+     run_in_background=false,
      prompt="Update existing API designer analysis

 ## Current Analysis Context

@@ -128,6 +128,7 @@ for (let i = 0; i < allQuestions.length; i += BATCH_SIZE) {
 ```javascript
 Task(
   subagent_type="context-search-agent",
+  run_in_background=false,
   description="Gather project context for brainstorm",
   prompt=`
 Execute context-search-agent in BRAINSTORM MODE (Phase 1-2 only).

@@ -81,6 +81,7 @@ ELSE:
 **Framework-Based Analysis** (when guidance-specification.md exists):
 ```bash
 Task(subagent_type="conceptual-planning-agent",
+     run_in_background=false,
      prompt="Generate system architect analysis addressing topic framework

 ## Framework Integration Required
@@ -136,6 +137,7 @@ Task(subagent_type="conceptual-planning-agent",
 # For existing analysis updates
 IF update_mode = "incremental":
 Task(subagent_type="conceptual-planning-agent",
+     run_in_background=false,
      prompt="Update existing system architect analysis

 ## Current Analysis Context
516  .claude/commands/workflow/clean.md  (new file)
@@ -0,0 +1,516 @@
---
name: clean
description: Intelligent code cleanup with mainline detection, stale artifact discovery, and safe execution
argument-hint: "[--dry-run] [\"focus area\"]"
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Glob(*), Bash(*), Write(*)
---

# Clean Command (/workflow:clean)

## Overview

Intelligent cleanup command that explores the codebase to identify the development mainline, discovers artifacts that have drifted from it, and safely removes stale sessions, abandoned documents, and dead code.

**Core capabilities:**
- Mainline detection: Identify active development branches and core modules
- Drift analysis: Find sessions, documents, and code that deviate from mainline
- Intelligent discovery: cli-explore-agent based artifact scanning
- Safe execution: Confirmation-based cleanup with dry-run preview

## Usage

```bash
/workflow:clean                # Full intelligent cleanup (explore → analyze → confirm → execute)
/workflow:clean --dry-run      # Explore and analyze only, no execution
/workflow:clean "auth module"  # Focus cleanup on specific area
```

## Execution Process

```
Phase 1: Mainline Detection
├─ Analyze git history for development trends
├─ Identify core modules (high commit frequency)
├─ Map active vs stale branches
└─ Build mainline profile

Phase 2: Drift Discovery (cli-explore-agent)
├─ Scan workflow sessions for orphaned artifacts
├─ Identify documents drifted from mainline
├─ Detect dead code and unused exports
└─ Generate cleanup manifest

Phase 3: Confirmation
├─ Display cleanup summary by category
├─ Show impact analysis (files, size, risk)
└─ AskUserQuestion: Select categories to clean

Phase 4: Execution (unless --dry-run)
├─ Execute cleanup by category
├─ Update manifests and indexes
└─ Report results
```

## Implementation

### Phase 1: Mainline Detection

**Session Setup**:
```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const dateStr = getUtc8ISOString().substring(0, 10)
const sessionId = `clean-${dateStr}`
const sessionFolder = `.workflow/.clean/${sessionId}`

Bash(`mkdir -p ${sessionFolder}`)
```

**Step 1.1: Git History Analysis**
```bash
# Get commit frequency by directory (last 30 days)
bash(git log --since="30 days ago" --name-only --pretty=format: | grep -v "^$" | cut -d/ -f1-2 | sort | uniq -c | sort -rn | head -20)

# Get recent active branches
bash(git for-each-ref --sort=-committerdate refs/heads/ --format='%(refname:short) %(committerdate:relative)' | head -10)

# Get files with most recent changes
bash(git log --since="7 days ago" --name-only --pretty=format: | grep -v "^$" | sort | uniq -c | sort -rn | head -30)
```

**Step 1.2: Build Mainline Profile**
```javascript
const mainlineProfile = {
  coreModules: [],    // High-frequency directories
  activeFiles: [],    // Recently modified files
  activeBranches: [], // Branches with recent commits
  staleThreshold: {
    sessions: 7,      // Days
    branches: 30,
    documents: 14
  },
  timestamp: getUtc8ISOString()
}

// Parse git log output to identify core modules
// Modules with >5 commits in last 30 days = core
// Modules with 0 commits in last 30 days = potentially stale

Write(`${sessionFolder}/mainline-profile.json`, JSON.stringify(mainlineProfile, null, 2))
```
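For the core-module cut, a minimal sketch of turning Step 1.1's output into the `coreModules` list (assumes the `<count> <dir>` format that `uniq -c` produces; the >5-commit threshold is the one stated above):

```bash
git log --since="30 days ago" --name-only --pretty=format: \
  | grep -v "^$" | cut -d/ -f1-2 | sort | uniq -c | sort -rn \
  | awk '$1 > 5 {print $2}'   # directories with >5 commits in 30 days = core modules
```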

---

### Phase 2: Drift Discovery

**Launch cli-explore-agent for intelligent artifact scanning**:

```javascript
Task(
  subagent_type="cli-explore-agent",
  run_in_background=false,
  description="Discover stale artifacts",
  prompt=`
## Task Objective
Discover artifacts that have drifted from the development mainline. Identify stale sessions, abandoned documents, and dead code for cleanup.

## Context
- **Session Folder**: ${sessionFolder}
- **Mainline Profile**: ${sessionFolder}/mainline-profile.json
- **Focus Area**: ${focusArea || "entire project"}

## Discovery Categories

### Category 1: Stale Workflow Sessions
Scan and analyze workflow session directories:

**Locations to scan**:
- .workflow/active/WFS-* (active sessions)
- .workflow/archives/WFS-* (archived sessions)
- .workflow/.lite-plan/* (lite-plan sessions)
- .workflow/.debug/DBG-* (debug sessions)

**Staleness criteria**:
- Active sessions: No modification >7 days + no related git commits
- Archives: >30 days old + no feature references in project.json
- Lite-plan: >7 days old + plan.json not executed
- Debug: >3 days old + issue not in recent commits

**Analysis steps** (a minimal age check is sketched after this list):
1. List all session directories with modification times
2. Cross-reference with git log (are session topics in recent commits?)
3. Check manifest.json for orphan entries
4. Identify sessions with .archiving marker (interrupted)
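A rough sketch of the age check (GNU find assumed; the 7-day threshold comes from the staleness criteria above):

\`\`\`bash
# Active sessions not modified in >7 days (candidates pending git cross-reference)
find .workflow/active -maxdepth 1 -type d -name 'WFS-*' -mtime +7
\`\`\`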

### Category 2: Drifted Documents
Scan documentation that no longer aligns with code:

**Locations to scan**:
- .claude/rules/tech/* (generated tech rules)
- .workflow/.scratchpad/* (temporary notes)
- **/CLAUDE.md (module documentation)
- **/README.md (outdated descriptions)

**Drift criteria**:
- Tech rules: Referenced files no longer exist
- Scratchpad: Any file (always temporary)
- Module docs: Describe functions/classes that were removed
- READMEs: Reference deleted directories

**Analysis steps** (see the reference check sketched after this list):
1. Parse document content for file/function references
2. Verify referenced entities still exist in codebase
3. Flag documents with >30% broken references
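A heuristic sketch of that check (the path-like regex and tracked extensions are assumptions; only the >30% threshold comes from the criteria above):

\`\`\`bash
doc="path/to/doc.md"
total=0; broken=0
for ref in $(rg -o '[A-Za-z0-9_./-]+\.(ts|js|py|sh|md)' "$doc" | sort -u); do
  total=$((total + 1))
  [ -e "$ref" ] || broken=$((broken + 1))   # referenced file no longer exists
done
echo "$doc: $broken/$total broken references"
\`\`\`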

### Category 3: Dead Code
Identify code that is no longer used:

**Scan patterns**:
- Unused exports (exported but never imported)
- Orphan files (not imported anywhere)
- Commented-out code blocks (>10 lines)
- TODO/FIXME comments >90 days old

**Analysis steps** (an orphan-file sketch follows this list):
1. Build import graph using rg/grep
2. Identify exports with no importers
3. Find files not in import graph
4. Scan for large comment blocks
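A rough orphan-file sketch (TypeScript assumed; matching on the bare module name is a heuristic, not a full import graph):

\`\`\`bash
for f in $(rg --files -g '*.ts' src); do
  base=$(basename "$f" .ts)
  # No import statement anywhere mentions this module name → orphan candidate
  rg -q "from ['\"].*\${base}['\"]" --type ts || echo "orphan candidate: $f"
done
\`\`\`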
|
## Output Format
|
||||||
|
|
||||||
|
Write to: ${sessionFolder}/cleanup-manifest.json
|
||||||
|
|
||||||
|
\`\`\`json
|
||||||
|
{
|
||||||
|
"generated_at": "ISO timestamp",
|
||||||
|
"mainline_summary": {
|
||||||
|
"core_modules": ["src/core", "src/api"],
|
||||||
|
"active_branches": ["main", "feature/auth"],
|
||||||
|
"health_score": 0.85
|
||||||
|
},
|
||||||
|
"discoveries": {
|
||||||
|
"stale_sessions": [
|
||||||
|
{
|
||||||
|
"path": ".workflow/active/WFS-old-feature",
|
||||||
|
"type": "active",
|
||||||
|
"age_days": 15,
|
||||||
|
"reason": "No related commits in 15 days",
|
||||||
|
"size_kb": 1024,
|
||||||
|
"risk": "low"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"drifted_documents": [
|
||||||
|
{
|
||||||
|
"path": ".claude/rules/tech/deprecated-lib",
|
||||||
|
"type": "tech_rules",
|
||||||
|
"broken_references": 5,
|
||||||
|
"total_references": 6,
|
||||||
|
"drift_percentage": 83,
|
||||||
|
"reason": "Referenced library removed",
|
||||||
|
"risk": "low"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"dead_code": [
|
||||||
|
{
|
||||||
|
"path": "src/utils/legacy.ts",
|
||||||
|
"type": "orphan_file",
|
||||||
|
"reason": "Not imported by any file",
|
||||||
|
"last_modified": "2025-10-01",
|
||||||
|
"risk": "medium"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
"summary": {
|
||||||
|
"total_items": 12,
|
||||||
|
"total_size_mb": 45.2,
|
||||||
|
"by_category": {
|
||||||
|
"stale_sessions": 5,
|
||||||
|
"drifted_documents": 4,
|
||||||
|
"dead_code": 3
|
||||||
|
},
|
||||||
|
"by_risk": {
|
||||||
|
"low": 8,
|
||||||
|
"medium": 3,
|
||||||
|
"high": 1
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
\`\`\`
|
||||||
|
|
||||||
|
## Execution Commands
|
||||||
|
|
||||||
|
\`\`\`bash
|
||||||
|
# Session directories
|
||||||
|
find .workflow -type d -name "WFS-*" -o -name "DBG-*" 2>/dev/null
|
||||||
|
|
||||||
|
# Check modification times (Linux/Mac)
|
||||||
|
stat -c "%Y %n" .workflow/active/WFS-* 2>/dev/null
|
||||||
|
|
||||||
|
# Check modification times (Windows PowerShell via bash)
|
||||||
|
powershell -Command "Get-ChildItem '.workflow/active/WFS-*' | ForEach-Object { Write-Output \"$($_.LastWriteTime) $($_.FullName)\" }"
|
||||||
|
|
||||||
|
# Find orphan exports (TypeScript)
|
||||||
|
rg "export (const|function|class|interface|type)" --type ts -l
|
||||||
|
|
||||||
|
# Find imports
|
||||||
|
rg "import.*from" --type ts
|
||||||
|
|
||||||
|
# Find large comment blocks
|
||||||
|
rg "^\\s*/\\*" -A 10 --type ts
|
||||||
|
|
||||||
|
# Find old TODOs
|
||||||
|
rg "TODO|FIXME" --type ts -n
|
||||||
|
\`\`\`
|
||||||
|
|
||||||
|
## Success Criteria
|
||||||
|
- [ ] All session directories scanned with age calculation
|
||||||
|
- [ ] Documents cross-referenced with existing code
|
||||||
|
- [ ] Dead code detection via import graph analysis
|
||||||
|
- [ ] cleanup-manifest.json written with complete data
|
||||||
|
- [ ] Each item has risk level and cleanup reason
|
||||||
|
`
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Phase 3: Confirmation
|
||||||
|
|
||||||
|
**Step 3.1: Display Summary**
|
||||||
|
```javascript
|
||||||
|
const manifest = JSON.parse(Read(`${sessionFolder}/cleanup-manifest.json`))
|
||||||
|
|
||||||
|
console.log(`
|
||||||
|
## Cleanup Discovery Report
|
||||||
|
|
||||||
|
**Mainline Health**: ${Math.round(manifest.mainline_summary.health_score * 100)}%
|
||||||
|
**Core Modules**: ${manifest.mainline_summary.core_modules.join(', ')}
|
||||||
|
|
||||||
|
### Summary
|
||||||
|
| Category | Count | Size | Risk |
|
||||||
|
|----------|-------|------|------|
|
||||||
|
| Stale Sessions | ${manifest.summary.by_category.stale_sessions} | - | ${getRiskSummary('sessions')} |
|
||||||
|
| Drifted Documents | ${manifest.summary.by_category.drifted_documents} | - | ${getRiskSummary('documents')} |
|
||||||
|
| Dead Code | ${manifest.summary.by_category.dead_code} | - | ${getRiskSummary('code')} |
|
||||||
|
|
||||||
|
**Total**: ${manifest.summary.total_items} items, ~${manifest.summary.total_size_mb} MB
|
||||||
|
|
||||||
|
### Stale Sessions
|
||||||
|
${manifest.discoveries.stale_sessions.map(s =>
|
||||||
|
`- ${s.path} (${s.age_days}d, ${s.risk}): ${s.reason}`
|
||||||
|
).join('\n')}
|
||||||
|
|
||||||
|
### Drifted Documents
|
||||||
|
${manifest.discoveries.drifted_documents.map(d =>
|
||||||
|
`- ${d.path} (${d.drift_percentage}% broken, ${d.risk}): ${d.reason}`
|
||||||
|
).join('\n')}
|
||||||
|
|
||||||
|
### Dead Code
|
||||||
|
${manifest.discoveries.dead_code.map(c =>
|
||||||
|
`- ${c.path} (${c.type}, ${c.risk}): ${c.reason}`
|
||||||
|
).join('\n')}
|
||||||
|
`)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 3.2: Dry-Run Exit**
|
||||||
|
```javascript
|
||||||
|
if (flags.includes('--dry-run')) {
|
||||||
|
console.log(`
|
||||||
|
---
|
||||||
|
**Dry-run mode**: No changes made.
|
||||||
|
Manifest saved to: ${sessionFolder}/cleanup-manifest.json
|
||||||
|
|
||||||
|
To execute cleanup: /workflow:clean
|
||||||
|
`)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 3.3: User Confirmation**
|
||||||
|
```javascript
|
||||||
|
AskUserQuestion({
|
||||||
|
questions: [
|
||||||
|
{
|
||||||
|
question: "Which categories to clean?",
|
||||||
|
header: "Categories",
|
||||||
|
multiSelect: true,
|
||||||
|
options: [
|
||||||
|
{
|
||||||
|
label: "Sessions",
|
||||||
|
description: `${manifest.summary.by_category.stale_sessions} stale workflow sessions`
|
||||||
|
},
|
||||||
|
{
|
||||||
|
label: "Documents",
|
||||||
|
description: `${manifest.summary.by_category.drifted_documents} drifted documents`
|
||||||
|
},
|
||||||
|
{
|
||||||
|
label: "Dead Code",
|
||||||
|
description: `${manifest.summary.by_category.dead_code} unused code files`
|
||||||
|
}
|
||||||
|
]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
question: "Risk level to include?",
|
||||||
|
header: "Risk",
|
||||||
|
multiSelect: false,
|
||||||
|
options: [
|
||||||
|
{ label: "Low only", description: "Safest - only obviously stale items" },
|
||||||
|
{ label: "Low + Medium", description: "Recommended - includes likely unused items" },
|
||||||
|
{ label: "All", description: "Aggressive - includes high-risk items" }
|
||||||
|
]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
})
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Phase 4: Execution
|
||||||
|
|
||||||
|
**Step 4.1: Filter Items by Selection**
|
||||||
|
```javascript
|
||||||
|
const selectedCategories = userSelection.categories // ['Sessions', 'Documents', ...]
|
||||||
|
const riskLevel = userSelection.risk // 'Low only', 'Low + Medium', 'All'
|
||||||
|
|
||||||
|
const riskFilter = {
|
||||||
|
'Low only': ['low'],
|
||||||
|
'Low + Medium': ['low', 'medium'],
|
||||||
|
'All': ['low', 'medium', 'high']
|
||||||
|
}[riskLevel]
|
||||||
|
|
||||||
|
const itemsToClean = []
|
||||||
|
|
||||||
|
if (selectedCategories.includes('Sessions')) {
|
||||||
|
itemsToClean.push(...manifest.discoveries.stale_sessions.filter(s => riskFilter.includes(s.risk)))
|
||||||
|
}
|
||||||
|
if (selectedCategories.includes('Documents')) {
|
||||||
|
itemsToClean.push(...manifest.discoveries.drifted_documents.filter(d => riskFilter.includes(d.risk)))
|
||||||
|
}
|
||||||
|
if (selectedCategories.includes('Dead Code')) {
|
||||||
|
itemsToClean.push(...manifest.discoveries.dead_code.filter(c => riskFilter.includes(c.risk)))
|
||||||
|
}
|
||||||
|
|
||||||
|
TodoWrite({
|
||||||
|
todos: itemsToClean.map(item => ({
|
||||||
|
content: `Clean: ${item.path}`,
|
||||||
|
status: "pending",
|
||||||
|
activeForm: `Cleaning ${item.path}`
|
||||||
|
}))
|
||||||
|
})
|
||||||
|
```
|
||||||
|
|
||||||
|
**Step 4.2: Execute Cleanup**
|
||||||
|
```javascript
|
||||||
|
const results = { deleted: [], failed: [], skipped: [] }
|
||||||
|
|
||||||
|
for (const item of itemsToClean) {
|
||||||
|
TodoWrite({ todos: [...] }) // Mark current as in_progress
|
||||||
|
|
||||||
|
try {
|
||||||
|
if (item.type === 'orphan_file' || item.type === 'dead_export') {
|
||||||
|
// Dead code: Delete file or remove export
|
||||||
|
Bash({ command: `rm -rf "${item.path}"` })
|
||||||
|
} else {
|
||||||
|
// Sessions and documents: Delete directory/file
|
||||||
|
Bash({ command: `rm -rf "${item.path}"` })
|
||||||
|
}
|
||||||
|
|
||||||
|
results.deleted.push(item.path)
|
||||||
|
TodoWrite({ todos: [...] }) // Mark as completed
|
||||||
|
} catch (error) {
|
||||||
|
results.failed.push({ path: item.path, error: error.message })
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```

**Step 4.3: Update Manifests**

```javascript
// Update archives manifest if sessions were deleted
if (selectedCategories.includes('Sessions')) {
  const archiveManifestPath = '.workflow/archives/manifest.json'
  if (fileExists(archiveManifestPath)) {
    const archiveManifest = JSON.parse(Read(archiveManifestPath))
    const deletedSessionIds = results.deleted
      .filter(p => p.includes('WFS-'))
      .map(p => p.split('/').pop())

    const updatedManifest = archiveManifest.filter(entry =>
      !deletedSessionIds.includes(entry.session_id)
    )

    Write(archiveManifestPath, JSON.stringify(updatedManifest, null, 2))
  }
}

// Update project.json if features referenced deleted sessions
const projectPath = '.workflow/project.json'
if (fileExists(projectPath)) {
  const project = JSON.parse(Read(projectPath))
  const deletedPaths = new Set(results.deleted)

  project.features = project.features.filter(f =>
    !deletedPaths.has(f.traceability?.archive_path)
  )

  project.statistics.total_features = project.features.length
  project.statistics.last_updated = getUtc8ISOString()

  Write(projectPath, JSON.stringify(project, null, 2))
}
```

**Step 4.4: Report Results**

```javascript
console.log(`
## Cleanup Complete

**Deleted**: ${results.deleted.length} items
**Failed**: ${results.failed.length} items
**Skipped**: ${results.skipped.length} items

### Deleted Items
${results.deleted.map(p => `- ${p}`).join('\n')}

${results.failed.length > 0 ? `
### Failed Items
${results.failed.map(f => `- ${f.path}: ${f.error}`).join('\n')}
` : ''}

Cleanup manifest archived to: ${sessionFolder}/cleanup-manifest.json
`)
```

---

## Session Folder Structure

```
.workflow/.clean/{YYYY-MM-DD}/
├── mainline-profile.json    # Git history analysis
└── cleanup-manifest.json    # Discovery results
```

## Risk Level Definitions

| Risk | Description | Examples |
|------|-------------|----------|
| **Low** | Safe to delete, no dependencies | Empty sessions, scratchpad files, 100% broken docs |
| **Medium** | Likely unused, verify before delete | Orphan files, old archives, partially broken docs |
| **High** | May have hidden dependencies | Files with some imports, recent modifications |
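
As an illustration of how discovery might assign these levels, a minimal sketch; the fields `import_count` and `days_since_modified` are assumed names here, since the actual discovery heuristics are not spelled out above:

```javascript
// Hypothetical classifier consistent with the risk table above.
function classifyRisk(item) {
  if (item.import_count === 0 && item.days_since_modified > 90) return 'low'   // clearly stale
  if (item.import_count === 0) return 'medium'                                 // unused but recent
  return 'high'                                                                // still imported somewhere
}
```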

## Error Handling

| Situation | Action |
|-----------|--------|
| No git repository | Skip mainline detection, use file timestamps only |
| Session in use (.archiving) | Skip with warning |
| Permission denied | Report error, continue with others |
| Manifest parse error | Regenerate from filesystem scan |
| Empty discovery | Report "codebase is clean" |

## Related Commands

- `/workflow:session:complete` - Properly archive active sessions
- `/memory:compact` - Save session memory before cleanup
- `/workflow:status` - View current workflow state
---

.claude/commands/workflow/debug.md (new file, 321 lines)
---
name: debug
description: Interactive hypothesis-driven debugging with NDJSON logging, iterative until resolved
argument-hint: "\"bug description or error message\""
allowed-tools: TodoWrite(*), Task(*), AskUserQuestion(*), Read(*), Grep(*), Glob(*), Bash(*), Edit(*), Write(*)
---

# Workflow Debug Command (/workflow:debug)

## Overview

Evidence-based interactive debugging command. Systematically identifies root causes through hypothesis-driven logging and iterative verification.

**Core workflow**: Explore → Add Logging → Reproduce → Analyze Log → Fix → Verify

## Usage

```bash
/workflow:debug <BUG_DESCRIPTION>

# Arguments
<bug-description>    Bug description, error message, or stack trace (required)
```

## Execution Process

```
Session Detection:
├─ Check if debug session exists for this bug
├─ EXISTS + debug.log has content → Analyze mode
└─ NOT_FOUND or empty log → Explore mode

Explore Mode:
├─ Locate error source in codebase
├─ Generate testable hypotheses (dynamic count)
├─ Add NDJSON logging instrumentation
└─ Output: Hypothesis list + await user reproduction

Analyze Mode:
├─ Parse debug.log, validate each hypothesis
└─ Decision:
   ├─ Confirmed → Fix root cause
   ├─ Inconclusive → Add more logging, iterate
   └─ All rejected → Generate new hypotheses

Fix & Cleanup:
├─ Apply fix based on confirmed hypothesis
├─ User verifies
├─ Remove debug instrumentation
└─ If not fixed → Return to Analyze mode
```

## Implementation

### Session Setup & Mode Detection

```javascript
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 30)
const dateStr = getUtc8ISOString().substring(0, 10)

const sessionId = `DBG-${bugSlug}-${dateStr}`
const sessionFolder = `.workflow/.debug/${sessionId}`
const debugLogPath = `${sessionFolder}/debug.log`

// Auto-detect mode
const sessionExists = fs.existsSync(sessionFolder)
const logHasContent = sessionExists && fs.existsSync(debugLogPath) && fs.statSync(debugLogPath).size > 0

const mode = logHasContent ? 'analyze' : 'explore'

if (!sessionExists) {
  bash(`mkdir -p ${sessionFolder}`)
}
```

---

### Explore Mode

**Step 1.1: Locate Error Source**

```javascript
// Extract keywords from bug description
const keywords = extractErrorKeywords(bug_description)
// e.g., ['Stack Length', '未找到', 'registered 0']

// Search codebase for error locations
for (const keyword of keywords) {
  Grep({ pattern: keyword, path: ".", output_mode: "content", "-C": 3 })
}

// Identify affected files and functions
const affectedLocations = [...] // from search results
```
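
`extractErrorKeywords` is left abstract above. A minimal sketch, assuming quoted fragments and long distinctive tokens in the bug description make the best search keys (the heuristic itself is an assumption):

```javascript
// Sketch: pull quoted strings and longer tokens out of the description, deduped, capped at 5.
function extractErrorKeywords(description) {
  const quoted = [...description.matchAll(/["'`]([^"'`]{3,})["'`]/g)].map(m => m[1])
  const tokens = description.split(/\s+/).filter(w => w.length >= 6)
  return [...new Set([...quoted, ...tokens])].slice(0, 5)
}
```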

**Step 1.2: Generate Hypotheses (Dynamic)**

```javascript
// Hypothesis categories based on error pattern
const HYPOTHESIS_PATTERNS = {
  "not found|missing|undefined|未找到": "data_mismatch",
  "0|empty|zero|registered 0": "logic_error",
  "timeout|connection|sync": "integration_issue",
  "type|format|parse": "type_mismatch"
}

// Generate hypotheses based on actual issue (NOT fixed count)
function generateHypotheses(bugDescription, affectedLocations) {
  const hypotheses = []

  // Analyze bug and create targeted hypotheses
  // Each hypothesis has:
  // - id: H1, H2, ... (dynamic count)
  // - description: What might be wrong
  // - testable_condition: What to log
  // - logging_point: Where to add instrumentation

  return hypotheses // Could be 1, 3, 5, or more
}

const hypotheses = generateHypotheses(bug_description, affectedLocations)
```
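
The keys of `HYPOTHESIS_PATTERNS` read as regex alternations. How a category is looked up is not shown above; a sketch, assuming a first-match-wins lookup with a default category:

```javascript
// Sketch: return the category of the first pattern whose alternation matches the description.
function matchHypothesisCategory(bugDescription) {
  const hit = Object.entries(HYPOTHESIS_PATTERNS)
    .find(([pattern]) => new RegExp(pattern, 'i').test(bugDescription))
  return hit ? hit[1] : 'logic_error' // assumed default when nothing matches
}
```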

**Step 1.3: Add NDJSON Instrumentation**

For each hypothesis, add logging at the relevant location:

**Python template**:
```python
# region debug [H{n}]
try:
    import json, time
    _dbg = {
        "sid": "{sessionId}",
        "hid": "H{n}",
        "loc": "{file}:{line}",
        "msg": "{testable_condition}",
        "data": {
            # Capture relevant values here
        },
        "ts": int(time.time() * 1000)
    }
    with open(r"{debugLogPath}", "a", encoding="utf-8") as _f:
        _f.write(json.dumps(_dbg, ensure_ascii=False) + "\n")
except: pass
# endregion
```

**JavaScript/TypeScript template**:
```javascript
// region debug [H{n}]
try {
  require('fs').appendFileSync("{debugLogPath}", JSON.stringify({
    sid: "{sessionId}",
    hid: "H{n}",
    loc: "{file}:{line}",
    msg: "{testable_condition}",
    data: { /* Capture relevant values */ },
    ts: Date.now()
  }) + "\n");
} catch(_) {}
// endregion
```

**Output to user**:
```
## Hypotheses Generated

Based on error "{bug_description}", generated {n} hypotheses:

{hypotheses.map(h => `
### ${h.id}: ${h.description}
- Logging at: ${h.logging_point}
- Testing: ${h.testable_condition}
`).join('')}

**Debug log**: ${debugLogPath}

**Next**: Run reproduction steps, then come back for analysis.
```

---

### Analyze Mode

```javascript
// Parse NDJSON log
const entries = Read(debugLogPath).split('\n')
  .filter(l => l.trim())
  .map(l => JSON.parse(l))

// Group by hypothesis
const byHypothesis = groupBy(entries, 'hid')

// Validate each hypothesis
for (const [hid, logs] of Object.entries(byHypothesis)) {
  const hypothesis = hypotheses.find(h => h.id === hid)
  const latestLog = logs[logs.length - 1]

  // Check if evidence confirms or rejects hypothesis
  const verdict = evaluateEvidence(hypothesis, latestLog.data)
  // Returns: 'confirmed' | 'rejected' | 'inconclusive'
}
```
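
`groupBy` and `evaluateEvidence` are used as givens above. Minimal sketches; the `expected` field in the second is an assumed extension of the hypothesis structure, not defined earlier:

```javascript
// Sketch: plain reduce-based grouping by a key field.
const groupBy = (items, key) =>
  items.reduce((acc, item) => {
    (acc[item[key]] = acc[item[key]] || []).push(item)
    return acc
  }, {})

// Sketch: verdict from logged evidence. Empty data means the code path
// never logged, so the hypothesis stays inconclusive.
function evaluateEvidence(hypothesis, data) {
  if (data == null || Object.keys(data).length === 0) return 'inconclusive'
  return data.found === hypothesis.expected ? 'confirmed' : 'rejected'
}
```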

**Output**:
```
## Evidence Analysis

Analyzed ${entries.length} log entries.

${results.map(r => `
### ${r.id}: ${r.description}
- **Status**: ${r.verdict}
- **Evidence**: ${JSON.stringify(r.evidence)}
- **Reason**: ${r.reason}
`).join('')}

${confirmedHypothesis ? `
## Root Cause Identified

**${confirmedHypothesis.id}**: ${confirmedHypothesis.description}

Ready to fix.
` : `
## Need More Evidence

Add more logging or refine hypotheses.
`}
```

---

### Fix & Cleanup

```javascript
// Apply fix based on confirmed hypothesis
// ... Edit affected files

// After user verifies fix works:

// Remove debug instrumentation (search for region markers)
const instrumentedFiles = Grep({
  pattern: "# region debug|// region debug",
  output_mode: "files_with_matches"
})

for (const file of instrumentedFiles) {
  // Remove content between region markers
  removeDebugRegions(file)
}

console.log(`
## Debug Complete

- Root cause: ${confirmedHypothesis.description}
- Fix applied to: ${modifiedFiles.join(', ')}
- Debug instrumentation removed
`)
```
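
`removeDebugRegions` is a hypothetical helper. A sketch that drops everything between the paired region markers inserted in Step 1.3, using the same `Read`/`Write` tool calls as above:

```javascript
// Sketch: remove all lines from "# region debug"/"// region debug"
// through the matching "# endregion"/"// endregion", inclusive.
function removeDebugRegions(file) {
  const lines = Read(file).split('\n')
  const kept = []
  let inRegion = false
  for (const line of lines) {
    if (/(#|\/\/) region debug/.test(line)) { inRegion = true; continue }
    if (inRegion && /(#|\/\/) endregion/.test(line)) { inRegion = false; continue }
    if (!inRegion) kept.push(line)
  }
  Write(file, kept.join('\n'))
}
```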

---

## Debug Log Format (NDJSON)

Each line is a JSON object:

```json
{"sid":"DBG-xxx-2025-12-18","hid":"H1","loc":"file.py:func:42","msg":"Check dict keys","data":{"keys":["a","b"],"target":"c","found":false},"ts":1734567890123}
```

| Field | Description |
|-------|-------------|
| `sid` | Session ID |
| `hid` | Hypothesis ID (H1, H2, ...) |
| `loc` | Code location |
| `msg` | What's being tested |
| `data` | Captured values |
| `ts` | Timestamp (ms) |
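
For a quick manual look at the log between iterations, counting entries per hypothesis is enough to see which logging points fired; a sketch using Node, assuming `debugLogPath` from the session setup above:

```javascript
// Sketch: count NDJSON entries per hypothesis for a quick overview.
const fs = require('fs')
const entries = fs.readFileSync(debugLogPath, 'utf8')
  .split('\n').filter(l => l.trim()).map(l => JSON.parse(l))
const counts = {}
for (const e of entries) counts[e.hid] = (counts[e.hid] || 0) + 1
console.log(counts) // e.g., { H1: 3, H2: 1 }
```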

## Session Folder

```
.workflow/.debug/DBG-{slug}-{date}/
├── debug.log        # NDJSON log (main artifact)
└── resolution.md    # Summary after fix (optional)
```

## Iteration Flow

```
First Call (/workflow:debug "error"):
├─ No session exists → Explore mode
├─ Extract error keywords, search codebase
├─ Generate hypotheses, add logging
└─ Await user reproduction

After Reproduction (/workflow:debug "error"):
├─ Session exists + debug.log has content → Analyze mode
├─ Parse log, evaluate hypotheses
└─ Decision:
   ├─ Confirmed → Fix → User verify
   │  ├─ Fixed → Cleanup → Done
   │  └─ Not fixed → Add logging → Iterate
   ├─ Inconclusive → Add logging → Iterate
   └─ All rejected → New hypotheses → Iterate

Output:
└─ .workflow/.debug/DBG-{slug}-{date}/debug.log
```

## Error Handling

| Situation | Action |
|-----------|--------|
| Empty debug.log | Verify reproduction triggered the code path |
| All hypotheses rejected | Generate new hypotheses with broader scope |
| Fix doesn't work | Iterate with more granular logging |
| >5 iterations | Escalate to `/workflow:lite-fix` with evidence |

---

.claude/commands/workflow/docs/analyze.md (new file, 1467 lines; diff suppressed because it is too large)

.claude/commands/workflow/docs/copyright.md (new file, 1265 lines; diff suppressed because it is too large)

---
@@ -56,6 +56,7 @@ Phase 2: Planning Document Validation
 └─ Validate .task/ contains IMPL-*.json files

 Phase 3: TodoWrite Generation
+├─ Update session status to "active" (Step 0)
 ├─ Parse TODO_LIST.md for task statuses
 ├─ Generate TodoWrite for entire workflow
 └─ Prepare session context paths
@@ -80,6 +81,7 @@ Phase 5: Completion
 Resume Mode (--resume-session):
 ├─ Skip Phase 1 & Phase 2
 └─ Entry Point: Phase 3 (TodoWrite Generation)
+├─ Update session status to "active" (if not already)
 └─ Continue: Phase 4 → Phase 5
 ```

@@ -179,6 +181,16 @@ bash(cat .workflow/active/${sessionId}/workflow-session.json)
 ### Phase 3: TodoWrite Generation
 **Applies to**: Both normal and resume modes (resume mode entry point)

+**Step 0: Update Session Status to Active**
+Before generating TodoWrite, update session status from "planning" to "active":
+```bash
+# Update session status (idempotent - safe to run if already active;
+# the inner parentheses keep an existing execution_started_at instead of piping it to todate)
+jq '.status = "active" | .execution_started_at = (.execution_started_at // (now | todate))' \
+  .workflow/active/${sessionId}/workflow-session.json > tmp.json && \
+  mv tmp.json .workflow/active/${sessionId}/workflow-session.json
+```
+This ensures the dashboard shows the session as "ACTIVE" during execution.

 **Process**:
 1. **Create TodoWrite List**: Generate task list from TODO_LIST.md (not from task JSONs)
    - Parse TODO_LIST.md to extract all tasks with current statuses
@@ -380,6 +392,7 @@ TodoWrite({
 ```bash
 Task(subagent_type="{meta.agent}",
+     run_in_background=false,
      prompt="Execute task: {task.title}

      {[FLOW_CONTROL]}
@@ -397,7 +410,6 @@ Task(subagent_type="{meta.agent}",
 1. Read complete task JSON: {session.task_json_path}
 2. Load context package: {session.context_package_path}

-Follow complete execution guidelines in @.claude/agents/{meta.agent}.md

 **Session Paths**:
 - Workflow Dir: {session.workflow_dir}

---

@@ -86,6 +86,7 @@ bash(cp .workflow/project.json .workflow/project.json.backup)
 ```javascript
 Task(
   subagent_type="cli-explore-agent",
+  run_in_background=false,
   description="Deep project analysis",
   prompt=`
 Analyze project for workflow initialization and generate .workflow/project.json.

---

@@ -258,6 +258,33 @@ TodoWrite({

 ### Step 3: Launch Execution

+**Executor Resolution** (task-level executor takes precedence over the global setting):
+```javascript
+// Get a task's executor (prefer executorAssignments, fall back to the global executionMethod)
+function getTaskExecutor(task) {
+  const assignments = executionContext?.executorAssignments || {}
+  if (assignments[task.id]) {
+    return assignments[task.id].executor // 'gemini' | 'codex' | 'agent'
+  }
+  // Fallback: map the global executionMethod
+  const method = executionContext?.executionMethod || 'Auto'
+  if (method === 'Agent') return 'agent'
+  if (method === 'Codex') return 'codex'
+  // Auto: based on complexity
+  return planObject.complexity === 'Low' ? 'agent' : 'codex'
+}
+
+// Group tasks by executor
+function groupTasksByExecutor(tasks) {
+  const groups = { gemini: [], codex: [], agent: [] }
+  tasks.forEach(task => {
+    const executor = getTaskExecutor(task)
+    groups[executor].push(task)
+  })
+  return groups
+}
+```
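
For illustration, dispatching the grouped tasks might look like the following; a sketch only, and `planObject.tasks` is an assumed field (the actual per-group dispatch is defined by Options A-C below):

```javascript
// Sketch: group once, then hand each non-empty group to its executor path.
const groups = groupTasksByExecutor(planObject.tasks)
// e.g., { gemini: [], codex: [T1, T3], agent: [T2] }
for (const [executor, tasks] of Object.entries(groups)) {
  if (tasks.length > 0) console.log(`${executor}: ${tasks.map(t => t.id).join(', ')}`)
}
```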

 **Execution Flow**: Parallel batches concurrently → Sequential batches in order
 ```javascript
 const parallel = executionCalls.filter(c => c.executionType === "parallel")
@@ -283,8 +310,9 @@ for (const call of sequential) {
 **Option A: Agent Execution**

 When to use:
-- `executionMethod = "Agent"`
-- `executionMethod = "Auto" AND complexity = "Low"`
+- `getTaskExecutor(task) === "agent"`
+- or `executionMethod = "Agent"` (global fallback)
+- or `executionMethod = "Auto" AND complexity = "Low"` (global fallback)

 **Task Formatting Principle**: Each task is a self-contained checklist. The agent only needs to know what THIS task requires, not its position or relation to other tasks.

@@ -333,6 +361,7 @@ Complete each task according to its "Done when" checklist.
 Task(
   subagent_type="code-developer",
+  run_in_background=false,
   description=batch.taskSummary,
   prompt=formatBatchPrompt({
     tasks: batch.tasks,
@@ -399,8 +428,9 @@ function extractRelatedFiles(tasks) {
 **Option B: CLI Execution (Codex)**

 When to use:
-- `executionMethod = "Codex"`
-- `executionMethod = "Auto" AND complexity = "Medium" or "High"`
+- `getTaskExecutor(task) === "codex"`
+- or `executionMethod = "Codex"` (global fallback)
+- or `executionMethod = "Auto" AND complexity = "Medium/High"` (global fallback)

 **Task Formatting Principle**: Same as Agent - each task is a self-contained checklist. No task numbering or position awareness.

@@ -473,10 +503,10 @@ Detailed plan: ${executionContext.session.artifacts.plan}`)
 return prompt
 }

-codex --full-auto exec "${buildCLIPrompt(batch)}" --skip-git-repo-check -s danger-full-access
+ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write
 ```

-**Execution with tracking**:
+**Execution with fixed IDs** (predictable ID pattern):
 ```javascript
 // Launch CLI in foreground (NOT background)
 // Timeout based on complexity: Low=40min, Medium=60min, High=100min
@@ -486,15 +516,57 @@ const timeoutByComplexity = {
   "High": 6000000 // 100 minutes
 }

+// Generate fixed execution ID: ${sessionId}-${groupId}
+// This enables predictable ID lookup without relying on resume context chains
+const sessionId = executionContext?.session?.id || 'standalone'
+const fixedExecutionId = `${sessionId}-${batch.groupId}` // e.g., "implement-auth-2025-12-13-P1"
+
+// Check if resuming from previous failed execution
+const previousCliId = batch.resumeFromCliId || null
+
+// Build command with fixed ID (and optional resume for continuation)
+const cli_command = previousCliId
+  ? `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId} --resume ${previousCliId}`
+  : `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId}`
+
 bash_result = Bash(
   command=cli_command,
   timeout=timeoutByComplexity[planObject.complexity] || 3600000
 )

+// Execution ID is now predictable: ${fixedExecutionId}
+// Can also extract from output: "ID: implement-auth-2025-12-13-P1"
+const cliExecutionId = fixedExecutionId
+
 // Update TodoWrite when execution completes
 ```

-**Result Collection**: After completion, analyze output and collect result following `executionResult` structure
+**Resume on Failure** (with fixed ID):
+```javascript
+// If execution failed or timed out, offer resume option
+if (bash_result.status === 'failed' || bash_result.status === 'timeout') {
+  console.log(`
+⚠️ Execution incomplete. Resume available:
+Fixed ID: ${fixedExecutionId}
+Lookup: ccw cli detail ${fixedExecutionId}
+Resume: ccw cli -p "Continue tasks" --resume ${fixedExecutionId} --tool codex --mode write --id ${fixedExecutionId}-retry
+`)
+
+  // Store for potential retry in same session
+  batch.resumeFromCliId = fixedExecutionId
+}
+```
+
+**Result Collection**: After completion, analyze output and collect result following `executionResult` structure (include `cliExecutionId` for resume capability)
+
+**Option C: CLI Execution (Gemini)**
+
+When to use: `getTaskExecutor(task) === "gemini"` (analysis-type tasks)
+
+```bash
+# Use the same formatBatchPrompt as Option B; switch the tool and mode
+ccw cli -p "${formatBatchPrompt(batch)}" --tool gemini --mode analysis --id ${sessionId}-${batch.groupId}
+```

 ### Step 4: Progress Tracking

@@ -541,15 +613,30 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-q
 # - Report findings directly

 # Method 2: Gemini Review (recommended)
-gemini -p "[Shared Prompt Template with artifacts]"
+ccw cli -p "[Shared Prompt Template with artifacts]" --tool gemini --mode analysis
 # CONTEXT includes: @**/* @${plan.json} [@${exploration.json}]

 # Method 3: Qwen Review (alternative)
-qwen -p "[Shared Prompt Template with artifacts]"
+ccw cli -p "[Shared Prompt Template with artifacts]" --tool qwen --mode analysis
 # Same prompt as Gemini, different execution engine

 # Method 4: Codex Review (autonomous)
-codex --full-auto exec "[Verify plan acceptance criteria at ${plan.json}]" --skip-git-repo-check -s danger-full-access
+ccw cli -p "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode write
 ```

+**Multi-Round Review with Fixed IDs**:
+```javascript
+// Generate fixed review ID
+const reviewId = `${sessionId}-review`
+
+// First review pass with fixed ID
+const reviewResult = Bash(`ccw cli -p "[Review prompt]" --tool gemini --mode analysis --id ${reviewId}`)
+
+// If issues found, continue review dialog with fixed ID chain
+if (hasUnresolvedIssues(reviewResult)) {
+  // Resume with follow-up questions
+  Bash(`ccw cli -p "Clarify the security concerns you mentioned" --resume ${reviewId} --tool gemini --mode analysis --id ${reviewId}-followup`)
+}
+```

 **Implementation Note**: Replace `[Shared Prompt Template with artifacts]` placeholder with actual template content, substituting:

@@ -623,8 +710,10 @@ console.log(`✓ Development index: [${category}] ${entry.title}`)
 | Empty file | File exists but no content | Error: "File is empty: {path}. Provide task description." |
 | Invalid Enhanced Task JSON | JSON missing required fields | Warning: "Missing required fields. Treating as plain text." |
 | Malformed JSON | JSON parsing fails | Treat as plain text (expected for non-JSON files) |
-| Execution failure | Agent/Codex crashes | Display error, save partial progress, suggest retry |
+| Execution failure | Agent/Codex crashes | Display error, use fixed ID `${sessionId}-${groupId}` for resume: `ccw cli -p "Continue" --resume <fixed-id> --id <fixed-id>-retry` |
+| Execution timeout | CLI exceeded timeout | Use fixed ID for resume with extended timeout |
 | Codex unavailable | Codex not installed | Show installation instructions, offer Agent execution |
+| Fixed ID not found | Custom ID lookup failed | Check `ccw cli history`, verify date directories |

 ## Data Structures

@@ -646,10 +735,15 @@ Passed from lite-plan via global variable:
   explorationAngles: string[],       // List of exploration angles
   explorationManifest: {...} | null, // Exploration manifest
   clarificationContext: {...} | null,
-  executionMethod: "Agent" | "Codex" | "Auto",
+  executionMethod: "Agent" | "Codex" | "Auto", // global default
   codeReviewTool: "Skip" | "Gemini Review" | "Agent Review" | string,
   originalUserInput: string,

+  // Task-level executor assignments (take precedence over executionMethod)
+  executorAssignments: {
+    [taskId]: { executor: "gemini" | "codex" | "agent", reason: string }
+  },
+
   // Session artifacts location (saved by lite-plan)
   session: {
     id: string, // Session identifier: {taskSlug}-{shortTimestamp}
@@ -679,8 +773,20 @@ Collected after each execution call completes:
   tasksSummary: string,      // Brief description of tasks handled
   completionSummary: string, // What was completed
   keyOutputs: string,        // Files created/modified, key changes
-  notes: string              // Important context for next execution
+  notes: string,             // Important context for next execution
+  fixedCliId: string | null  // Fixed CLI execution ID (e.g., "implement-auth-2025-12-13-P1")
 }
 ```

 Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.

+**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.
+
+**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
+```bash
+# Lookup previous execution
+ccw cli detail ${fixedCliId}
+
+# Resume with new fixed ID for retry
+ccw cli -p "Continue from where we left off" --resume ${fixedCliId} --tool codex --mode write --id ${fixedCliId}-retry
+```
---

@@ -164,6 +164,7 @@ Launching ${selectedAngles.length} parallel diagnoses...
 const diagnosisTasks = selectedAngles.map((angle, index) =>
   Task(
     subagent_type="cli-explore-agent",
+    run_in_background=false,
     description=`Diagnose: ${angle}`,
     prompt=`
 ## Task Objective
@@ -400,6 +401,7 @@ Write(`${sessionFolder}/fix-plan.json`, JSON.stringify(fixPlan, null, 2))
 ```javascript
 Task(
   subagent_type="cli-lite-planning-agent",
+  run_in_background=false,
   description="Generate detailed fix plan",
   prompt=`
 Generate fix plan and write fix-plan.json.

---

@@ -15,7 +15,7 @@ Intelligent lightweight planning command with dynamic workflow adaptation based
 - Intelligent task analysis with automatic exploration detection
 - Dynamic code exploration (cli-explore-agent) when codebase understanding needed
 - Interactive clarification after exploration to gather missing information
-- Adaptive planning strategy (direct Claude vs cli-lite-planning-agent) based on complexity
+- Adaptive planning: Low complexity → Direct Claude; Medium/High → cli-lite-planning-agent
 - Two-step confirmation: plan display → multi-dimensional input collection
 - Execution dispatch with complete context handoff to lite-execute

@@ -38,7 +38,7 @@ Phase 1: Task Analysis & Exploration
 ├─ Parse input (description or .md file)
 ├─ Intelligent complexity assessment (Low/Medium/High)
 ├─ Exploration decision (auto-detect or --explore flag)
-├─ ⚠️ Context protection: If file reading ≥50k chars → force cli-explore-agent
+├─ Context protection: If file reading ≥50k chars → force cli-explore-agent
 └─ Decision:
    ├─ needsExploration=true → Launch parallel cli-explore-agents (1-4 based on complexity)
    └─ needsExploration=false → Skip to Phase 2/3
@@ -140,11 +140,17 @@ function selectAngles(taskDescription, count) {

 const selectedAngles = selectAngles(task_description, complexity === 'High' ? 4 : (complexity === 'Medium' ? 3 : 1))

+// Planning strategy determination
+const planningStrategy = complexity === 'Low'
+  ? 'Direct Claude Planning'
+  : 'cli-lite-planning-agent'
+
 console.log(`
 ## Exploration Plan

 Task Complexity: ${complexity}
 Selected Angles: ${selectedAngles.join(', ')}
+Planning Strategy: ${planningStrategy}

 Launching ${selectedAngles.length} parallel explorations...
 `)
@@ -152,11 +158,16 @@ Launching ${selectedAngles.length} parallel explorations...

 **Launch Parallel Explorations** - Orchestrator assigns angle to each agent:

+**⚠️ CRITICAL - NO BACKGROUND EXECUTION**:
+- **MUST NOT use `run_in_background: true`** - exploration results are REQUIRED before planning
+
 ```javascript
 // Launch agents with pre-assigned angles
 const explorationTasks = selectedAngles.map((angle, index) =>
   Task(
     subagent_type="cli-explore-agent",
+    run_in_background=false, // ⚠️ MANDATORY: Must wait for results
     description=`Explore: ${angle}`,
     prompt=`
 ## Task Objective
@@ -304,14 +315,11 @@ explorations.forEach(exp => {
   }
 })

-// Deduplicate exact same questions only
-const seen = new Set()
-const dedupedClarifications = allClarifications.filter(c => {
-  const key = c.question.toLowerCase()
-  if (seen.has(key)) return false
-  seen.add(key)
-  return true
-})
+// Intelligent deduplication: analyze allClarifications by intent
+// - Identify questions with similar intent across different angles
+// - Merge similar questions: combine options, consolidate context
+// - Produce dedupedClarifications with unique intents only
+const dedupedClarifications = intelligentMerge(allClarifications)

 // Multi-round clarification: batch questions (max 4 per round)
 if (dedupedClarifications.length > 0) {
@@ -351,12 +359,34 @@ if (dedupedClarifications.length > 0) {

 **IMPORTANT**: Phase 3 is **planning only** - NO code execution. All execution happens in Phase 5 via lite-execute.

+**Executor Assignment** (assigned intelligently by Claude, after the plan is generated):
+
+```javascript
+// Assignment rules (highest priority first):
+// 1. Explicit user request: "analyze ... with gemini" → gemini, "implement ... with codex" → codex
+// 2. Default → agent
+
+const executorAssignments = {} // { taskId: { executor: 'gemini'|'codex'|'agent', reason: string } }
+plan.tasks.forEach(task => {
+  // Claude semantically analyzes each task against the rules above and assigns an executor
+  executorAssignments[task.id] = { executor: '...', reason: '...' }
+})
+```
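
A resulting entry might look like this; the values are purely illustrative:

```javascript
// Illustrative output of the assignment pass above (hypothetical task ID and reason).
executorAssignments["T2"] = {
  executor: "gemini",
  reason: "User asked for analysis of this module; no code changes required"
}
```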

 **Low Complexity** - Direct planning by Claude:
 ```javascript
 // Step 1: Read schema
 const schema = Bash(`cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`)

-// Step 2: Generate plan following schema (Claude directly, no agent)
+// Step 2: ⚠️ MANDATORY - Read and review ALL exploration files
+const manifest = JSON.parse(Read(`${sessionFolder}/explorations-manifest.json`))
+manifest.explorations.forEach(exp => {
+  const explorationData = Read(exp.path)
+  console.log(`\n### Exploration: ${exp.angle}\n${explorationData}`)
+})
+
+// Step 3: Generate plan following schema (Claude directly, no agent)
+// ⚠️ Plan MUST incorporate insights from exploration files read in Step 2
 const plan = {
   summary: "...",
   approach: "...",
@@ -367,10 +397,10 @@ const plan = {
   _metadata: { timestamp: getUtc8ISOString(), source: "direct-planning", planning_mode: "direct" }
 }

-// Step 3: Write plan to session folder
+// Step 4: Write plan to session folder
 Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))

-// Step 4: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
+// Step 5: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
 ```

 **Medium/High Complexity** - Invoke cli-lite-planning-agent:

@@ -378,6 +408,7 @@ Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))
 ```javascript
 Task(
   subagent_type="cli-lite-planning-agent",
+  run_in_background=false,
   description="Generate detailed implementation plan",
   prompt=`
 Generate implementation plan and write plan.json.
@@ -407,23 +438,9 @@ ${JSON.stringify(clarificationContext) || "None"}
 ${complexity}

 ## Requirements
-Generate plan.json with:
-- summary: 2-3 sentence overview
-- approach: High-level implementation strategy (incorporating insights from all exploration angles)
-- tasks: 2-7 structured tasks (**IMPORTANT: group by feature/module, NOT by file**)
-  - **Task Granularity Principle**: Each task = one complete feature unit or module
-  - title: action verb + target module/feature (e.g., "Implement auth token refresh")
-  - scope: module path (src/auth/) or feature name, prefer module-level over single file
-  - action, description
-  - modification_points: ALL files to modify for this feature (group related changes)
-  - implementation (3-7 steps covering all modification_points)
-  - reference (pattern, files, examples)
-  - acceptance (2-4 criteria for the entire feature)
-  - depends_on: task IDs this task depends on (use sparingly, only for true dependencies)
-  - estimated_time, recommended_execution, complexity
-  - _metadata:
-    - timestamp, source, planning_mode
-    - exploration_angles: ${JSON.stringify(manifest.explorations.map(e => e.angle))}
+Generate plan.json following the schema obtained above. Key constraints:
+- tasks: 2-7 structured tasks (**group by feature/module, NOT by file**)
+- _metadata.exploration_angles: ${JSON.stringify(manifest.explorations.map(e => e.angle))}

 ## Task Grouping Rules
 1. **Group by feature**: All changes for one feature = one task (even if 3-5 files)
@@ -435,10 +452,10 @@ Generate plan.json with:
 7. **Prefer parallel**: Most tasks should be independent (no depends_on)

 ## Execution
-1. Read ALL exploration files for comprehensive context
+1. Read schema file (cat command above)
 2. Execute CLI planning using Gemini (Qwen fallback)
-3. Synthesize findings from multiple exploration angles
-4. Parse output and structure plan
+3. Read ALL exploration files for comprehensive context
+4. Synthesize findings and generate plan following schema
 5. Write JSON: Write('${sessionFolder}/plan.json', jsonContent)
 6. Return brief completion summary
 `
@@ -535,9 +552,13 @@ executionContext = {
   explorationAngles: manifest.explorations.map(e => e.angle),
   explorationManifest: manifest,
   clarificationContext: clarificationContext || null,
-  executionMethod: userSelection.execution_method,
+  executionMethod: userSelection.execution_method, // global default; can be overridden by executorAssignments
   codeReviewTool: userSelection.code_review_tool,
   originalUserInput: task_description,

+  // Task-level executor assignments (take precedence over the global executionMethod)
+  executorAssignments: executorAssignments, // { taskId: { executor, reason } }
+
   session: {
     id: sessionId,
     folder: sessionFolder,
---

@@ -61,7 +61,7 @@ Phase 2: Context Gathering
 ├─ Tasks attached: Analyze structure → Identify integration → Generate package
 └─ Output: contextPath + conflict_risk

-Phase 3: Conflict Resolution (conditional)
+Phase 3: Conflict Resolution
 └─ Decision (conflict_risk check):
    ├─ conflict_risk ≥ medium → Execute /workflow:tools:conflict-resolution
    │  ├─ Tasks attached: Detect conflicts → Present to user → Apply strategies
@@ -168,7 +168,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st

 ---

-### Phase 3: Conflict Resolution (Optional - auto-triggered by conflict risk)
+### Phase 3: Conflict Resolution

 **Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"

@@ -185,10 +185,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]

 **Parse Output**:
 - Extract: Execution status (success/skipped/failed)
-- Verify: CONFLICT_RESOLUTION.md file path (if executed)
+- Verify: conflict-resolution.json file path (if executed)

 **Validation**:
-- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
+- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)

 **Skip Behavior**:
 - If conflict_risk is "none" or "low", skip directly to Phase 3.5
@@ -497,7 +497,7 @@ Return summary to user
 - Parse context path from Phase 2 output, store in memory
 - **Extract conflict_risk from context-package.json**: Determine Phase 3 execution
 - **If conflict_risk ≥ medium**: Launch Phase 3 conflict-resolution with sessionId and contextPath
-- Wait for Phase 3 to finish executing (if executed), verify CONFLICT_RESOLUTION.md created
+- Wait for Phase 3 to finish executing (if executed), verify conflict-resolution.json created
 - **If conflict_risk is none/low**: Skip Phase 3, proceed directly to Phase 4
 - **Build Phase 4 command**: `/workflow:tools:task-generate-agent --session [sessionId]`
 - Pass session ID to Phase 4 command

---

@@ -391,6 +391,7 @@ done
 ```javascript
 Task(
   subagent_type="cli-explore-agent",
+  run_in_background=false,
   description=`Execute ${dimension} review analysis via Deep Scan`,
   prompt=`
 ## Task Objective
@@ -476,6 +477,7 @@ Task(
 ```javascript
 Task(
   subagent_type="cli-explore-agent",
+  run_in_background=false,
   description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
   prompt=`
 ## Task Objective

---

@@ -401,6 +401,7 @@ git log --since="${sessionCreatedAt}" --name-only --pretty=format: | sort -u
 ```javascript
 Task(
   subagent_type="cli-explore-agent",
+  run_in_background=false,
   description=`Execute ${dimension} review analysis via Deep Scan`,
   prompt=`
 ## Task Objective
@@ -487,6 +488,7 @@ Task(
 ```javascript
 Task(
   subagent_type="cli-explore-agent",
+  run_in_background=false,
   description=`Deep-dive analysis for critical finding: ${findingTitle} via Dependency Map + Deep Scan`,
   prompt=`
 ## Task Objective
@@ -112,11 +112,15 @@ After bash validation, the model takes control to:
|
|||||||
|
|
||||||
1. **Load Context**: Read completed task summaries and changed files
|
1. **Load Context**: Read completed task summaries and changed files
|
||||||
```bash
|
```bash
|
||||||
# Load implementation summaries
|
# Load implementation summaries (iterate through .summaries/ directory)
|
||||||
cat .workflow/active/${sessionId}/.summaries/IMPL-*.md
|
for summary in .workflow/active/${sessionId}/.summaries/*.md; do
|
||||||
|
cat "$summary"
|
||||||
|
done
|
||||||
|
|
||||||
# Load test results (if available)
|
# Load test results (if available)
|
||||||
cat .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null
|
for test_summary in .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md 2>/dev/null; do
|
||||||
|
cat "$test_summary"
|
||||||
|
done
|
||||||
|
|
||||||
# Get changed files
|
# Get changed files
|
||||||
git log --since="$(cat .workflow/active/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
|
git log --since="$(cat .workflow/active/${sessionId}/workflow-session.json | jq -r .created_at)" --name-only --pretty=format: | sort -u
|
||||||
@@ -132,51 +136,53 @@ After bash validation, the model takes control to:
|
|||||||
```
|
```
|
||||||
- Use Gemini for security analysis:
|
- Use Gemini for security analysis:
|
||||||
```bash
|
```bash
|
||||||
cd .workflow/active/${sessionId} && gemini -p "
|
ccw cli -p "
|
||||||
PURPOSE: Security audit of completed implementation
|
PURPOSE: Security audit of completed implementation
|
||||||
TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
|
TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
|
||||||
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
||||||
EXPECTED: Security findings report with severity levels
|
EXPECTED: Security findings report with severity levels
|
||||||
RULES: Focus on OWASP Top 10, authentication, authorization, data validation, injection risks
|
RULES: Focus on OWASP Top 10, authentication, authorization, data validation, injection risks
|
||||||
" --approval-mode yolo
|
" --tool gemini --mode write --cd .workflow/active/${sessionId}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Architecture Review** (`--type=architecture`):
|
**Architecture Review** (`--type=architecture`):
|
||||||
- Use Qwen for architecture analysis:
|
- Use Qwen for architecture analysis:
|
||||||
```bash
|
```bash
|
||||||
cd .workflow/active/${sessionId} && qwen -p "
|
ccw cli -p "
|
||||||
PURPOSE: Architecture compliance review
|
PURPOSE: Architecture compliance review
|
||||||
TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
|
TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
|
||||||
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
||||||
EXPECTED: Architecture assessment with recommendations
|
EXPECTED: Architecture assessment with recommendations
|
||||||
RULES: Check for patterns, separation of concerns, modularity, scalability
|
RULES: Check for patterns, separation of concerns, modularity, scalability
|
||||||
" --approval-mode yolo
|
" --tool qwen --mode write --cd .workflow/active/${sessionId}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Quality Review** (`--type=quality`):
|
**Quality Review** (`--type=quality`):
|
||||||
- Use Gemini for code quality:
|
- Use Gemini for code quality:
|
||||||
```bash
|
```bash
|
||||||
cd .workflow/active/${sessionId} && gemini -p "
|
ccw cli -p "
|
||||||
PURPOSE: Code quality and best practices review
|
PURPOSE: Code quality and best practices review
|
||||||
TASK: Assess code readability, maintainability, adherence to best practices
|
TASK: Assess code readability, maintainability, adherence to best practices
|
||||||
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
|
||||||
EXPECTED: Quality assessment with improvement suggestions
|
EXPECTED: Quality assessment with improvement suggestions
|
||||||
RULES: Check for code smells, duplication, complexity, naming conventions
|
RULES: Check for code smells, duplication, complexity, naming conventions
|
||||||
" --approval-mode yolo
|
" --tool gemini --mode write --cd .workflow/active/${sessionId}
|
||||||
```
|
```
|
||||||
|
|
||||||
**Action Items Review** (`--type=action-items`):
- Verify all requirements and acceptance criteria met:
```bash
# Load task requirements and acceptance criteria
-find .workflow/active/${sessionId}/.task -name "IMPL-*.json" -exec jq -r '
-  "Task: " + .id + "\n" +
-  "Requirements: " + (.context.requirements | join(", ")) + "\n" +
-  "Acceptance: " + (.context.acceptance | join(", "))
-' {} \;
+for task_file in .workflow/active/${sessionId}/.task/*.json; do
+  cat "$task_file" | jq -r '
+    "Task: " + .id + "\n" +
+    "Requirements: " + (.context.requirements | join(", ")) + "\n" +
+    "Acceptance: " + (.context.acceptance | join(", "))
+  '
+done

# Check implementation summaries against requirements
-cd .workflow/active/${sessionId} && gemini -p "
+ccw cli -p "
PURPOSE: Verify all requirements and acceptance criteria are met
TASK: Cross-check implementation summaries against original requirements
CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md

@@ -190,7 +196,7 @@ After bash validation, the model takes control to:
- Verify all acceptance criteria are met
- Flag any incomplete or missing action items
- Assess deployment readiness
-" --approval-mode yolo
+" --tool gemini --mode write --cd .workflow/active/${sessionId}
```

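Before handing the cross-check to the CLI, a cheap local pre-check can flag tasks that never produced a summary — a sketch assuming summary filenames reuse the task ID (e.g. `IMPL-001.md`), which may differ per project:

```bash
# List tasks with no matching implementation summary.
for task_file in .workflow/active/${sessionId}/.task/*.json; do
  id=$(jq -r '.id' "$task_file")
  [ -f ".workflow/active/${sessionId}/.summaries/${id}.md" ] || echo "MISSING SUMMARY: $id"
done
```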
@@ -8,493 +8,146 @@ examples:

-# Complete Workflow Session (/workflow:session:complete)
-
-## Overview
-Mark the currently active workflow session as complete, analyze it for lessons learned, move it to the archive directory, and remove the active flag marker.
-
-## Usage
-```bash
-/workflow:session:complete            # Complete current active session
-/workflow:session:complete --detailed # Show detailed completion summary
-```
-
-## Implementation Flow
-
-### Phase 1: Pre-Archival Preparation (Transactional Setup)
-
-**Purpose**: Find active session, create archiving marker to prevent concurrent operations. Session remains in active location for agent processing.
-
-#### Step 1.1: Find Active Session and Get Name
-```bash
-# Find active session directory
-bash(find .workflow/active/ -name "WFS-*" -type d | head -1)
-
-# Extract session name from directory path
-bash(basename .workflow/active/WFS-session-name)
-```
-**Output**: Session name `WFS-session-name`
-
-#### Step 1.2: Check for Existing Archiving Marker (Resume Detection)
-```bash
-# Check if session is already being archived
-bash(test -f .workflow/active/WFS-session-name/.archiving && echo "RESUMING" || echo "NEW")
-```
-
-**If RESUMING**:
-- Previous archival attempt was interrupted
-- Skip to Phase 2 to resume agent analysis
-
-**If NEW**:
-- Continue to Step 1.3
-
-#### Step 1.3: Create Archiving Marker
-```bash
-# Mark session as "archiving in progress"
-bash(touch .workflow/active/WFS-session-name/.archiving)
-```
-**Purpose**:
-- Prevents concurrent operations on this session
-- Enables recovery if archival fails
-- Session remains in `.workflow/active/` for agent analysis
-
-**Result**: Session still at `.workflow/active/WFS-session-name/` with `.archiving` marker
-
-### Phase 2: Agent Analysis (In-Place Processing)
-
-**Purpose**: Agent analyzes session WHILE STILL IN ACTIVE LOCATION. Generates metadata but does NOT move files or update manifest.
-
-#### Agent Invocation
-
-Invoke `universal-executor` agent to analyze session and prepare archive metadata.
-
-**Agent Task**:
-```
-Task(
-  subagent_type="universal-executor",
-  description="Analyze session for archival",
-  prompt=`
-    Analyze workflow session for archival preparation. Session is STILL in active location.
-
-    ## Context
-    - Session: .workflow/active/WFS-session-name/
-    - Status: Marked as archiving (.archiving marker present)
-    - Location: Active sessions directory (NOT archived yet)
-
-    ## Tasks
-
-    1. **Extract session data** from workflow-session.json
-       - session_id, description/topic, started_at, completed_at, status
-       - If status != "completed", update it with timestamp
-
-    2. **Count files**: tasks (.task/*.json) and summaries (.summaries/*.md)
-
-    3. **Extract review data** (if .review/ exists):
-       - Count dimension results: .review/dimensions/*.json
-       - Count deep-dive results: .review/iterations/*.json
-       - Extract findings summary from dimension JSONs (total, critical, high, medium, low)
-       - Check fix results if .review/fixes/ exists (fixed_count, failed_count)
-       - Build review_metrics: {dimensions_analyzed, total_findings, severity_distribution, fix_success_rate}
-
-    4. **Generate lessons**: Use gemini with ~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt
-       - Return: {successes, challenges, watch_patterns}
-       - If review data exists, include review-specific lessons (common issue patterns, effective fixes)
-
-    5. **Build archive entry**:
-       - Calculate: duration_hours, success_rate, tags (3-5 keywords)
-       - Construct complete JSON with session_id, description, archived_at, metrics, tags, lessons
-       - Include archive_path: ".workflow/archives/WFS-session-name" (future location)
-       - If review data exists, include review_metrics in metrics object
-
-    6. **Extract feature metadata** (for Phase 4):
-       - Parse IMPL_PLAN.md for title (first # heading)
-       - Extract description (first paragraph, max 200 chars)
-       - Generate feature tags (3-5 keywords from content)
-
-    7. **Return result**: Complete metadata package for atomic commit
-       {
-         "status": "success",
-         "session_id": "WFS-session-name",
-         "archive_entry": {
-           "session_id": "...",
-           "description": "...",
-           "archived_at": "...",
-           "archive_path": ".workflow/archives/WFS-session-name",
-           "metrics": {
-             "duration_hours": 2.5,
-             "tasks_completed": 5,
-             "summaries_generated": 3,
-             "review_metrics": {          // Optional, only if .review/ exists
-               "dimensions_analyzed": 4,
-               "total_findings": 15,
-               "severity_distribution": {"critical": 1, "high": 3, "medium": 8, "low": 3},
-               "fix_success_rate": 0.87   // Optional, only if .review/fixes/ exists
-             }
-           },
-           "tags": [...],
-           "lessons": {...}
-         },
-         "feature_metadata": {
-           "title": "...",
-           "description": "...",
-           "tags": [...]
-         }
-       }
-
-    ## Important Constraints
-    - DO NOT move or delete any files
-    - DO NOT update manifest.json yet
-    - Session remains in .workflow/active/ during analysis
-    - Return complete metadata package for orchestrator to commit atomically
-
-    ## Error Handling
-    - On failure: return {"status": "error", "task": "...", "message": "..."}
-    - Do NOT modify any files on error
-  `
-)
-```
-
-**Expected Output**:
-- Agent returns complete metadata package
-- Session remains in `.workflow/active/` with `.archiving` marker
-- No files moved or manifests updated yet
-
-### Phase 3: Atomic Commit (Transactional File Operations)
-
-**Purpose**: Atomically commit all changes. Only execute if Phase 2 succeeds.
-
-#### Step 3.1: Create Archive Directory
-```bash
-bash(mkdir -p .workflow/archives/)
-```
-
-#### Step 3.2: Move Session to Archive
-```bash
-bash(mv .workflow/active/WFS-session-name .workflow/archives/WFS-session-name)
-```
-**Result**: Session now at `.workflow/archives/WFS-session-name/`
-
-#### Step 3.3: Update Manifest
-```bash
-# Read current manifest (or create empty array if not exists)
-bash(test -f .workflow/archives/manifest.json && cat .workflow/archives/manifest.json || echo "[]")
-```
-
-**JSON Update Logic**:
-```javascript
-// Read agent result from Phase 2
-const agentResult = JSON.parse(agentOutput);
-const archiveEntry = agentResult.archive_entry;
-
-// Read existing manifest
-let manifest = [];
-try {
-  const manifestContent = Read('.workflow/archives/manifest.json');
-  manifest = JSON.parse(manifestContent);
-} catch {
-  manifest = []; // Initialize if not exists
-}
-
-// Append new entry
-manifest.push(archiveEntry);
-
-// Write back
-Write('.workflow/archives/manifest.json', JSON.stringify(manifest, null, 2));
-```
-
-#### Step 3.4: Remove Archiving Marker
-```bash
-bash(rm .workflow/archives/WFS-session-name/.archiving)
-```
-**Result**: Clean archived session without temporary markers
-
-**Output Confirmation**:
-```
-✓ Session "${sessionId}" archived successfully
-  Location: .workflow/archives/WFS-session-name/
-  Lessons: ${archiveEntry.lessons.successes.length} successes, ${archiveEntry.lessons.challenges.length} challenges
-  Manifest: Updated with ${manifest.length} total sessions
-  ${reviewMetrics ? `Review: ${reviewMetrics.total_findings} findings across ${reviewMetrics.dimensions_analyzed} dimensions, ${Math.round(reviewMetrics.fix_success_rate * 100)}% fixed` : ''}
-```
-
-### Phase 4: Update Project Feature Registry
-
-**Purpose**: Record completed session as a project feature in `.workflow/project.json`.
-
-**Execution**: Uses feature metadata from Phase 2 agent result to update project registry.
-
-#### Step 4.1: Check Project State Exists
-```bash
-bash(test -f .workflow/project.json && echo "EXISTS" || echo "SKIP")
-```
-
-**If SKIP**: Output warning and skip Phase 4
-```
-WARNING: No project.json found. Run /workflow:session:start to initialize.
-```
-
-#### Step 4.2: Extract Feature Information from Agent Result
-
-**Data Processing** (Uses Phase 2 agent output):
-```javascript
-// Extract feature metadata from agent result
-const agentResult = JSON.parse(agentOutput);
-const featureMeta = agentResult.feature_metadata;
-
-// Data already prepared by agent:
-const title = featureMeta.title;
-const description = featureMeta.description;
-const tags = featureMeta.tags;
-
-// Create feature ID (lowercase slug)
-const featureId = title.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 50);
-```
-
-#### Step 4.3: Update project.json
-
-```bash
-# Read current project state
-bash(cat .workflow/project.json)
-```
-
-**JSON Update Logic**:
-```javascript
-// Read existing project.json (created by /workflow:init)
-// Note: overview field is managed by /workflow:init, not modified here
-const projectMeta = JSON.parse(Read('.workflow/project.json'));
-const currentTimestamp = new Date().toISOString();
-const currentDate = currentTimestamp.split('T')[0]; // YYYY-MM-DD
-
-// Extract tags from IMPL_PLAN.md (simple keyword extraction)
-const tags = extractTags(planContent); // e.g., ["auth", "security"]
-
-// Build feature object with complete metadata
-const newFeature = {
-  id: featureId,
-  title: title,
-  description: description,
-  status: "completed",
-  tags: tags,
-  timeline: {
-    created_at: currentTimestamp,
-    implemented_at: currentDate,
-    updated_at: currentTimestamp
-  },
-  traceability: {
-    session_id: sessionId,
-    archive_path: archivePath, // e.g., ".workflow/archives/WFS-auth-system"
-    commit_hash: getLatestCommitHash() || "" // Optional: git rev-parse HEAD
-  },
-  docs: [],      // Placeholder for future doc links
-  relations: []  // Placeholder for feature dependencies
-};
-
-// Add new feature to array
-projectMeta.features.push(newFeature);
-
-// Update statistics
-projectMeta.statistics.total_features = projectMeta.features.length;
-projectMeta.statistics.total_sessions += 1;
-projectMeta.statistics.last_updated = currentTimestamp;
-
-// Write back
-Write('.workflow/project.json', JSON.stringify(projectMeta, null, 2));
-```
-
-**Helper Functions**:
-```javascript
-// Extract tags from IMPL_PLAN.md content
-function extractTags(planContent) {
-  const tags = [];
-
-  // Look for common keywords
-  const keywords = {
-    'auth': /authentication|login|oauth|jwt/i,
-    'security': /security|encrypt|hash|token/i,
-    'api': /api|endpoint|rest|graphql/i,
-    'ui': /component|page|interface|frontend/i,
-    'database': /database|schema|migration|sql/i,
-    'test': /test|testing|spec|coverage/i
-  };
-
-  for (const [tag, pattern] of Object.entries(keywords)) {
-    if (pattern.test(planContent)) {
-      tags.push(tag);
-    }
-  }
-
-  return tags.slice(0, 5); // Max 5 tags
-}
-
-// Get latest git commit hash (optional)
-function getLatestCommitHash() {
-  try {
-    const result = Bash({
-      command: "git rev-parse --short HEAD 2>/dev/null",
-      description: "Get latest commit hash"
-    });
-    return result.trim();
-  } catch {
-    return "";
-  }
-}
-```
-
-#### Step 4.4: Output Confirmation
-
-```
-✓ Feature "${title}" added to project registry
-  ID: ${featureId}
-  Session: ${sessionId}
-  Location: .workflow/project.json
-```
-
-**Error Handling**:
-- If project.json malformed: Output error, skip update
-- If feature_metadata missing from agent result: Skip Phase 4
-- If extraction fails: Use minimal defaults
-
-**Phase 4 Total Commands**: 1 bash read + JSON manipulation
-
-## Error Recovery
-
-### If Agent Fails (Phase 2)
-
-**Symptoms**:
-- Agent returns `{"status": "error", ...}`
-- Agent crashes or times out
-- Analysis incomplete
-
-**Recovery Steps**:
-```bash
-# Session still in .workflow/active/WFS-session-name
-# Remove archiving marker
-bash(rm .workflow/active/WFS-session-name/.archiving)
-```
-
-**User Notification**:
-```
-ERROR: Session archival failed during analysis phase
-Reason: [error message from agent]
-Session remains active in: .workflow/active/WFS-session-name
-
-Recovery:
-1. Fix any issues identified in error message
-2. Retry: /workflow:session:complete
-
-Session state: SAFE (no changes committed)
-```
-
-### If Move Fails (Phase 3)
-
-**Symptoms**:
-- `mv` command fails
-- Permission denied
-- Disk full
-
-**Recovery Steps**:
-```bash
-# Archiving marker still present
-# Session still in .workflow/active/ (move failed)
-# No manifest updated yet
-```
-
-**User Notification**:
-```
-ERROR: Session archival failed during move operation
-Reason: [mv error message]
-Session remains in: .workflow/active/WFS-session-name
-
-Recovery:
-1. Fix filesystem issues (permissions, disk space)
-2. Retry: /workflow:session:complete
-   - System will detect .archiving marker
-   - Will resume from Phase 2 (agent analysis)
-
-Session state: SAFE (analysis complete, ready to retry move)
-```
-
-### If Manifest Update Fails (Phase 3)
-
-**Symptoms**:
-- JSON parsing error
-- Write permission denied
-- Session moved but manifest not updated
-
-**Recovery Steps**:
-```bash
-# Session moved to .workflow/archives/WFS-session-name
-# Manifest NOT updated
-# Archiving marker still present in archived location
-```
-
-**User Notification**:
-```
-ERROR: Session archived but manifest update failed
-Reason: [error message]
-Session location: .workflow/archives/WFS-session-name
-
-Recovery:
-1. Fix manifest.json issues (syntax, permissions)
-2. Manual manifest update:
-   - Add archive entry from agent output
-   - Remove .archiving marker: rm .workflow/archives/WFS-session-name/.archiving
-
-Session state: PARTIALLY COMPLETE (session archived, manifest needs update)
-```
-
-## Workflow Execution Strategy
-
-### Transactional Four-Phase Approach
-
-**Phase 1: Pre-Archival Preparation** (Marker creation)
-- Find active session and extract name
-- Check for existing `.archiving` marker (resume detection)
-- Create `.archiving` marker if new
-- **No data processing** - just state tracking
-- **Total**: 2-3 bash commands (find + marker check/create)
-
-**Phase 2: Agent Analysis** (Read-only data processing)
-- Extract all session data from active location
-- Count tasks and summaries
-- Extract review data if .review/ exists (dimension results, findings, fix results)
-- Generate lessons learned analysis (including review-specific lessons if applicable)
-- Extract feature metadata from IMPL_PLAN.md
-- Build complete archive + feature metadata package (with review_metrics if applicable)
-- **No file modifications** - pure analysis
-- **Total**: 1 agent invocation
-
-**Phase 3: Atomic Commit** (Transactional file operations)
-- Create archive directory
-- Move session to archive location
-- Update manifest.json with archive entry
-- Remove `.archiving` marker
-- **All-or-nothing**: Either all succeed or session remains in safe state
-- **Total**: 4 bash commands + JSON manipulation
-
-**Phase 4: Project Registry Update** (Optional feature tracking)
-- Check project.json exists
-- Use feature metadata from Phase 2 agent result
-- Build feature object with complete traceability
-- Update project statistics
-- **Independent**: Can fail without affecting archival
-- **Total**: 1 bash read + JSON manipulation
-
-### Transactional Guarantees
-
-**State Consistency**:
-- Session NEVER in inconsistent state
-- `.archiving` marker enables safe resume
-- Agent failure leaves session in recoverable state
-- Move/manifest operations grouped in Phase 3
-
-**Failure Isolation**:
-- Phase 1 failure: No changes made
-- Phase 2 failure: Session still active, can retry
-- Phase 3 failure: Clear error state, manual recovery documented
-- Phase 4 failure: Does not affect archival success
-
-**Resume Capability**:
-- Detect interrupted archival via `.archiving` marker
-- Resume from Phase 2 (skip marker creation)
-- Idempotent operations (safe to retry)
+# Complete Workflow Session (/workflow:session:complete)
+
+## Overview
+Mark the currently active workflow session as complete, archive it, and update manifests.
+
+## Pre-defined Commands
+
+```bash
+# Phase 1: Find active session
+SESSION_PATH=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d | head -1)
+SESSION_ID=$(basename "$SESSION_PATH")
+
+# Phase 3: Move to archive
+mkdir -p .workflow/archives/
+mv .workflow/active/$SESSION_ID .workflow/archives/$SESSION_ID
+
+# Cleanup marker
+rm -f .workflow/archives/$SESSION_ID/.archiving
+```
+
+## Key Files to Read
+
+**For manifest.json generation**, read ONLY these files:
+
+| File | Extract |
+|------|---------|
+| `$SESSION_PATH/workflow-session.json` | session_id, description, started_at, status |
+| `$SESSION_PATH/IMPL_PLAN.md` | title (first # heading), description (first paragraph) |
+| `$SESSION_PATH/.tasks/*.json` | count files |
+| `$SESSION_PATH/.summaries/*.md` | count files |
+| `$SESSION_PATH/.review/dimensions/*.json` | count + findings summary (optional) |
+
+## Execution Flow
+
+### Phase 1: Find Session (2 commands)
+
+```bash
+# 1. Find and extract session
+SESSION_PATH=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d | head -1)
+SESSION_ID=$(basename "$SESSION_PATH")
+
+# 2. Check/create archiving marker
+test -f "$SESSION_PATH/.archiving" && echo "RESUMING" || touch "$SESSION_PATH/.archiving"
+```
+
+**Output**: `SESSION_ID` = e.g., `WFS-auth-feature`
+
+### Phase 2: Generate Manifest Entry (Read-only)
+
+Read the key files above, then build this structure:
+
+```json
+{
+  "session_id": "<from workflow-session.json>",
+  "description": "<from workflow-session.json>",
+  "archived_at": "<current ISO timestamp>",
+  "archive_path": ".workflow/archives/<SESSION_ID>",
+  "metrics": {
+    "duration_hours": "<(completed_at - started_at) / 3600000>",
+    "tasks_completed": "<count .tasks/*.json>",
+    "summaries_generated": "<count .summaries/*.md>",
+    "review_metrics": {
+      "dimensions_analyzed": "<count .review/dimensions/*.json>",
+      "total_findings": "<sum from dimension JSONs>"
+    }
+  },
+  "tags": ["<3-5 keywords from IMPL_PLAN.md>"],
+  "lessons": {
+    "successes": ["<key wins>"],
+    "challenges": ["<difficulties>"],
+    "watch_patterns": ["<patterns to monitor>"]
+  }
+}
+```
+
+**Lessons Generation**: Use gemini with `~/.claude/workflows/cli-templates/prompts/archive/analysis-simple.txt`
+
+### Phase 3: Atomic Commit (4 commands)
+
+```bash
+# 1. Create archive directory
+mkdir -p .workflow/archives/
+
+# 2. Move session
+mv .workflow/active/$SESSION_ID .workflow/archives/$SESSION_ID
+
+# 3. Update manifest.json (Read → Append → Write)
+# Read: .workflow/archives/manifest.json (or [])
+# Append: archive_entry from Phase 2
+# Write: updated JSON
+
+# 4. Remove marker
+rm -f .workflow/archives/$SESSION_ID/.archiving
+```
+
+**Output**:
+```
+✓ Session "$SESSION_ID" archived successfully
+Location: .workflow/archives/$SESSION_ID/
+Manifest: Updated with N total sessions
+```
+
+### Phase 4: Update project.json (Optional)
+
+**Skip if**: `.workflow/project.json` doesn't exist
+
+```bash
+# Check
+test -f .workflow/project.json || echo "SKIP"
+```
+
+**If exists**, add feature entry:
+
+```json
+{
+  "id": "<slugified title>",
+  "title": "<from IMPL_PLAN.md>",
+  "status": "completed",
+  "tags": ["<from Phase 2>"],
+  "timeline": { "implemented_at": "<date>" },
+  "traceability": { "session_id": "<SESSION_ID>", "archive_path": "<path>" }
+}
+```
+
+**Output**:
+```
+✓ Feature added to project registry
+```
+
+## Error Recovery
+
+| Phase | Symptom | Recovery |
+|-------|---------|----------|
+| 1 | No active session | `No active session found` |
+| 2 | Analysis fails | Remove marker: `rm $SESSION_PATH/.archiving`, retry |
+| 3 | Move fails | Session safe in active/, fix issue, retry |
+| 3 | Manifest fails | Session in archives/, manually add entry, remove marker |
+
+## Quick Reference
+
+```
+Phase 1: find session → create .archiving marker
+Phase 2: read key files → build manifest entry (no writes)
+Phase 3: mkdir → mv → update manifest.json → rm marker
+Phase 4: update project.json features array (optional)
+```
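Taken together, the condensed flow is a handful of shell commands plus one JSON append. A minimal end-to-end sketch of Phases 1-3, assuming `jq` is available and that the Phase 2 entry was saved to `.process/archive-entry.json` (a hypothetical location; the document only specifies Read → Append → Write):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Phase 1: locate session and set the marker
SESSION_PATH=$(find .workflow/active/ -maxdepth 1 -name "WFS-*" -type d | head -1)
[ -n "$SESSION_PATH" ] || { echo "No active session found"; exit 1; }
SESSION_ID=$(basename "$SESSION_PATH")
test -f "$SESSION_PATH/.archiving" && echo "RESUMING" || touch "$SESSION_PATH/.archiving"

# Phase 3: move, append to manifest, clean up
mkdir -p .workflow/archives/
mv "$SESSION_PATH" ".workflow/archives/$SESSION_ID"

entry_file=".workflow/archives/$SESSION_ID/.process/archive-entry.json"
manifest=.workflow/archives/manifest.json
[ -f "$manifest" ] || echo "[]" > "$manifest"
jq --slurpfile e "$entry_file" '. + $e' "$manifest" > "$manifest.tmp" && mv "$manifest.tmp" "$manifest"

rm -f ".workflow/archives/$SESSION_ID/.archiving"
echo "✓ Session \"$SESSION_ID\" archived successfully"
```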
@@ -164,10 +164,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]

**Parse Output**:
- Extract: Execution status (success/skipped/failed)
-- Verify: CONFLICT_RESOLUTION.md file path (if executed)
+- Verify: conflict-resolution.json file path (if executed)

**Validation**:
-- File `.workflow/active/[sessionId]/.process/CONFLICT_RESOLUTION.md` exists (if executed)
+- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)

**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 5

@@ -402,7 +402,7 @@ TDD Workflow Orchestrator
│   ├─ Phase 4.1: Detect conflicts with CLI
│   ├─ Phase 4.2: Present conflicts to user
│   └─ Phase 4.3: Apply resolution strategies
-│       └─ Returns: CONFLICT_RESOLUTION.md ← COLLAPSED
+│       └─ Returns: conflict-resolution.json ← COLLAPSED
│   ELSE:
│       └─ Skip to Phase 5
│
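The validation step above can be scripted directly — a sketch assuming `jq` and the `summary` fields of the conflict-resolution.json schema shown later in this document:

```bash
f=".workflow/active/${sessionId}/.process/conflict-resolution.json"
if [ -f "$f" ]; then
  # Surface the headline numbers from the resolution summary
  jq -r '"resolved=\(.summary.resolved_with_strategy) custom=\(.summary.custom_handling)"' "$f"
else
  echo "conflict resolution skipped (risk none/low)"
fi
```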
@@ -77,18 +77,32 @@ find .workflow/active/ -name "WFS-*" -type d | head -1 | sed 's/.*\///'

```bash
# Load all task JSONs
-find .workflow/active/{sessionId}/.task/ -name '*.json'
+for task_file in .workflow/active/{sessionId}/.task/*.json; do
+  cat "$task_file"
+done

# Extract task IDs
-find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.id' {} \;
+for task_file in .workflow/active/{sessionId}/.task/*.json; do
+  cat "$task_file" | jq -r '.id'
+done

-# Check dependencies
-find .workflow/active/{sessionId}/.task/ -name 'IMPL-*.json' -exec jq -r '.context.depends_on[]?' {} \;
-find .workflow/active/{sessionId}/.task/ -name 'REFACTOR-*.json' -exec jq -r '.context.depends_on[]?' {} \;
+# Check dependencies - read tasks and filter for IMPL/REFACTOR
+for task_file in .workflow/active/{sessionId}/.task/IMPL-*.json; do
+  cat "$task_file" | jq -r '.context.depends_on[]?'
+done
+
+for task_file in .workflow/active/{sessionId}/.task/REFACTOR-*.json; do
+  cat "$task_file" | jq -r '.context.depends_on[]?'
+done

# Check meta fields
-find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.tdd_phase' {} \;
-find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent' {} \;
+for task_file in .workflow/active/{sessionId}/.task/*.json; do
+  cat "$task_file" | jq -r '.meta.tdd_phase'
+done
+
+for task_file in .workflow/active/{sessionId}/.task/*.json; do
+  cat "$task_file" | jq -r '.meta.agent'
+done
```

**Validation**:
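Note that `jq` also reads file paths directly, so the same extraction works without the extra `cat` process — and even across all task files in one invocation:

```bash
# Per-file, without cat
for task_file in .workflow/active/{sessionId}/.task/*.json; do
  jq -r '.id' "$task_file"
done

# Or all task files at once
jq -r '.id' .workflow/active/{sessionId}/.task/*.json
```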
@@ -127,7 +141,7 @@ find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent
**Gemini analysis for comprehensive TDD compliance report**

```bash
-cd project-root && gemini -p "
+ccw cli -p "
PURPOSE: Generate TDD compliance report
TASK: Analyze TDD workflow execution and generate quality report
CONTEXT: @{.workflow/active/{sessionId}/.task/*.json,.workflow/active/{sessionId}/.summaries/*,.workflow/active/{sessionId}/.process/tdd-cycle-report.md}

@@ -139,7 +153,7 @@ EXPECTED:
- Red-Green-Refactor cycle validation
- Best practices adherence assessment
RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
-" > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
+" --tool gemini --mode analysis --cd project-root > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
```

**Output**: TDD_COMPLIANCE_REPORT.md
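Since the report is produced by redirecting stdout, a non-empty check before downstream use is a reasonable guard — a sketch; the CLI's exit-code behavior is assumed:

```bash
report=".workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md"
test -s "$report" || { echo "TDD compliance report missing or empty"; exit 1; }
```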
@@ -221,6 +221,7 @@ return "conservative";
```javascript
Task(
  subagent_type="cli-planning-agent",
+  run_in_background=false,
  description=`Analyze test failures (iteration ${N}) - ${strategy} strategy`,
  prompt=`
## Task Objective

@@ -271,6 +272,7 @@ Task(
```javascript
Task(
  subagent_type="test-fix-agent",
+  run_in_background=false,
  description=`Execute ${task.meta.type}: ${task.title}`,
  prompt=`
## Task Objective
@@ -108,7 +108,7 @@ Phase 4: Apply Modifications

**Agent Delegation**:
```javascript
-Task(subagent_type="cli-execution-agent", prompt=`
+Task(subagent_type="cli-execution-agent", run_in_background=false, prompt=`
## Context
- Session: {session_id}
- Risk: {conflict_risk}

@@ -124,6 +124,9 @@ Task(subagent_type="cli-execution-agent", prompt=`

## Analysis Steps

+### 0. Load Output Schema (MANDATORY)
+Execute: cat ~/.claude/workflows/cli-templates/schemas/conflict-resolution-schema.json
+
### 1. Load Context
- Read existing files from conflict_detection.existing_files
- Load plan from .workflow/active/{session_id}/.process/context-package.json

@@ -133,7 +136,7 @@ Task(subagent_type="cli-execution-agent", prompt=`
### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)

Primary (Gemini):
-cd {project_root} && gemini -p "
+ccw cli -p "
PURPOSE: Detect conflicts between plan and codebase, using exploration insights
TASK:
• **Review pre-identified conflict_indicators from exploration results**

@@ -152,7 +155,7 @@ Task(subagent_type="cli-execution-agent", prompt=`
- ModuleOverlap conflicts with overlap_analysis
- Targeted clarification questions
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
-"
+" --tool gemini --mode analysis --cd {project_root}

Fallback: Qwen (same prompt) → Claude (manual analysis)
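One way the Gemini → Qwen fallback chain could be wired in shell — a sketch that assumes `ccw cli` exits non-zero when the selected tool fails, which this document does not specify:

```bash
ccw cli -p "$PROMPT" --tool gemini --mode analysis --cd "$PROJECT_ROOT" \
  || ccw cli -p "$PROMPT" --tool qwen --mode analysis --cd "$PROJECT_ROOT" \
  || echo "Both CLI tools failed; fall back to manual Claude analysis"
```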
@@ -169,125 +172,16 @@ Task(subagent_type="cli-execution-agent", prompt=`

### 4. Return Structured Conflict Data

-⚠️ DO NOT generate CONFLICT_RESOLUTION.md file
+⚠️ Output to conflict-resolution.json (generated in Phase 4)

-Return JSON format for programmatic processing:
+**Schema Reference**: Execute \`cat ~/.claude/workflows/cli-templates/schemas/conflict-resolution-schema.json\` to get full schema
+
+Return JSON following the schema above. Key requirements:

-\`\`\`json
-{
-  "conflicts": [
-    {
-      "id": "CON-001",
-      "brief": "<one-line conflict summary, in Chinese>",
-      "severity": "Critical|High|Medium",
-      "category": "Architecture|API|Data|Dependency|ModuleOverlap",
-      "affected_files": [
-        ".workflow/active/{session}/.brainstorm/guidance-specification.md",
-        ".workflow/active/{session}/.brainstorm/system-architect/analysis.md"
-      ],
-      "description": "<detailed description of the conflict - what is incompatible>",
-      "impact": {
-        "scope": "<affected modules/components>",
-        "compatibility": "Yes|No|Partial",
-        "migration_required": true|false,
-        "estimated_effort": "<estimate in person-days>"
-      },
-      "overlap_analysis": {
-        "// NOTE": "only required when category=ModuleOverlap",
-        "new_module": {
-          "name": "<new module name>",
-          "scenarios": ["<scenario 1>", "<scenario 2>", "<scenario 3>"],
-          "responsibilities": "<responsibility description>"
-        },
-        "existing_modules": [
-          {
-            "file": "src/existing/module.ts",
-            "name": "<existing module name>",
-            "scenarios": ["<scenario A>", "<scenario B>"],
-            "overlap_scenarios": ["<overlapping scenario 1>", "<overlapping scenario 2>"],
-            "responsibilities": "<existing module responsibilities>"
-          }
-        ]
-      },
-      "strategies": [
-        {
-          "name": "<strategy name, in Chinese>",
-          "approach": "<brief description of the approach>",
-          "complexity": "Low|Medium|High",
-          "risk": "Low|Medium|High",
-          "effort": "<time estimate>",
-          "pros": ["<pro 1>", "<pro 2>"],
-          "cons": ["<con 1>", "<con 2>"],
-          "clarification_needed": [
-            "// NOTE: only needed when further user clarification is required (especially ModuleOverlap)",
-            "What are the core responsibility boundaries of the new module?",
-            "How should it collaborate with existing module X?",
-            "Which scenarios should the new module handle?"
-          ],
-          "modifications": [
-            {
-              "file": ".workflow/active/{session}/.brainstorm/guidance-specification.md",
-              "section": "## 2. System Architect Decisions",
-              "change_type": "update",
-              "old_content": "<original content snippet, used for locating>",
-              "new_content": "<modified content>",
-              "rationale": "<why this change>"
-            },
-            {
-              "file": ".workflow/active/{session}/.brainstorm/system-architect/analysis.md",
-              "section": "## Design Decisions",
-              "change_type": "update",
-              "old_content": "<original content snippet>",
-              "new_content": "<modified content>",
-              "rationale": "<rationale for the change>"
-            }
-          ]
-        },
-        {
-          "name": "<strategy 2 name>",
-          "approach": "...",
-          "complexity": "Medium",
-          "risk": "Low",
-          "effort": "1-2 days",
-          "pros": ["<pro>"],
-          "cons": ["<con>"],
-          "modifications": [...]
-        }
-      ],
-      "recommended": 0,
-      "modification_suggestions": [
-        "Suggestion 1: a concrete modification direction or caveat",
-        "Suggestion 2: edge cases that may need consideration",
-        "Suggestion 3: relevant best practices or patterns"
-      ]
-    }
-  ],
-  "summary": {
-    "total": 2,
-    "critical": 1,
-    "high": 1,
-    "medium": 0
-  }
-}
-\`\`\`
-
-⚠️ CRITICAL Requirements for modifications field:
-- old_content: Must be exact text from target file (20-100 chars for unique match)
-- new_content: Complete replacement text (maintains formatting)
-- change_type: "update" (replace), "add" (insert), "remove" (delete)
-- file: Full path relative to project root
-- section: Markdown heading for context (helps locate position)
- Minimum 2 strategies per conflict, max 4
-- All text in Chinese for user-facing fields (brief, name, pros, cons)
+- All text in Chinese for user-facing fields (brief, name, pros, cons, modification_suggestions)
-- modification_suggestions: 2-5 actionable suggestions for custom handling (Chinese)
+- modifications.old_content: 20-100 chars for unique Edit tool matching
+- modifications.new_content: preserves markdown formatting
+- modification_suggestions: 2-5 actionable suggestions for custom handling
-
-Quality Standards:
-- Each strategy must have actionable modifications
-- old_content must be precise enough for Edit tool matching
-- new_content preserves markdown formatting and structure
-- Recommended strategy (index) based on lowest complexity + risk
-- modification_suggestions must be specific, actionable, and context-aware
-- Each suggestion should address a specific aspect (compatibility, migration, testing, etc.)
`)
```
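The stated requirements are easy to spot-check mechanically once the agent's JSON is saved — a sketch against the schema excerpt above, assuming the response was written to `response.json`:

```bash
# Conflicts must carry 2-4 strategies
jq -r '.conflicts[]
  | select((.strategies | length) < 2 or (.strategies | length) > 4)
  | "BAD STRATEGY COUNT: \(.id)"' response.json

# old_content must be 20-100 chars for unique Edit matching
jq -r '.conflicts[].strategies[].modifications[]?
  | select((.old_content | length) < 20 or (.old_content | length) > 100)
  | "OLD_CONTENT OUT OF RANGE: \(.file)"' response.json
```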
@@ -312,143 +206,85 @@ Task(subagent_type="cli-execution-agent", prompt=`
8. Return execution log path
```

-### Phase 3: Iterative User Interaction with Clarification Loop
+### Phase 3: User Interaction Loop

-**Execution Flow**:
-```
-FOR each conflict (one at a time, no count limit):
-  clarified = false
-  round = 0
-  userClarifications = []
-
-  WHILE (!clarified && round < 10):
-    round++
-
-    // 1. Display conflict (including all key fields)
-    - category, id, brief, severity, description
-    - IF ModuleOverlap: show overlap_analysis
-      * new_module: {name, scenarios, responsibilities}
-      * existing_modules[]: {file, name, scenarios, overlap_scenarios, responsibilities}
-
-    // 2. Display strategies (2-4 strategies + custom option)
-    - FOR each strategy: {name, approach, complexity, risk, effort, pros, cons}
-      * IF clarification_needed: show the list of open clarification questions
-    - Custom option: {suggestions: modification_suggestions[]}
-
-    // 3. User selects strategy
-    userChoice = readInput()
-
-    IF userChoice == "自定义":
-      customConflicts.push({id, brief, category, suggestions, overlap_analysis})
-      clarified = true
-      BREAK
-
-    selectedStrategy = strategies[userChoice]
-
-    // 4. Clarification loop
-    IF selectedStrategy.clarification_needed.length > 0:
-      // Collect clarification answers
-      FOR each question:
-        answer = readInput()
-        userClarifications.push({question, answer})
-
-      // Agent re-analysis
-      reanalysisResult = Task(cli-execution-agent, prompt={
-        Conflict info: {id, brief, category, strategy}
-        User clarifications: userClarifications[]
-        Scenario analysis: overlap_analysis (if ModuleOverlap)
-
-        Output: {
-          uniqueness_confirmed: bool,
-          rationale: string,
-          updated_strategy: {name, approach, complexity, risk, effort, modifications[]},
-          remaining_questions: [] (if ambiguity remains)
-        }
-      })
-
-      IF reanalysisResult.uniqueness_confirmed:
-        selectedStrategy = updated_strategy
-        selectedStrategy.clarifications = userClarifications
-        clarified = true
-      ELSE:
-        // Update clarification questions and continue to the next round
-        selectedStrategy.clarification_needed = remaining_questions
-    ELSE:
-      clarified = true
-
-    resolvedConflicts.push({conflict, strategy: selectedStrategy})
-  END WHILE
-END FOR
-
-// Build output
-selectedStrategies = resolvedConflicts.map(r => ({
-  conflict_id, strategy, clarifications[]
-}))
-```
+```javascript
+FOR each conflict:
+  round = 0, clarified = false, userClarifications = []
+
+  WHILE (!clarified && round++ < 10):
+    // 1. Display conflict info (text output for context)
+    displayConflictSummary(conflict)  // id, brief, severity, overlap_analysis if ModuleOverlap
+
+    // 2. Strategy selection via AskUserQuestion
+    AskUserQuestion({
+      questions: [{
+        question: formatStrategiesForDisplay(conflict.strategies),
+        header: "策略选择",
+        multiSelect: false,
+        options: [
+          ...conflict.strategies.map((s, i) => ({
+            label: `${s.name}${i === conflict.recommended ? ' (推荐)' : ''}`,
+            description: `${s.complexity}复杂度 | ${s.risk}风险${s.clarification_needed?.length ? ' | ⚠️需澄清' : ''}`
+          })),
+          { label: "自定义修改", description: `建议: ${conflict.modification_suggestions?.slice(0,2).join('; ')}` }
+        ]
+      }]
+    })
+
+    // 3. Handle selection
+    if (userChoice === "自定义修改") {
+      customConflicts.push({ id, brief, category, suggestions, overlap_analysis })
+      break
+    }
+
+    selectedStrategy = findStrategyByName(userChoice)
+
+    // 4. Clarification (if needed) - batched max 4 per call
+    if (selectedStrategy.clarification_needed?.length > 0) {
+      for (batch of chunk(selectedStrategy.clarification_needed, 4)) {
+        AskUserQuestion({
+          questions: batch.map((q, i) => ({
+            question: q, header: `澄清${i+1}`, multiSelect: false,
+            options: [{ label: "详细说明", description: "提供答案" }]
+          }))
+        })
+        userClarifications.push(...collectAnswers(batch))
+      }
+
+      // 5. Agent re-analysis
+      reanalysisResult = Task({
+        subagent_type: "cli-execution-agent",
+        run_in_background: false,
+        prompt: `Conflict: ${conflict.id}, Strategy: ${selectedStrategy.name}
+User Clarifications: ${JSON.stringify(userClarifications)}
+Output: { uniqueness_confirmed, rationale, updated_strategy, remaining_questions }`
+      })
+
+      if (reanalysisResult.uniqueness_confirmed) {
+        selectedStrategy = { ...reanalysisResult.updated_strategy, clarifications: userClarifications }
+        clarified = true
+      } else {
+        selectedStrategy.clarification_needed = reanalysisResult.remaining_questions
+      }
+    } else {
+      clarified = true
+    }
+
+    if (clarified) resolvedConflicts.push({ conflict, strategy: selectedStrategy })
+  END WHILE
+END FOR
+
+selectedStrategies = resolvedConflicts.map(r => ({
+  conflict_id: r.conflict.id, strategy: r.strategy, clarifications: r.strategy.clarifications || []
+}))
+```

-**Key Data Structures**:
+**Key Points**:
+- AskUserQuestion: max 4 questions/call, batch if more
+- Strategy options: 2-4 strategies + "自定义修改" (custom modification)
+- Clarification loop: max 10 rounds; the agent decides uniqueness_confirmed
+- Custom conflicts: record overlap_analysis for later manual handling

-```javascript
-// Custom conflict tracking
-customConflicts[] = {
-  id, brief, category,
-  suggestions: modification_suggestions[],
-  overlap_analysis: { new_module{}, existing_modules[] } // ModuleOverlap only
-}
-
-// Agent re-analysis prompt output
-{
-  uniqueness_confirmed: bool,
-  rationale: string,
-  updated_strategy: {
-    name, approach, complexity, risk, effort,
-    modifications: [{file, section, change_type, old_content, new_content, rationale}]
-  },
-  remaining_questions: string[]
-}
-```
-
-**Text Output Example** (showing the key fields):
-
-```markdown
-============================================================
-Conflict 1/3 - Round 1
-============================================================
-[ModuleOverlap] CON-001: New user-auth service overlaps with an existing module
-Severity: High | Description: the planned UserAuthService overlaps with existing AuthManager scenarios
-
---- Scenario Overlap Analysis ---
-New module: UserAuthService | Scenarios: login, token validation, permissions, MFA
-Existing module: AuthManager (src/auth/AuthManager.ts) | Overlap: login, token validation
-
---- Resolution Strategies ---
-1) Merge (Low complexity | Low risk | 2-3 days)
-   ⚠️ Needs clarification: can AuthManager take on MFA?
-
-2) Split boundaries (Medium complexity | Medium risk | 4-5 days)
-   ⚠️ Needs clarification: where is the basic/advanced auth boundary? Who owns token validation?
-
-3) Custom modification
-   Suggestions: evaluate extensibility; separate with the strategy pattern; define interface boundaries
-
-Select (1-3): > 2
-
---- Clarification Q&A (Round 1) ---
-Q: Where is the basic/advanced auth boundary?
-A: Basic = password login + token validation; advanced = MFA + OAuth + SSO
-
-Q: Who owns token validation?
-A: AuthManager, exclusively
-
-🔄 Re-analyzing...
-✅ Uniqueness confirmed | Rationale: boundaries are clear - AuthManager (basic + token), UserAuthService (MFA + OAuth + SSO)
-
-============================================================
-Conflict 2/3 - Round 1 [next conflict]
-============================================================
-```
-
-**Loop Characteristics**: one conflict at a time | open-ended rounds (max 10) | dynamic question generation | agent re-analysis confirms uniqueness | ModuleOverlap scenario-boundary clarification

### Phase 4: Apply Modifications
@@ -467,14 +303,30 @@ selectedStrategies.forEach(item => {

console.log(`\n正在应用 ${modifications.length} 个修改...`);

-// 2. Apply each modification using Edit tool
+// 2. Apply each modification using Edit tool (with fallback to context-package.json)
const appliedModifications = [];
const failedModifications = [];
+const fallbackConstraints = []; // For files that don't exist

modifications.forEach((mod, idx) => {
  try {
    console.log(`[${idx + 1}/${modifications.length}] 修改 ${mod.file}...`);

+    // Check if target file exists (brainstorm files may not exist in lite workflow)
+    if (!file_exists(mod.file)) {
+      console.log(`  ⚠️ 文件不存在,写入 context-package.json 作为约束`);
+      fallbackConstraints.push({
+        source: "conflict-resolution",
+        conflict_id: mod.conflict_id,
+        target_file: mod.file,
+        section: mod.section,
+        change_type: mod.change_type,
+        content: mod.new_content,
+        rationale: mod.rationale
+      });
+      return; // Skip to next modification
+    }
+
    if (mod.change_type === "update") {
      Edit({
        file_path: mod.file,
@@ -502,14 +354,45 @@ modifications.forEach((mod, idx) => {
  }
});

-// 3. Update context-package.json with resolution details
+// 2b. Generate conflict-resolution.json output file
+const resolutionOutput = {
+  session_id: sessionId,
+  resolved_at: new Date().toISOString(),
+  summary: {
+    total_conflicts: conflicts.length,
+    resolved_with_strategy: selectedStrategies.length,
+    custom_handling: customConflicts.length,
+    fallback_constraints: fallbackConstraints.length
+  },
+  resolved_conflicts: selectedStrategies.map(s => ({
+    conflict_id: s.conflict_id,
+    strategy_name: s.strategy.name,
+    strategy_approach: s.strategy.approach,
+    clarifications: s.clarifications || [],
+    modifications_applied: s.strategy.modifications?.filter(m =>
+      appliedModifications.some(am => am.conflict_id === s.conflict_id)
+    ) || []
+  })),
+  custom_conflicts: customConflicts.map(c => ({
+    id: c.id,
+    brief: c.brief,
+    category: c.category,
+    suggestions: c.suggestions,
+    overlap_analysis: c.overlap_analysis || null
+  })),
+  planning_constraints: fallbackConstraints, // Constraints for files that don't exist
+  failed_modifications: failedModifications
+};
+
+const resolutionPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
+Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
+console.log(`\n📄 冲突解决结果已保存: ${resolutionPath}`);
+
+// 3. Update context-package.json with resolution details (reference to JSON file)
const contextPackage = JSON.parse(Read(contextPath));
contextPackage.conflict_detection.conflict_risk = "resolved";
-contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => ({
-  conflict_id: s.conflict_id,
-  strategy_name: s.strategy.name,
-  clarifications: s.clarifications
-}));
+contextPackage.conflict_detection.resolution_file = resolutionPath; // Reference to detailed JSON
+contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => s.conflict_id);
contextPackage.conflict_detection.custom_conflicts = customConflicts.map(c => c.id);
contextPackage.conflict_detection.resolved_at = new Date().toISOString();
Write(contextPath, JSON.stringify(contextPackage, null, 2));
@@ -582,12 +465,50 @@ return {
✓ Agent log saved to .workflow/active/{session_id}/.chat/
```

-## Output Format: Agent JSON Response
+## Output Format
+
+### Primary Output: conflict-resolution.json
+
+**Path**: `.workflow/active/{session_id}/.process/conflict-resolution.json`
+
+**Schema**:
+```json
+{
+  "session_id": "WFS-xxx",
+  "resolved_at": "ISO timestamp",
+  "summary": {
+    "total_conflicts": 3,
+    "resolved_with_strategy": 2,
+    "custom_handling": 1,
+    "fallback_constraints": 0
+  },
+  "resolved_conflicts": [
+    {
+      "conflict_id": "CON-001",
+      "strategy_name": "<strategy name, in Chinese>",
+      "strategy_approach": "<approach description>",
+      "clarifications": [],
+      "modifications_applied": []
+    }
+  ],
+  "custom_conflicts": [
+    {
+      "id": "CON-002",
+      "brief": "<conflict brief>",
+      "category": "ModuleOverlap",
+      "suggestions": ["<suggestion 1>", "<suggestion 2>"],
+      "overlap_analysis": null
+    }
+  ],
+  "planning_constraints": [],
+  "failed_modifications": []
+}
+```
+
+### Secondary: Agent JSON Response (stdout)

**Focus**: Structured conflict data with actionable modifications for programmatic processing.

-**Format**: JSON to stdout (NO file generation)
-
**Structure**: Defined in Phase 2, Step 4 (agent prompt)

### Key Requirements

@@ -635,11 +556,12 @@ If Edit tool fails mid-application:
- Requires: `conflict_risk ≥ medium`

**Output**:
-- Modified files:
+- Generated file:
+  - `.workflow/active/{session_id}/.process/conflict-resolution.json` (primary output)
+- Modified files (if exist):
  - `.workflow/active/{session_id}/.brainstorm/guidance-specification.md`
  - `.workflow/active/{session_id}/.brainstorm/{role}/analysis.md`
-  - `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved)
-- NO report file generation
+  - `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved, resolution_file reference)

**User Interaction**:
- **Iterative conflict processing**: One conflict at a time, not in batches

@@ -667,7 +589,7 @@ If Edit tool fails mid-application:
✓ guidance-specification.md updated with resolved conflicts
✓ Role analyses (*.md) updated with resolved conflicts
✓ context-package.json marked as "resolved" with clarification records
-✓ No CONFLICT_RESOLUTION.md file generated
+✓ conflict-resolution.json generated with full resolution details
✓ Modification summary includes:
  - Total conflicts
  - Resolved with strategy (count)
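Downstream consumers can read the primary output back with `jq` — for example, to list conflicts left for manual handling (a sketch against the schema above):

```bash
f=".workflow/active/${session_id}/.process/conflict-resolution.json"
jq -r '.custom_conflicts[] | "\(.id): \(.brief) [\(.category)]"' "$f"
jq -r '.summary | "total=\(.total_conflicts) resolved=\(.resolved_with_strategy)"' "$f"
```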
@@ -15,7 +15,6 @@ allowed-tools: Task(*), Read(*), Glob(*)
 
 Orchestrator command that invokes `context-search-agent` to gather comprehensive project context for implementation planning. Generates standardized `context-package.json` with codebase analysis, dependencies, and conflict detection.
 
-**Agent**: `context-search-agent` (`.claude/agents/context-search-agent.md`)
 
 ## Core Philosophy
@@ -121,6 +120,7 @@ const sessionFolder = `.workflow/active/${session_id}/.process`;
 const explorationTasks = selectedAngles.map((angle, index) =>
   Task(
     subagent_type="cli-explore-agent",
+    run_in_background=false,
     description=`Explore: ${angle}`,
     prompt=`
 ## Task Objective
@@ -215,6 +215,7 @@ Write(`${sessionFolder}/explorations-manifest.json`, JSON.stringify(explorationM
 ```javascript
 Task(
   subagent_type="context-search-agent",
+  run_in_background=false,
   description="Gather comprehensive context for plan",
   prompt=`
 ## Execution Mode
@@ -237,7 +238,7 @@ Execute complete context-search-agent workflow for implementation planning:
 ### Phase 1: Initialization & Pre-Analysis
 1. **Project State Loading**: Read and parse `.workflow/project.json`. Use its `overview` section as the foundational `project_context`. This is your primary source for architecture, tech stack, and key components. If file doesn't exist, proceed with fresh analysis.
 2. **Detection**: Check for existing context-package (early exit if valid)
-3. **Foundation**: Initialize code-index, get project structure, load docs
+3. **Foundation**: Initialize CodexLens, get project structure, load docs
 4. **Analysis**: Extract keywords, determine scope, classify complexity based on task description and project state
 
 ### Phase 2: Multi-Source Context Discovery
@@ -429,6 +430,5 @@ if (historicalConflicts.length > 0 && currentRisk === "low") {
 
 - **Detection-first**: Always check for existing package before invoking agent
 - **Project.json integration**: Agent reads `.workflow/project.json` as primary source for project context, avoiding redundant analysis
-- **Agent autonomy**: Agent handles all discovery logic per `.claude/agents/context-search-agent.md`
 - **No redundancy**: This command is a thin orchestrator, all logic in agent
 - **Plan-specific**: Use this for implementation planning; brainstorm mode uses direct agent call
@@ -28,6 +28,12 @@ Input Parsing:
 ├─ Parse flags: --session
 └─ Validation: session_id REQUIRED
 
+Phase 0: User Configuration (Interactive)
+├─ Question 1: Supplementary materials/guidelines?
+├─ Question 2: Execution method preference (Agent/CLI/Hybrid)
+├─ Question 3: CLI tool preference (if CLI selected)
+└─ Store: userConfig for agent prompt
+
 Phase 1: Context Preparation & Module Detection (Command)
 ├─ Assemble session paths (metadata, context package, output dirs)
 ├─ Provide metadata (session_id, execution_mode, mcp_capabilities)
@@ -57,6 +63,82 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
 
 ## Document Generation Lifecycle
 
+### Phase 0: User Configuration (Interactive)
+
+**Purpose**: Collect user preferences before task generation to ensure generated tasks match execution expectations.
+
+**User Questions**:
+```javascript
+AskUserQuestion({
+  questions: [
+    {
+      question: "Do you have supplementary materials or guidelines to include?",
+      header: "Materials",
+      multiSelect: false,
+      options: [
+        { label: "No additional materials", description: "Use existing context only" },
+        { label: "Provide file paths", description: "I'll specify paths to include" },
+        { label: "Provide inline content", description: "I'll paste content directly" }
+      ]
+    },
+    {
+      question: "Select execution method for generated tasks:",
+      header: "Execution",
+      multiSelect: false,
+      options: [
+        { label: "Agent (Recommended)", description: "Claude agent executes tasks directly" },
+        { label: "Hybrid", description: "Agent orchestrates, calls CLI for complex steps" },
+        { label: "CLI Only", description: "All execution via CLI tools (codex/gemini/qwen)" }
+      ]
+    },
+    {
+      question: "If using CLI, which tool do you prefer?",
+      header: "CLI Tool",
+      multiSelect: false,
+      options: [
+        { label: "Codex (Recommended)", description: "Best for implementation tasks" },
+        { label: "Gemini", description: "Best for analysis and large context" },
+        { label: "Qwen", description: "Alternative analysis tool" },
+        { label: "Auto", description: "Let agent decide per-task" }
+      ]
+    }
+  ]
+})
+```
+
+**Handle Materials Response**:
+```javascript
+if (userConfig.materials === "Provide file paths") {
+  // Follow-up question for file paths
+  const pathsResponse = AskUserQuestion({
+    questions: [{
+      question: "Enter file paths to include (comma-separated or one per line):",
+      header: "Paths",
+      multiSelect: false,
+      options: [
+        { label: "Enter paths", description: "Provide paths in text input" }
+      ]
+    }]
+  })
+  userConfig.supplementaryPaths = parseUserPaths(pathsResponse)
+}
+```
+
+**Build userConfig**:
+```javascript
+const userConfig = {
+  supplementaryMaterials: {
+    type: "none|paths|inline",
+    content: [...], // Parsed paths or inline content
+  },
+  executionMethod: "agent|hybrid|cli",
+  preferredCliTool: "codex|gemini|qwen|auto",
+  enableResume: true // Always enable resume for CLI executions
+}
+```
+
+**Pass to Agent**: Include `userConfig` in agent prompt for Phase 2A/2B.
+
 ### Phase 1: Context Preparation & Module Detection (Command Responsibility)
 
 **Command prepares session paths, metadata, and detects module structure.**
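`parseUserPaths` is referenced in the Phase 0 handler above but not defined in this command. One plausible sketch, assuming the answer arrives as free text (the response shape is an assumption about AskUserQuestion, not its documented API):

```javascript
// Hypothetical helper: normalize a free-text answer into a clean, deduplicated path list.
function parseUserPaths(response) {
  const raw = response?.answers?.[0]?.text ?? ''; // assumed response shape
  return [...new Set(
    raw.split(/[\n,]+/)         // accept comma- or newline-separated input
       .map(p => p.trim())
       .filter(Boolean)
  )];
}
```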
@@ -89,6 +171,14 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
 3. **Auto Module Detection** (determines single vs parallel mode):
 ```javascript
 function autoDetectModules(contextPackage, projectRoot) {
+  // === Complexity Gate: Only parallelize for High complexity ===
+  const complexity = contextPackage.metadata?.complexity || 'Medium';
+  if (complexity !== 'High') {
+    // Force single agent mode for Low/Medium complexity
+    // This maximizes agent context reuse for related tasks
+    return [{ name: 'main', prefix: '', paths: ['.'] }];
+  }
+
   // Priority 1: Explicit frontend/backend separation
   if (exists('src/frontend') && exists('src/backend')) {
     return [
@@ -112,8 +202,9 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
 ```
 
 **Decision Logic**:
+- `complexity !== 'High'` → Force Phase 2A (Single Agent, maximize context reuse)
 - `modules.length == 1` → Phase 2A (Single Agent, original flow)
-- `modules.length >= 2` → Phase 2B + Phase 3 (N+1 Parallel)
+- `modules.length >= 2 && complexity == 'High'` → Phase 2B + Phase 3 (N+1 Parallel)
 
 **Note**: CLI tool usage is now determined semantically by action-planning-agent based on user's task description, not by flags.
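In code form, the decision logic reduces to two checks. A minimal sketch mirroring the bullets above (not the shipped implementation):

```javascript
// Sketch: choose the planning phase from task complexity and detected modules.
function selectPlanningPhase(contextPackage, modules) {
  const complexity = contextPackage.metadata?.complexity || 'Medium';
  if (complexity !== 'High' || modules.length === 1) {
    return { phase: '2A', agents: 1 };                  // single agent, maximize context reuse
  }
  return { phase: '2B+3', agents: modules.length + 1 }; // N module planners + 1 coordinator
}
```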
@@ -127,6 +218,7 @@ Phase 3: Integration (+1 Coordinator, Multi-Module Only)
 ```javascript
 Task(
   subagent_type="action-planning-agent",
+  run_in_background=false,
   description="Generate planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
   prompt=`
 ## TASK OBJECTIVE
@@ -150,10 +242,21 @@ Output:
 Session ID: {session-id}
 MCP Capabilities: {exa_code, exa_web, code_index}
 
+## USER CONFIGURATION (from Phase 0)
+Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
+Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
+Supplementary Materials: ${userConfig.supplementaryMaterials}
+
 ## CLI TOOL SELECTION
-Determine CLI tool usage per-step based on user's task description:
-- If user specifies "use Codex/Gemini/Qwen for X" → Add command field to relevant steps
-- Default: Agent execution (no command field) unless user explicitly requests CLI
+Based on userConfig.executionMethod:
+- "agent": No command field in implementation_approach steps
+- "hybrid": Add command field to complex steps only (agent handles simple steps)
+- "cli": Add command field to ALL implementation_approach steps
+
+CLI Resume Support (MANDATORY for all CLI commands):
+- Use --resume parameter to continue from previous task execution
+- Read previous task's cliExecutionId from session state
+- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
 
 ## EXPLORATION CONTEXT (from context-package.exploration_results)
 - Load exploration_results from context-package.json
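The three execution methods reduce to a per-step predicate when the agent generates `implementation_approach` steps. A sketch (the `isComplex` flag is an assumed heuristic, not a defined field):

```javascript
// Sketch: should this implementation_approach step carry a CLI command field?
function shouldAttachCommand(step, userConfig) {
  switch (userConfig.executionMethod) {
    case 'cli':    return true;                    // every step runs through the CLI
    case 'hybrid': return Boolean(step.isComplex); // CLI only for complex steps (heuristic assumed)
    case 'agent':
    default:       return false;                   // agent executes the step directly
  }
}
```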
@@ -163,6 +266,13 @@ Determine CLI tool usage per-step based on user's task description:
 - Use aggregated_insights.all_integration_points for precise modification locations
 - Use conflict_indicators for risk-aware task sequencing
 
+## CONFLICT RESOLUTION CONTEXT (if exists)
+- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
+- If exists, load .process/conflict-resolution.json:
+  - Apply planning_constraints as task constraints (for brainstorm-less workflows)
+  - Reference resolved_conflicts for implementation approach alignment
+  - Handle custom_conflicts with explicit task notes
+
 ## EXPECTED DELIVERABLES
 1. Task JSON Files (.task/IMPL-*.json)
    - 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
@@ -170,6 +280,7 @@ Determine CLI tool usage per-step based on user's task description:
    - Artifacts integration from context package
    - **focus_paths enhanced with exploration critical_files**
    - Flow control with pre_analysis steps (include exploration integration_points analysis)
+   - **CLI Execution IDs and strategies (MANDATORY)**
 
 2. Implementation Plan (IMPL_PLAN.md)
    - Context analysis and artifact references
@@ -181,6 +292,27 @@ Determine CLI tool usage per-step based on user's task description:
    - Links to task JSONs and summaries
    - Matches task JSON hierarchy
 
+## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
+Each task JSON MUST include:
+- **cli_execution_id**: Unique ID for CLI execution (format: `{session_id}-{task_id}`)
+- **cli_execution**: Strategy object based on depends_on:
+  - No deps → `{ "strategy": "new" }`
+  - 1 dep (single child) → `{ "strategy": "resume", "resume_from": "parent-cli-id" }`
+  - 1 dep (multiple children) → `{ "strategy": "fork", "resume_from": "parent-cli-id" }`
+  - N deps → `{ "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }`
+
+**CLI Execution Strategy Rules**:
+1. **new**: Task has no dependencies - starts fresh CLI conversation
+2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
+3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
+4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation
+
+**Execution Command Patterns**:
+- new: `ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]`
+- resume: `ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write`
+- fork: `ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write`
+- merge_fork: `ccw cli -p "[prompt]" --resume [merge_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write`
+
 ## QUALITY STANDARDS
 Hard Constraints:
 - Task count <= 18 (hard limit - request re-scope if exceeded)
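The four strategy rules above can be derived mechanically from the dependency graph. A minimal sketch, assuming each task exposes `id` and `depends_on` and that CLI IDs follow the `{session_id}-{task_id}` format:

```javascript
// Sketch: derive one task's cli_execution object from the full task list.
function deriveCliExecution(task, allTasks, sessionId) {
  const cliId = id => `${sessionId}-${id}`;
  const deps = task.depends_on ?? [];
  if (deps.length === 0) return { strategy: 'new' };
  if (deps.length > 1)  return { strategy: 'merge_fork', merge_from: deps.map(cliId) };
  const parent = deps[0];
  const children = allTasks.filter(t => (t.depends_on ?? []).includes(parent));
  return children.length === 1
    ? { strategy: 'resume', resume_from: cliId(parent) }  // sole child continues the conversation
    : { strategy: 'fork',   resume_from: cliId(parent) }; // siblings each branch from the parent
}
```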
@@ -203,7 +335,9 @@ Hard Constraints:
 
 **Condition**: `modules.length >= 2` (multi-module detected)
 
-**Purpose**: Launch N action-planning-agents simultaneously, one per module, for parallel task generation.
+**Purpose**: Launch N action-planning-agents simultaneously, one per module, for parallel task JSON generation.
 
+**Note**: Phase 2B agents generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md are generated by Phase 3 Coordinator.
+
 **Parallel Agent Invocation**:
 ```javascript
@@ -211,27 +345,123 @@
 const planningTasks = modules.map(module =>
   Task(
     subagent_type="action-planning-agent",
-    description=`Plan ${module.name} module`,
+    run_in_background=false,
+    description=`Generate ${module.name} module task JSONs`,
     prompt=`
-## SCOPE
+## TASK OBJECTIVE
+Generate task JSON files for ${module.name} module within workflow session
+
+IMPORTANT: This is PLANNING ONLY - generate task JSONs, NOT implementing code.
+IMPORTANT: Generate Task JSONs ONLY. IMPL_PLAN.md and TODO_LIST.md by Phase 3 Coordinator.
+
+CRITICAL: Follow the progressive loading strategy defined in agent specification (load analysis.md files incrementally due to file size)
+
+## MODULE SCOPE
 - Module: ${module.name} (${module.type})
 - Focus Paths: ${module.paths.join(', ')}
 - Task ID Prefix: IMPL-${module.prefix}
-- Task Limit: ≤9 tasks
-- Other Modules: ${otherModules.join(', ')}
-- Cross-module deps format: "CROSS::{module}::{pattern}"
+- Task Limit: ≤9 tasks (hard limit for this module)
+- Other Modules: ${otherModules.join(', ')} (reference only, do NOT generate tasks for them)
 
 ## SESSION PATHS
 Input:
+- Session Metadata: .workflow/active/{session-id}/workflow-session.json
 - Context Package: .workflow/active/{session-id}/.process/context-package.json
 
 Output:
 - Task Dir: .workflow/active/{session-id}/.task/
 
-## INSTRUCTIONS
-- Generate tasks ONLY for ${module.name} module
-- Use task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
-- For cross-module dependencies, use: depends_on: ["CROSS::B::api-endpoint"]
-- Maximum 9 tasks per module
+## CONTEXT METADATA
+Session ID: {session-id}
+MCP Capabilities: {exa_code, exa_web, code_index}
+
+## USER CONFIGURATION (from Phase 0)
+Execution Method: ${userConfig.executionMethod} // agent|hybrid|cli
+Preferred CLI Tool: ${userConfig.preferredCliTool} // codex|gemini|qwen|auto
+Supplementary Materials: ${userConfig.supplementaryMaterials}
+
+## CLI TOOL SELECTION
+Based on userConfig.executionMethod:
+- "agent": No command field in implementation_approach steps
+- "hybrid": Add command field to complex steps only (agent handles simple steps)
+- "cli": Add command field to ALL implementation_approach steps
+
+CLI Resume Support (MANDATORY for all CLI commands):
+- Use --resume parameter to continue from previous task execution
+- Read previous task's cliExecutionId from session state
+- Format: ccw cli -p "[prompt]" --resume ${previousCliId} --tool ${tool} --mode write
+
+## EXPLORATION CONTEXT (from context-package.exploration_results)
+- Load exploration_results from context-package.json
+- Filter for ${module.name} module: Use aggregated_insights.critical_files matching ${module.paths.join(', ')}
+- Apply module-relevant constraints from aggregated_insights.constraints
+- Reference aggregated_insights.all_patterns applicable to ${module.name}
+- Use aggregated_insights.all_integration_points for precise modification locations within module scope
+- Use conflict_indicators for risk-aware task sequencing
+
+## CONFLICT RESOLUTION CONTEXT (if exists)
+- Check context-package.conflict_detection.resolution_file for conflict-resolution.json path
+- If exists, load .process/conflict-resolution.json:
+  - Apply planning_constraints relevant to ${module.name} as task constraints
+  - Reference resolved_conflicts affecting ${module.name} for implementation approach alignment
+  - Handle custom_conflicts with explicit task notes
+
+## CROSS-MODULE DEPENDENCIES
+- For dependencies ON other modules: Use placeholder depends_on: ["CROSS::{module}::{pattern}"]
+- Example: depends_on: ["CROSS::B::api-endpoint"] (this module depends on B's api-endpoint task)
+- Phase 3 Coordinator resolves to actual task IDs
+- For dependencies FROM other modules: Document in task context as "provides_for" annotation
+
+## EXPECTED DELIVERABLES
+Task JSON Files (.task/IMPL-${module.prefix}*.json):
+- 6-field schema (id, title, status, context_package_path, meta, context, flow_control)
+- Task ID format: IMPL-${module.prefix}1, IMPL-${module.prefix}2, ...
+- Quantified requirements with explicit counts
+- Artifacts integration from context package (filtered for ${module.name})
+- **focus_paths enhanced with exploration critical_files (module-scoped)**
+- Flow control with pre_analysis steps (include exploration integration_points analysis)
+- **CLI Execution IDs and strategies (MANDATORY)**
+- Focus ONLY on ${module.name} module scope
+
+## CLI EXECUTION ID REQUIREMENTS (MANDATORY)
+Each task JSON MUST include:
+- **cli_execution_id**: Unique ID for CLI execution (format: `{session_id}-IMPL-${module.prefix}{seq}`)
+- **cli_execution**: Strategy object based on depends_on:
+  - No deps → `{ "strategy": "new" }`
+  - 1 dep (single child) → `{ "strategy": "resume", "resume_from": "parent-cli-id" }`
+  - 1 dep (multiple children) → `{ "strategy": "fork", "resume_from": "parent-cli-id" }`
+  - N deps → `{ "strategy": "merge_fork", "merge_from": ["id1", "id2", ...] }`
+  - Cross-module dep → `{ "strategy": "cross_module_fork", "resume_from": "CROSS::{module}::{pattern}" }`
+
+**CLI Execution Strategy Rules**:
+1. **new**: Task has no dependencies - starts fresh CLI conversation
+2. **resume**: Task has 1 parent AND that parent has only this child - continues same conversation
+3. **fork**: Task has 1 parent BUT parent has multiple children - creates new branch with parent context
+4. **merge_fork**: Task has multiple parents - merges all parent contexts into new conversation
+5. **cross_module_fork**: Task depends on task from another module - Phase 3 resolves placeholder
+
+**Execution Command Patterns**:
+- new: `ccw cli -p "[prompt]" --tool [tool] --mode write --id [cli_execution_id]`
+- resume: `ccw cli -p "[prompt]" --resume [resume_from] --tool [tool] --mode write`
+- fork: `ccw cli -p "[prompt]" --resume [resume_from] --id [cli_execution_id] --tool [tool] --mode write`
+- merge_fork: `ccw cli -p "[prompt]" --resume [merge_from.join(',')] --id [cli_execution_id] --tool [tool] --mode write`
+- cross_module_fork: (Phase 3 resolves placeholder, then uses fork pattern)
+
+## QUALITY STANDARDS
+Hard Constraints:
+- Task count <= 9 for this module (hard limit - coordinate with Phase 3 if exceeded)
+- All requirements quantified (explicit counts and enumerated lists)
+- Acceptance criteria measurable (include verification commands)
+- Artifact references mapped from context package (module-scoped filter)
+- Focus paths use absolute paths or clear relative paths from project root
+- Cross-module dependencies use CROSS:: placeholder format
+
+## SUCCESS CRITERIA
+- Task JSONs saved to .task/ with IMPL-${module.prefix}* naming
+- All task JSONs include cli_execution_id and cli_execution strategy
+- Cross-module dependencies use CROSS:: placeholder format consistently
+- Focus paths scoped to ${module.paths.join(', ')} only
+- Return: task count, task IDs, dependency summary (internal + cross-module)
 `
   )
 );
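For orientation, a skeletal module task JSON under the schema above. Values are placeholders, and the placement of the CLI fields under `meta` is an assumption; the agent specification is authoritative:

```javascript
// Skeleton only: shows where cli_execution_id and cli_execution could sit in a module task.
const exampleModuleTask = {
  id: 'IMPL-A1',
  title: 'Add api-endpoint handler',
  status: 'pending',
  context_package_path: '.workflow/active/{session-id}/.process/context-package.json',
  meta: {
    cli_execution_id: '{session-id}-IMPL-A1', // {session_id}-IMPL-${module.prefix}{seq}
    cli_execution: { strategy: 'new' }        // no dependencies → fresh CLI conversation
  },
  context: { focus_paths: ['src/backend'], depends_on: [] },
  flow_control: { pre_analysis: ['review exploration integration_points'] }
};
```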
@@ -255,37 +485,79 @@ await Promise.all(planningTasks);
 - Prefix: A, B, C... (assigned by detection order)
 - Sequence: 1, 2, 3... (per-module increment)
 
-### Phase 3: Integration (+1 Coordinator, Multi-Module Only)
+### Phase 3: Integration (+1 Coordinator Agent, Multi-Module Only)
 
 **Condition**: Only executed when `modules.length >= 2`
 
-**Purpose**: Collect all module tasks, resolve cross-module dependencies, generate unified documents.
+**Purpose**: Collect all module tasks, resolve cross-module dependencies, generate unified IMPL_PLAN.md and TODO_LIST.md documents.
 
-**Integration Logic**:
+**Coordinator Agent Invocation**:
 ```javascript
-// 1. Collect all module task JSONs
-const allTasks = glob('.task/IMPL-*.json').map(loadJson);
-
-// 2. Resolve cross-module dependencies
-for (const task of allTasks) {
-  if (task.depends_on) {
-    task.depends_on = task.depends_on.map(dep => {
-      if (dep.startsWith('CROSS::')) {
-        // CROSS::B::api-endpoint → find matching IMPL-B* task
-        const [, targetModule, pattern] = dep.match(/CROSS::(\w+)::(.+)/);
-        return findTaskByModuleAndPattern(allTasks, targetModule, pattern);
-      }
-      return dep;
-    });
-  }
-}
-
-// 3. Generate unified IMPL_PLAN.md (grouped by module)
-generateIMPL_PLAN(allTasks, modules);
-
-// 4. Generate TODO_LIST.md (hierarchical structure)
-generateTODO_LIST(allTasks, modules);
+// Wait for all Phase 2B agents to complete
+const moduleResults = await Promise.all(planningTasks);
+
+// Launch +1 Coordinator Agent
+Task(
+  subagent_type="action-planning-agent",
+  run_in_background=false,
+  description="Integrate module tasks and generate unified documents",
+  prompt=`
+## TASK OBJECTIVE
+Integrate all module task JSONs, resolve cross-module dependencies, and generate unified IMPL_PLAN.md and TODO_LIST.md
+
+IMPORTANT: This is INTEGRATION ONLY - consolidate existing task JSONs, NOT creating new tasks.
+
+## SESSION PATHS
+Input:
+- Session Metadata: .workflow/active/{session-id}/workflow-session.json
+- Context Package: .workflow/active/{session-id}/.process/context-package.json
+- Task JSONs: .workflow/active/{session-id}/.task/IMPL-*.json (from Phase 2B)
+Output:
+- Updated Task JSONs: .workflow/active/{session-id}/.task/IMPL-*.json (resolved dependencies)
+- IMPL_PLAN: .workflow/active/{session-id}/IMPL_PLAN.md
+- TODO_LIST: .workflow/active/{session-id}/TODO_LIST.md
+
+## CONTEXT METADATA
+Session ID: {session-id}
+Modules: ${modules.map(m => m.name + '(' + m.prefix + ')').join(', ')}
+Module Count: ${modules.length}
+
+## INTEGRATION STEPS
+1. Collect all .task/IMPL-*.json, group by module prefix
+2. Resolve CROSS:: dependencies → actual task IDs, update task JSONs
+3. Generate IMPL_PLAN.md (multi-module format per agent specification)
+4. Generate TODO_LIST.md (hierarchical format per agent specification)
+
+## CROSS-MODULE DEPENDENCY RESOLUTION
+- Pattern: CROSS::{module}::{pattern} → IMPL-{module}* matching title/context
+- Example: CROSS::B::api-endpoint → IMPL-B1 (if B1 title contains "api-endpoint")
+- Log unresolved as warnings
+
+## EXPECTED DELIVERABLES
+1. Updated Task JSONs with resolved dependency IDs
+2. IMPL_PLAN.md - multi-module format with cross-dependency section
+3. TODO_LIST.md - hierarchical by module with cross-dependency section
+
+## SUCCESS CRITERIA
+- No CROSS:: placeholders remaining in task JSONs
+- IMPL_PLAN.md and TODO_LIST.md generated with multi-module structure
+- Return: task count, per-module breakdown, resolved dependency count
+`
+)
 ```
 
-**Note**: IMPL_PLAN.md and TODO_LIST.md structure definitions are in `action-planning-agent.md`.
+**Dependency Resolution Algorithm**:
+```javascript
+function resolveCrossModuleDependency(placeholder, allTasks) {
+  const [, targetModule, pattern] = placeholder.match(/CROSS::(\w+)::(.+)/);
+  const candidates = allTasks.filter(t =>
+    t.id.startsWith(`IMPL-${targetModule}`) &&
+    (t.title.toLowerCase().includes(pattern.toLowerCase()) ||
+     t.context?.description?.toLowerCase().includes(pattern.toLowerCase()))
+  );
+  return candidates.length > 0
+    ? candidates.sort((a, b) => a.id.localeCompare(b.id))[0].id
+    : placeholder; // Keep for manual resolution
+}
+```
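Applying the resolver across the whole task set is a single pass. A sketch that reuses `resolveCrossModuleDependency` from above:

```javascript
// Sketch: rewrite every CROSS:: placeholder in place; collect unresolved ones as warnings.
function resolveAllDependencies(allTasks) {
  const unresolved = [];
  for (const task of allTasks) {
    task.depends_on = (task.depends_on ?? []).map(dep => {
      if (!dep.startsWith('CROSS::')) return dep;
      const resolved = resolveCrossModuleDependency(dep, allTasks);
      if (resolved === dep) unresolved.push({ task: task.id, placeholder: dep });
      return resolved;
    });
  }
  return unresolved; // log these as warnings per the resolution rules above
}
```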
@@ -113,7 +113,7 @@ Phase 2: Agent Execution (Document Generation)
     // Existing test patterns and coverage analysis
   },
   "mcp_capabilities": {
-    "code_index": true,
+    "codex_lens": true,
     "exa_code": true,
     "exa_web": true
   }
@@ -152,9 +152,14 @@ Phase 2: Agent Execution (Document Generation)
 roleAnalysisPaths.forEach(path => Read(path));
 ```
 
-5. **Load Conflict Resolution** (from context-package.json, if exists)
+5. **Load Conflict Resolution** (from conflict-resolution.json, if exists)
 ```javascript
-if (contextPackage.brainstorm_artifacts.conflict_resolution?.exists) {
+// Check for new conflict-resolution.json format
+if (contextPackage.conflict_detection?.resolution_file) {
+  Read(contextPackage.conflict_detection.resolution_file) // .process/conflict-resolution.json
+}
+// Fallback: legacy brainstorm_artifacts path
+else if (contextPackage.brainstorm_artifacts?.conflict_resolution?.exists) {
   Read(contextPackage.brainstorm_artifacts.conflict_resolution.path)
 }
 ```
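The loader above implies a small contract inside context-package.json. A sketch of just the fields it touches (shape inferred from the code, not the full schema):

```javascript
// Inferred fragment of context-package.json used by the conflict-resolution loader.
const contextPackageFragment = {
  conflict_detection: {
    conflict_risk: 'resolved', // set once /workflow:conflict-resolve completes
    resolution_file: '.workflow/active/{session-id}/.process/conflict-resolution.json'
  },
  brainstorm_artifacts: {      // legacy fallback location
    conflict_resolution: { exists: false, path: null }
  }
};
```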
@@ -189,6 +194,7 @@ const templatePath = hasCliExecuteFlag
 ```javascript
 Task(
   subagent_type="action-planning-agent",
+  run_in_background=false,
   description="Generate TDD task JSON and implementation plan",
   prompt=`
 ## Execution Context
@@ -223,7 +229,7 @@ If conflict_risk was medium/high, modifications have been applied to:
 - **guidance-specification.md**: Design decisions updated to resolve conflicts
 - **Role analyses (*.md)**: Recommendations adjusted for compatibility
 - **context-package.json**: Marked as "resolved" with conflict IDs
-- NO separate CONFLICT_RESOLUTION.md file (conflicts resolved in-place)
+- Conflict resolution results stored in conflict-resolution.json
 
 ### MCP Analysis Results (Optional)
 **Code Structure**: {mcp_code_index_results}
@@ -233,15 +239,6 @@ If conflict_risk was medium/high, modifications have been applied to:
 
 **Agent Configuration Reference**: All TDD task generation rules, quantification requirements, Red-Green-Refactor cycle structure, quality standards, and execution details are defined in action-planning-agent.
 
-Refer to: @.claude/agents/action-planning-agent.md for:
-- TDD Task Decomposition Standards
-- Red-Green-Refactor Cycle Requirements
-- Quantification Requirements (MANDATORY)
-- 5-Field Task JSON Schema
-- IMPL_PLAN.md Structure (TDD variant)
-- TODO_LIST.md Format
-- TDD Execution Flow & Quality Validation
-
 ### TDD-Specific Requirements Summary
 
 #### Task Structure Philosophy
@@ -333,7 +330,7 @@ Generate all three documents and report completion status:
 - TDD cycles configured: N cycles with quantified test cases
 - Artifacts integrated: synthesis-spec, guidance-specification, N role analyses
 - Test context integrated: existing patterns and coverage
-- MCP enhancements: code-index, exa-research
+- MCP enhancements: CodexLens, exa-research
 - Session ready for TDD execution: /workflow:execute
 `
 )
@@ -373,10 +370,12 @@ const agentContext = {
     .flatMap(role => role.files)
     .map(file => Read(file.path)),
 
-  // Load conflict resolution if exists (from context package)
-  conflict_resolution: brainstorm_artifacts.conflict_resolution?.exists
-    ? Read(brainstorm_artifacts.conflict_resolution.path)
-    : null,
+  // Load conflict resolution if exists (prefer new JSON format)
+  conflict_resolution: context_package.conflict_detection?.resolution_file
+    ? Read(context_package.conflict_detection.resolution_file) // .process/conflict-resolution.json
+    : (brainstorm_artifacts?.conflict_resolution?.exists
+        ? Read(brainstorm_artifacts.conflict_resolution.path)
+        : null),
 
   // Optional MCP enhancements
   mcp_analysis: executeMcpDiscovery()
@@ -408,7 +407,7 @@ This section provides quick reference for TDD task JSON structure. For complete
 │   ├── IMPL-3.2.json                # Complex feature subtask (if needed)
 │   └── ...
 └── .process/
-    ├── CONFLICT_RESOLUTION.md       # Conflict resolution strategies (if conflict_risk ≥ medium)
+    ├── conflict-resolution.json     # Conflict resolution results (if conflict_risk ≥ medium)
     ├── test-context-package.json    # Test coverage analysis
    ├── context-package.json         # Input from context-gather
    ├── context_package_path         # Path to smart context package
@@ -76,6 +76,7 @@ Phase 3: Output Validation (Command)
 ```javascript
 Task(
   subagent_type="cli-execution-agent",
+  run_in_background=false,
   description="Analyze test coverage gaps and generate test strategy",
   prompt=`
 ## TASK OBJECTIVE
@@ -89,7 +90,7 @@ Template: ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.t
 
 ## EXECUTION STEPS
 1. Execute Gemini analysis:
-   cd .workflow/active/{test_session_id}/.process && gemini -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --approval-mode yolo
+   ccw cli -p "$(cat ~/.claude/workflows/cli-templates/prompts/test/test-concept-analysis.txt)" --tool gemini --mode write --cd .workflow/active/{test_session_id}/.process
 
 2. Generate TEST_ANALYSIS_RESULTS.md:
    Synthesize gemini-test-analysis.md into standardized format for task generation
@@ -14,7 +14,7 @@ allowed-tools: Task(*), Read(*), Glob(*)
 
 Orchestrator command that invokes `test-context-search-agent` to gather comprehensive test coverage context for test generation workflows. Generates standardized `test-context-package.json` with coverage analysis, framework detection, and source implementation context.
 
-**Agent**: `test-context-search-agent` (`.claude/agents/test-context-search-agent.md`)
 
 ## Core Philosophy
@@ -86,9 +86,9 @@ if (file_exists(testContextPath)) {
 ```javascript
 Task(
   subagent_type="test-context-search-agent",
+  run_in_background=false,
   description="Gather test coverage context",
   prompt=`
-You are executing as test-context-search-agent (.claude/agents/test-context-search-agent.md).
 
 ## Execution Mode
 **PLAN MODE** (Comprehensive) - Full Phase 1-3 execution
@@ -228,7 +228,7 @@ Refer to `test-context-search-agent.md` Phase 3.2 for complete `test-context-pac
 ## Notes
 
 - **Detection-first**: Always check for existing test-context-package before invoking agent
-- **Agent autonomy**: Agent handles all coverage analysis logic per `.claude/agents/test-context-search-agent.md`
 - **No redundancy**: This command is a thin orchestrator, all logic in agent
 - **Framework agnostic**: Supports Jest, Mocha, pytest, RSpec, Go testing, etc.
 - **Coverage focus**: Primary goal is identifying implementation files without tests
@@ -94,6 +94,7 @@ Phase 2: Test Document Generation (Agent)
 ```javascript
 Task(
   subagent_type="action-planning-agent",
+  run_in_background=false,
   description="Generate test planning documents (IMPL_PLAN.md, task JSONs, TODO_LIST.md)",
   prompt=`
 ## TASK OBJECTIVE
@@ -106,8 +107,6 @@ CRITICAL:
 - Follow the progressive loading strategy defined in your agent specification (load context incrementally from memory-first approach)
 
 ## AGENT CONFIGURATION REFERENCE
-All test task generation rules, schemas, and quality standards are defined in your agent specification:
-@.claude/agents/action-planning-agent.md
 
 Refer to your specification for:
 - Test Task JSON Schema (6-field structure with test-specific metadata)
@@ -161,6 +161,7 @@ echo "[Phase 1] Starting parallel agent analysis (3 agents)"
 
 ```javascript
 Task(subagent_type="ui-design-agent",
+     run_in_background=false,
      prompt="[STYLE_TOKENS_EXTRACTION]
 Extract visual design tokens from code files using code import extraction pattern.
@@ -180,14 +181,14 @@ Task(subagent_type="ui-design-agent",
 - Pattern: rg → Extract values → Compare → If different → Read full context with comments → Record conflict
 - Alternative (if many files): Execute CLI analysis for comprehensive report:
   \`\`\`bash
-  cd ${source} && gemini -p \"
+  ccw cli -p \"
   PURPOSE: Detect color token conflicts across all CSS/SCSS/JS files
   TASK: • Scan all files for color definitions • Identify conflicting values • Extract semantic comments
   MODE: analysis
   CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
   EXPECTED: JSON report listing conflicts with file:line, values, semantic context
   RULES: Focus on core tokens | Report ALL variants | analysis=READ-ONLY
-  \"
+  \" --tool gemini --mode analysis --cd ${source}
   \`\`\`
 
 **Step 1: Load file list**
@@ -276,6 +277,7 @@ Task(subagent_type="ui-design-agent",
 
 ```javascript
 Task(subagent_type="ui-design-agent",
+     run_in_background=false,
      prompt="[ANIMATION_TOKEN_GENERATION_TASK]
 Extract animation tokens from code files using code import extraction pattern.
@@ -295,14 +297,14 @@ Task(subagent_type="ui-design-agent",
 - Pattern: rg → Identify animation types → Map framework usage → Prioritize extraction targets
 - Alternative (if complex framework mix): Execute CLI analysis for comprehensive report:
   \`\`\`bash
-  cd ${source} && gemini -p \"
+  ccw cli -p \"
   PURPOSE: Detect animation frameworks and patterns
   TASK: • Identify frameworks • Map animation patterns • Categorize by complexity
   MODE: analysis
   CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts
   EXPECTED: JSON report listing frameworks, animation types, file locations
   RULES: Focus on framework consistency | Map all animations | analysis=READ-ONLY
-  \"
+  \" --tool gemini --mode analysis --cd ${source}
   \`\`\`
 
 **Step 1: Load file list**
@@ -355,6 +357,7 @@ Task(subagent_type="ui-design-agent",
 
 ```javascript
 Task(subagent_type="ui-design-agent",
+     run_in_background=false,
      prompt="[LAYOUT_TEMPLATE_GENERATION_TASK]
 Extract layout patterns from code files using code import extraction pattern.
@@ -374,14 +377,14 @@ Task(subagent_type="ui-design-agent",
 - Pattern: rg → Count occurrences → Classify by frequency → Prioritize universal components
 - Alternative (if large codebase): Execute CLI analysis for comprehensive categorization:
   \`\`\`bash
-  cd ${source} && gemini -p \"
+  ccw cli -p \"
   PURPOSE: Classify components as universal vs specialized
   TASK: • Identify UI components • Classify reusability • Map layout systems
   MODE: analysis
   CONTEXT: @**/*.css @**/*.scss @**/*.js @**/*.ts @**/*.html
   EXPECTED: JSON report categorizing components, layout patterns, naming conventions
   RULES: Focus on component reusability | Identify layout systems | analysis=READ-ONLY
-  \"
+  \" --tool gemini --mode analysis --cd ${source}
   \`\`\`
 
 **Step 1: Load file list**
@@ -1,39 +0,0 @@
-#!/bin/bash
-# ⚠️ DEPRECATED: This script is deprecated.
-# Please use: ccw tool exec classify_folders '{"path":".","outputFormat":"json"}'
-# This file will be removed in a future version.
-
-# Classify folders by type for documentation generation
-# Usage: get_modules_by_depth.sh | classify-folders.sh
-# Output: folder_path|folder_type|code:N|dirs:N
-
-while IFS='|' read -r depth_info path_info files_info types_info claude_info; do
-  # Extract folder path from format "path:./src/modules"
-  folder_path=$(echo "$path_info" | cut -d':' -f2-)
-
-  # Skip if path extraction failed
-  [[ -z "$folder_path" || ! -d "$folder_path" ]] && continue
-
-  # Count code files (maxdepth 1)
-  code_files=$(find "$folder_path" -maxdepth 1 -type f \
-    \( -name "*.ts" -o -name "*.tsx" -o -name "*.js" -o -name "*.jsx" \
-    -o -name "*.py" -o -name "*.go" -o -name "*.java" -o -name "*.rs" \
-    -o -name "*.c" -o -name "*.cpp" -o -name "*.cs" \) \
-    2>/dev/null | wc -l)
-
-  # Count subdirectories
-  subfolders=$(find "$folder_path" -maxdepth 1 -type d \
-    -not -path "$folder_path" 2>/dev/null | wc -l)
-
-  # Determine folder type
-  if [[ $code_files -gt 0 ]]; then
-    folder_type="code"        # API.md + README.md
-  elif [[ $subfolders -gt 0 ]]; then
-    folder_type="navigation"  # README.md only
-  else
-    folder_type="skip"        # Empty or no relevant content
-  fi
-
-  # Output classification result
-  echo "${folder_path}|${folder_type}|code:${code_files}|dirs:${subfolders}"
-done
@@ -1,229 +0,0 @@
-#!/bin/bash
-# ⚠️ DEPRECATED: This script is deprecated.
-# Please use: ccw tool exec convert_tokens_to_css '{"inputPath":"design-tokens.json","outputPath":"tokens.css"}'
-# This file will be removed in a future version.
-
-# Convert design-tokens.json to tokens.css with Google Fonts import and global font rules
-# Usage: cat design-tokens.json | ./convert_tokens_to_css.sh > tokens.css
-# Or: ./convert_tokens_to_css.sh < design-tokens.json > tokens.css
-
-# Read JSON from stdin
-json_input=$(cat)
-
-# Extract metadata for header comment
-style_name=$(echo "$json_input" | jq -r '.meta.name // "Unknown Style"' 2>/dev/null || echo "Design Tokens")
-
-# Generate header
-cat <<EOF
-/* ========================================
-   Design Tokens: ${style_name}
-   Auto-generated from design-tokens.json
-   ======================================== */
-
-EOF
-
-# ========================================
-# Google Fonts Import Generation
-# ========================================
-# Extract font families and generate Google Fonts import URL
-fonts=$(echo "$json_input" | jq -r '
-  .typography.font_family | to_entries[] | .value
-' 2>/dev/null | sed "s/'//g" | cut -d',' -f1 | sort -u)
-
-# Build Google Fonts URL
-google_fonts_url="https://fonts.googleapis.com/css2?"
-font_params=""
-
-while IFS= read -r font; do
-  # Skip system fonts and empty lines
-  if [[ -z "$font" ]] || [[ "$font" =~ ^(system-ui|sans-serif|serif|monospace|cursive|fantasy)$ ]]; then
-    continue
-  fi
-
-  # Special handling for common web fonts with weights
-  case "$font" in
-    "Comic Neue")
-      font_params+="family=Comic+Neue:wght@300;400;700&"
-      ;;
-    "Patrick Hand"|"Caveat"|"Dancing Script"|"Architects Daughter"|"Indie Flower"|"Shadows Into Light"|"Permanent Marker")
-      # URL-encode font name and add common weights
-      encoded_font=$(echo "$font" | sed 's/ /+/g')
-      font_params+="family=${encoded_font}:wght@400;700&"
-      ;;
-    "Segoe Print"|"Bradley Hand"|"Chilanka")
-      # These are system fonts, skip
-      ;;
-    *)
-      # Generic font: add with default weights
-      encoded_font=$(echo "$font" | sed 's/ /+/g')
-      font_params+="family=${encoded_font}:wght@400;500;600;700&"
-      ;;
-  esac
-done <<< "$fonts"
-
-# Generate @import if we have fonts
-if [[ -n "$font_params" ]]; then
-  # Remove trailing &
-  font_params="${font_params%&}"
-  echo "/* Import Web Fonts */"
-  echo "@import url('${google_fonts_url}${font_params}&display=swap');"
-  echo ""
-fi
-
-# ========================================
-# CSS Custom Properties Generation
-# ========================================
-echo ":root {"
-
-# Colors - Brand
-echo "  /* Colors - Brand */"
-echo "$json_input" | jq -r '
-  .colors.brand | to_entries[] |
-  "  --color-brand-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Colors - Surface
-echo "  /* Colors - Surface */"
-echo "$json_input" | jq -r '
-  .colors.surface | to_entries[] |
-  "  --color-surface-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Colors - Semantic
-echo "  /* Colors - Semantic */"
-echo "$json_input" | jq -r '
-  .colors.semantic | to_entries[] |
-  "  --color-semantic-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Colors - Text
-echo "  /* Colors - Text */"
-echo "$json_input" | jq -r '
-  .colors.text | to_entries[] |
-  "  --color-text-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Colors - Border
-echo "  /* Colors - Border */"
-echo "$json_input" | jq -r '
-  .colors.border | to_entries[] |
-  "  --color-border-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Typography - Font Family
-echo "  /* Typography - Font Family */"
-echo "$json_input" | jq -r '
-  .typography.font_family | to_entries[] |
-  "  --font-family-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Typography - Font Size
-echo "  /* Typography - Font Size */"
-echo "$json_input" | jq -r '
-  .typography.font_size | to_entries[] |
-  "  --font-size-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Typography - Font Weight
-echo "  /* Typography - Font Weight */"
-echo "$json_input" | jq -r '
-  .typography.font_weight | to_entries[] |
-  "  --font-weight-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Typography - Line Height
-echo "  /* Typography - Line Height */"
-echo "$json_input" | jq -r '
-  .typography.line_height | to_entries[] |
-  "  --line-height-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Typography - Letter Spacing
-echo "  /* Typography - Letter Spacing */"
-echo "$json_input" | jq -r '
-  .typography.letter_spacing | to_entries[] |
-  "  --letter-spacing-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Spacing
-echo "  /* Spacing */"
-echo "$json_input" | jq -r '
-  .spacing | to_entries[] |
-  "  --spacing-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Border Radius
-echo "  /* Border Radius */"
-echo "$json_input" | jq -r '
-  .border_radius | to_entries[] |
-  "  --border-radius-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Shadows
-echo "  /* Shadows */"
-echo "$json_input" | jq -r '
-  .shadows | to_entries[] |
-  "  --shadow-\(.key): \(.value);"
-' 2>/dev/null
-
-echo ""
-
-# Breakpoints
-echo "  /* Breakpoints */"
-echo "$json_input" | jq -r '
-  .breakpoints | to_entries[] |
-  "  --breakpoint-\(.key): \(.value);"
-' 2>/dev/null
-
-echo "}"
-echo ""
-
-# ========================================
-# Global Font Application
-# ========================================
-echo "/* ========================================"
-echo "   Global Font Application"
-echo "   ======================================== */"
-echo ""
-echo "body {"
-echo "  font-family: var(--font-family-body);"
-echo "  font-size: var(--font-size-base);"
-echo "  line-height: var(--line-height-normal);"
-echo "  color: var(--color-text-primary);"
-echo "  background-color: var(--color-surface-background);"
-echo "}"
-echo ""
-echo "h1, h2, h3, h4, h5, h6, legend {"
-echo "  font-family: var(--font-family-heading);"
-echo "}"
-echo ""
-echo "/* Reset default margins for better control */"
-echo "* {"
-echo "  margin: 0;"
-echo "  padding: 0;"
-echo "  box-sizing: border-box;"
-echo "}"
@@ -1,161 +0,0 @@
#!/bin/bash
# ⚠️ DEPRECATED: This script is deprecated.
# Please use: ccw tool exec detect_changed_modules '{"baseBranch":"main","format":"list"}'
# This file will be removed in a future version.

# Detect modules affected by git changes or recent modifications
# Usage: detect_changed_modules.sh [format]
#   format: list|grouped|paths (default: paths)
#
# Features:
# - Respects .gitignore patterns (current directory or git root)
# - Detects git changes (staged, unstaged, or last commit)
# - Falls back to recently modified files (last 24 hours)
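#
# Example (hypothetical paths; the "list" mode emits one line per directory):
#   ./detect_changed_modules.sh list
#   depth:1|path:src/utils|files:4|types:[ts,md]|has_claude:no|status:changed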

# Build exclusion filters from .gitignore
build_exclusion_filters() {
    local filters=""

    # Common system/cache directories to exclude
    local system_excludes=(
        ".git" "__pycache__" "node_modules" ".venv" "venv" "env"
        "dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
        "coverage" ".nyc_output" "logs" "tmp" "temp"
    )

    for exclude in "${system_excludes[@]}"; do
        filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
    done

    # Find and parse .gitignore (current dir first, then git root)
    local gitignore_file=""

    # Check current directory first
    if [ -f ".gitignore" ]; then
        gitignore_file=".gitignore"
    else
        # Try to find git root and check for .gitignore there
        local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
        if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
            gitignore_file="$git_root/.gitignore"
        fi
    fi

    # Parse .gitignore if found
    if [ -n "$gitignore_file" ]; then
        while IFS= read -r line; do
            # Skip empty lines and comments
            [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue

            # Remove trailing slash and whitespace
            line=$(echo "$line" | sed 's|/$||' | xargs)

            # Skip wildcard patterns (too complex for simple find)
            [[ "$line" =~ \* ]] && continue

            # Add to filters
            filters+=" -not -path '*/$line' -not -path '*/$line/*'"
        done < "$gitignore_file"
    fi

    echo "$filters"
}

detect_changed_modules() {
    local format="${1:-paths}"
    local changed_files=""
    local affected_dirs=""
    local exclusion_filters=$(build_exclusion_filters)

    # Step 1: Try to get git changes (staged + unstaged)
    if git rev-parse --git-dir > /dev/null 2>&1; then
        changed_files=$(git diff --name-only HEAD 2>/dev/null; git diff --name-only --cached 2>/dev/null)

        # If no changes in working directory, check last commit
        if [ -z "$changed_files" ]; then
            changed_files=$(git diff --name-only HEAD~1 HEAD 2>/dev/null)
        fi
    fi

    # Step 2: If no git changes, find recently modified source files (last 24 hours)
    # Apply exclusion filters from .gitignore
    if [ -z "$changed_files" ]; then
        changed_files=$(eval "find . -type f \( \
            -name '*.md' -o \
            -name '*.js' -o -name '*.ts' -o -name '*.jsx' -o -name '*.tsx' -o \
            -name '*.py' -o -name '*.go' -o -name '*.rs' -o \
            -name '*.java' -o -name '*.cpp' -o -name '*.c' -o -name '*.h' -o \
            -name '*.sh' -o -name '*.ps1' -o \
            -name '*.json' -o -name '*.yaml' -o -name '*.yml' \
            \) $exclusion_filters -mtime -1 2>/dev/null")
    fi

    # Step 3: Extract unique parent directories
    if [ -n "$changed_files" ]; then
        affected_dirs=$(echo "$changed_files" | \
            sed 's|/[^/]*$||' | \
            grep -v '^\.$' | \
            sort -u)

        # Add current directory if files are in root
        if echo "$changed_files" | grep -q '^[^/]*$'; then
            affected_dirs=$(echo -e ".\n$affected_dirs" | sort -u)
        fi
    fi

    # Step 4: Output in requested format
    case "$format" in
        "list")
            if [ -n "$affected_dirs" ]; then
                echo "$affected_dirs" | while read dir; do
                    if [ -d "$dir" ]; then
                        local file_count=$(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l)
                        local depth=$(echo "$dir" | tr -cd '/' | wc -c)
                        if [ "$dir" = "." ]; then depth=0; fi

                        local types=$(find "$dir" -maxdepth 1 -type f -name "*.*" 2>/dev/null | \
                            grep -E '\.[^/]*$' | sed 's/.*\.//' | sort -u | tr '\n' ',' | sed 's/,$//')
                        local has_claude="no"
                        [ -f "$dir/CLAUDE.md" ] && has_claude="yes"
                        echo "depth:$depth|path:$dir|files:$file_count|types:[$types]|has_claude:$has_claude|status:changed"
                    fi
                done
            fi
            ;;

        "grouped")
            if [ -n "$affected_dirs" ]; then
                echo "📊 Affected modules by changes:"
                # Group by depth
                echo "$affected_dirs" | while read dir; do
                    if [ -d "$dir" ]; then
                        local depth=$(echo "$dir" | tr -cd '/' | wc -c)
                        if [ "$dir" = "." ]; then depth=0; fi
                        local claude_indicator=""
                        [ -f "$dir/CLAUDE.md" ] && claude_indicator=" [✓]"
                        echo "$depth:$dir$claude_indicator"
                    fi
                done | sort -n | awk -F: '
                {
                    if ($1 != prev_depth) {
                        if (prev_depth != "") print ""
                        print " 📁 Depth " $1 ":"
                        prev_depth = $1
                    }
                    print " - " $2 " (changed)"
                }'
            else
                echo "📊 No recent changes detected"
            fi
            ;;

        "paths"|*)
            echo "$affected_dirs"
            ;;
    esac
}

# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    detect_changed_modules "$@"
fi
@@ -1,87 +0,0 @@
#!/usr/bin/env bash
# ⚠️ DEPRECATED: This script is deprecated.
# Please use: ccw tool exec discover_design_files '{"sourceDir":".","outputPath":"output.json"}'
# This file will be removed in a future version.

# discover-design-files.sh - Discover design-related files and output JSON
# Usage: discover-design-files.sh <source_dir> <output_json>
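#
# Example (hypothetical paths): scan ./src and write the summary JSON beside it
#   ./discover-design-files.sh ./src design-files.json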

set -euo pipefail

source_dir="${1:-.}"
output_json="${2:-discovered-files.json}"

# Function to find and format files as a JSON array
find_files() {
    local pattern="$1"
    local files
    files=$(eval "find \"$source_dir\" -type f $pattern \
        ! -path \"*/node_modules/*\" \
        ! -path \"*/dist/*\" \
        ! -path \"*/.git/*\" \
        ! -path \"*/build/*\" \
        ! -path \"*/coverage/*\" \
        2>/dev/null | sort || true")

    local count
    if [ -z "$files" ]; then
        count=0
    else
        count=$(echo "$files" | grep -c . || echo 0)
    fi
    local json_files=""

    if [ "$count" -gt 0 ]; then
        json_files=$(echo "$files" | awk '{printf "\"%s\"%s\n", $0, (NR<'$count'?",":"")}' | tr '\n' ' ')
    fi

    echo "$count|$json_files"
}

# Discover CSS/SCSS files
css_result=$(find_files '\( -name "*.css" -o -name "*.scss" \)')
css_count=${css_result%%|*}
css_files=${css_result#*|}

# Discover JS/TS files (all framework files)
js_result=$(find_files '\( -name "*.js" -o -name "*.ts" -o -name "*.jsx" -o -name "*.tsx" -o -name "*.mjs" -o -name "*.cjs" -o -name "*.vue" -o -name "*.svelte" \)')
js_count=${js_result%%|*}
js_files=${js_result#*|}

# Discover HTML files
html_result=$(find_files '-name "*.html"')
html_count=${html_result%%|*}
html_files=${html_result#*|}

# Calculate total
total_count=$((css_count + js_count + html_count))

# Generate JSON
cat > "$output_json" << EOF
{
  "discovery_time": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
  "source_directory": "$(cd "$source_dir" && pwd)",
  "file_types": {
    "css": {
      "count": $css_count,
      "files": [${css_files}]
    },
    "js": {
      "count": $js_count,
      "files": [${js_files}]
    },
    "html": {
      "count": $html_count,
      "files": [${html_files}]
    }
  },
  "total_files": $total_count
}
EOF

# Ensure the file is fully written and synchronized to disk
# This prevents race conditions when the file is immediately read by another process
sync "$output_json" 2>/dev/null || sync  # Sync the specific file, fall back to a full sync
sleep 0.1  # Additional safety: 100ms delay for filesystem metadata update

echo "Discovered: CSS=$css_count, JS=$js_count, HTML=$html_count (Total: $total_count)" >&2
@@ -1,243 +0,0 @@
/**
 * Animation & Transition Extraction Script
 *
 * Extracts CSS animations, transitions, and transform patterns from a live web page.
 * This script runs in the browser context via Chrome DevTools Protocol.
 *
 * @returns {Object} Structured animation data
 */
(() => {
  const extractionTimestamp = new Date().toISOString();
  const currentUrl = window.location.href;

  /**
   * Parse transition shorthand or individual properties
   */
  function parseTransition(element, computedStyle) {
    const transition = computedStyle.transition || computedStyle.webkitTransition;

    if (!transition || transition === 'none' || transition === 'all 0s ease 0s') {
      return null;
    }

    // Parse shorthand: "property duration easing delay"
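    // Illustrative (hypothetical values): "opacity 0.3s ease-in-out 0.1s, transform 200ms linear"
    // parses to [{ property: 'opacity', duration: '0.3s', easing: 'ease-in-out', delay: '0.1s' },
    //            { property: 'transform', duration: '200ms', easing: 'linear', delay: '0s' }]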
    const transitions = [];
    const parts = transition.split(/,\s*/);

    parts.forEach(part => {
      const match = part.match(/^(\S+)\s+([\d.]+m?s)\s+(\S+)(?:\s+([\d.]+m?s))?/);
      if (match) {
        transitions.push({
          property: match[1],
          duration: match[2],
          easing: match[3],
          delay: match[4] || '0s'
        });
      }
    });

    return transitions.length > 0 ? transitions : null;
  }

  /**
   * Extract animation name and properties
   */
  function parseAnimation(element, computedStyle) {
    const animationName = computedStyle.animationName || computedStyle.webkitAnimationName;

    if (!animationName || animationName === 'none') {
      return null;
    }

    return {
      name: animationName,
      duration: computedStyle.animationDuration || computedStyle.webkitAnimationDuration,
      easing: computedStyle.animationTimingFunction || computedStyle.webkitAnimationTimingFunction,
      delay: computedStyle.animationDelay || computedStyle.webkitAnimationDelay || '0s',
      iterationCount: computedStyle.animationIterationCount || computedStyle.webkitAnimationIterationCount || '1',
      direction: computedStyle.animationDirection || computedStyle.webkitAnimationDirection || 'normal',
      fillMode: computedStyle.animationFillMode || computedStyle.webkitAnimationFillMode || 'none'
    };
  }

  /**
   * Extract transform value
   */
  function parseTransform(computedStyle) {
    const transform = computedStyle.transform || computedStyle.webkitTransform;

    if (!transform || transform === 'none') {
      return null;
    }

    return transform;
  }

  /**
   * Get element selector (simplified for readability)
   */
  function getSelector(element) {
    if (element.id) {
      return `#${element.id}`;
    }

    if (element.className && typeof element.className === 'string') {
      const classes = element.className.trim().split(/\s+/).slice(0, 2).join('.');
      if (classes) {
        return `.${classes}`;
      }
    }

    return element.tagName.toLowerCase();
  }

  /**
   * Extract all stylesheets and find @keyframes rules
   */
  function extractKeyframes() {
    const keyframes = {};

    try {
      // Iterate through all stylesheets
      Array.from(document.styleSheets).forEach(sheet => {
        try {
          // Skip external stylesheets due to CORS
          if (sheet.href && !sheet.href.startsWith(window.location.origin)) {
            return;
          }

          Array.from(sheet.cssRules || sheet.rules || []).forEach(rule => {
            // Check for @keyframes rules
            if (rule.type === CSSRule.KEYFRAMES_RULE || rule.type === CSSRule.WEBKIT_KEYFRAMES_RULE) {
              const name = rule.name;
              const frames = {};

              Array.from(rule.cssRules || []).forEach(keyframe => {
                const key = keyframe.keyText; // e.g., "0%", "50%", "100%"
                frames[key] = keyframe.style.cssText;
              });

              keyframes[name] = frames;
            }
          });
        } catch (e) {
          // Skip stylesheets that can't be accessed (CORS)
          console.warn('Cannot access stylesheet:', sheet.href, e.message);
        }
      });
    } catch (e) {
      console.error('Error extracting keyframes:', e);
    }

    return keyframes;
  }

  /**
   * Scan visible elements for animations and transitions
   */
  function scanElements() {
    const elements = document.querySelectorAll('*');
    const transitionData = [];
    const animationData = [];
    const transformData = [];

    const uniqueTransitions = new Set();
    const uniqueAnimations = new Set();
    const uniqueEasings = new Set();
    const uniqueDurations = new Set();

    elements.forEach(element => {
      // Skip invisible elements
      const rect = element.getBoundingClientRect();
      if (rect.width === 0 && rect.height === 0) {
        return;
      }

      const computedStyle = window.getComputedStyle(element);

      // Extract transitions
      const transitions = parseTransition(element, computedStyle);
      if (transitions) {
        const selector = getSelector(element);
        transitions.forEach(t => {
          const key = `${t.property}-${t.duration}-${t.easing}`;
          if (!uniqueTransitions.has(key)) {
            uniqueTransitions.add(key);
            transitionData.push({
              selector,
              ...t
            });
            uniqueEasings.add(t.easing);
            uniqueDurations.add(t.duration);
          }
        });
      }

      // Extract animations
      const animation = parseAnimation(element, computedStyle);
      if (animation) {
        const selector = getSelector(element);
        const key = `${animation.name}-${animation.duration}`;
        if (!uniqueAnimations.has(key)) {
          uniqueAnimations.add(key);
          animationData.push({
            selector,
            ...animation
          });
          uniqueEasings.add(animation.easing);
          uniqueDurations.add(animation.duration);
        }
      }

      // Extract transforms (on hover/active, we only get current state)
      const transform = parseTransform(computedStyle);
      if (transform) {
        const selector = getSelector(element);
        transformData.push({
          selector,
          transform
        });
      }
    });

    return {
      transitions: transitionData,
      animations: animationData,
      transforms: transformData,
      uniqueEasings: Array.from(uniqueEasings),
      uniqueDurations: Array.from(uniqueDurations)
    };
  }

  /**
   * Main extraction function
   */
  function extractAnimations() {
    const elementData = scanElements();
    const keyframes = extractKeyframes();

    return {
      metadata: {
        timestamp: extractionTimestamp,
        url: currentUrl,
        method: 'chrome-devtools',
        version: '1.0.0'
      },
      transitions: elementData.transitions,
      animations: elementData.animations,
      transforms: elementData.transforms,
      keyframes: keyframes,
      summary: {
        total_transitions: elementData.transitions.length,
        total_animations: elementData.animations.length,
        total_transforms: elementData.transforms.length,
        total_keyframes: Object.keys(keyframes).length,
        unique_easings: elementData.uniqueEasings,
        unique_durations: elementData.uniqueDurations
      }
    };
  }

  // Execute extraction
  return extractAnimations();
})();
@@ -1,118 +0,0 @@
/**
 * Extract Computed Styles from DOM
 *
 * This script extracts real CSS computed styles from a webpage's DOM
 * to provide accurate design tokens for UI replication.
 *
 * Usage: Execute this function via Chrome DevTools evaluate_script
 */
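// Illustrative result shape (hypothetical values; the structure mirrors the
// return value built at the bottom of this script):
// { metadata: { extractedAt, url, method: 'computed-styles' },
//   tokens: { colors: ['rgb(17, 24, 39)'], fontSizes: ['14px', '16px'], spacing: ['8px'] } }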

(() => {
  /**
   * Extract unique values from a set and sort them
   */
  const uniqueSorted = (set) => {
    return Array.from(set)
      .filter(v => v && v !== 'none' && v !== '0px' && v !== 'rgba(0, 0, 0, 0)')
      .sort();
  };

  /**
   * Parse rgb/rgba to OKLCH format (placeholder - returns original for now)
   */
  const toOKLCH = (color) => {
    // TODO: Implement actual RGB to OKLCH conversion
    // For now, return the original color with a note
    return `${color} /* TODO: Convert to OKLCH */`;
  };

  /**
   * Extract only key styles from an element
   */
  const extractKeyStyles = (element) => {
    const s = window.getComputedStyle(element);
    return {
      color: s.color,
      bg: s.backgroundColor,
      borderRadius: s.borderRadius,
      boxShadow: s.boxShadow,
      fontSize: s.fontSize,
      fontWeight: s.fontWeight,
      padding: s.padding,
      margin: s.margin
    };
  };

  /**
   * Main extraction function - extract all critical design tokens
   */
  const extractDesignTokens = () => {
    // Include all key UI elements
    const selectors = [
      'button', '.btn', '[role="button"]',
      'input', 'textarea', 'select',
      'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
      '.card', 'article', 'section',
      'a', 'p', 'nav', 'header', 'footer'
    ];

    // Collect all design tokens
    const tokens = {
      colors: new Set(),
      borderRadii: new Set(),
      shadows: new Set(),
      fontSizes: new Set(),
      fontWeights: new Set(),
      spacing: new Set()
    };

    // Extract from all elements
    selectors.forEach(selector => {
      try {
        const elements = document.querySelectorAll(selector);
        elements.forEach(element => {
          const s = extractKeyStyles(element);

          // Collect all tokens (no limits)
          if (s.color && s.color !== 'rgba(0, 0, 0, 0)') tokens.colors.add(s.color);
          if (s.bg && s.bg !== 'rgba(0, 0, 0, 0)') tokens.colors.add(s.bg);
          if (s.borderRadius && s.borderRadius !== '0px') tokens.borderRadii.add(s.borderRadius);
          if (s.boxShadow && s.boxShadow !== 'none') tokens.shadows.add(s.boxShadow);
          if (s.fontSize) tokens.fontSizes.add(s.fontSize);
          if (s.fontWeight) tokens.fontWeights.add(s.fontWeight);

          // Extract all spacing values
          [s.padding, s.margin].forEach(val => {
            if (val && val !== '0px') {
              val.split(' ').forEach(v => {
                if (v && v !== '0px') tokens.spacing.add(v);
              });
            }
          });
        });
      } catch (e) {
        console.warn(`Error: ${selector}`, e);
      }
    });

    // Return all tokens (no element details, to save context)
    return {
      metadata: {
        extractedAt: new Date().toISOString(),
        url: window.location.href,
        method: 'computed-styles'
      },
      tokens: {
        colors: uniqueSorted(tokens.colors),
        borderRadii: uniqueSorted(tokens.borderRadii), // ALL radius values
        shadows: uniqueSorted(tokens.shadows), // ALL shadows
        fontSizes: uniqueSorted(tokens.fontSizes),
        fontWeights: uniqueSorted(tokens.fontWeights),
        spacing: uniqueSorted(tokens.spacing)
      }
    };
  };

  // Execute and return results
  return extractDesignTokens();
})();
@@ -1,411 +0,0 @@
/**
 * Extract Layout Structure from DOM - Enhanced Version
 *
 * Extracts real layout information from the DOM to provide accurate
 * structural data for UI replication.
 *
 * Features:
 * - Framework detection (Nuxt.js, Next.js, React, Vue, Angular)
 * - Multi-strategy container detection (strict → relaxed → class-based → framework-specific)
 * - Intelligent main content detection with common class names support
 * - Supports modern SPA frameworks
 * - Detects non-semantic main containers (.main, .content, etc.)
 * - Progressive exploration: auto-discovers missing selectors when standard patterns fail
 * - Suggests new class names to add to the script based on actual page structure
 *
 * Progressive Exploration:
 * When fewer than 3 main containers are found, the script automatically:
 * 1. Analyzes all large visible containers (≥500×300px)
 * 2. Extracts class name patterns (main/content/wrapper/container/page/etc.)
 * 3. Suggests new selectors to add to the script
 * 4. Returns exploration data in result.exploration
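 *
 * Illustrative exploration output (hypothetical class names and sizes; the
 * field names match what exploreMainContainers below actually returns):
 *   result.exploration = {
 *     suggestedSelectors: ['.page-wrapper', '.content-area'],
 *     discoveredCandidates: [{ classes: ['page-wrapper'], bounds: { width: 1280, height: 900 } }]
 *   }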
 *
 * Usage: Execute via Chrome DevTools evaluate_script
 * Version: 2.2.0
 */

(() => {
  /**
   * Get element's bounding box relative to viewport
   */
  const getBounds = (element) => {
    const rect = element.getBoundingClientRect();
    return {
      x: Math.round(rect.x),
      y: Math.round(rect.y),
      width: Math.round(rect.width),
      height: Math.round(rect.height)
    };
  };

  /**
   * Extract layout properties from an element
   */
  const extractLayoutProps = (element) => {
    const s = window.getComputedStyle(element);

    return {
      // Core layout
      display: s.display,
      position: s.position,

      // Flexbox
      flexDirection: s.flexDirection,
      justifyContent: s.justifyContent,
      alignItems: s.alignItems,
      flexWrap: s.flexWrap,
      gap: s.gap,

      // Grid
      gridTemplateColumns: s.gridTemplateColumns,
      gridTemplateRows: s.gridTemplateRows,
      gridAutoFlow: s.gridAutoFlow,

      // Dimensions
      width: s.width,
      height: s.height,
      maxWidth: s.maxWidth,
      minWidth: s.minWidth,

      // Spacing
      padding: s.padding,
      margin: s.margin
    };
  };

  /**
   * Identify layout pattern for an element
   */
  const identifyPattern = (props) => {
    const { display, flexDirection, gridTemplateColumns } = props;

    if (display === 'flex' || display === 'inline-flex') {
      if (flexDirection === 'column') return 'flex-column';
      if (flexDirection === 'row') return 'flex-row';
      return 'flex';
    }

    if (display === 'grid') {
      const cols = gridTemplateColumns;
      if (cols && cols !== 'none') {
        const colCount = cols.split(' ').length;
        return `grid-${colCount}col`;
      }
      return 'grid';
    }

    if (display === 'block') return 'block';

    return display;
  };

  /**
   * Detect frontend framework
   */
  const detectFramework = () => {
    if (document.querySelector('#__nuxt')) return { name: 'Nuxt.js', version: 'unknown' };
    if (document.querySelector('#__next')) return { name: 'Next.js', version: 'unknown' };
    if (document.querySelector('[data-reactroot]')) return { name: 'React', version: 'unknown' };
    if (document.querySelector('[ng-version]')) return { name: 'Angular', version: 'unknown' };
    if (window.Vue) return { name: 'Vue.js', version: window.Vue.version || 'unknown' };
    return { name: 'Unknown', version: 'unknown' };
  };

  /**
   * Build layout tree recursively
   */
  const buildLayoutTree = (element, depth = 0, maxDepth = 3) => {
    if (depth > maxDepth) return null;

    const props = extractLayoutProps(element);
    const bounds = getBounds(element);
    const pattern = identifyPattern(props);

    // Get semantic role
    const tagName = element.tagName.toLowerCase();
    const classes = Array.from(element.classList).slice(0, 3); // Max 3 classes
    const role = element.getAttribute('role');

    // Build node
    const node = {
      tag: tagName,
      classes: classes,
      role: role,
      pattern: pattern,
      bounds: bounds,
      layout: {
        display: props.display,
        position: props.position
      }
    };

    // Add flex/grid specific properties
    if (props.display === 'flex' || props.display === 'inline-flex') {
      node.layout.flexDirection = props.flexDirection;
      node.layout.justifyContent = props.justifyContent;
      node.layout.alignItems = props.alignItems;
      node.layout.gap = props.gap;
    }

    if (props.display === 'grid') {
      node.layout.gridTemplateColumns = props.gridTemplateColumns;
      node.layout.gridTemplateRows = props.gridTemplateRows;
      node.layout.gap = props.gap;
    }

    // Process children for container elements
    if (props.display === 'flex' || props.display === 'grid' || props.display === 'block') {
      const children = Array.from(element.children);
      if (children.length > 0 && children.length < 50) { // Limit to 50 children
        node.children = children
          .map(child => buildLayoutTree(child, depth + 1, maxDepth))
          .filter(child => child !== null);
      }
    }

    return node;
  };

  /**
   * Find main layout containers with multi-strategy approach
   */
  const findMainContainers = () => {
    const containers = [];
    const found = new Set();

    // Strategy 1: Strict selectors (body direct children)
    const strictSelectors = [
      'body > header',
      'body > nav',
      'body > main',
      'body > footer'
    ];

    // Strategy 2: Relaxed selectors (any level)
    const relaxedSelectors = [
      'header',
      'nav',
      'main',
      'footer',
      '[role="banner"]',
      '[role="navigation"]',
      '[role="main"]',
      '[role="contentinfo"]'
    ];

    // Strategy 3: Common class-based main content selectors
    const commonClassSelectors = [
      '.main',
      '.content',
      '.main-content',
      '.page-content',
      '.container.main',
      '.wrapper > .main',
      'div[class*="main-wrapper"]',
      'div[class*="content-wrapper"]'
    ];

    // Strategy 4: Framework-specific selectors
    const frameworkSelectors = [
      '#__nuxt header', '#__nuxt .main', '#__nuxt main', '#__nuxt footer',
      '#__next header', '#__next .main', '#__next main', '#__next footer',
      '#app header', '#app .main', '#app main', '#app footer',
      '[data-app] header', '[data-app] .main', '[data-app] main', '[data-app] footer'
    ];

    // Try all strategies
    const allSelectors = [...strictSelectors, ...relaxedSelectors, ...commonClassSelectors, ...frameworkSelectors];

    allSelectors.forEach(selector => {
      try {
        const elements = document.querySelectorAll(selector);
        elements.forEach(element => {
          // Avoid duplicates and invisible elements
          if (!found.has(element) && element.offsetParent !== null) {
            found.add(element);
            const tree = buildLayoutTree(element, 0, 3);
            if (tree && tree.bounds.width > 0 && tree.bounds.height > 0) {
              containers.push(tree);
            }
          }
        });
      } catch (e) {
        console.warn(`Selector failed: ${selector}`, e);
      }
    });

    // Fallback: If no containers found, use body's direct children
    if (containers.length === 0) {
      Array.from(document.body.children).forEach(child => {
        if (child.offsetParent !== null && !found.has(child)) {
          const tree = buildLayoutTree(child, 0, 2);
          if (tree && tree.bounds.width > 100 && tree.bounds.height > 100) {
            containers.push(tree);
          }
        }
      });
    }

    return containers;
  };

  /**
   * Progressive exploration: Discover main containers when standard selectors fail
   * Analyzes large visible containers and suggests class name patterns
   */
  const exploreMainContainers = () => {
    const candidates = [];
    const minWidth = 500;
    const minHeight = 300;

    // Find all large visible divs
    const allDivs = document.querySelectorAll('div');
    allDivs.forEach(div => {
      const rect = div.getBoundingClientRect();
      const style = window.getComputedStyle(div);

      // Filter: large size, visible, not header/footer
      if (rect.width >= minWidth &&
          rect.height >= minHeight &&
          div.offsetParent !== null &&
          !div.closest('header') &&
          !div.closest('footer')) {

        const classes = Array.from(div.classList);
        const area = rect.width * rect.height;

        candidates.push({
          element: div,
          classes: classes,
          area: area,
          bounds: {
            width: Math.round(rect.width),
            height: Math.round(rect.height)
          },
          display: style.display,
          depth: getElementDepth(div)
        });
      }
    });

    // Sort by area (largest first) and take top candidates
    candidates.sort((a, b) => b.area - a.area);

    // Extract unique class patterns from top candidates
    const classPatterns = new Set();
    candidates.slice(0, 20).forEach(c => {
      c.classes.forEach(cls => {
        // Identify potential main content class patterns
        if (cls.match(/main|content|container|wrapper|page|body|layout|app/i)) {
          classPatterns.add(cls);
        }
      });
    });

    return {
      candidates: candidates.slice(0, 10).map(c => ({
        classes: c.classes,
        bounds: c.bounds,
        display: c.display,
        depth: c.depth
      })),
      suggestedSelectors: Array.from(classPatterns).map(cls => `.${cls}`)
    };
  };

  /**
   * Get element depth in DOM tree
   */
  const getElementDepth = (element) => {
    let depth = 0;
    let current = element;
    while (current.parentElement) {
      depth++;
      current = current.parentElement;
    }
    return depth;
  };

  /**
   * Analyze layout patterns
   */
  const analyzePatterns = (containers) => {
    const patterns = {
      flexColumn: 0,
      flexRow: 0,
      grid: 0,
      sticky: 0,
      fixed: 0
    };

    const analyze = (node) => {
      if (!node) return;

      if (node.pattern === 'flex-column') patterns.flexColumn++;
      if (node.pattern === 'flex-row') patterns.flexRow++;
      if (node.pattern && node.pattern.startsWith('grid')) patterns.grid++;
      if (node.layout.position === 'sticky') patterns.sticky++;
      if (node.layout.position === 'fixed') patterns.fixed++;

      if (node.children) {
        node.children.forEach(analyze);
      }
    };

    containers.forEach(analyze);
    return patterns;
  };

  /**
   * Main extraction function with progressive exploration
   */
  const extractLayout = () => {
    const framework = detectFramework();
    const containers = findMainContainers();
    const patterns = analyzePatterns(containers);

    // Progressive exploration: if too few containers found, explore and suggest
    let exploration = null;
    const minExpectedContainers = 3; // At least header, main, footer

    if (containers.length < minExpectedContainers) {
      exploration = exploreMainContainers();

      // Add warning message
      exploration.warning = `Only ${containers.length} containers found. Consider adding these selectors to the script:`;
      exploration.recommendation = exploration.suggestedSelectors.join(', ');
    }

    const result = {
      metadata: {
        extractedAt: new Date().toISOString(),
        url: window.location.href,
        framework: framework,
        method: 'layout-structure-enhanced',
        version: '2.2.0'
      },
      statistics: {
        totalContainers: containers.length,
        patterns: patterns
      },
      structure: containers
    };

    // Add exploration results if triggered
    if (exploration) {
      result.exploration = {
        triggered: true,
        reason: 'Insufficient containers found with standard selectors',
        discoveredCandidates: exploration.candidates,
        suggestedSelectors: exploration.suggestedSelectors,
        warning: exploration.warning,
        recommendation: exploration.recommendation
      };
    }

    return result;
  };

  // Execute and return results
  return extractLayout();
})();
@@ -1,717 +0,0 @@
#!/bin/bash
# ⚠️ DEPRECATED: This script is deprecated.
# Please use: ccw tool exec generate_module_docs '{"path":".","strategy":"single-layer","tool":"gemini"}'
# This file will be removed in a future version.

# Generate documentation for modules and projects with multiple strategies
# Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]
#   strategy: full|single|project-readme|project-architecture|http-api
#   source_path: Path to the source module directory (or project root for project-level docs)
#   project_name: Project name for output path (e.g., "myproject")
#   tool: gemini|qwen|codex (default: gemini)
#   model: Model name (optional, uses tool defaults)
#
# Default Models:
#   gemini: gemini-2.5-flash
#   qwen: coder-model
#   codex: gpt5-codex
#
# Module-Level Strategies:
#   full: Full documentation generation
#     - Read: All files in current and subdirectories (@**/*)
#     - Generate: API.md + README.md for each directory containing code files
#     - Use: Deep directories (Layer 3), comprehensive documentation
#
#   single: Single-layer documentation
#     - Read: Current directory code + child API.md/README.md files
#     - Generate: API.md + README.md only in current directory
#     - Use: Upper layers (Layer 1-2), incremental updates
#
# Project-Level Strategies:
#   project-readme: Project overview documentation
#     - Read: All module API.md and README.md files
#     - Generate: README.md (project root)
#     - Use: After all module docs are generated
#
#   project-architecture: System design documentation
#     - Read: All module docs + project README
#     - Generate: ARCHITECTURE.md + EXAMPLES.md
#     - Use: After project README is generated
#
#   http-api: HTTP API documentation
#     - Read: API route files + existing docs
#     - Generate: api/README.md
#     - Use: For projects with HTTP APIs
#
# Output Structure:
#   Module docs: .workflow/docs/{project_name}/{source_path}/API.md
#   Module docs: .workflow/docs/{project_name}/{source_path}/README.md
#   Project docs: .workflow/docs/{project_name}/README.md
#   Project docs: .workflow/docs/{project_name}/ARCHITECTURE.md
#   Project docs: .workflow/docs/{project_name}/EXAMPLES.md
#   API docs: .workflow/docs/{project_name}/api/README.md
#
# Features:
# - Path mirroring: source structure → docs structure
# - Template-driven generation
# - Respects .gitignore patterns
# - Detects code vs navigation folders
# - Tool fallback support
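#
# Example (hypothetical invocation; the path and project name are placeholders):
#   ./generate_module_docs.sh single ./src/core myproject gemini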

# Build exclusion filters from .gitignore
build_exclusion_filters() {
    local filters=""

    # Common system/cache directories to exclude
    local system_excludes=(
        ".git" "__pycache__" "node_modules" ".venv" "venv" "env"
        "dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
        "coverage" ".nyc_output" "logs" "tmp" "temp" ".workflow"
    )

    for exclude in "${system_excludes[@]}"; do
        filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
    done

    # Find and parse .gitignore (current dir first, then git root)
    local gitignore_file=""

    # Check current directory first
    if [ -f ".gitignore" ]; then
        gitignore_file=".gitignore"
    else
        # Try to find git root and check for .gitignore there
        local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
        if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
            gitignore_file="$git_root/.gitignore"
        fi
    fi

    # Parse .gitignore if found
    if [ -n "$gitignore_file" ]; then
        while IFS= read -r line; do
            # Skip empty lines and comments
            [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue

            # Remove trailing slash and whitespace
            line=$(echo "$line" | sed 's|/$||' | xargs)

            # Skip wildcard patterns (too complex for simple find)
            [[ "$line" =~ \* ]] && continue

            # Add to filters
            filters+=" -not -path '*/$line' -not -path '*/$line/*'"
        done < "$gitignore_file"
    fi

    echo "$filters"
}

# Detect folder type (code vs navigation)
detect_folder_type() {
    local target_path="$1"
    local exclusion_filters="$2"

    # Count code files (primary indicators)
    local code_count=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)

    if [ $code_count -gt 0 ]; then
        echo "code"
    else
        echo "navigation"
    fi
}

# Scan directory structure and generate structured information
scan_directory_structure() {
    local target_path="$1"
    local strategy="$2"

    if [ ! -d "$target_path" ]; then
        echo "Directory not found: $target_path"
        return 1
    fi

    local exclusion_filters=$(build_exclusion_filters)
    local structure_info=""

    # Get basic directory info
    local dir_name=$(basename "$target_path")
    local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
    local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)
    local folder_type=$(detect_folder_type "$target_path" "$exclusion_filters")

    structure_info+="Directory: $dir_name\n"
    structure_info+="Total files: $total_files\n"
    structure_info+="Total directories: $total_dirs\n"
    structure_info+="Folder type: $folder_type\n\n"

    if [ "$strategy" = "full" ]; then
        # For full: show all subdirectories with file counts
        structure_info+="Subdirectories with files:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
                local rel_path=${dir#$target_path/}
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                if [ $file_count -gt 0 ]; then
                    local subdir_type=$(detect_folder_type "$dir" "$exclusion_filters")
                    structure_info+=" - $rel_path/ ($file_count files, type: $subdir_type)\n"
                fi
            fi
        done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
    else
        # For single: show direct children only
        structure_info+="Direct subdirectories:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ]; then
                local dir_name=$(basename "$dir")
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                local has_api=$([ -f "$dir/API.md" ] && echo " [has API.md]" || echo "")
                local has_readme=$([ -f "$dir/README.md" ] && echo " [has README.md]" || echo "")
                structure_info+=" - $dir_name/ ($file_count files)$has_api$has_readme\n"
            fi
        done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
    fi

    # Show main file types in current directory
    structure_info+="\nCurrent directory files:\n"
    local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' -o -name '*.go' -o -name '*.rs' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)

    structure_info+=" - Code files: $code_files\n"
    structure_info+=" - Config files: $config_files\n"
    structure_info+=" - Documentation: $doc_files\n"

    printf "%b" "$structure_info"
}

# Calculate output path based on source path and project name
calculate_output_path() {
    local source_path="$1"
    local project_name="$2"
    local project_root="$3"

    # Get absolute path of source (normalize to Unix-style path)
    local abs_source=$(cd "$source_path" && pwd)

    # Normalize project root to same format
    local norm_project_root=$(cd "$project_root" && pwd)

    # Calculate relative path from project root
    local rel_path="${abs_source#$norm_project_root}"

    # Remove leading slash if present
    rel_path="${rel_path#/}"

    # If source is project root, use project name directly
    if [ "$abs_source" = "$norm_project_root" ] || [ -z "$rel_path" ]; then
        echo "$norm_project_root/.workflow/docs/$project_name"
    else
        echo "$norm_project_root/.workflow/docs/$project_name/$rel_path"
    fi
}

generate_module_docs() {
    local strategy="$1"
    local source_path="$2"
    local project_name="$3"
    local tool="${4:-gemini}"
    local model="$5"

    # Validate parameters
    if [ -z "$strategy" ] || [ -z "$source_path" ] || [ -z "$project_name" ]; then
        echo "❌ Error: Strategy, source path, and project name are required"
        echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
        echo "Module strategies: full, single"
        echo "Project strategies: project-readme, project-architecture, http-api"
        return 1
    fi

    # Validate strategy
    local valid_strategies=("full" "single" "project-readme" "project-architecture" "http-api")
    local strategy_valid=false
    for valid_strategy in "${valid_strategies[@]}"; do
        if [ "$strategy" = "$valid_strategy" ]; then
            strategy_valid=true
            break
        fi
    done

    if [ "$strategy_valid" = false ]; then
        echo "❌ Error: Invalid strategy '$strategy'"
        echo "Valid module strategies: full, single"
        echo "Valid project strategies: project-readme, project-architecture, http-api"
        return 1
    fi

    if [ ! -d "$source_path" ]; then
        echo "❌ Error: Source directory '$source_path' does not exist"
        return 1
    fi

    # Set default models if not specified
    if [ -z "$model" ]; then
        case "$tool" in
            gemini)
                model="gemini-2.5-flash"
                ;;
            qwen)
                model="coder-model"
                ;;
            codex)
                model="gpt5-codex"
                ;;
            *)
                model=""
                ;;
        esac
    fi

    # Build exclusion filters
    local exclusion_filters=$(build_exclusion_filters)

    # Get project root
    local project_root=$(git rev-parse --show-toplevel 2>/dev/null || pwd)

    # Determine if this is a project-level strategy
    local is_project_level=false
    if [[ "$strategy" =~ ^project- ]] || [ "$strategy" = "http-api" ]; then
        is_project_level=true
    fi

    # Calculate output path
    local output_path
    if [ "$is_project_level" = true ]; then
        # Project-level docs go to project root
        if [ "$strategy" = "http-api" ]; then
            output_path="$project_root/.workflow/docs/$project_name/api"
        else
            output_path="$project_root/.workflow/docs/$project_name"
        fi
    else
        output_path=$(calculate_output_path "$source_path" "$project_name" "$project_root")
    fi

    # Create output directory
    mkdir -p "$output_path"

    # Detect folder type (only for module-level strategies)
    local folder_type=""
    if [ "$is_project_level" = false ]; then
        folder_type=$(detect_folder_type "$source_path" "$exclusion_filters")
    fi

    # Load templates based on strategy
    local api_template=""
    local readme_template=""
    local template_content=""

    if [ "$is_project_level" = true ]; then
        # Project-level templates
        case "$strategy" in
            project-readme)
                local proj_readme_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-readme.txt"
                if [ -f "$proj_readme_path" ]; then
                    template_content=$(cat "$proj_readme_path")
                    echo " 📋 Loaded Project README template: $(wc -l < "$proj_readme_path") lines"
                fi
                ;;
            project-architecture)
                local arch_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-architecture.txt"
                local examples_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/project-examples.txt"
                if [ -f "$arch_path" ]; then
                    template_content=$(cat "$arch_path")
                    echo " 📋 Loaded Architecture template: $(wc -l < "$arch_path") lines"
                fi
                if [ -f "$examples_path" ]; then
                    template_content="$template_content

EXAMPLES TEMPLATE:
$(cat "$examples_path")"
                    echo " 📋 Loaded Examples template: $(wc -l < "$examples_path") lines"
                fi
                ;;
            http-api)
                local api_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
                if [ -f "$api_path" ]; then
                    template_content=$(cat "$api_path")
                    echo " 📋 Loaded HTTP API template: $(wc -l < "$api_path") lines"
                fi
                ;;
        esac
    else
        # Module-level templates
        local api_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/api.txt"
        local readme_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/module-readme.txt"
        local nav_template_path="$HOME/.claude/workflows/cli-templates/prompts/documentation/folder-navigation.txt"

        if [ "$folder_type" = "code" ]; then
            if [ -f "$api_template_path" ]; then
                api_template=$(cat "$api_template_path")
                echo " 📋 Loaded API template: $(wc -l < "$api_template_path") lines"
            fi
            if [ -f "$readme_template_path" ]; then
                readme_template=$(cat "$readme_template_path")
                echo " 📋 Loaded README template: $(wc -l < "$readme_template_path") lines"
            fi
        else
            # Navigation folder uses navigation template
            if [ -f "$nav_template_path" ]; then
                readme_template=$(cat "$nav_template_path")
                echo " 📋 Loaded Navigation template: $(wc -l < "$nav_template_path") lines"
            fi
        fi
    fi

    # Scan directory structure (only for module-level strategies)
    local structure_info=""
    if [ "$is_project_level" = false ]; then
        echo " 🔍 Scanning directory structure..."
        structure_info=$(scan_directory_structure "$source_path" "$strategy")
    fi

    # Prepare logging info
    local module_name=$(basename "$source_path")

    echo "⚡ Generating docs: $source_path → $output_path"
    echo " Strategy: $strategy | Tool: $tool | Model: $model | Type: $folder_type"
    echo " Output: $output_path"

    # Build strategy-specific prompt
    local final_prompt=""

    # Project-level strategies
    if [ "$strategy" = "project-readme" ]; then
        final_prompt="PURPOSE: Generate comprehensive project overview documentation

PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)

Read: @.workflow/docs/$project_name/**/*.md

Context: All module documentation files from the project

Generate ONE documentation file in current directory:
- README.md - Project root documentation

Template:
$template_content

Instructions:
- Create README.md in CURRENT DIRECTORY
- Synthesize information from all module docs
- Include project overview, getting started, and navigation
- Create clear module navigation with links
- Follow template structure exactly"

    elif [ "$strategy" = "project-architecture" ]; then
        final_prompt="PURPOSE: Generate system design and usage examples documentation

PROJECT: $project_name
OUTPUT: Current directory (files will be moved to final location)

Read: @.workflow/docs/$project_name/**/*.md

Context: All project documentation including module docs and project README

Generate TWO documentation files in current directory:
1. ARCHITECTURE.md - System architecture and design patterns
2. EXAMPLES.md - End-to-end usage examples

Template:
$template_content

Instructions:
- Create both ARCHITECTURE.md and EXAMPLES.md in CURRENT DIRECTORY
- Synthesize architectural patterns from module documentation
- Document system structure, module relationships, and design decisions
- Provide practical code examples and usage scenarios
- Follow template structure for both files"

    elif [ "$strategy" = "http-api" ]; then
        final_prompt="PURPOSE: Generate HTTP API reference documentation

PROJECT: $project_name
OUTPUT: Current directory (file will be moved to final location)

Read: @**/*.{ts,js,py,go,rs} @.workflow/docs/$project_name/**/*.md

Context: API route files and existing documentation

Generate ONE documentation file in current directory:
- README.md - HTTP API documentation (in api/ subdirectory)

Template:
$template_content

Instructions:
- Create README.md in CURRENT DIRECTORY
- Document all HTTP endpoints (routes, methods, parameters, responses)
- Include authentication requirements and error codes
- Provide request/response examples
- Follow template structure (Part B: HTTP API documentation)"

    # Module-level strategies
    elif [ "$strategy" = "full" ]; then
        # Full strategy: read all files, generate for each directory
        if [ "$folder_type" = "code" ]; then
            final_prompt="PURPOSE: Generate comprehensive API and module documentation

Directory Structure Analysis:
$structure_info

SOURCE: $source_path
OUTPUT: Current directory (files will be moved to final location)

Read: @**/*

Generate TWO documentation files in current directory:
1. API.md - Code API documentation (functions, classes, interfaces)
Template:
$api_template

2. README.md - Module overview documentation
Template:
$readme_template

Instructions:
- Generate both API.md and README.md in CURRENT DIRECTORY
- If subdirectories contain code files, generate their docs too (recursive)
||||||
- Work bottom-up: deepest directories first
|
|
||||||
- Follow template structure exactly
|
|
||||||
- Use structure analysis for context"
|
|
||||||
else
|
|
||||||
# Navigation folder - README only
|
|
||||||
final_prompt="PURPOSE: Generate navigation documentation for folder structure
|
|
||||||
|
|
||||||
Directory Structure Analysis:
|
|
||||||
$structure_info
|
|
||||||
|
|
||||||
SOURCE: $source_path
|
|
||||||
OUTPUT: Current directory (file will be moved to final location)
|
|
||||||
|
|
||||||
Read: @**/*
|
|
||||||
|
|
||||||
Generate ONE documentation file in current directory:
|
|
||||||
- README.md - Navigation and folder overview
|
|
||||||
|
|
||||||
Template:
|
|
||||||
$readme_template
|
|
||||||
|
|
||||||
Instructions:
|
|
||||||
- Create README.md in CURRENT DIRECTORY
|
|
||||||
- Focus on folder structure and navigation
|
|
||||||
- Link to subdirectory documentation
|
|
||||||
- Use structure analysis for context"
|
|
||||||
fi
|
|
||||||
else
|
|
||||||
# Single strategy: read current + child docs only
|
|
||||||
if [ "$folder_type" = "code" ]; then
|
|
||||||
final_prompt="PURPOSE: Generate API and module documentation for current directory
|
|
||||||
|
|
||||||
Directory Structure Analysis:
|
|
||||||
$structure_info
|
|
||||||
|
|
||||||
SOURCE: $source_path
|
|
||||||
OUTPUT: Current directory (files will be moved to final location)
|
|
||||||
|
|
||||||
Read: @*/API.md @*/README.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.go @*.rs @*.md @*.json @*.yaml @*.yml
|
|
||||||
|
|
||||||
Generate TWO documentation files in current directory:
|
|
||||||
1. API.md - Code API documentation
|
|
||||||
Template:
|
|
||||||
$api_template
|
|
||||||
|
|
||||||
2. README.md - Module overview
|
|
||||||
Template:
|
|
||||||
$readme_template
|
|
||||||
|
|
||||||
Instructions:
|
|
||||||
- Generate both API.md and README.md in CURRENT DIRECTORY
|
|
||||||
- Reference child documentation, do not duplicate
|
|
||||||
- Follow template structure
|
|
||||||
- Use structure analysis for current directory context"
|
|
||||||
else
|
|
||||||
# Navigation folder - README only
|
|
||||||
final_prompt="PURPOSE: Generate navigation documentation
|
|
||||||
|
|
||||||
Directory Structure Analysis:
|
|
||||||
$structure_info
|
|
||||||
|
|
||||||
SOURCE: $source_path
|
|
||||||
OUTPUT: Current directory (file will be moved to final location)
|
|
||||||
|
|
||||||
Read: @*/API.md @*/README.md @*.md
|
|
||||||
|
|
||||||
Generate ONE documentation file in current directory:
|
|
||||||
- README.md - Navigation and overview
|
|
||||||
|
|
||||||
Template:
|
|
||||||
$readme_template
|
|
||||||
|
|
||||||
Instructions:
|
|
||||||
- Create README.md in CURRENT DIRECTORY
|
|
||||||
- Link to child documentation
|
|
||||||
- Use structure analysis for navigation context"
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Execute documentation generation
|
|
||||||
local start_time=$(date +%s)
|
|
||||||
echo " 🔄 Starting documentation generation..."
|
|
||||||
|
|
||||||
if cd "$source_path" 2>/dev/null; then
|
|
||||||
local tool_result=0
|
|
||||||
|
|
||||||
# Store current output path for CLI context
|
|
||||||
export DOC_OUTPUT_PATH="$output_path"
|
|
||||||
|
|
||||||
# Record git HEAD before CLI execution (to detect unwanted auto-commits)
|
|
||||||
local git_head_before=""
|
|
||||||
if git rev-parse --git-dir >/dev/null 2>&1; then
|
|
||||||
git_head_before=$(git rev-parse HEAD 2>/dev/null)
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Execute with selected tool
|
|
||||||
case "$tool" in
|
|
||||||
qwen)
|
|
||||||
if [ "$model" = "coder-model" ]; then
|
|
||||||
qwen -p "$final_prompt" --yolo 2>&1
|
|
||||||
else
|
|
||||||
qwen -p "$final_prompt" -m "$model" --yolo 2>&1
|
|
||||||
fi
|
|
||||||
tool_result=$?
|
|
||||||
;;
|
|
||||||
codex)
|
|
||||||
codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
|
|
||||||
tool_result=$?
|
|
||||||
;;
|
|
||||||
gemini)
|
|
||||||
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
|
|
||||||
tool_result=$?
|
|
||||||
;;
|
|
||||||
*)
|
|
||||||
echo " ⚠️ Unknown tool: $tool, defaulting to gemini"
|
|
||||||
gemini -p "$final_prompt" -m "$model" --yolo 2>&1
|
|
||||||
tool_result=$?
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
|
|
||||||
# Move generated files to output directory
|
|
||||||
local docs_created=0
|
|
||||||
local moved_files=""
|
|
||||||
|
|
||||||
if [ $tool_result -eq 0 ]; then
|
|
||||||
if [ "$is_project_level" = true ]; then
|
|
||||||
# Project-level documentation files
|
|
||||||
case "$strategy" in
|
|
||||||
project-readme)
|
|
||||||
if [ -f "README.md" ]; then
|
|
||||||
mv "README.md" "$output_path/README.md" 2>/dev/null && {
|
|
||||||
docs_created=$((docs_created + 1))
|
|
||||||
moved_files+="README.md "
|
|
||||||
}
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
project-architecture)
|
|
||||||
if [ -f "ARCHITECTURE.md" ]; then
|
|
||||||
mv "ARCHITECTURE.md" "$output_path/ARCHITECTURE.md" 2>/dev/null && {
|
|
||||||
docs_created=$((docs_created + 1))
|
|
||||||
moved_files+="ARCHITECTURE.md "
|
|
||||||
}
|
|
||||||
fi
|
|
||||||
if [ -f "EXAMPLES.md" ]; then
|
|
||||||
mv "EXAMPLES.md" "$output_path/EXAMPLES.md" 2>/dev/null && {
|
|
||||||
docs_created=$((docs_created + 1))
|
|
||||||
moved_files+="EXAMPLES.md "
|
|
||||||
}
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
http-api)
|
|
||||||
if [ -f "README.md" ]; then
|
|
||||||
mv "README.md" "$output_path/README.md" 2>/dev/null && {
|
|
||||||
docs_created=$((docs_created + 1))
|
|
||||||
moved_files+="api/README.md "
|
|
||||||
}
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
else
|
|
||||||
# Module-level documentation files
|
|
||||||
# Check and move API.md if it exists
|
|
||||||
if [ "$folder_type" = "code" ] && [ -f "API.md" ]; then
|
|
||||||
mv "API.md" "$output_path/API.md" 2>/dev/null && {
|
|
||||||
docs_created=$((docs_created + 1))
|
|
||||||
moved_files+="API.md "
|
|
||||||
}
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check and move README.md if it exists
|
|
||||||
if [ -f "README.md" ]; then
|
|
||||||
mv "README.md" "$output_path/README.md" 2>/dev/null && {
|
|
||||||
docs_created=$((docs_created + 1))
|
|
||||||
moved_files+="README.md "
|
|
||||||
}
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Check if CLI tool auto-committed (and revert if needed)
|
|
||||||
if [ -n "$git_head_before" ]; then
|
|
||||||
local git_head_after=$(git rev-parse HEAD 2>/dev/null)
|
|
||||||
if [ "$git_head_before" != "$git_head_after" ]; then
|
|
||||||
echo " ⚠️ Detected unwanted auto-commit by CLI tool, reverting..."
|
|
||||||
git reset --soft "$git_head_before" 2>/dev/null
|
|
||||||
echo " ✅ Auto-commit reverted (files remain staged)"
|
|
||||||
fi
|
|
||||||
fi
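    # Illustrative sequence (the hash values are hypothetical): if the CLI
    # tool auto-committed, then
    #   git_head_before=abc123   git_head_after=def456
    # and `git reset --soft abc123` moves HEAD back to abc123 while keeping
    # the generated files staged, so nothing the tool wrote is lost and the
    # caller decides what (if anything) to commit.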

    if [ $docs_created -gt 0 ]; then
      local end_time=$(date +%s)
      local duration=$((end_time - start_time))
      echo " ✅ Generated $docs_created doc(s) in ${duration}s: $moved_files"
      cd - > /dev/null
      return 0
    else
      echo " ❌ Documentation generation failed for $source_path"
      cd - > /dev/null
      return 1
    fi
  else
    echo " ❌ Cannot access directory: $source_path"
    return 1
  fi
}

# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
  # Show help if no arguments or help requested
  if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
    echo "Usage: generate_module_docs.sh <strategy> <source_path> <project_name> [tool] [model]"
    echo ""
    echo "Module-Level Strategies:"
    echo "  full   - Generate docs for all subdirectories with code"
    echo "  single - Generate docs only for current directory"
    echo ""
    echo "Project-Level Strategies:"
    echo "  project-readme       - Generate project root README.md"
    echo "  project-architecture - Generate ARCHITECTURE.md + EXAMPLES.md"
    echo "  http-api             - Generate HTTP API documentation (api/README.md)"
    echo ""
    echo "Tools: gemini (default), qwen, codex"
    echo "Models: Use tool defaults if not specified"
    echo ""
    echo "Module Examples:"
    echo "  ./generate_module_docs.sh full ./src/auth myproject"
    echo "  ./generate_module_docs.sh single ./components myproject gemini"
    echo ""
    echo "Project Examples:"
    echo "  ./generate_module_docs.sh project-readme . myproject"
    echo "  ./generate_module_docs.sh project-architecture . myproject qwen"
    echo "  ./generate_module_docs.sh http-api . myproject"
    exit 0
  fi

  generate_module_docs "$@"
fi

@@ -1,170 +0,0 @@
#!/bin/bash
# ⚠️ DEPRECATED: This script is deprecated.
# Please use: ccw tool exec get_modules_by_depth '{"format":"list","path":"."}' OR ccw tool exec get_modules_by_depth '{}'
# This file will be removed in a future version.

# Get modules organized by directory depth (deepest first)
# Usage: get_modules_by_depth.sh [format]
#   format: list|grouped|json (default: list)

# Parse .gitignore patterns and build exclusion filters
build_exclusion_filters() {
  local filters=""

  # Always exclude these system/cache directories and common web dev packages
  local system_excludes=(
    # Version control and IDE
    ".git" ".gitignore" ".gitmodules" ".gitattributes"
    ".svn" ".hg" ".bzr"
    ".history" ".vscode" ".idea" ".vs" ".vscode-test"
    ".sublime-text" ".atom"

    # Python
    "__pycache__" ".pytest_cache" ".mypy_cache" ".tox"
    ".coverage" "htmlcov" ".nox" ".venv" "venv" "env"
    ".egg-info" "*.egg-info" ".eggs" ".wheel"
    "site-packages" ".python-version" ".pyc"

    # Node.js/JavaScript
    "node_modules" ".npm" ".yarn" ".pnpm" "yarn-error.log"
    ".nyc_output" "coverage" ".next" ".nuxt"
    ".cache" ".parcel-cache" ".vite" "dist" "build"
    ".turbo" ".vercel" ".netlify"

    # Package managers
    ".pnpm-store" "pnpm-lock.yaml" "yarn.lock" "package-lock.json"
    ".bundle" "vendor/bundle" "Gemfile.lock"
    ".gradle" "gradle" "gradlew" "gradlew.bat"
    ".mvn" "target" ".m2"

    # Build/compile outputs
    "dist" "build" "out" "output" "_site" "public"
    ".output" ".generated" "generated" "gen"
    "bin" "obj" "Debug" "Release"

    # Testing
    ".pytest_cache" ".coverage" "htmlcov" "test-results"
    ".nyc_output" "junit.xml" "test_results"
    "cypress/screenshots" "cypress/videos"
    "playwright-report" ".playwright"

    # Logs and temp files
    "logs" "*.log" "log" "tmp" "temp" ".tmp" ".temp"
    ".env" ".env.local" ".env.*.local"
    ".DS_Store" "Thumbs.db" "*.tmp" "*.swp" "*.swo"

    # Documentation build outputs
    "_book" "_site" "docs/_build" "site" "gh-pages"
    ".docusaurus" ".vuepress" ".gitbook"

    # Database files
    "*.sqlite" "*.sqlite3" "*.db" "data.db"

    # OS and editor files
    ".DS_Store" "Thumbs.db" "desktop.ini"
    "*.stackdump" "*.core"

    # Cloud and deployment
    ".serverless" ".terraform" "terraform.tfstate"
    ".aws" ".azure" ".gcp"

    # Mobile development
    ".gradle" "build" ".expo" ".metro"
    "android/app/build" "ios/build" "DerivedData"

    # Game development
    "Library" "Temp" "ProjectSettings"
    "Logs" "MemoryCaptures" "UserSettings"
  )

  for exclude in "${system_excludes[@]}"; do
    filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
  done

  # Parse .gitignore if it exists
  if [ -f ".gitignore" ]; then
    while IFS= read -r line; do
      # Skip empty lines and comments
      [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue

      # Remove trailing slash and whitespace
      line=$(echo "$line" | sed 's|/$||' | xargs)

      # Add to filters
      filters+=" -not -path '*/$line' -not -path '*/$line/*'"
    done < .gitignore
  fi

  echo "$filters"
}
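# A minimal sketch of how the emitted string is consumed (the .gitignore
# entries here are hypothetical): for a .gitignore containing "dist/" and
# "secrets.txt", the function appends
#   -not -path '*/dist' -not -path '*/dist/*' -not -path '*/secrets.txt' -not -path '*/secrets.txt/*'
# to the system excludes. Callers re-parse the string through eval, e.g.:
#   filters=$(build_exclusion_filters)
#   eval "find . -type d $filters"
# The eval is needed because the single quotes inside $filters are literal
# characters until the string is parsed again as shell code.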

get_modules_by_depth() {
  local format="${1:-list}"
  local exclusion_filters=$(build_exclusion_filters)
  local max_depth=$(eval "find . -type d $exclusion_filters 2>/dev/null" | awk -F/ '{print NF-1}' | sort -n | tail -1)
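  # Example: for "./src/auth", awk -F/ sees 3 fields (".", "src", "auth"),
  # so NF-1 = 2; the largest such value over all surviving directories
  # becomes max_depth.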

  case "$format" in
    "grouped")
      echo "📊 Modules by depth (deepest first):"
      for depth in $(seq $max_depth -1 0); do
        local dirs=$(eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
          while read dir; do
            if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
              local claude_indicator=""
              [ -f "$dir/CLAUDE.md" ] && claude_indicator=" [✓]"
              echo "$dir$claude_indicator"
            fi
          done)
        if [ -n "$dirs" ]; then
          echo " 📁 Depth $depth:"
          echo "$dirs" | sed 's/^/ - /'
        fi
      done
      ;;

    "json")
      echo "{"
      echo " \"max_depth\": $max_depth,"
      echo " \"modules\": {"
      for depth in $(seq $max_depth -1 0); do
        local dirs=$(eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
          while read dir; do
            if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
              local has_claude="false"
              [ -f "$dir/CLAUDE.md" ] && has_claude="true"
              echo "{\"path\":\"$dir\",\"has_claude\":$has_claude}"
            fi
          done | tr '\n' ',')
        if [ -n "$dirs" ]; then
          dirs=${dirs%,} # Remove trailing comma
          echo " \"$depth\": [$dirs]"
          [ $depth -gt 0 ] && echo ","
        fi
      done
      echo " }"
      echo "}"
      ;;

    "list"|*)
      # Simple list format (deepest first)
      for depth in $(seq $max_depth -1 0); do
        eval "find . -mindepth $depth -maxdepth $depth -type d $exclusion_filters 2>/dev/null" | \
          while read dir; do
            if [ $(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l) -gt 0 ]; then
              local file_count=$(find "$dir" -maxdepth 1 -type f 2>/dev/null | wc -l)
              local types=$(find "$dir" -maxdepth 1 -type f -name "*.*" 2>/dev/null | \
                grep -E '\.[^/]*$' | sed 's/.*\.//' | sort -u | tr '\n' ',' | sed 's/,$//')
              local has_claude="no"
              [ -f "$dir/CLAUDE.md" ] && has_claude="yes"
              echo "depth:$depth|path:$dir|files:$file_count|types:[$types]|has_claude:$has_claude"
            fi
          done
      done
      ;;
  esac
}
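# Sample output for the three formats (paths are hypothetical):
#   list:    depth:2|path:./src/auth|files:4|types:[ts,md]|has_claude:yes
#   grouped: 📁 Depth 2:
#             - ./src/auth [✓]
#   json:    {"max_depth": 2, "modules": {"2": [{"path":"./src/auth","has_claude":true}]}} (roughly)
# Caveat visible in the code above: the json branch prints "," after every
# non-zero depth that has entries, so the output ends with a trailing comma
# (invalid JSON) whenever no shallower depth contributes modules.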

# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
  get_modules_by_depth "$@"
fi

@@ -1,395 +0,0 @@
#!/bin/bash
# ⚠️ DEPRECATED: This script is deprecated.
# Please use: ccw tool exec ui_generate_preview '{"designPath":"design-run-1","outputDir":"preview"}'
# This file will be removed in a future version.

#
# UI Generate Preview v2.0 - Template-Based Preview Generation
# Purpose: Generate compare.html and index.html using template substitution
# Template: ~/.claude/workflows/_template-compare-matrix.html
#
# Usage: ui-generate-preview.sh <prototypes_dir> [--template <path>]
#

set -e

# Color output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Default template path
TEMPLATE_PATH="$HOME/.claude/workflows/_template-compare-matrix.html"

# Parse arguments
prototypes_dir="${1:-.}"
shift || true

while [[ $# -gt 0 ]]; do
  case $1 in
    --template)
      TEMPLATE_PATH="$2"
      shift 2
      ;;
    *)
      echo -e "${RED}Unknown option: $1${NC}"
      exit 1
      ;;
  esac
done

if [[ ! -d "$prototypes_dir" ]]; then
  echo -e "${RED}Error: Directory not found: $prototypes_dir${NC}"
  exit 1
fi

cd "$prototypes_dir" || exit 1

echo -e "${GREEN}📊 Auto-detecting matrix dimensions...${NC}"

# Auto-detect styles, layouts, targets from file patterns
# Pattern: {target}-style-{s}-layout-{l}.html
styles=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
  sed 's/.*-style-\([0-9]\+\)-.*/\1/' | sort -un)
layouts=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
  sed 's/.*-layout-\([0-9]\+\)\.html/\1/' | sort -un)
targets=$(find . -maxdepth 1 -name "*-style-*-layout-*.html" | \
  sed 's/\.\///; s/-style-.*//' | sort -u)

S=$(echo "$styles" | wc -l)
L=$(echo "$layouts" | wc -l)
T=$(echo "$targets" | wc -l)

echo -e " Detected: ${GREEN}${S}${NC} styles × ${GREEN}${L}${NC} layouts × ${GREEN}${T}${NC} targets"

if [[ $S -eq 0 ]] || [[ $L -eq 0 ]] || [[ $T -eq 0 ]]; then
  echo -e "${RED}Error: No prototype files found matching pattern {target}-style-{s}-layout-{l}.html${NC}"
  exit 1
fi

# ============================================================================
# Generate compare.html from template
# ============================================================================

echo -e "${YELLOW}🎨 Generating compare.html from template...${NC}"

if [[ ! -f "$TEMPLATE_PATH" ]]; then
  echo -e "${RED}Error: Template not found: $TEMPLATE_PATH${NC}"
  exit 1
fi

# Build pages/targets JSON array
PAGES_JSON="["
first=true
for target in $targets; do
  if [[ "$first" == true ]]; then
    first=false
  else
    PAGES_JSON+=", "
  fi
  PAGES_JSON+="\"$target\""
done
PAGES_JSON+="]"

# Generate metadata
RUN_ID="run-$(date +%Y%m%d-%H%M%S)"
SESSION_ID="standalone"
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +"%Y-%m-%d")

# Replace placeholders in template
cat "$TEMPLATE_PATH" | \
  sed "s|{{run_id}}|${RUN_ID}|g" | \
  sed "s|{{session_id}}|${SESSION_ID}|g" | \
  sed "s|{{timestamp}}|${TIMESTAMP}|g" | \
  sed "s|{{style_variants}}|${S}|g" | \
  sed "s|{{layout_variants}}|${L}|g" | \
  sed "s|{{pages_json}}|${PAGES_JSON}|g" \
  > compare.html

echo -e "${GREEN} ✓ Generated compare.html from template${NC}"
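# Illustrative substitution (the template markup shown is an assumption;
# only the placeholder names come from this script): a template line such as
#   <span id="run">{{run_id}}</span> ({{style_variants}}x{{layout_variants}})
# becomes, for RUN_ID=run-20250101-120000, S=3, L=3:
#   <span id="run">run-20250101-120000</span> (3x3)
# Using '|' as the sed delimiter keeps the substitutions safe when the
# replacement values contain '/' (paths, ISO timestamps).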

# ============================================================================
# Generate index.html
# ============================================================================

echo -e "${YELLOW}📋 Generating index.html...${NC}"

cat > index.html << 'EOF'
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>UI Prototypes Index</title>
  <style>
    * { margin: 0; padding: 0; box-sizing: border-box; }
    body {
      font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
      max-width: 1200px;
      margin: 0 auto;
      padding: 40px 20px;
      background: #f5f5f5;
    }
    h1 { margin-bottom: 10px; color: #333; }
    .subtitle { color: #666; margin-bottom: 30px; }
    .cta {
      background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
      color: white;
      padding: 20px;
      border-radius: 8px;
      margin-bottom: 30px;
      box-shadow: 0 4px 6px rgba(0,0,0,0.1);
    }
    .cta h2 { margin-bottom: 10px; }
    .cta a {
      display: inline-block;
      background: white;
      color: #667eea;
      padding: 10px 20px;
      border-radius: 6px;
      text-decoration: none;
      font-weight: 600;
      margin-top: 10px;
    }
    .cta a:hover { background: #f8f9fa; }
    .style-section {
      background: white;
      padding: 20px;
      border-radius: 8px;
      margin-bottom: 20px;
      box-shadow: 0 2px 4px rgba(0,0,0,0.1);
    }
    .style-section h2 {
      color: #495057;
      margin-bottom: 15px;
      padding-bottom: 10px;
      border-bottom: 2px solid #e9ecef;
    }
    .target-group {
      margin-bottom: 20px;
    }
    .target-group h3 {
      color: #6c757d;
      font-size: 16px;
      margin-bottom: 10px;
    }
    .link-grid {
      display: grid;
      grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
      gap: 10px;
    }
    .prototype-link {
      padding: 12px 16px;
      background: #f8f9fa;
      border: 1px solid #dee2e6;
      border-radius: 6px;
      text-decoration: none;
      color: #495057;
      display: flex;
      justify-content: space-between;
      align-items: center;
      transition: all 0.2s;
    }
    .prototype-link:hover {
      background: #e9ecef;
      border-color: #667eea;
      transform: translateX(2px);
    }
    .prototype-link .label { font-weight: 500; }
    .prototype-link .icon { color: #667eea; }
  </style>
</head>
<body>
  <h1>🎨 UI Prototypes Index</h1>
  <p class="subtitle">Generated __S__×__L__×__T__ = __TOTAL__ prototypes</p>

  <div class="cta">
    <h2>📊 Interactive Comparison</h2>
    <p>View all styles and layouts side-by-side in an interactive matrix</p>
    <a href="compare.html">Open Matrix View →</a>
  </div>

  <h2>📂 All Prototypes</h2>
__CONTENT__
</body>
</html>
EOF

# Build content HTML
CONTENT=""
for style in $styles; do
  CONTENT+="<div class='style-section'>"$'\n'
  CONTENT+="<h2>Style ${style}</h2>"$'\n'

  for target in $targets; do
    target_capitalized="$(echo ${target:0:1} | tr '[:lower:]' '[:upper:]')${target:1}"
    CONTENT+="<div class='target-group'>"$'\n'
    CONTENT+="<h3>${target_capitalized}</h3>"$'\n'
    CONTENT+="<div class='link-grid'>"$'\n'

    for layout in $layouts; do
      html_file="${target}-style-${style}-layout-${layout}.html"
      if [[ -f "$html_file" ]]; then
        CONTENT+="<a href='${html_file}' class='prototype-link' target='_blank'>"$'\n'
        CONTENT+="<span class='label'>Layout ${layout}</span>"$'\n'
        CONTENT+="<span class='icon'>↗</span>"$'\n'
        CONTENT+="</a>"$'\n'
      fi
    done

    CONTENT+="</div></div>"$'\n'
  done

  CONTENT+="</div>"$'\n'
done

# Calculate total
TOTAL_PROTOTYPES=$((S * L * T))

# Replace placeholders (using a temp file for complex replacement)
{
  echo "$CONTENT" > /tmp/content_tmp.txt
  sed "s|__S__|${S}|g" index.html | \
    sed "s|__L__|${L}|g" | \
    sed "s|__T__|${T}|g" | \
    sed "s|__TOTAL__|${TOTAL_PROTOTYPES}|g" | \
    sed -e "/__CONTENT__/r /tmp/content_tmp.txt" -e "/__CONTENT__/d" > /tmp/index_tmp.html
  mv /tmp/index_tmp.html index.html
  rm -f /tmp/content_tmp.txt
}
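# The -e "/__CONTENT__/r /tmp/content_tmp.txt" -e "/__CONTENT__/d" pair is a
# sed idiom for multi-line injection: the first expression appends the file's
# contents after the line matching __CONTENT__, the second deletes the marker
# line itself. This sidesteps the escaping problems a s|__CONTENT__|...|g
# substitution would have with multi-line HTML.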

echo -e "${GREEN} ✓ Generated index.html${NC}"

# ============================================================================
# Generate PREVIEW.md
# ============================================================================

echo -e "${YELLOW}📝 Generating PREVIEW.md...${NC}"

cat > PREVIEW.md << EOF
# UI Prototypes Preview Guide

Generated: $(date +"%Y-%m-%d %H:%M:%S")

## 📊 Matrix Dimensions

- **Styles**: ${S}
- **Layouts**: ${L}
- **Targets**: ${T}
- **Total Prototypes**: $((S*L*T))

## 🌐 How to View

### Option 1: Interactive Matrix (Recommended)

Open \`compare.html\` in your browser to see all prototypes in an interactive matrix view.

**Features**:
- Side-by-side comparison of all styles and layouts
- Switch between targets using the dropdown
- Adjust grid columns for better viewing
- Direct links to full-page views
- Selection system with export to JSON
- Fullscreen mode for detailed inspection

### Option 2: Simple Index

Open \`index.html\` for a simple list of all prototypes with direct links.

### Option 3: Direct File Access

Each prototype can be opened directly:
- Pattern: \`{target}-style-{s}-layout-{l}.html\`
- Example: \`dashboard-style-1-layout-1.html\`

## 📁 File Structure

\`\`\`
prototypes/
├── compare.html   # Interactive matrix view
├── index.html     # Simple navigation index
├── PREVIEW.md     # This file
EOF

for style in $styles; do
  for target in $targets; do
    for layout in $layouts; do
      echo "├── ${target}-style-${style}-layout-${layout}.html" >> PREVIEW.md
      echo "├── ${target}-style-${style}-layout-${layout}.css" >> PREVIEW.md
    done
  done
done

cat >> PREVIEW.md << 'EOF2'
```

## 🎨 Style Variants

EOF2

for style in $styles; do
  cat >> PREVIEW.md << EOF3
### Style ${style}

EOF3
  style_guide="../style-extraction/style-${style}/style-guide.md"
  if [[ -f "$style_guide" ]]; then
    head -n 10 "$style_guide" | tail -n +2 >> PREVIEW.md 2>/dev/null || echo "Design philosophy and tokens" >> PREVIEW.md
  else
    echo "Design system ${style}" >> PREVIEW.md
  fi
  echo "" >> PREVIEW.md
done

cat >> PREVIEW.md << 'EOF4'

## 🎯 Targets

EOF4

for target in $targets; do
  target_capitalized="$(echo ${target:0:1} | tr '[:lower:]' '[:upper:]')${target:1}"
  echo "- **${target_capitalized}**: ${L} layouts × ${S} styles = $((L*S)) variations" >> PREVIEW.md
done

cat >> PREVIEW.md << 'EOF5'

## 💡 Tips

1. **Comparison**: Use compare.html to see how different styles affect the same layout
2. **Navigation**: Use index.html for quick access to specific prototypes
3. **Selection**: Mark favorites in compare.html using star icons
4. **Export**: Download selection JSON for implementation planning
5. **Inspection**: Open browser DevTools to inspect HTML structure and CSS
6. **Sharing**: All files are standalone - can be shared or deployed directly

## 📝 Next Steps

1. Review prototypes in compare.html
2. Select preferred style × layout combinations
3. Export selections as JSON
4. Provide feedback for refinement
5. Use selected designs for implementation

---

Generated by /workflow:ui-design:generate-v2 (Style-Centric Architecture)
EOF5

echo -e "${GREEN} ✓ Generated PREVIEW.md${NC}"

# ============================================================================
# Completion Summary
# ============================================================================

echo ""
echo -e "${GREEN}✅ Preview generation complete!${NC}"
echo -e " Files created: compare.html, index.html, PREVIEW.md"
echo -e " Matrix: ${S} styles × ${L} layouts × ${T} targets = $((S*L*T)) prototypes"
echo ""
echo -e "${YELLOW}🌐 Next Steps:${NC}"
echo -e " 1. Open compare.html for interactive matrix view"
echo -e " 2. Open index.html for simple navigation"
echo -e " 3. Read PREVIEW.md for detailed usage guide"
echo ""

@@ -1,815 +0,0 @@
#!/bin/bash
# ⚠️ DEPRECATED: This script is deprecated.
# Please use: ccw tool exec ui_instantiate_prototypes '{"designPath":"design-run-1","outputDir":"output"}'
# This file will be removed in a future version.


# UI Prototype Instantiation Script with Preview Generation (v3.0 - Auto-detect)
# Purpose: Generate S × L × P final prototypes from templates + interactive preview files
# Usage:
#   Simple: ui-instantiate-prototypes.sh <prototypes_dir>
#   Full:   ui-instantiate-prototypes.sh <base_path> <pages> <style_variants> <layout_variants> [options]

# Use safer error handling
set -o pipefail

# ============================================================================
# Helper Functions
# ============================================================================

log_info() {
  echo "$1"
}

log_success() {
  echo "✅ $1"
}

log_error() {
  echo "❌ $1"
}

log_warning() {
  echo "⚠️ $1"
}

# Auto-detect pages from templates directory
auto_detect_pages() {
  local templates_dir="$1/_templates"

  if [ ! -d "$templates_dir" ]; then
    log_error "Templates directory not found: $templates_dir"
    return 1
  fi

  # Find unique page names from template files (e.g., login-layout-1.html -> login)
  local pages=$(find "$templates_dir" -name "*-layout-*.html" -type f | \
    sed 's|.*/||' | \
    sed 's|-layout-[0-9]*\.html||' | \
    sort -u | \
    tr '\n' ',' | \
    sed 's/,$//')

  echo "$pages"
}
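# Illustrative example (file names are hypothetical): a _templates/ directory
# containing
#   dashboard-layout-1.html  dashboard-layout-2.html  login-layout-1.html
# yields "dashboard,login" — the pipeline strips the directory, drops the
# -layout-N.html suffix, dedupes with sort -u, and joins with commas.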

# Auto-detect style variants count
auto_detect_style_variants() {
  local base_path="$1"
  local style_dir="$base_path/../style-extraction"

  if [ ! -d "$style_dir" ]; then
    log_warning "Style consolidation directory not found: $style_dir"
    echo "3" # Default
    return
  fi

  # Count style-* directories
  local count=$(find "$style_dir" -maxdepth 1 -type d -name "style-*" | wc -l)

  if [ "$count" -eq 0 ]; then
    echo "3" # Default
  else
    echo "$count"
  fi
}

# Auto-detect layout variants count
auto_detect_layout_variants() {
  local templates_dir="$1/_templates"

  if [ ! -d "$templates_dir" ]; then
    echo "3" # Default
    return
  fi

  # Find the first page and count its layouts
  local first_page=$(find "$templates_dir" -name "*-layout-1.html" -type f | head -1 | sed 's|.*/||' | sed 's|-layout-1\.html||')

  if [ -z "$first_page" ]; then
    echo "3" # Default
    return
  fi

  # Count layout files for this page
  local count=$(find "$templates_dir" -name "${first_page}-layout-*.html" -type f | wc -l)

  if [ "$count" -eq 0 ]; then
    echo "3" # Default
  else
    echo "$count"
  fi
}

# ============================================================================
# Parse Arguments
# ============================================================================

show_usage() {
  cat <<'EOF'
Usage:
  Simple (auto-detect): ui-instantiate-prototypes.sh <prototypes_dir> [options]
  Full:                 ui-instantiate-prototypes.sh <base_path> <pages> <style_variants> <layout_variants> [options]

Simple Mode (Recommended):
  prototypes_dir           Path to prototypes directory (auto-detects everything)

Full Mode:
  base_path                Base path to prototypes directory
  pages                    Comma-separated list of pages/components
  style_variants           Number of style variants (1-5)
  layout_variants          Number of layout variants (1-5)

Options:
  --run-id <id>            Run ID (default: auto-generated)
  --session-id <id>        Session ID (default: standalone)
  --mode <page|component>  Exploration mode (default: page)
  --template <path>        Path to compare.html template (default: ~/.claude/workflows/_template-compare-matrix.html)
  --no-preview             Skip preview file generation
  --help                   Show this help message

Examples:
  # Simple usage (auto-detect everything)
  ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes

  # With options
  ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes --session-id WFS-auth

  # Full manual mode
  ui-instantiate-prototypes.sh .workflow/design-run-*/prototypes "login,dashboard" 3 3 --session-id WFS-auth
EOF
}

# Default values
BASE_PATH=""
PAGES=""
STYLE_VARIANTS=""
LAYOUT_VARIANTS=""
RUN_ID="run-$(date +%Y%m%d-%H%M%S)"
SESSION_ID="standalone"
MODE="page"
TEMPLATE_PATH="$HOME/.claude/workflows/_template-compare-matrix.html"
GENERATE_PREVIEW=true
AUTO_DETECT=false

# Parse arguments
if [ $# -lt 1 ]; then
  log_error "Missing required arguments"
  show_usage
  exit 1
fi

# Check if using simple mode (only 1 positional arg before options)
if [ $# -eq 1 ] || [[ "$2" == --* ]]; then
  # Simple mode - auto-detect
  AUTO_DETECT=true
  BASE_PATH="$1"
  shift 1
else
  # Full mode - manual parameters
  if [ $# -lt 4 ]; then
    log_error "Full mode requires 4 positional arguments"
    show_usage
    exit 1
  fi

  BASE_PATH="$1"
  PAGES="$2"
  STYLE_VARIANTS="$3"
  LAYOUT_VARIANTS="$4"
  shift 4
fi

# Parse optional arguments
while [[ $# -gt 0 ]]; do
  case $1 in
    --run-id)
      RUN_ID="$2"
      shift 2
      ;;
    --session-id)
      SESSION_ID="$2"
      shift 2
      ;;
    --mode)
      MODE="$2"
      shift 2
      ;;
    --template)
      TEMPLATE_PATH="$2"
      shift 2
      ;;
    --no-preview)
      GENERATE_PREVIEW=false
      shift
      ;;
    --help)
      show_usage
      exit 0
      ;;
    *)
      log_error "Unknown option: $1"
      show_usage
      exit 1
      ;;
  esac
done

# ============================================================================
# Auto-detection (if enabled)
# ============================================================================

if [ "$AUTO_DETECT" = true ]; then
  log_info "🔍 Auto-detecting configuration from directory..."

  # Detect pages
  PAGES=$(auto_detect_pages "$BASE_PATH")
  if [ -z "$PAGES" ]; then
    log_error "Could not auto-detect pages from templates"
    exit 1
  fi
  log_info " Pages: $PAGES"

  # Detect style variants
  STYLE_VARIANTS=$(auto_detect_style_variants "$BASE_PATH")
  log_info " Style variants: $STYLE_VARIANTS"

  # Detect layout variants
  LAYOUT_VARIANTS=$(auto_detect_layout_variants "$BASE_PATH")
  log_info " Layout variants: $LAYOUT_VARIANTS"

  echo ""
fi

# ============================================================================
# Validation
# ============================================================================

# Validate base path
if [ ! -d "$BASE_PATH" ]; then
  log_error "Base path not found: $BASE_PATH"
  exit 1
fi

# Validate style and layout variants
if [ "$STYLE_VARIANTS" -lt 1 ] || [ "$STYLE_VARIANTS" -gt 5 ]; then
  log_error "Style variants must be between 1 and 5 (got: $STYLE_VARIANTS)"
  exit 1
fi

if [ "$LAYOUT_VARIANTS" -lt 1 ] || [ "$LAYOUT_VARIANTS" -gt 5 ]; then
  log_error "Layout variants must be between 1 and 5 (got: $LAYOUT_VARIANTS)"
  exit 1
fi

# Validate STYLE_VARIANTS against actual style directories
if [ "$STYLE_VARIANTS" -gt 0 ]; then
  style_dir="$BASE_PATH/../style-extraction"

  if [ ! -d "$style_dir" ]; then
    log_error "Style consolidation directory not found: $style_dir"
    log_info "Run /workflow:ui-design:consolidate first"
    exit 1
  fi

  actual_styles=$(find "$style_dir" -maxdepth 1 -type d -name "style-*" 2>/dev/null | wc -l)

  if [ "$actual_styles" -eq 0 ]; then
    log_error "No style directories found in: $style_dir"
    log_info "Run /workflow:ui-design:consolidate first to generate style design systems"
    exit 1
  fi

  if [ "$STYLE_VARIANTS" -gt "$actual_styles" ]; then
    log_warning "Requested $STYLE_VARIANTS style variants, but only found $actual_styles directories"
    log_info "Available style directories:"
    find "$style_dir" -maxdepth 1 -type d -name "style-*" 2>/dev/null | sed 's|.*/||' | sort
    log_info "Auto-correcting to $actual_styles style variants"
    STYLE_VARIANTS=$actual_styles
  fi
fi

# Parse pages into array
IFS=',' read -ra PAGE_ARRAY <<< "$PAGES"

if [ ${#PAGE_ARRAY[@]} -eq 0 ]; then
  log_error "No pages found"
  exit 1
fi

# ============================================================================
# Header Output
# ============================================================================

echo "========================================="
echo "UI Prototype Instantiation & Preview"
if [ "$AUTO_DETECT" = true ]; then
  echo "(Auto-detected configuration)"
fi
echo "========================================="
echo "Base Path: $BASE_PATH"
echo "Mode: $MODE"
echo "Pages/Components: $PAGES"
echo "Style Variants: $STYLE_VARIANTS"
echo "Layout Variants: $LAYOUT_VARIANTS"
echo "Run ID: $RUN_ID"
echo "Session ID: $SESSION_ID"
echo "========================================="
echo ""

# Change to base path
cd "$BASE_PATH" || exit 1

# ============================================================================
# Phase 1: Instantiate Prototypes
# ============================================================================

log_info "🚀 Phase 1: Instantiating prototypes from templates..."
echo ""

total_generated=0
total_failed=0

for page in "${PAGE_ARRAY[@]}"; do
  # Trim whitespace
  page=$(echo "$page" | xargs)

  log_info "Processing page/component: $page"

  for s in $(seq 1 "$STYLE_VARIANTS"); do
    for l in $(seq 1 "$LAYOUT_VARIANTS"); do
      # Define file paths
      TEMPLATE_HTML="_templates/${page}-layout-${l}.html"
      STRUCTURAL_CSS="_templates/${page}-layout-${l}.css"
      TOKEN_CSS="../style-extraction/style-${s}/tokens.css"
      OUTPUT_HTML="${page}-style-${s}-layout-${l}.html"

      # Copy template and replace placeholders
      if [ -f "$TEMPLATE_HTML" ]; then
        cp "$TEMPLATE_HTML" "$OUTPUT_HTML" || {
          log_error "Failed to copy template: $TEMPLATE_HTML"
          ((total_failed++))
          continue
        }

        # Replace CSS placeholders (Windows-compatible sed syntax)
        sed -i "s|{{STRUCTURAL_CSS}}|${STRUCTURAL_CSS}|g" "$OUTPUT_HTML" || true
        sed -i "s|{{TOKEN_CSS}}|${TOKEN_CSS}|g" "$OUTPUT_HTML" || true
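        # Illustrative result (the surrounding markup is an assumption; only
        # the placeholder names come from this script): for page=login, s=2,
        # l=1, a template line
        #   <link rel="stylesheet" href="{{STRUCTURAL_CSS}}">
        # becomes
        #   <link rel="stylesheet" href="_templates/login-layout-1.css">
        # and {{TOKEN_CSS}} resolves to ../style-extraction/style-2/tokens.css.
        # Portability note: GNU sed accepts bare -i; BSD/macOS sed requires a
        # backup suffix (e.g. sed -i '').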

        log_success "Created: $OUTPUT_HTML"
        ((total_generated++))

        # Create implementation notes (simplified)
        NOTES_FILE="${page}-style-${s}-layout-${l}-notes.md"

        # Generate notes with simple heredoc
        cat > "$NOTES_FILE" <<NOTESEOF
# Implementation Notes: ${page}-style-${s}-layout-${l}

## Generation Details
- **Template**: ${TEMPLATE_HTML}
- **Structural CSS**: ${STRUCTURAL_CSS}
- **Style Tokens**: ${TOKEN_CSS}
- **Layout Strategy**: Layout ${l}
- **Style Variant**: Style ${s}
- **Mode**: ${MODE}

## Template Reuse
This prototype was generated from a shared layout template to ensure consistency
across all style variants. The HTML structure is identical for all ${page}-layout-${l}
prototypes, with only the design tokens (colors, fonts, spacing) varying.

## Design System Reference
Refer to \`../style-extraction/style-${s}/style-guide.md\` for:
- Design philosophy
- Token usage guidelines
- Component patterns
- Accessibility requirements

## Customization
To modify this prototype:
1. Edit the layout template: \`${TEMPLATE_HTML}\` (affects all styles)
2. Edit the structural CSS: \`${STRUCTURAL_CSS}\` (affects all styles)
3. Edit design tokens: \`${TOKEN_CSS}\` (affects only this style variant)

## Run Information
- **Run ID**: ${RUN_ID}
- **Session ID**: ${SESSION_ID}
- **Generated**: $(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +%Y-%m-%d)
NOTESEOF

      else
        log_error "Template not found: $TEMPLATE_HTML"
        ((total_failed++))
      fi
    done
  done
done

echo ""
log_success "Phase 1 complete: Generated ${total_generated} prototypes"
if [ $total_failed -gt 0 ]; then
  log_warning "Failed: ${total_failed} prototypes"
fi
echo ""

# ============================================================================
# Phase 2: Generate Preview Files (if enabled)
# ============================================================================

if [ "$GENERATE_PREVIEW" = false ]; then
  log_info "⏭️ Skipping preview generation (--no-preview flag)"
  exit 0
fi

log_info "🎨 Phase 2: Generating preview files..."
echo ""

# ============================================================================
# 2a. Generate compare.html from template
# ============================================================================

if [ ! -f "$TEMPLATE_PATH" ]; then
  log_warning "Template not found: $TEMPLATE_PATH"
  log_info " Skipping compare.html generation"
else
  log_info "📄 Generating compare.html from template..."

  # Convert page array to JSON format
  PAGES_JSON="["
  for i in "${!PAGE_ARRAY[@]}"; do
    page=$(echo "${PAGE_ARRAY[$i]}" | xargs)
    PAGES_JSON+="\"$page\""
    if [ $i -lt $((${#PAGE_ARRAY[@]} - 1)) ]; then
      PAGES_JSON+=", "
    fi
  done
  PAGES_JSON+="]"
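  # Illustrative example (page names hypothetical): PAGE_ARRAY=(login dashboard)
  # yields PAGES_JSON='["login", "dashboard"]' — the index comparison appends
  # ", " after every element except the last, so the string is valid JSON for
  # the {{pages_json}} placeholder below.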
|
|
||||||
|
|
||||||
TIMESTAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u +%Y-%m-%d)
|
|
||||||
|
|
||||||
# Read template and replace placeholders
|
|
||||||
cat "$TEMPLATE_PATH" | \
|
|
||||||
sed "s|{{run_id}}|${RUN_ID}|g" | \
|
|
||||||
sed "s|{{session_id}}|${SESSION_ID}|g" | \
|
|
||||||
sed "s|{{timestamp}}|${TIMESTAMP}|g" | \
|
|
||||||
sed "s|{{style_variants}}|${STYLE_VARIANTS}|g" | \
|
|
||||||
sed "s|{{layout_variants}}|${LAYOUT_VARIANTS}|g" | \
|
|
||||||
sed "s|{{pages_json}}|${PAGES_JSON}|g" \
|
|
||||||
> compare.html
|
|
||||||
|
|
||||||
log_success "Generated: compare.html"
|
|
||||||
fi
|
|
||||||
|
|
||||||
# ============================================================================
|
|
||||||
# 2b. Generate index.html
|
|
||||||
# ============================================================================
|
|
||||||
|
|
||||||
log_info "📄 Generating index.html..."
|
|
||||||
|
|
||||||
# Calculate total prototypes
|
|
||||||
TOTAL_PROTOTYPES=$((STYLE_VARIANTS * LAYOUT_VARIANTS * ${#PAGE_ARRAY[@]}))
|
|
||||||
|
|
||||||
# Generate index.html with simple heredoc
|
|
||||||
cat > index.html <<'INDEXEOF'
|
|
||||||
<!DOCTYPE html>
|
|
||||||
<html lang="en">
|
|
||||||
<head>
|
|
||||||
<meta charset="UTF-8">
|
|
||||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
|
||||||
<title>UI Prototypes - __MODE__ Mode - __RUN_ID__</title>
|
|
||||||
<style>
|
|
||||||
body {
|
|
||||||
font-family: system-ui, -apple-system, sans-serif;
|
|
||||||
max-width: 900px;
|
|
||||||
margin: 2rem auto;
|
|
||||||
padding: 0 2rem;
|
|
||||||
background: #f9fafb;
|
|
||||||
}
|
|
||||||
.header {
|
|
||||||
background: white;
|
|
||||||
padding: 2rem;
|
|
||||||
border-radius: 0.75rem;
|
|
||||||
box-shadow: 0 1px 3px rgba(0,0,0,0.1);
|
|
||||||
margin-bottom: 2rem;
|
|
||||||
}
|
|
||||||
h1 {
|
|
||||||
color: #2563eb;
|
|
||||||
margin-bottom: 0.5rem;
|
|
||||||
font-size: 2rem;
|
|
||||||
}
|
|
||||||
.meta {
|
|
||||||
color: #6b7280;
|
|
||||||
font-size: 0.875rem;
|
|
||||||
margin-top: 0.5rem;
|
|
||||||
}
|
|
||||||
.info {
|
|
||||||
background: #f3f4f6;
|
|
||||||
padding: 1.5rem;
|
|
||||||
border-radius: 0.5rem;
|
|
||||||
margin: 1.5rem 0;
|
|
||||||
border-left: 4px solid #2563eb;
|
|
||||||
}
|
|
||||||
.cta {
|
|
||||||
display: inline-block;
|
|
||||||
background: #2563eb;
|
|
||||||
color: white;
|
|
||||||
padding: 1rem 2rem;
|
|
||||||
border-radius: 0.5rem;
|
|
||||||
text-decoration: none;
|
|
||||||
font-weight: 600;
|
|
||||||
margin: 1rem 0;
|
|
||||||
transition: background 0.2s;
|
|
||||||
}
|
|
||||||
.cta:hover {
|
|
||||||
background: #1d4ed8;
|
|
||||||
}
|
|
.stats {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
  gap: 1rem;
  margin: 1.5rem 0;
}
.stat {
  background: white;
  border: 1px solid #e5e7eb;
  padding: 1.5rem;
  border-radius: 0.5rem;
  text-align: center;
  box-shadow: 0 1px 2px rgba(0,0,0,0.05);
}
.stat-value {
  font-size: 2.5rem;
  font-weight: bold;
  color: #2563eb;
  margin-bottom: 0.25rem;
}
.stat-label {
  color: #6b7280;
  font-size: 0.875rem;
}
.section {
  background: white;
  padding: 2rem;
  border-radius: 0.75rem;
  margin-bottom: 2rem;
  box-shadow: 0 1px 3px rgba(0,0,0,0.1);
}
h2 {
  color: #1f2937;
  margin-bottom: 1rem;
  font-size: 1.5rem;
}
ul {
  line-height: 1.8;
  color: #374151;
}
.pages-list {
  list-style: none;
  padding: 0;
}
.pages-list li {
  background: #f9fafb;
  padding: 0.75rem 1rem;
  margin: 0.5rem 0;
  border-radius: 0.375rem;
  border-left: 3px solid #2563eb;
}
.badge {
  display: inline-block;
  background: #dbeafe;
  color: #1e40af;
  padding: 0.25rem 0.75rem;
  border-radius: 0.25rem;
  font-size: 0.75rem;
  font-weight: 600;
  margin-left: 0.5rem;
}
</style>
</head>
<body>
<div class="header">
  <h1>🎨 UI Prototype __MODE__ Mode</h1>
  <div class="meta">
    <strong>Run ID:</strong> __RUN_ID__ |
    <strong>Session:</strong> __SESSION_ID__ |
    <strong>Generated:</strong> __TIMESTAMP__
  </div>
</div>

<div class="info">
  <p><strong>Matrix Configuration:</strong> __STYLE_VARIANTS__ styles × __LAYOUT_VARIANTS__ layouts × __PAGE_COUNT__ __MODE__s</p>
  <p><strong>Total Prototypes:</strong> __TOTAL_PROTOTYPES__ interactive HTML files</p>
</div>

<a href="compare.html" class="cta">🔍 Open Interactive Matrix Comparison →</a>

<div class="stats">
  <div class="stat">
    <div class="stat-value">__STYLE_VARIANTS__</div>
    <div class="stat-label">Style Variants</div>
  </div>
  <div class="stat">
    <div class="stat-value">__LAYOUT_VARIANTS__</div>
    <div class="stat-label">Layout Options</div>
  </div>
  <div class="stat">
    <div class="stat-value">__PAGE_COUNT__</div>
    <div class="stat-label">__MODE__s</div>
  </div>
  <div class="stat">
    <div class="stat-value">__TOTAL_PROTOTYPES__</div>
    <div class="stat-label">Total Prototypes</div>
  </div>
</div>

<div class="section">
  <h2>🌟 Features</h2>
  <ul>
    <li><strong>Interactive Matrix View:</strong> __STYLE_VARIANTS__×__LAYOUT_VARIANTS__ grid with synchronized scrolling</li>
    <li><strong>Flexible Zoom:</strong> 25%, 50%, 75%, 100% viewport scaling</li>
    <li><strong>Fullscreen Mode:</strong> Detailed view for individual prototypes</li>
    <li><strong>Selection System:</strong> Mark favorites with export to JSON</li>
    <li><strong>__MODE__ Switcher:</strong> Compare different __MODE__s side-by-side</li>
    <li><strong>Persistent State:</strong> Selections saved in localStorage</li>
  </ul>
</div>

<div class="section">
  <h2>📄 Generated __MODE__s</h2>
  <ul class="pages-list">
__PAGES_LIST__
  </ul>
</div>

<div class="section">
  <h2>📚 Next Steps</h2>
  <ol>
    <li>Open <code>compare.html</code> to explore all variants in matrix view</li>
    <li>Use zoom and sync scroll controls to compare details</li>
    <li>Select your preferred style×layout combinations</li>
    <li>Export selections as JSON for implementation planning</li>
    <li>Review implementation notes in <code>*-notes.md</code> files</li>
  </ol>
</div>
</body>
</html>
INDEXEOF

# Build pages list HTML
PAGES_LIST_HTML=""
for page in "${PAGE_ARRAY[@]}"; do
    page=$(echo "$page" | xargs)
    VARIANT_COUNT=$((STYLE_VARIANTS * LAYOUT_VARIANTS))
    PAGES_LIST_HTML+="        <li>\n"
    PAGES_LIST_HTML+="            <strong>${page}</strong>\n"
    PAGES_LIST_HTML+="            <span class=\"badge\">${STYLE_VARIANTS}×${LAYOUT_VARIANTS} = ${VARIANT_COUNT} variants</span>\n"
    PAGES_LIST_HTML+="        </li>\n"
done

# Replace all placeholders in index.html
MODE_UPPER=$(echo "$MODE" | awk '{print toupper(substr($0,1,1)) tolower(substr($0,2))}')
sed -i "s|__RUN_ID__|${RUN_ID}|g" index.html
sed -i "s|__SESSION_ID__|${SESSION_ID}|g" index.html
sed -i "s|__TIMESTAMP__|${TIMESTAMP}|g" index.html
sed -i "s|__MODE__|${MODE_UPPER}|g" index.html
sed -i "s|__STYLE_VARIANTS__|${STYLE_VARIANTS}|g" index.html
sed -i "s|__LAYOUT_VARIANTS__|${LAYOUT_VARIANTS}|g" index.html
sed -i "s|__PAGE_COUNT__|${#PAGE_ARRAY[@]}|g" index.html
sed -i "s|__TOTAL_PROTOTYPES__|${TOTAL_PROTOTYPES}|g" index.html
sed -i "s|__PAGES_LIST__|${PAGES_LIST_HTML}|g" index.html

log_success "Generated: index.html"

# ============================================================================
# 2c. Generate PREVIEW.md
# ============================================================================

log_info "📄 Generating PREVIEW.md..."

cat > PREVIEW.md <<PREVIEWEOF
# UI Prototype Preview Guide

## Quick Start
1. Open \`index.html\` for overview and navigation
2. Open \`compare.html\` for interactive matrix comparison
3. Use browser developer tools to inspect responsive behavior

## Configuration

- **Exploration Mode:** ${MODE_UPPER}
- **Run ID:** ${RUN_ID}
- **Session ID:** ${SESSION_ID}
- **Style Variants:** ${STYLE_VARIANTS}
- **Layout Options:** ${LAYOUT_VARIANTS}
- **${MODE_UPPER}s:** ${PAGES}
- **Total Prototypes:** ${TOTAL_PROTOTYPES}
- **Generated:** ${TIMESTAMP}

## File Naming Convention

\`\`\`
{${MODE}}-style-{s}-layout-{l}.html
\`\`\`

**Example:** \`dashboard-style-1-layout-2.html\`
- ${MODE_UPPER}: dashboard
- Style: Design system 1
- Layout: Layout variant 2

## Interactive Features (compare.html)

### Matrix View
- **Grid Layout:** ${STYLE_VARIANTS}×${LAYOUT_VARIANTS} table with all prototypes visible
- **Synchronized Scroll:** All iframes scroll together (toggle with button)
- **Zoom Controls:** Adjust viewport scale (25%, 50%, 75%, 100%)
- **${MODE_UPPER} Selector:** Switch between different ${MODE}s instantly

### Prototype Actions
- **⭐ Selection:** Click star icon to mark favorites
- **⛶ Fullscreen:** View prototype in fullscreen overlay
- **↗ New Tab:** Open prototype in dedicated browser tab

### Selection Export
1. Select preferred prototypes using star icons
2. Click "Export Selection" button
3. Downloads JSON file: \`selection-${RUN_ID}.json\`
4. Use exported file for implementation planning

## Design System References

Each prototype references a specific style design system:
PREVIEWEOF

# Add style references
for s in $(seq 1 "$STYLE_VARIANTS"); do
    cat >> PREVIEW.md <<STYLEEOF

### Style ${s}
- **Tokens:** \`../style-extraction/style-${s}/design-tokens.json\`
- **CSS Variables:** \`../style-extraction/style-${s}/tokens.css\`
- **Style Guide:** \`../style-extraction/style-${s}/style-guide.md\`
STYLEEOF
done

cat >> PREVIEW.md <<'FOOTEREOF'

## Responsive Testing

All prototypes are mobile-first responsive. Test at these breakpoints:

- **Mobile:** 375px - 767px
- **Tablet:** 768px - 1023px
- **Desktop:** 1024px+

Use browser DevTools responsive mode for testing.

## Accessibility Features

- Semantic HTML5 structure
- ARIA attributes for screen readers
- Keyboard navigation support
- Proper heading hierarchy
- Focus indicators

## Next Steps

1. **Review:** Open `compare.html` and explore all variants
2. **Select:** Mark preferred prototypes using star icons
3. **Export:** Download selection JSON for implementation
4. **Implement:** Use `/workflow:ui-design:update` to integrate selected designs
5. **Plan:** Run `/workflow:plan` to generate implementation tasks

---

**Generated by:** `ui-instantiate-prototypes.sh`
**Version:** 3.0 (auto-detect mode)
FOOTEREOF

log_success "Generated: PREVIEW.md"

# ============================================================================
# Completion Summary
# ============================================================================

echo ""
echo "========================================="
echo "✅ Generation Complete!"
echo "========================================="
echo ""
echo "📊 Summary:"
echo "   Prototypes: ${total_generated} generated"
if [ "$total_failed" -gt 0 ]; then
    echo "   Failed: ${total_failed}"
fi
echo "   Preview Files: compare.html, index.html, PREVIEW.md"
echo "   Matrix: ${STYLE_VARIANTS}×${LAYOUT_VARIANTS} (${#PAGE_ARRAY[@]} ${MODE}s)"
echo "   Total Files: ${TOTAL_PROTOTYPES} prototypes + preview files"
echo ""
echo "🌐 Next Steps:"
echo "   1. Open: ${BASE_PATH}/index.html"
echo "   2. Explore: ${BASE_PATH}/compare.html"
echo "   3. Review: ${BASE_PATH}/PREVIEW.md"
echo ""
echo "Performance: Template-based approach with ${STYLE_VARIANTS}× speedup"
echo "========================================="
@@ -1,337 +0,0 @@
#!/bin/bash
# ⚠️ DEPRECATED: This script is deprecated.
# Please use: ccw tool exec update_module_claude '{"strategy":"single-layer","path":".","tool":"gemini"}'
# This file will be removed in a future version.

# Update CLAUDE.md for modules with two strategies
# Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]
#   strategy:    single-layer|multi-layer
#   module_path: Path to the module directory
#   tool:        gemini|qwen|codex (default: gemini)
#   model:       Model name (optional, uses tool defaults)
#
# Default Models:
#   gemini: gemini-2.5-flash
#   qwen:   coder-model
#   codex:  gpt5-codex
#
# Strategies:
#   single-layer: Upward aggregation
#     - Read: Current directory code + child CLAUDE.md files
#     - Generate: Single ./CLAUDE.md in current directory
#     - Use: Large projects, incremental bottom-up updates
#
#   multi-layer: Downward distribution
#     - Read: All files in current and subdirectories
#     - Generate: CLAUDE.md for each directory containing files
#     - Use: Small projects, full documentation generation
#
# Features:
#   - Minimal prompts based on unified template
#   - Respects .gitignore patterns
#   - Path-focused processing (script only cares about paths)
#   - Template-driven generation

# Build exclusion filters from .gitignore
build_exclusion_filters() {
    local filters=""

    # Common system/cache directories to exclude
    local system_excludes=(
        ".git" "__pycache__" "node_modules" ".venv" "venv" "env"
        "dist" "build" ".cache" ".pytest_cache" ".mypy_cache"
        "coverage" ".nyc_output" "logs" "tmp" "temp"
    )

    for exclude in "${system_excludes[@]}"; do
        filters+=" -not -path '*/$exclude' -not -path '*/$exclude/*'"
    done

    # Find and parse .gitignore (current dir first, then git root)
    local gitignore_file=""

    # Check current directory first
    if [ -f ".gitignore" ]; then
        gitignore_file=".gitignore"
    else
        # Try to find git root and check for .gitignore there
        local git_root=$(git rev-parse --show-toplevel 2>/dev/null)
        if [ -n "$git_root" ] && [ -f "$git_root/.gitignore" ]; then
            gitignore_file="$git_root/.gitignore"
        fi
    fi

    # Parse .gitignore if found
    if [ -n "$gitignore_file" ]; then
        while IFS= read -r line; do
            # Skip empty lines and comments
            [[ -z "$line" || "$line" =~ ^[[:space:]]*# ]] && continue

            # Remove trailing slash and whitespace
            line=$(echo "$line" | sed 's|/$||' | xargs)

            # Skip wildcard patterns (too complex for simple find)
            [[ "$line" =~ \* ]] && continue

            # Add to filters
            filters+=" -not -path '*/$line' -not -path '*/$line/*'"
        done < "$gitignore_file"
    fi

    echo "$filters"
}
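
# Note: the returned filter string embeds single-quoted -not -path expressions,
# which is why callers below expand it with eval; plain word-splitting would
# leave the quote characters as literal text in the find arguments.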

# Scan directory structure and generate structured information
scan_directory_structure() {
    local target_path="$1"
    local strategy="$2"

    if [ ! -d "$target_path" ]; then
        echo "Directory not found: $target_path"
        return 1
    fi

    local exclusion_filters=$(build_exclusion_filters)
    local structure_info=""

    # Get basic directory info
    local dir_name=$(basename "$target_path")
    local total_files=$(eval "find \"$target_path\" -type f $exclusion_filters 2>/dev/null" | wc -l)
    local total_dirs=$(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null" | wc -l)

    structure_info+="Directory: $dir_name\n"
    structure_info+="Total files: $total_files\n"
    structure_info+="Total directories: $total_dirs\n\n"

    if [ "$strategy" = "multi-layer" ]; then
        # For multi-layer: show all subdirectories with file counts
        structure_info+="Subdirectories with files:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ] && [ "$dir" != "$target_path" ]; then
                local rel_path=${dir#$target_path/}
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                if [ $file_count -gt 0 ]; then
                    structure_info+="  - $rel_path/ ($file_count files)\n"
                fi
            fi
        done < <(eval "find \"$target_path\" -type d $exclusion_filters 2>/dev/null")
    else
        # For single-layer: show direct children only
        structure_info+="Direct subdirectories:\n"
        while IFS= read -r dir; do
            if [ -n "$dir" ]; then
                local dir_name=$(basename "$dir")
                local file_count=$(eval "find \"$dir\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
                local has_claude=$([ -f "$dir/CLAUDE.md" ] && echo " [has CLAUDE.md]" || echo "")
                structure_info+="  - $dir_name/ ($file_count files)$has_claude\n"
            fi
        done < <(eval "find \"$target_path\" -maxdepth 1 -type d $exclusion_filters 2>/dev/null" | grep -v "^$target_path$")
    fi

    # Show main file types in current directory
    structure_info+="\nCurrent directory files:\n"
    local code_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.ts' -o -name '*.tsx' -o -name '*.js' -o -name '*.jsx' -o -name '*.py' -o -name '*.sh' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local config_files=$(eval "find \"$target_path\" -maxdepth 1 -type f \\( -name '*.json' -o -name '*.yaml' -o -name '*.yml' -o -name '*.toml' \\) $exclusion_filters 2>/dev/null" | wc -l)
    local doc_files=$(eval "find \"$target_path\" -maxdepth 1 -type f -name '*.md' $exclusion_filters 2>/dev/null" | wc -l)

    structure_info+="  - Code files: $code_files\n"
    structure_info+="  - Config files: $config_files\n"
    structure_info+="  - Documentation: $doc_files\n"

    printf "%b" "$structure_info"
}

update_module_claude() {
    local strategy="$1"
    local module_path="$2"
    local tool="${3:-gemini}"
    local model="$4"

    # Validate parameters
    if [ -z "$strategy" ] || [ -z "$module_path" ]; then
        echo "❌ Error: Strategy and module path are required"
        echo "Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]"
        echo "Strategies: single-layer|multi-layer"
        return 1
    fi

    # Validate strategy
    if [ "$strategy" != "single-layer" ] && [ "$strategy" != "multi-layer" ]; then
        echo "❌ Error: Invalid strategy '$strategy'"
        echo "Valid strategies: single-layer, multi-layer"
        return 1
    fi

    if [ ! -d "$module_path" ]; then
        echo "❌ Error: Directory '$module_path' does not exist"
        return 1
    fi

    # Set default models if not specified
    if [ -z "$model" ]; then
        case "$tool" in
            gemini)
                model="gemini-2.5-flash"
                ;;
            qwen)
                model="coder-model"
                ;;
            codex)
                model="gpt5-codex"
                ;;
            *)
                model=""
                ;;
        esac
    fi

    # Build exclusion filters from .gitignore
    local exclusion_filters=$(build_exclusion_filters)

    # Check if directory has files (excluding gitignored paths)
    local file_count=$(eval "find \"$module_path\" -maxdepth 1 -type f $exclusion_filters 2>/dev/null" | wc -l)
    if [ $file_count -eq 0 ]; then
        echo "⚠️ Skipping '$module_path' - no files found (after .gitignore filtering)"
        return 0
    fi

    # Use unified template for all modules
    local template_path="$HOME/.claude/workflows/cli-templates/prompts/memory/02-document-module-structure.txt"

    # Read template content directly
    local template_content=""
    if [ -f "$template_path" ]; then
        template_content=$(cat "$template_path")
        echo "   📋 Loaded template: $(wc -l < "$template_path") lines"
    else
        echo "   ⚠️ Template not found: $template_path"
        echo "   Using fallback template..."
        template_content="Create comprehensive CLAUDE.md documentation following standard structure with Purpose, Structure, Components, Dependencies, Integration, and Implementation sections."
    fi

    # Scan directory structure first
    echo "   🔍 Scanning directory structure..."
    local structure_info=$(scan_directory_structure "$module_path" "$strategy")

    # Prepare logging info
    local module_name=$(basename "$module_path")

    echo "⚡ Updating: $module_path"
    echo "   Strategy: $strategy | Tool: $tool | Model: $model | Files: $file_count"
    echo "   Template: $(basename "$template_path") ($(echo "$template_content" | wc -l) lines)"
    echo "   Structure: Scanned $(echo "$structure_info" | wc -l) lines of structure info"

    # Build minimal strategy-specific prompt with explicit paths and structure info
    local final_prompt=""

    if [ "$strategy" = "multi-layer" ]; then
        # multi-layer strategy: read all, generate for each directory
        final_prompt="Directory Structure Analysis:
$structure_info

Read: @**/*

Generate CLAUDE.md files:
- Primary: ./CLAUDE.md (current directory)
- Additional: CLAUDE.md in each subdirectory containing files

Template Guidelines:
$template_content

Instructions:
- Work bottom-up: deepest directories first
- Parent directories reference children
- Each CLAUDE.md file must be in its respective directory
- Follow the template guidelines above for consistent structure
- Use the structure analysis to understand directory hierarchy"
    else
        # single-layer strategy: read current + child CLAUDE.md, generate current only
        final_prompt="Directory Structure Analysis:
$structure_info

Read: @*/CLAUDE.md @*.ts @*.tsx @*.js @*.jsx @*.py @*.sh @*.md @*.json @*.yaml @*.yml

Generate single file: ./CLAUDE.md

Template Guidelines:
$template_content

Instructions:
- Create exactly one CLAUDE.md file in the current directory
- Reference child CLAUDE.md files, do not duplicate their content
- Follow the template guidelines above for consistent structure
- Use the structure analysis to understand the current directory context"
    fi

    # Execute update
    local start_time=$(date +%s)
    echo "   🔄 Starting update..."

    if cd "$module_path" 2>/dev/null; then
        local tool_result=0

        # Execute with selected tool
        # NOTE: Model parameter (-m) is placed AFTER the prompt
        case "$tool" in
            qwen)
                if [ "$model" = "coder-model" ]; then
                    # coder-model is default, -m is optional
                    qwen -p "$final_prompt" --yolo 2>&1
                else
                    qwen -p "$final_prompt" -m "$model" --yolo 2>&1
                fi
                tool_result=$?
                ;;
            codex)
                codex --full-auto exec "$final_prompt" -m "$model" --skip-git-repo-check -s danger-full-access 2>&1
                tool_result=$?
                ;;
            gemini)
                gemini -p "$final_prompt" -m "$model" --yolo 2>&1
                tool_result=$?
                ;;
            *)
                echo "   ⚠️ Unknown tool: $tool, defaulting to gemini"
                gemini -p "$final_prompt" -m "$model" --yolo 2>&1
                tool_result=$?
                ;;
        esac

        if [ $tool_result -eq 0 ]; then
            local end_time=$(date +%s)
            local duration=$((end_time - start_time))
            echo "   ✅ Completed in ${duration}s"
            cd - > /dev/null
            return 0
        else
            echo "   ❌ Update failed for $module_path"
            cd - > /dev/null
            return 1
        fi
    else
        echo "   ❌ Cannot access directory: $module_path"
        return 1
    fi
}

# Execute function if script is run directly
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    # Show help if no arguments or help requested
    if [ $# -eq 0 ] || [ "$1" = "-h" ] || [ "$1" = "--help" ]; then
        echo "Usage: update_module_claude.sh <strategy> <module_path> [tool] [model]"
        echo ""
        echo "Strategies:"
        echo "  single-layer - Read current dir code + child CLAUDE.md, generate ./CLAUDE.md"
        echo "  multi-layer  - Read all files, generate CLAUDE.md for each directory"
        echo ""
        echo "Tools: gemini (default), qwen, codex"
        echo "Models: Use tool defaults if not specified"
        echo ""
        echo "Examples:"
        echo "  ./update_module_claude.sh single-layer ./src/auth"
        echo "  ./update_module_claude.sh multi-layer ./components gemini gemini-2.5-flash"
        exit 0
    fi

    update_module_claude "$@"
fi

.claude/skills/_shared/mermaid-utils.md (new file, 584 lines)
@@ -0,0 +1,584 @@
# Mermaid Utilities Library

Shared utilities for generating and validating Mermaid diagrams across all analysis skills.

## Sanitization Functions

### sanitizeId

Convert any text to a valid Mermaid node ID.

```javascript
/**
 * Sanitize text to a valid Mermaid node ID
 * - Only alphanumeric characters and underscores allowed
 * - Cannot start with a number
 * - Truncates to 50 chars max
 *
 * @param {string} text - Input text
 * @returns {string} - Valid Mermaid ID
 */
function sanitizeId(text) {
  if (!text) return '_empty';
  return text
    .replace(/[^a-zA-Z0-9_\u4e00-\u9fa5]/g, '_') // Allow Chinese chars
    .replace(/^[0-9]/, '_$&')                    // Prefix leading digit with _
    .replace(/_+/g, '_')                         // Collapse repeated _
    .substring(0, 50);                           // Limit length
}

// Examples:
// sanitizeId("User-Service") → "User_Service"
// sanitizeId("3rdParty")     → "_3rdParty"
// sanitizeId("用户服务")       → "用户服务"
```

### escapeLabel

Escape special characters for Mermaid labels.

```javascript
/**
 * Escape special characters in Mermaid labels
 * Uses HTML entity encoding for problematic chars
 *
 * @param {string} text - Label text
 * @returns {string} - Escaped label
 */
function escapeLabel(text) {
  if (!text) return '';
  return text
    .replace(/"/g, "'")      // Avoid quote issues
    .replace(/\(/g, '#40;')  // (
    .replace(/\)/g, '#41;')  // )
    .replace(/\{/g, '#123;') // {
    .replace(/\}/g, '#125;') // }
    .replace(/\[/g, '#91;')  // [
    .replace(/\]/g, '#93;')  // ]
    .replace(/</g, '#60;')   // <
    .replace(/>/g, '#62;')   // >
    .replace(/\|/g, '#124;') // |
    .substring(0, 80);       // Limit length
}

// Examples:
// escapeLabel("Process(data)")  → "Process#40;data#41;"
// escapeLabel("Check {valid?}") → "Check #123;valid?#125;"
```

### sanitizeType

Sanitize type names for class diagrams.

```javascript
/**
 * Sanitize type names for Mermaid classDiagram
 * Removes generics syntax that causes issues
 *
 * @param {string} type - Type name
 * @returns {string} - Sanitized type
 */
function sanitizeType(type) {
  if (!type) return 'any';
  return type
    .replace(/<[^>]*>/g, '')   // Remove generics <T>
    .replace(/\|/g, ' or ')    // Union types
    .replace(/&/g, ' and ')    // Intersection types
    .replace(/\[\]/g, 'Array') // Array notation
    .substring(0, 30);
}

// Examples:
// sanitizeType("Array<string>")   → "Array"
// sanitizeType("string | number") → "string or number"
```

## Diagram Generation Functions

### generateFlowchartNode

Generate a flowchart node with the proper shape.

```javascript
/**
 * Generate flowchart node with shape
 *
 * @param {string} id - Node ID
 * @param {string} label - Display label
 * @param {string} type - Node type: start|end|process|decision|io|subroutine|database|manual
 * @returns {string} - Mermaid node definition
 */
function generateFlowchartNode(id, label, type = 'process') {
  const safeId = sanitizeId(id);
  const safeLabel = escapeLabel(label);

  const shapes = {
    start: `${safeId}(["${safeLabel}"])`,      // Stadium shape
    end: `${safeId}(["${safeLabel}"])`,        // Stadium shape
    process: `${safeId}["${safeLabel}"]`,      // Rectangle
    decision: `${safeId}{"${safeLabel}"}`,     // Diamond
    io: `${safeId}[/"${safeLabel}"/]`,         // Parallelogram
    subroutine: `${safeId}[["${safeLabel}"]]`, // Subroutine
    database: `${safeId}[("${safeLabel}")]`,   // Cylinder
    manual: `${safeId}[/"${safeLabel}"\\]`     // Trapezoid
  };

  return shapes[type] || shapes.process;
}
```

### generateFlowchartEdge

Generate a flowchart edge with an optional label.

```javascript
/**
 * Generate flowchart edge
 *
 * @param {string} from - Source node ID
 * @param {string} to - Target node ID
 * @param {string} label - Edge label (optional)
 * @param {string} style - Edge style: solid|dashed|thick
 * @returns {string} - Mermaid edge definition
 */
function generateFlowchartEdge(from, to, label = '', style = 'solid') {
  const safeFrom = sanitizeId(from);
  const safeTo = sanitizeId(to);
  const safeLabel = label ? `|"${escapeLabel(label)}"|` : '';

  const arrows = {
    solid: '-->',
    dashed: '-.->',
    thick: '==>'
  };

  const arrow = arrows[style] || arrows.solid;
  return `    ${safeFrom} ${arrow}${safeLabel} ${safeTo}`;
}
```
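
As a quick sanity check, here is what these two helpers emit for a pair of hypothetical inputs (expected output shown in the trailing comments):

```javascript
// Hypothetical node and edge; IDs and labels are illustrative only
generateFlowchartNode('check_user', 'User exists?', 'decision');
// → 'check_user{"User exists?"}'
generateFlowchartEdge('check_user', 'verify_pwd', 'yes');
// → '    check_user -->|"yes"| verify_pwd'
```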

### generateAlgorithmFlowchart (Enhanced)

Generate an algorithm flowchart with branch/loop support.

```javascript
/**
 * Generate algorithm flowchart with decision support
 *
 * @param {Object} algorithm - Algorithm definition
 *   - name: Algorithm name
 *   - inputs: [{name, type}]
 *   - outputs: [{name, type}]
 *   - steps: [{id, description, type, next: [id], conditions: [text]}]
 * @returns {string} - Complete Mermaid flowchart
 */
function generateAlgorithmFlowchart(algorithm) {
  let mermaid = 'flowchart TD\n';

  // Start node
  mermaid += `    START(["开始: ${escapeLabel(algorithm.name)}"])\n`;

  // Input node (if the algorithm has inputs)
  if (algorithm.inputs?.length > 0) {
    const inputList = algorithm.inputs.map(i => `${i.name}: ${i.type}`).join(', ');
    mermaid += `    INPUT[/"输入: ${escapeLabel(inputList)}"/]\n`;
    mermaid += `    START --> INPUT\n`;
  }

  // Process nodes
  const steps = algorithm.steps || [];
  for (const step of steps) {
    const nodeId = sanitizeId(step.id || `STEP_${step.step_num}`);

    if (step.type === 'decision') {
      mermaid += `    ${nodeId}{"${escapeLabel(step.description)}"}\n`;
    } else if (step.type === 'io') {
      mermaid += `    ${nodeId}[/"${escapeLabel(step.description)}"/]\n`;
    } else if (step.type === 'loop_start') {
      mermaid += `    ${nodeId}[["循环: ${escapeLabel(step.description)}"]]\n`;
    } else {
      mermaid += `    ${nodeId}["${escapeLabel(step.description)}"]\n`;
    }
  }

  // Output node
  const outputDesc = algorithm.outputs?.map(o => o.name).join(', ') || '结果';
  mermaid += `    OUTPUT[/"输出: ${escapeLabel(outputDesc)}"/]\n`;
  mermaid += `    END_(["结束"])\n`;

  // Connect the first step to input/start
  if (steps.length > 0) {
    const firstStep = sanitizeId(steps[0].id || 'STEP_1');
    if (algorithm.inputs?.length > 0) {
      mermaid += `    INPUT --> ${firstStep}\n`;
    } else {
      mermaid += `    START --> ${firstStep}\n`;
    }
  }

  // Connect steps based on their next arrays
  for (const step of steps) {
    const nodeId = sanitizeId(step.id || `STEP_${step.step_num}`);

    if (step.next && step.next.length > 0) {
      step.next.forEach((nextId, index) => {
        const safeNextId = sanitizeId(nextId);
        const condition = step.conditions?.[index];

        if (condition) {
          mermaid += `    ${nodeId} -->|"${escapeLabel(condition)}"| ${safeNextId}\n`;
        } else {
          mermaid += `    ${nodeId} --> ${safeNextId}\n`;
        }
      });
    } else if (!step.type?.includes('end')) {
      // Default: connect to the next step, or to the output
      const stepIndex = steps.indexOf(step);
      if (stepIndex < steps.length - 1) {
        const nextStep = sanitizeId(steps[stepIndex + 1].id || `STEP_${stepIndex + 2}`);
        mermaid += `    ${nodeId} --> ${nextStep}\n`;
      } else {
        mermaid += `    ${nodeId} --> OUTPUT\n`;
      }
    }
  }

  // Connect output to end
  mermaid += `    OUTPUT --> END_\n`;

  return mermaid;
}
```

## Diagram Validation

### validateMermaidSyntax

Comprehensive Mermaid syntax validation.

```javascript
/**
 * Validate Mermaid diagram syntax
 *
 * @param {string} content - Mermaid diagram content
 * @returns {Object} - {valid: boolean, issues: string[]}
 */
function validateMermaidSyntax(content) {
  const issues = [];

  // Check 1: Diagram type declaration
  if (!content.match(/^(graph|flowchart|classDiagram|sequenceDiagram|stateDiagram|erDiagram|gantt|pie|mindmap)/m)) {
    issues.push('Missing diagram type declaration');
  }

  // Check 2: Undefined values
  if (content.includes('undefined') || content.includes('null')) {
    issues.push('Contains undefined/null values');
  }

  // Check 3: Invalid arrow syntax
  if (content.match(/-->\s*-->/)) {
    issues.push('Double arrow syntax error');
  }

  // Check 4: Unescaped special characters in labels
  const labelMatches = content.match(/\["[^"]*[(){}[\]<>][^"]*"\]/g);
  if (labelMatches?.some(m => !m.includes('#'))) {
    issues.push('Unescaped special characters in labels');
  }

  // Check 5: Node ID starts with a number
  if (content.match(/\n\s*[0-9][a-zA-Z0-9_]*[\[\({]/)) {
    issues.push('Node ID cannot start with number');
  }

  // Check 6: Unbalanced subgraph blocks (nesting itself is valid,
  // so only flag when subgraph/end counts do not match)
  if (content.match(/subgraph\s+\S+\s*\n[^e]*subgraph/)) {
    const subgraphCount = (content.match(/subgraph/g) || []).length;
    const endCount = (content.match(/\bend\b/g) || []).length;
    if (subgraphCount > endCount) {
      issues.push('Unbalanced subgraph/end blocks');
    }
  }

  // Check 7: Invalid arrow type for the diagram type
  const diagramType = content.match(/^(graph|flowchart|classDiagram|sequenceDiagram)/m)?.[1];
  if (diagramType === 'classDiagram' && content.includes('-->|')) {
    issues.push('Invalid edge label syntax for classDiagram');
  }

  // Check 8: Empty node labels
  if (content.match(/\[""\]|\{\}|\(\)/)) {
    issues.push('Empty node labels detected');
  }

  // Check 9: Reserved keywords as IDs
  const reserved = ['end', 'graph', 'subgraph', 'direction', 'class', 'click'];
  for (const keyword of reserved) {
    const pattern = new RegExp(`\\n\\s*${keyword}\\s*[\\[\\(\\{]`, 'i');
    if (content.match(pattern)) {
      issues.push(`Reserved keyword "${keyword}" used as node ID`);
    }
  }

  // Check 10: Line length (Mermaid has issues with very long lines)
  const lines = content.split('\n');
  for (let i = 0; i < lines.length; i++) {
    if (lines[i].length > 500) {
      issues.push(`Line ${i + 1} exceeds 500 characters`);
    }
  }

  return {
    valid: issues.length === 0,
    issues
  };
}
```

### validateDiagramDirectory

Validate all diagrams in a directory.

```javascript
/**
 * Validate all Mermaid diagrams in a directory
 *
 * @param {string} diagramDir - Path to diagrams directory
 * @returns {Object[]} - Array of {file, valid, issues}
 */
function validateDiagramDirectory(diagramDir) {
  const files = Glob(`${diagramDir}/*.mmd`);
  const results = [];

  for (const file of files) {
    const content = Read(file);
    const validation = validateMermaidSyntax(content);

    results.push({
      file: file.split('/').pop(),
      path: file,
      valid: validation.valid,
      issues: validation.issues,
      lines: content.split('\n').length
    });
  }

  return results;
}
```
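
A minimal reporting loop over the results; the `docs/diagrams` path is a placeholder, and `Glob`/`Read` are the tool calls assumed by the function above:

```javascript
// Hypothetical report: print only the diagrams that failed validation
const results = validateDiagramDirectory('docs/diagrams');
for (const r of results) {
  if (!r.valid) {
    console.log(`${r.file} (${r.lines} lines): ${r.issues.join('; ')}`);
  }
}
```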

## Class Diagram Utilities

### generateClassDiagram

Generate a class diagram with relationships.

```javascript
/**
 * Generate class diagram from analysis data
 *
 * @param {Object} analysis - Data structure analysis
 *   - entities: [{name, type, properties, methods}]
 *   - relationships: [{from, to, type, label}]
 * @param {Object} options - Generation options
 *   - maxClasses: Max classes to include (default: 15)
 *   - maxProperties: Max properties per class (default: 8)
 *   - maxMethods: Max methods per class (default: 6)
 * @returns {string} - Mermaid classDiagram
 */
function generateClassDiagram(analysis, options = {}) {
  const maxClasses = options.maxClasses || 15;
  const maxProperties = options.maxProperties || 8;
  const maxMethods = options.maxMethods || 6;

  let mermaid = 'classDiagram\n';

  const entities = (analysis.entities || []).slice(0, maxClasses);

  // Generate classes
  for (const entity of entities) {
    const className = sanitizeId(entity.name);
    mermaid += `  class ${className} {\n`;

    // Properties
    for (const prop of (entity.properties || []).slice(0, maxProperties)) {
      const vis = {public: '+', private: '-', protected: '#'}[prop.visibility] || '+';
      const type = sanitizeType(prop.type);
      mermaid += `    ${vis}${type} ${prop.name}\n`;
    }

    // Methods
    for (const method of (entity.methods || []).slice(0, maxMethods)) {
      const vis = {public: '+', private: '-', protected: '#'}[method.visibility] || '+';
      const params = (method.params || []).map(p => p.name).join(', ');
      const returnType = sanitizeType(method.returnType || 'void');
      mermaid += `    ${vis}${method.name}(${params}) ${returnType}\n`;
    }

    mermaid += '  }\n';

    // Add stereotype if applicable
    if (entity.type === 'interface') {
      mermaid += `  <<interface>> ${className}\n`;
    } else if (entity.type === 'abstract') {
      mermaid += `  <<abstract>> ${className}\n`;
    }
  }

  // Generate relationships
  const arrows = {
    inheritance: '--|>',
    implementation: '..|>',
    composition: '*--',
    aggregation: 'o--',
    association: '-->',
    dependency: '..>'
  };

  for (const rel of (analysis.relationships || [])) {
    const from = sanitizeId(rel.from);
    const to = sanitizeId(rel.to);
    const arrow = arrows[rel.type] || '-->';
    const label = rel.label ? ` : ${escapeLabel(rel.label)}` : '';

    // Only include if both entities exist
    if (entities.some(e => sanitizeId(e.name) === from) &&
        entities.some(e => sanitizeId(e.name) === to)) {
      mermaid += `  ${from} ${arrow} ${to}${label}\n`;
    }
  }

  return mermaid;
}
```
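
A minimal sketch of the generator on a hypothetical two-entity analysis (names and fields are illustrative only):

```javascript
// Hypothetical input: one class depending on one interface
const sampleAnalysis = {
  entities: [
    {
      name: 'UserService', type: 'class',
      properties: [{name: 'repo', type: 'UserRepo', visibility: 'private'}],
      methods: [{name: 'findUser', params: [{name: 'id'}], returnType: 'User', visibility: 'public'}]
    },
    {name: 'UserRepo', type: 'interface', properties: [], methods: []}
  ],
  relationships: [{from: 'UserService', to: 'UserRepo', type: 'dependency', label: 'uses'}]
};

const classDiagram = generateClassDiagram(sampleAnalysis);
// Emits class UserService { -UserRepo repo  +findUser(id) User },
// the <<interface>> UserRepo stereotype, and the edge:
//   UserService ..> UserRepo : uses
```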

## Sequence Diagram Utilities

### generateSequenceDiagram

Generate a sequence diagram from a scenario.

```javascript
/**
 * Generate sequence diagram from scenario
 *
 * @param {Object} scenario - Sequence scenario
 *   - name: Scenario name
 *   - actors: [{id, name, type}]
 *   - messages: [{from, to, description, type}]
 *   - blocks: [{type, condition, messages}]
 * @returns {string} - Mermaid sequenceDiagram
 */
function generateSequenceDiagram(scenario) {
  let mermaid = 'sequenceDiagram\n';

  // Title
  if (scenario.name) {
    mermaid += `  title ${escapeLabel(scenario.name)}\n`;
  }

  // Participants
  for (const actor of scenario.actors || []) {
    const actorType = actor.type === 'external' ? 'actor' : 'participant';
    mermaid += `  ${actorType} ${sanitizeId(actor.id)} as ${escapeLabel(actor.name)}\n`;
  }

  mermaid += '\n';

  // Messages
  for (const msg of scenario.messages || []) {
    const from = sanitizeId(msg.from);
    const to = sanitizeId(msg.to);
    const desc = escapeLabel(msg.description);

    let arrow;
    switch (msg.type) {
      case 'async': arrow = '->>'; break;
      case 'response': arrow = '-->>'; break;
      case 'create': arrow = '->>+'; break;
      case 'destroy': arrow = '->>-'; break;
      case 'self': arrow = '->>'; break;
      default: arrow = '->>';
    }

    mermaid += `  ${from}${arrow}${to}: ${desc}\n`;

    // Activation
    if (msg.activate) {
      mermaid += `  activate ${to}\n`;
    }
    if (msg.deactivate) {
      mermaid += `  deactivate ${from}\n`;
    }

    // Notes
    if (msg.note) {
      mermaid += `  Note over ${to}: ${escapeLabel(msg.note)}\n`;
    }
  }

  // Blocks (loops, alt, opt)
  for (const block of scenario.blocks || []) {
    switch (block.type) {
      case 'loop':
        mermaid += `  loop ${escapeLabel(block.condition)}\n`;
        break;
      case 'alt':
        mermaid += `  alt ${escapeLabel(block.condition)}\n`;
        break;
      case 'opt':
        mermaid += `  opt ${escapeLabel(block.condition)}\n`;
        break;
    }

    for (const m of block.messages || []) {
      mermaid += `    ${sanitizeId(m.from)}->>${sanitizeId(m.to)}: ${escapeLabel(m.description)}\n`;
    }

    mermaid += '  end\n';
  }

  return mermaid;
}
```

## Usage Examples

### Example 1: Algorithm with Branches

```javascript
const algorithm = {
  name: "用户认证流程",
  inputs: [{name: "credentials", type: "Object"}],
  outputs: [{name: "token", type: "JWT"}],
  steps: [
    {id: "validate", description: "验证输入格式", type: "process"},
    {id: "check_user", description: "用户是否存在?", type: "decision",
     next: ["verify_pwd", "error_user"], conditions: ["是", "否"]},
    {id: "verify_pwd", description: "验证密码", type: "process"},
    {id: "pwd_ok", description: "密码正确?", type: "decision",
     next: ["gen_token", "error_pwd"], conditions: ["是", "否"]},
    {id: "gen_token", description: "生成 JWT Token", type: "process"},
    {id: "error_user", description: "返回用户不存在", type: "io"},
    {id: "error_pwd", description: "返回密码错误", type: "io"}
  ]
};

const flowchart = generateAlgorithmFlowchart(algorithm);
```
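
For reference, the opening of the flowchart this input produces (excerpt shown as comments). Note one wiring caveat: steps without a `next` array fall through to the step that follows them in the array, so terminal steps such as `gen_token` should normally declare `next` explicitly or be placed last:

```javascript
// Excerpt of the generated flowchart text for the algorithm above
// flowchart TD
//     START(["开始: 用户认证流程"])
//     INPUT[/"输入: credentials: Object"/]
//     START --> INPUT
//     validate["验证输入格式"]
//     check_user{"用户是否存在?"}
//     ...
//     INPUT --> validate
//     validate --> check_user
//     check_user -->|"是"| verify_pwd
//     check_user -->|"否"| error_user
```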

### Example 2: Validate Before Output

```javascript
const diagram = generateClassDiagram(analysis);
const validation = validateMermaidSyntax(diagram);

if (!validation.valid) {
  console.log("Diagram has issues:", validation.issues);
  // Fix issues or regenerate
} else {
  Write(`${outputDir}/class-diagram.mmd`, diagram);
}
```
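
### Example 3: Sequence Diagram (sketch)

A minimal, hypothetical scenario for `generateSequenceDiagram`; actor and message values are illustrative only:

```javascript
const scenario = {
  name: "Login request",
  actors: [
    {id: "client", name: "Client", type: "external"},
    {id: "api", name: "Auth API", type: "service"}
  ],
  messages: [
    {from: "client", to: "api", description: "POST /login", type: "async", activate: true},
    {from: "api", to: "client", description: "200 + token", type: "response", deactivate: true}
  ]
};

const seq = generateSequenceDiagram(scenario);
// client renders as an actor, api as a participant;
// the first message activates api, the response deactivates api
```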
|
||||||
@@ -200,21 +200,21 @@ Comprehensive command guide for Claude Code Workflow (CCW) system covering 78 co
|
|||||||
|
|
||||||
**Complex Query** (CLI-assisted analysis):
|
**Complex Query** (CLI-assisted analysis):
|
||||||
1. **Detect complexity indicators** (多个命令对比、工作流程分析、最佳实践)
|
1. **Detect complexity indicators** (多个命令对比、工作流程分析、最佳实践)
|
||||||
2. **Design targeted analysis prompt** for gemini/qwen:
|
2. **Design targeted analysis prompt** for gemini/qwen via CCW:
|
||||||
- Frame user's question precisely
|
- Frame user's question precisely
|
||||||
- Specify required analysis depth
|
- Specify required analysis depth
|
||||||
- Request structured comparison/synthesis
|
- Request structured comparison/synthesis
|
||||||
```bash
|
```bash
|
||||||
gemini -p "
|
ccw cli -p "
|
||||||
PURPOSE: Analyze command documentation to answer user query
|
PURPOSE: Analyze command documentation to answer user query
|
||||||
TASK: [extracted user question with context]
|
TASK: • [extracted user question with context]
|
||||||
MODE: analysis
|
MODE: analysis
|
||||||
CONTEXT: @**/*
|
CONTEXT: @**/*
|
||||||
EXPECTED: Comprehensive answer with examples and recommendations
|
EXPECTED: Comprehensive answer with examples and recommendations
|
||||||
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on practical usage | analysis=READ-ONLY
|
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on practical usage | analysis=READ-ONLY
|
||||||
" -m gemini-3-pro-preview-11-2025 --include-directories ~/.claude/skills/command-guide/reference
|
" --tool gemini --cd ~/.claude/skills/command-guide/reference
|
||||||
```
|
```
|
||||||
Note: Use absolute path `~/.claude/skills/command-guide/reference` for reference documentation access
|
Note: Use `--cd` with absolute path `~/.claude/skills/command-guide/reference` for reference documentation access
|
||||||
3. **Process and integrate CLI analysis**:
|
3. **Process and integrate CLI analysis**:
|
||||||
- Extract key insights from CLI output
|
- Extract key insights from CLI output
|
||||||
- Add context-specific examples
|
- Add context-specific examples
|
||||||
|
|||||||
@@ -112,7 +112,7 @@
|
|||||||
{
|
{
|
||||||
"name": "tech-research",
|
"name": "tech-research",
|
||||||
"command": "/memory:tech-research",
|
"command": "/memory:tech-research",
|
||||||
"description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
|
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
|
||||||
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
|
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
|
||||||
"category": "memory",
|
"category": "memory",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
|
|||||||
@@ -133,7 +133,7 @@
|
|||||||
{
|
{
|
||||||
"name": "tech-research",
|
"name": "tech-research",
|
||||||
"command": "/memory:tech-research",
|
"command": "/memory:tech-research",
|
||||||
"description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
|
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
|
||||||
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
|
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
|
||||||
"category": "memory",
|
"category": "memory",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
|
|||||||
@@ -36,7 +36,7 @@
|
|||||||
{
|
{
|
||||||
"name": "tech-research",
|
"name": "tech-research",
|
||||||
"command": "/memory:tech-research",
|
"command": "/memory:tech-research",
|
||||||
"description": "3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)",
|
"description": "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)",
|
||||||
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
|
"arguments": "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]",
|
||||||
"category": "memory",
|
"category": "memory",
|
||||||
"subcategory": null,
|
"subcategory": null,
|
||||||
|
|||||||
@@ -203,7 +203,13 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
|
|||||||
"id": "IMPL-N",
|
"id": "IMPL-N",
|
||||||
"title": "Descriptive task name",
|
"title": "Descriptive task name",
|
||||||
"status": "pending|active|completed|blocked",
|
"status": "pending|active|completed|blocked",
|
||||||
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json"
|
"context_package_path": ".workflow/active/WFS-{session}/.process/context-package.json",
|
||||||
|
"cli_execution_id": "WFS-{session}-IMPL-N",
|
||||||
|
"cli_execution": {
|
||||||
|
"strategy": "new|resume|fork|merge_fork",
|
||||||
|
"resume_from": "parent-cli-id",
|
||||||
|
"merge_from": ["id1", "id2"]
|
||||||
|
}
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -216,6 +222,50 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
|
|||||||
- `title`: Descriptive task name summarizing the work
|
- `title`: Descriptive task name summarizing the work
|
||||||
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
|
- `status`: Task state - `pending` (not started), `active` (in progress), `completed` (done), `blocked` (waiting on dependencies)
|
||||||
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
|
- `context_package_path`: Path to smart context package containing project structure, dependencies, and brainstorming artifacts catalog
|
||||||
|
- `cli_execution_id`: Unique CLI conversation ID (format: `{session_id}-{task_id}`)
|
||||||
|
- `cli_execution`: CLI execution strategy based on task dependencies
|
||||||
|
- `strategy`: Execution pattern (`new`, `resume`, `fork`, `merge_fork`)
|
||||||
|
- `resume_from`: Parent task's cli_execution_id (for resume/fork)
|
||||||
|
- `merge_from`: Array of parent cli_execution_ids (for merge_fork)
|
||||||
|
|
||||||
|
**CLI Execution Strategy Rules** (MANDATORY - apply to all tasks):
|
||||||
|
|
||||||
|
| Dependency Pattern | Strategy | CLI Command Pattern |
|
||||||
|
|--------------------|----------|---------------------|
|
||||||
|
| No `depends_on` | `new` | `--id {cli_execution_id}` |
|
||||||
|
| 1 parent, parent has 1 child | `resume` | `--resume {resume_from}` |
|
||||||
|
| 1 parent, parent has N children | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
|
||||||
|
| N parents | `merge_fork` | `--resume {merge_from.join(',')} --id {cli_execution_id}` |
|
||||||
|
|
||||||
|
**Strategy Selection Algorithm**:
|
||||||
|
```javascript
|
||||||
|
function computeCliStrategy(task, allTasks) {
|
||||||
|
const deps = task.context?.depends_on || []
|
||||||
|
const childCount = allTasks.filter(t =>
|
||||||
|
t.context?.depends_on?.includes(task.id)
|
||||||
|
).length
|
||||||
|
|
||||||
|
if (deps.length === 0) {
|
||||||
|
return { strategy: "new" }
|
||||||
|
} else if (deps.length === 1) {
|
||||||
|
const parentTask = allTasks.find(t => t.id === deps[0])
|
||||||
|
const parentChildCount = allTasks.filter(t =>
|
||||||
|
t.context?.depends_on?.includes(deps[0])
|
||||||
|
).length
|
||||||
|
|
||||||
|
if (parentChildCount === 1) {
|
||||||
|
return { strategy: "resume", resume_from: parentTask.cli_execution_id }
|
||||||
|
} else {
|
||||||
|
return { strategy: "fork", resume_from: parentTask.cli_execution_id }
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
const mergeFrom = deps.map(depId =>
|
||||||
|
allTasks.find(t => t.id === depId).cli_execution_id
|
||||||
|
)
|
||||||
|
return { strategy: "merge_fork", merge_from: mergeFrom }
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
#### Meta Object
|
#### Meta Object
|
||||||
|
|
||||||
@@ -225,7 +275,13 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
|
|||||||
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
|
"type": "feature|bugfix|refactor|test-gen|test-fix|docs",
|
||||||
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
|
"agent": "@code-developer|@action-planning-agent|@test-fix-agent|@universal-executor",
|
||||||
"execution_group": "parallel-abc123|null",
|
"execution_group": "parallel-abc123|null",
|
||||||
"module": "frontend|backend|shared|null"
|
"module": "frontend|backend|shared|null",
|
||||||
|
"execution_config": {
|
||||||
|
"method": "agent|hybrid|cli",
|
||||||
|
"cli_tool": "codex|gemini|qwen|auto",
|
||||||
|
"enable_resume": true,
|
||||||
|
"previous_cli_id": "string|null"
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
@@ -235,6 +291,11 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
|
|||||||
- `agent`: Assigned agent for execution
|
- `agent`: Assigned agent for execution
|
||||||
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
|
- `execution_group`: Parallelization group ID (tasks with same ID can run concurrently) or `null` for sequential tasks
|
||||||
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
|
- `module`: Module identifier for multi-module projects (e.g., `frontend`, `backend`, `shared`) or `null` for single-module
|
||||||
|
- `execution_config`: CLI execution settings (from userConfig in task-generate-agent)
|
||||||
|
- `method`: Execution method - `agent` (direct), `hybrid` (agent + CLI), `cli` (CLI only)
|
||||||
|
- `cli_tool`: Preferred CLI tool - `codex`, `gemini`, `qwen`, or `auto`
|
||||||
|
- `enable_resume`: Whether to use `--resume` for CLI continuity (default: true)
|
||||||
|
- `previous_cli_id`: Previous task's CLI execution ID for resume (populated at runtime)
|
||||||
|
|
||||||
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
|
**Test Task Extensions** (for type="test-gen" or type="test-fix"):
|
||||||
|
|
||||||
@@ -409,14 +470,14 @@ Generate individual `.task/IMPL-*.json` files with the following structure:
|
|||||||
// Pattern: Gemini CLI deep analysis
|
// Pattern: Gemini CLI deep analysis
|
||||||
{
|
{
|
||||||
"step": "gemini_analyze_[aspect]",
|
"step": "gemini_analyze_[aspect]",
|
||||||
"command": "bash(cd [path] && gemini -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY')",
|
"command": "ccw cli -p 'PURPOSE: [goal]\\nTASK: [tasks]\\nMODE: analysis\\nCONTEXT: @[paths]\\nEXPECTED: [output]\\nRULES: $(cat [template]) | [constraints] | analysis=READ-ONLY' --tool gemini --mode analysis --cd [path]",
|
||||||
"output_to": "analysis_result"
|
"output_to": "analysis_result"
|
||||||
},
|
},
|
||||||
|
|
||||||
// Pattern: Qwen CLI analysis (fallback/alternative)
|
// Pattern: Qwen CLI analysis (fallback/alternative)
|
||||||
{
|
{
|
||||||
"step": "qwen_analyze_[aspect]",
|
"step": "qwen_analyze_[aspect]",
|
||||||
"command": "bash(cd [path] && qwen -p '[similar to gemini pattern]')",
|
"command": "ccw cli -p '[similar to gemini pattern]' --tool qwen --mode analysis --cd [path]",
|
||||||
"output_to": "analysis_result"
|
"output_to": "analysis_result"
|
||||||
},
|
},
|
||||||
|
|
||||||
@@ -457,7 +518,7 @@ The examples above demonstrate **patterns**, not fixed requirements. Agent MUST:
 4. **Command Composition Patterns**:
    - **Single command**: `bash([simple_search])`
    - **Multiple commands**: `["bash([cmd1])", "bash([cmd2])"]`
-   - **CLI analysis**: `bash(cd [path] && gemini -p '[prompt]')`
+   - **CLI analysis**: `ccw cli -p '[prompt]' --tool gemini --mode analysis --cd [path]`
    - **MCP integration**: `mcp__[tool]__[function]([params])`

 **Key Principle**: Examples show **structure patterns**, not specific implementations. Agent must create task-appropriate steps dynamically.
@@ -479,11 +540,12 @@ The `implementation_approach` supports **two execution modes** based on the pres
 - Specified command executes the step directly
 - Leverages specialized CLI tools (codex/gemini/qwen) for complex reasoning
 - **Use for**: Large-scale features, complex refactoring, or when user explicitly requests CLI tool usage
-- **Required fields**: Same as default mode **PLUS** `command`
-- **Command patterns**:
-  - `bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)`
-  - `bash(codex --full-auto exec '[task]' resume --last --skip-git-repo-check -s danger-full-access)` (multi-step)
-  - `bash(cd [path] && gemini -p '[prompt]' --approval-mode yolo)` (write mode)
+- **Required fields**: Same as default mode **PLUS** `command`, `resume_from` (optional)
+- **Command patterns** (with resume support):
+  - `ccw cli -p '[prompt]' --tool codex --mode write --cd [path]`
+  - `ccw cli -p '[prompt]' --resume ${previousCliId} --tool codex --mode write` (resume from previous)
+  - `ccw cli -p '[prompt]' --tool gemini --mode write --cd [path]` (write mode)
+- **Resume mechanism**: When step depends on previous CLI execution, include `--resume` with previous execution ID

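For orientation, a minimal sketch of the two invocation shapes, assuming a task rooted at `src/auth` and a previous execution whose ID was captured as `step1_cli_id` (both placeholders):

```bash
# Fresh execution: no prior CLI context
ccw cli -p '[initial prompt]' --tool codex --mode write --cd src/auth

# Dependent step: pass the previous execution ID so context carries over
ccw cli -p '[continuation prompt]' --resume "$step1_cli_id" --tool codex --mode write --cd src/auth
```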
 **Semantic CLI Tool Selection**:

@@ -500,12 +562,12 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
 **Task-Based Selection** (when no explicit user preference):
 - **Implementation/coding**: Codex preferred for autonomous development
 - **Analysis/exploration**: Gemini preferred for large context analysis
-- **Documentation**: Gemini/Qwen with write mode (`--approval-mode yolo`)
+- **Documentation**: Gemini/Qwen with write mode (`--mode write`)
 - **Testing**: Depends on complexity - simple=agent, complex=Codex

 **Default Behavior**: Agent always executes the workflow. CLI commands are embedded in `implementation_approach` steps:
 - Agent orchestrates task execution
-- When step has `command` field, agent executes it via Bash
+- When step has `command` field, agent executes it via CCW CLI
 - When step has no `command` field, agent implements directly
 - This maintains agent control while leveraging CLI tool power

@@ -559,11 +621,26 @@ Agent determines CLI tool usage per-step based on user semantics and task nature
   "step": 3,
   "title": "Execute implementation using CLI tool",
   "description": "Use Codex/Gemini for complex autonomous execution",
-  "command": "bash(codex -C [path] --full-auto exec '[prompt]' --skip-git-repo-check -s danger-full-access)",
+  "command": "ccw cli -p '[prompt]' --tool codex --mode write --cd [path]",
   "modification_points": ["[Same as default mode]"],
   "logic_flow": ["[Same as default mode]"],
   "depends_on": [1, 2],
-  "output": "cli_implementation"
+  "output": "cli_implementation",
+  "cli_output_id": "step3_cli_id"  // Store execution ID for resume
+},
+
+// === CLI MODE with Resume: Continue from previous CLI execution ===
+{
+  "step": 4,
+  "title": "Continue implementation with context",
+  "description": "Resume from previous step with accumulated context",
+  "command": "ccw cli -p '[continuation prompt]' --resume ${step3_cli_id} --tool codex --mode write",
+  "resume_from": "step3_cli_id",  // Reference previous step's CLI ID
+  "modification_points": ["[Continue from step 3]"],
+  "logic_flow": ["[Build on previous output]"],
+  "depends_on": [3],
+  "output": "continued_implementation",
+  "cli_output_id": "step4_cli_id"
 }
 ]
 ```
@@ -729,8 +806,6 @@ Use `analysis_results.complexity` or task count to determine structure:
 **Examples**:
 - GOOD: `"Implement 5 commands: [cmd1, cmd2, cmd3, cmd4, cmd5]"`
 - BAD: `"Implement new commands"`
-- GOOD: `"5 files created: verify by ls .claude/commands/*.md | wc -l = 5"`
-- BAD: `"All commands implemented successfully"`

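A quantified criterion like the GOOD examples above can be checked mechanically; a minimal sketch (the path and count are illustrative):

```bash
# Acceptance check: exactly 5 command files must exist
test "$(ls .claude/commands/*.md | wc -l)" -eq 5 && echo PASS || echo FAIL
```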
 ### 3.2 Planning & Organization Standards

@@ -759,6 +834,8 @@ Use `analysis_results.complexity` or task count to determine structure:
 - Use provided context package: Extract all information from structured context
 - Respect memory-first rule: Use provided content (already loaded from memory/file)
 - Follow 6-field schema: All task JSONs must have id, title, status, context_package_path, meta, context, flow_control
+- **Assign CLI execution IDs**: Every task MUST have `cli_execution_id` (format: `{session_id}-{task_id}`)
+- **Compute CLI execution strategy**: Based on `depends_on`, set `cli_execution.strategy` (new/resume/fork/merge_fork)
 - Map artifacts: Use artifacts_inventory to populate task.context.artifacts array
 - Add MCP integration: Include MCP tool steps in flow_control.pre_analysis when capabilities available
 - Validate task count: Maximum 12 tasks hard limit, request re-scope if exceeded
@@ -100,7 +100,7 @@ CONTEXT: @**/*
 # Specific patterns
 CONTEXT: @CLAUDE.md @src/**/* @*.ts

-# Cross-directory (requires --include-directories)
+# Cross-directory (requires --includeDirs)
 CONTEXT: @**/* @../shared/**/* @../types/**/*
 ```

@@ -134,7 +134,7 @@ RULES: $(cat {selected_template}) | {constraints}
 ```
 analyze|plan → gemini (qwen fallback) + mode=analysis
 execute (simple|medium) → gemini (qwen fallback) + mode=write
-execute (complex) → codex + mode=auto
+execute (complex) → codex + mode=write
 discuss → multi (gemini + codex parallel)
 ```

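Read as invocations, the routing rules above map to calls of the following shape (a sketch; prompts are placeholders):

```bash
ccw cli -p '[analysis prompt]' --tool gemini --mode analysis   # analyze|plan
ccw cli -p '[write prompt]' --tool gemini --mode write         # execute (simple|medium)
ccw cli -p '[write prompt]' --tool codex --mode write          # execute (complex)
```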
@@ -144,43 +144,40 @@ discuss → multi (gemini + codex parallel)
 - Codex: `gpt-5` (default), `gpt5-codex` (large context)
 - **Position**: `-m` after prompt, before flags

-### Command Templates
+### Command Templates (CCW Unified CLI)

 **Gemini/Qwen (Analysis)**:
 ```bash
-cd {dir} && gemini -p "
+ccw cli -p "
 PURPOSE: {goal}
 TASK: {task}
 MODE: analysis
 CONTEXT: @**/*
 EXPECTED: {output}
 RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
-" -m gemini-2.5-pro
+" --tool gemini --mode analysis --cd {dir}

-# Qwen fallback: Replace 'gemini' with 'qwen'
+# Qwen fallback: Replace '--tool gemini' with '--tool qwen'
 ```

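A filled-in instance of the analysis template may help; the directory and goal below are illustrative only:

```bash
ccw cli -p "
PURPOSE: Map the auth module's public surface
TASK: List exported functions and where they are called
MODE: analysis
CONTEXT: @**/*
EXPECTED: Markdown summary of exports and call sites
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/pattern.txt)
" --tool gemini --mode analysis --cd src/auth
```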
 **Gemini/Qwen (Write)**:
 ```bash
-cd {dir} && gemini -p "..." --approval-mode yolo
+ccw cli -p "..." --tool gemini --mode write --cd {dir}
 ```

-**Codex (Auto)**:
+**Codex (Write)**:
 ```bash
-codex -C {dir} --full-auto exec "..." --skip-git-repo-check -s danger-full-access
+ccw cli -p "..." --tool codex --mode write --cd {dir}

-# Resume: Add 'resume --last' after prompt
-codex --full-auto exec "..." resume --last --skip-git-repo-check -s danger-full-access
 ```

 **Cross-Directory** (Gemini/Qwen):
 ```bash
-cd src/auth && gemini -p "CONTEXT: @**/* @../shared/**/*" --include-directories ../shared
+ccw cli -p "CONTEXT: @**/* @../shared/**/*" --tool gemini --mode analysis --cd src/auth --includeDirs ../shared
 ```

 **Directory Scope**:
 - `@` only references current directory + subdirectories
-- External dirs: MUST use `--include-directories` + explicit CONTEXT reference
+- External dirs: MUST use `--includeDirs` + explicit CONTEXT reference

 **Timeout**: Simple 20min | Medium 40min | Complex 60min (Codex ×1.5)

@@ -78,14 +78,14 @@ rg "^import .* from " -n | head -30
 ### Gemini Semantic Analysis (deep-scan, dependency-map)

 ```bash
-cd {dir} && gemini -p "
+ccw cli -p "
 PURPOSE: {from prompt}
 TASK: {from prompt}
 MODE: analysis
 CONTEXT: @**/*
 EXPECTED: {from prompt}
 RULES: {from prompt, if template specified} | analysis=READ-ONLY
-"
+" --tool gemini --mode analysis --cd {dir}
 ```

 **Fallback Chain**: Gemini → Qwen → Codex → Bash-only
@@ -1,140 +1,117 @@
 ---
 name: cli-lite-planning-agent
 description: |
-  Specialized agent for executing CLI planning tools (Gemini/Qwen) to generate detailed implementation plans. Used by lite-plan workflow for Medium/High complexity tasks.
+  Generic planning agent for lite-plan and lite-fix workflows. Generates structured plan JSON based on provided schema reference.

   Core capabilities:
-  - Task decomposition (1-10 tasks with IDs: T1, T2...)
-  - Dependency analysis (depends_on references)
-  - Flow control (parallel/sequential phases)
-  - Multi-angle exploration context integration
+  - Schema-driven output (plan-json-schema or fix-plan-json-schema)
+  - Task decomposition with dependency analysis
+  - CLI execution ID assignment for fork/merge strategies
+  - Multi-angle context integration (explorations or diagnoses)
 color: cyan
 ---

-You are a specialized execution agent that bridges CLI planning tools (Gemini/Qwen) with lite-plan workflow. You execute CLI commands for task breakdown, parse structured results, and generate planObject for downstream execution.
+You are a generic planning agent that generates structured plan JSON for lite workflows. Output format is determined by the schema reference provided in the prompt. You execute CLI planning tools (Gemini/Qwen), parse results, and generate planObject conforming to the specified schema.

-## Output Schema
-
-**Reference**: `~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`
-
-**planObject Structure**:
-```javascript
-{
-  summary: string,              // 2-3 sentence overview
-  approach: string,             // High-level strategy
-  tasks: [TaskObject],          // 1-10 structured tasks
-  flow_control: {               // Execution phases
-    execution_order: [{ phase, tasks, type }],
-    exit_conditions: { success, failure }
-  },
-  focus_paths: string[],        // Affected files (aggregated)
-  estimated_time: string,
-  recommended_execution: "Agent" | "Codex",
-  complexity: "Low" | "Medium" | "High",
-  _metadata: { timestamp, source, planning_mode, exploration_angles, duration_seconds }
-}
-```
-
-**TaskObject Structure**:
-```javascript
-{
-  id: string,                   // T1, T2, T3...
-  title: string,                // Action verb + target
-  file: string,                 // Target file path
-  action: string,               // Create|Update|Implement|Refactor|Add|Delete|Configure|Test|Fix
-  description: string,          // What to implement (1-2 sentences)
-  modification_points: [{       // Precise changes (optional)
-    file: string,
-    target: string,             // function:lineRange
-    change: string
-  }],
-  implementation: string[],     // 2-7 actionable steps
-  reference: {                  // Pattern guidance (optional)
-    pattern: string,
-    files: string[],
-    examples: string
-  },
-  acceptance: string[],         // 1-4 quantified criteria
-  depends_on: string[]          // Task IDs: ["T1", "T2"]
-}
-```
-
 ## Input Context

 ```javascript
 {
-  task_description: string,
-  explorationsContext: { [angle]: ExplorationResult } | null,
-  explorationAngles: string[],
+  // Required
+  task_description: string,     // Task or bug description
+  schema_path: string,          // Schema reference path (plan-json-schema or fix-plan-json-schema)
+  session: { id, folder, artifacts },
+
+  // Context (one of these based on workflow)
+  explorationsContext: { [angle]: ExplorationResult } | null,  // From lite-plan
+  diagnosesContext: { [angle]: DiagnosisResult } | null,       // From lite-fix
+  contextAngles: string[],      // Exploration or diagnosis angles
+
+  // Optional
   clarificationContext: { [question]: answer } | null,
-  complexity: "Low" | "Medium" | "High",
-  cli_config: { tool, template, timeout, fallback },
-  session: { id, folder, artifacts }
+  complexity: "Low" | "Medium" | "High",             // For lite-plan
+  severity: "Low" | "Medium" | "High" | "Critical",  // For lite-fix
+  cli_config: { tool, template, timeout, fallback }
 }
 ```

+## Schema-Driven Output
+
+**CRITICAL**: Read the schema reference first to determine output structure:
+- `plan-json-schema.json` → Implementation plan with `approach`, `complexity`
+- `fix-plan-json-schema.json` → Fix plan with `root_cause`, `severity`, `risk_level`
+
+```javascript
+// Step 1: Always read schema first
+const schema = Bash(`cat ${schema_path}`)
+
+// Step 2: Generate plan conforming to schema
+const planObject = generatePlanFromSchema(schema, context)
+```
+
 ## Execution Flow

 ```
-Phase 1: CLI Execution
-├─ Aggregate multi-angle exploration findings
+Phase 1: Schema & Context Loading
+├─ Read schema reference (plan-json-schema or fix-plan-json-schema)
+├─ Aggregate multi-angle context (explorations or diagnoses)
+└─ Determine output structure from schema
+
+Phase 2: CLI Execution
 ├─ Construct CLI command with planning template
 ├─ Execute Gemini (fallback: Qwen → degraded mode)
 └─ Timeout: 60 minutes

-Phase 2: Parsing & Enhancement
-├─ Parse CLI output sections (Summary, Approach, Tasks, Flow Control)
+Phase 3: Parsing & Enhancement
+├─ Parse CLI output sections
 ├─ Validate and enhance task objects
-└─ Infer missing fields from exploration context
+└─ Infer missing fields from context

-Phase 3: planObject Generation
-├─ Build planObject from parsed results
-├─ Generate flow_control from depends_on if not provided
-├─ Aggregate focus_paths from all tasks
-└─ Return to orchestrator (lite-plan)
+Phase 4: planObject Generation
+├─ Build planObject conforming to schema
+├─ Assign CLI execution IDs and strategies
+├─ Generate flow_control from depends_on
+└─ Return to orchestrator
 ```

 ## CLI Command Template

 ```bash
-cd {project_root} && {cli_tool} -p "
-PURPOSE: Generate implementation plan for {complexity} task
+ccw cli -p "
+PURPOSE: Generate plan for {task_description}
 TASK:
-• Analyze: {task_description}
-• Break down into 1-10 tasks with: id, title, file, action, description, modification_points, implementation, reference, acceptance, depends_on
-• Identify parallel vs sequential execution phases
+• Analyze task/bug description and context
+• Break down into tasks following schema structure
+• Identify dependencies and execution phases
 MODE: analysis
-CONTEXT: @**/* | Memory: {exploration_summary}
+CONTEXT: @**/* | Memory: {context_summary}
 EXPECTED:
-## Implementation Summary
+## Summary
 [overview]

-## High-Level Approach
-[strategy]
-
 ## Task Breakdown
-### T1: [Title]
-**File**: [path]
+### T1: [Title] (or FIX1 for fix-plan)
+**Scope**: [module/feature path]
 **Action**: [type]
 **Description**: [what]
 **Modification Points**: - [file]: [target] - [change]
 **Implementation**: 1. [step]
-**Reference**: - Pattern: [name] - Files: [paths] - Examples: [guidance]
-**Acceptance**: - [quantified criterion]
+**Acceptance/Verification**: - [quantified criterion]
 **Depends On**: []

 ## Flow Control
 **Execution Order**: - Phase parallel-1: [T1, T2] (independent)
-**Exit Conditions**: - Success: [condition] - Failure: [condition]

 ## Time Estimate
 **Total**: [time]

 RULES: $(cat ~/.claude/workflows/cli-templates/prompts/planning/02-breakdown-task-steps.txt) |
-- Acceptance must be quantified (counts, method names, metrics)
-- Dependencies use task IDs (T1, T2)
+- Follow schema structure from {schema_path}
+- Acceptance/verification must be quantified
+- Dependencies use task IDs
 - analysis=READ-ONLY
-"
+" --tool {cli_tool} --mode analysis --cd {project_root}
 ```

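As a hypothetical instantiation (task text, tool, and schema path are placeholders taken from the defaults above), the template resolves to a call of this shape:

```bash
ccw cli -p "
PURPOSE: Generate plan for 'add rate limiting to the API gateway'
TASK:
• Analyze task/bug description and context
• Break down into tasks following schema structure
• Identify dependencies and execution phases
MODE: analysis
CONTEXT: @**/*
EXPECTED: Task breakdown per the referenced schema
RULES: Follow schema structure from ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json | analysis=READ-ONLY
" --tool gemini --mode analysis --cd .
```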
 ## Core Functions
@@ -279,6 +256,51 @@ function inferFile(task, ctx) {
 }
 ```

+### CLI Execution ID Assignment (MANDATORY)
+
+```javascript
+function assignCliExecutionIds(tasks, sessionId) {
+  const taskMap = new Map(tasks.map(t => [t.id, t]))
+  const childCount = new Map()
+
+  // Count children for each task
+  tasks.forEach(task => {
+    (task.depends_on || []).forEach(depId => {
+      childCount.set(depId, (childCount.get(depId) || 0) + 1)
+    })
+  })
+
+  tasks.forEach(task => {
+    task.cli_execution_id = `${sessionId}-${task.id}`
+    const deps = task.depends_on || []
+
+    if (deps.length === 0) {
+      task.cli_execution = { strategy: "new" }
+    } else if (deps.length === 1) {
+      const parent = taskMap.get(deps[0])
+      const parentChildCount = childCount.get(deps[0]) || 0
+      task.cli_execution = parentChildCount === 1
+        ? { strategy: "resume", resume_from: parent.cli_execution_id }
+        : { strategy: "fork", resume_from: parent.cli_execution_id }
+    } else {
+      task.cli_execution = {
+        strategy: "merge_fork",
+        merge_from: deps.map(depId => taskMap.get(depId).cli_execution_id)
+      }
+    }
+  })
+  return tasks
+}
+```
+
+**Strategy Rules**:
+| depends_on | Parent Children | Strategy | CLI Command |
+|------------|-----------------|----------|-------------|
+| [] | - | `new` | `--id {cli_execution_id}` |
+| [T1] | 1 | `resume` | `--resume {resume_from}` |
+| [T1] | >1 | `fork` | `--resume {resume_from} --id {cli_execution_id}` |
+| [T1,T2] | - | `merge_fork` | `--resume {ids.join(',')} --id {cli_execution_id}` |
+
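Spelled out as commands, the four strategies correspond to invocations like this sketch (session/task IDs are illustrative):

```bash
ccw cli -p '[prompt]' --tool codex --mode write --id WFS-123-T1                                  # new
ccw cli -p '[prompt]' --tool codex --mode write --resume WFS-123-T1                              # resume
ccw cli -p '[prompt]' --tool codex --mode write --resume WFS-123-T1 --id WFS-123-T3              # fork
ccw cli -p '[prompt]' --tool codex --mode write --resume WFS-123-T1,WFS-123-T2 --id WFS-123-T4   # merge_fork
```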
 ### Flow Control Inference

 ```javascript
@@ -303,21 +325,44 @@ function inferFlowControl(tasks) {
 ### planObject Generation

 ```javascript
-function generatePlanObject(parsed, enrichedContext, input) {
+function generatePlanObject(parsed, enrichedContext, input, schemaType) {
   const tasks = validateAndEnhanceTasks(parsed.raw_tasks, enrichedContext)
+  assignCliExecutionIds(tasks, input.session.id)  // MANDATORY: Assign CLI execution IDs
   const flow_control = parsed.flow_control?.execution_order?.length > 0 ? parsed.flow_control : inferFlowControl(tasks)
-  const focus_paths = [...new Set(tasks.flatMap(t => [t.file, ...t.modification_points.map(m => m.file)]).filter(Boolean))]
+  const focus_paths = [...new Set(tasks.flatMap(t => [t.file || t.scope, ...t.modification_points.map(m => m.file)]).filter(Boolean))]

-  return {
-    summary: parsed.summary || `Implementation plan for: ${input.task_description.slice(0, 100)}`,
-    approach: parsed.approach || "Step-by-step implementation",
+  // Base fields (common to both schemas)
+  const base = {
+    summary: parsed.summary || `Plan for: ${input.task_description.slice(0, 100)}`,
     tasks,
     flow_control,
     focus_paths,
     estimated_time: parsed.time_estimate || `${tasks.length * 30} minutes`,
-    recommended_execution: input.complexity === "Low" ? "Agent" : "Codex",
-    complexity: input.complexity,
-    _metadata: { timestamp: new Date().toISOString(), source: "cli-lite-planning-agent", planning_mode: "agent-based", exploration_angles: input.explorationAngles || [], duration_seconds: Math.round((Date.now() - startTime) / 1000) }
+    recommended_execution: (input.complexity === "Low" || input.severity === "Low") ? "Agent" : "Codex",
+    _metadata: {
+      timestamp: new Date().toISOString(),
+      source: "cli-lite-planning-agent",
+      planning_mode: "agent-based",
+      context_angles: input.contextAngles || [],
+      duration_seconds: Math.round((Date.now() - startTime) / 1000)
+    }
+  }
+
+  // Schema-specific fields
+  if (schemaType === 'fix-plan') {
+    return {
+      ...base,
+      root_cause: parsed.root_cause || "Root cause from diagnosis",
+      strategy: parsed.strategy || "comprehensive_fix",
+      severity: input.severity || "Medium",
+      risk_level: parsed.risk_level || "medium"
+    }
+  } else {
+    return {
+      ...base,
+      approach: parsed.approach || "Step-by-step implementation",
+      complexity: input.complexity || "Medium"
+    }
   }
 }
 ```
@@ -383,9 +428,12 @@ function validateTask(task) {
 ## Key Reminders

 **ALWAYS**:
-- Generate task IDs (T1, T2, T3...)
+- **Read schema first** to determine output structure
+- Generate task IDs (T1/T2 for plan, FIX1/FIX2 for fix-plan)
 - Include depends_on (even if empty [])
-- Quantify acceptance criteria
+- **Assign cli_execution_id** (`{sessionId}-{taskId}`)
+- **Compute cli_execution strategy** based on depends_on
+- Quantify acceptance/verification criteria
 - Generate flow_control from dependencies
 - Handle CLI errors with fallback chain

@@ -394,3 +442,5 @@ function validateTask(task) {
 - Use vague acceptance criteria
 - Create circular dependencies
 - Skip task validation
+- **Skip CLI execution ID assignment**
+- **Ignore schema structure**
@@ -107,7 +107,7 @@ Phase 3: Task JSON Generation

 **Template-Based Command Construction with Test Layer Awareness**:
 ```bash
-cd {project_root} && {cli_tool} -p "
+ccw cli -p "
 PURPOSE: Analyze {test_type} test failures and generate fix strategy for iteration {iteration}
 TASK:
 • Review {failed_tests.length} {test_type} test failures: [{test_names}]
@@ -134,7 +134,7 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/{template}) |
 - Consider previous iteration failures
 - Validate fix doesn't introduce new vulnerabilities
 - analysis=READ-ONLY
-" {timeout_flag}
+" --tool {cli_tool} --mode analysis --cd {project_root} --timeout {timeout_value}
 ```

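An instantiated call might look like the sketch below; the prompt is abridged and the timeout value assumes seconds (the unit of `{timeout_value}` is not specified here):

```bash
ccw cli -p "PURPOSE: Analyze unit test failures for iteration 2 ..." \
  --tool gemini --mode analysis --cd . --timeout 2400
```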
 **Layer-Specific Guidance Injection**:
@@ -527,9 +527,9 @@ See: `.process/iteration-{iteration}-cli-output.txt`
 1. **Detect test_type**: "integration" → Apply integration-specific diagnosis
 2. **Execute CLI**:
    ```bash
-   gemini -p "PURPOSE: Analyze integration test failure...
+   ccw cli -p "PURPOSE: Analyze integration test failure...
    TASK: Examine component interactions, data flow, interface contracts...
-   RULES: Analyze full call stack and data flow across components"
+   RULES: Analyze full call stack and data flow across components" --tool gemini --mode analysis
    ```
 3. **Parse Output**: Extract the RCA, fix-suggestion, and verification-suggestion sections
 4. **Generate Task JSON** (IMPL-fix-1.json):
@@ -34,10 +34,11 @@ You are a code execution specialist focused on implementing high-quality, produc
 - **context-package.json** (when available in workflow tasks)

 **Context Package**:
-`context-package.json` provides artifact paths - extract dynamically using `jq`:
+`context-package.json` provides artifact paths - read using Read tool or ccw session:
 ```bash
-# Get role analysis paths from context package
-jq -r '.brainstorm_artifacts.role_analyses[].files[].path' context-package.json
+# Get context package content from session using Read tool
+Read(.workflow/active/${SESSION_ID}/.process/context-package.json)
+# Returns parsed JSON with brainstorm_artifacts, focus_paths, etc.
 ```

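When the ccw session route is preferred over the Read tool, a sketch (the session ID is a placeholder):

```bash
# Read the context package through ccw session and pull one field
ccw session WFS-20250101-auth read .process/context-package.json --raw | jq '.focus_paths'
```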
 **Pre-Analysis: Smart Tech Stack Loading**:
@@ -121,9 +122,9 @@ When task JSON contains `flow_control.implementation_approach` array:
 - If `command` field present, execute it; otherwise use agent capabilities

 **CLI Command Execution (CLI Execute Mode)**:
-When step contains `command` field with Codex CLI, execute via Bash tool. For Codex resume:
-- First task (`depends_on: []`): `codex -C [path] --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
-- Subsequent tasks (has `depends_on`): Add `resume --last` flag to maintain session context
+When step contains `command` field with Codex CLI, execute via CCW CLI. For Codex resume:
+- First task (`depends_on: []`): `ccw cli -p "..." --tool codex --mode write --cd [path]`
+- Subsequent tasks (has `depends_on`): Use CCW CLI with resume context to maintain session

 **Test-Driven Development**:
 - Write tests first (red → green → refactor)
@@ -109,7 +109,7 @@ This agent processes **simplified inline [FLOW_CONTROL]** format from brainstorm

 3. **load_session_metadata**
    - Action: Load session metadata
-   - Command: bash(cat .workflow/WFS-{session}/workflow-session.json)
+   - Command: Read(.workflow/active/WFS-{session}/workflow-session.json)
    - Output: session_metadata
 ```

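The same metadata can also be fetched via the session CLI used elsewhere in this workflow; a sketch with a placeholder session ID:

```bash
ccw session WFS-20250101-demo read workflow-session.json --raw
```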
@@ -155,7 +155,7 @@ When called, you receive:
 - **User Context**: Specific requirements, constraints, and expectations from user discussion
 - **Output Location**: Directory path for generated analysis files
 - **Role Hint** (optional): Suggested role or role selection guidance
-- **context-package.json** (CCW Workflow): Artifact paths catalog - extract using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
+- **context-package.json** (CCW Workflow): Artifact paths catalog - use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
 - **ASSIGNED_ROLE** (optional): Specific role assignment
 - **ANALYSIS_DIMENSIONS** (optional): Role-specific analysis dimensions

@@ -44,19 +44,19 @@ You are a context discovery specialist focused on gathering relevant project inf
 **Use**: Unfamiliar APIs/libraries/patterns

 ### 3. Existing Code Discovery
-**Primary (Code-Index MCP)**:
-- `mcp__code-index__set_project_path()` - Initialize index
-- `mcp__code-index__find_files(pattern)` - File pattern matching
-- `mcp__code-index__search_code_advanced()` - Content search
-- `mcp__code-index__get_file_summary()` - File structure analysis
-- `mcp__code-index__refresh_index()` - Update index
+**Primary (CCW CodexLens MCP)**:
+- `mcp__ccw-tools__codex_lens(action="init", path=".")` - Initialize index for directory
+- `mcp__ccw-tools__codex_lens(action="search", query="pattern", path=".")` - Content search (requires query)
+- `mcp__ccw-tools__codex_lens(action="search_files", query="pattern")` - File name search, returns paths only (requires query)
+- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Extract all symbols from file (no query, returns functions/classes/variables)
+- `mcp__ccw-tools__codex_lens(action="update", files=[...])` - Update index for specific files

 **Fallback (CLI)**:
 - `rg` (ripgrep) - Fast content search
 - `find` - File discovery
 - `Grep` - Pattern matching

-**Priority**: Code-Index MCP > ripgrep > find > grep
+**Priority**: CodexLens MCP > ripgrep > find > grep

 ## Simplified Execution Process (3 Phases)

@@ -77,9 +77,8 @@ if (file_exists(contextPackagePath)) {

 **1.2 Foundation Setup**:
 ```javascript
-// 1. Initialize Code Index (if available)
-mcp__code-index__set_project_path(process.cwd())
-mcp__code-index__refresh_index()
+// 1. Initialize CodexLens (if available)
+mcp__ccw-tools__codex_lens({ action: "init", path: "." })

 // 2. Project Structure
 bash(ccw tool exec get_modules_by_depth '{}')
@@ -212,18 +211,18 @@ mcp__exa__web_search_exa({

 **Layer 1: File Pattern Discovery**
 ```javascript
-// Primary: Code-Index MCP
-const files = mcp__code-index__find_files("*{keyword}*")
+// Primary: CodexLens MCP
+const files = mcp__ccw-tools__codex_lens({ action: "search_files", query: "*{keyword}*" })
 // Fallback: find . -iname "*{keyword}*" -type f
 ```

 **Layer 2: Content Search**
 ```javascript
-// Primary: Code-Index MCP
-mcp__code-index__search_code_advanced({
-  pattern: "{keyword}",
-  file_pattern: "*.ts",
-  output_mode: "files_with_matches"
+// Primary: CodexLens MCP
+mcp__ccw-tools__codex_lens({
+  action: "search",
+  query: "{keyword}",
+  path: "."
 })
 // Fallback: rg "{keyword}" -t ts --files-with-matches
 ```
@@ -231,11 +230,10 @@ mcp__code-index__search_code_advanced({
 **Layer 3: Semantic Patterns**
 ```javascript
 // Find definitions (class, interface, function)
-mcp__code-index__search_code_advanced({
-  pattern: "^(export )?(class|interface|type|function) .*{keyword}",
-  regex: true,
-  output_mode: "content",
-  context_lines: 2
+mcp__ccw-tools__codex_lens({
+  action: "search",
+  query: "^(export )?(class|interface|type|function) .*{keyword}",
+  path: "."
 })
 ```

@@ -243,21 +241,22 @@ mcp__code-index__search_code_advanced({
 ```javascript
 // Get file summaries for imports/exports
 for (const file of discovered_files) {
-  const summary = mcp__code-index__get_file_summary(file)
-  // summary: {imports, functions, classes, line_count}
+  const summary = mcp__ccw-tools__codex_lens({ action: "symbol", file: file })
+  // summary: {symbols: [{name, type, line}]}
 }
 ```

 **Layer 5: Config & Tests**
 ```javascript
 // Config files
-mcp__code-index__find_files("*.config.*")
-mcp__code-index__find_files("package.json")
+mcp__ccw-tools__codex_lens({ action: "search_files", query: "*.config.*" })
+mcp__ccw-tools__codex_lens({ action: "search_files", query: "package.json" })

 // Tests
-mcp__code-index__search_code_advanced({
-  pattern: "(describe|it|test).*{keyword}",
-  file_pattern: "*.{test,spec}.*"
+mcp__ccw-tools__codex_lens({
+  action: "search",
+  query: "(describe|it|test).*{keyword}",
+  path: "."
 })
 ```

@@ -560,14 +559,14 @@ Output: .workflow/session/{session}/.process/context-package.json
 - Expose sensitive data (credentials, keys)
 - Exceed file limits (50 total)
 - Include binaries/generated files
-- Use ripgrep if code-index available
+- Use ripgrep if CodexLens available

 **ALWAYS**:
-- Initialize code-index in Phase 0
+- Initialize CodexLens in Phase 0
 - Execute get_modules_by_depth.sh
 - Load CLAUDE.md/README.md (unless in memory)
 - Execute all 3 discovery tracks
-- Use code-index MCP as primary
+- Use CodexLens MCP as primary
 - Fallback to ripgrep only when needed
 - Use Exa for unfamiliar APIs
 - Apply multi-factor scoring
@@ -61,9 +61,9 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut

 **Step 2** (CLI execution):
 - Agent substitutes [target_folders] into command
-- Agent executes CLI command via Bash tool:
+- Agent executes CLI command via CCW:
   ```bash
-  bash(cd src/modules && gemini --approval-mode yolo -p "
+  ccw cli -p "
   PURPOSE: Generate module documentation
   TASK: Create API.md and README.md for each module
   MODE: write
@@ -71,7 +71,7 @@ The agent supports **two execution modes** based on task JSON's `meta.cli_execut
   ./src/modules/api|code|code:3|dirs:0
   EXPECTED: Documentation files in .workflow/docs/my_project/src/modules/
   RULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt) | Mirror source structure
-  ")
+  " --tool gemini --mode write --cd src/modules
   ```

 4. **CLI Execution** (Gemini CLI):
@@ -216,7 +216,7 @@ Before completion, verify:
 {
   "step": "analyze_module_structure",
   "action": "Deep analysis of module structure and API",
-  "command": "bash(cd src/auth && gemini \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\")",
+  "command": "ccw cli -p \"PURPOSE: Document module comprehensively\nTASK: Extract module purpose, architecture, public API, dependencies\nMODE: analysis\nCONTEXT: @**/* System: [system_context]\nEXPECTED: Complete module analysis for documentation\nRULES: $(cat ~/.claude/workflows/cli-templates/prompts/documentation/module-documentation.txt)\" --tool gemini --mode analysis --cd src/auth",
   "output_to": "module_analysis",
   "on_error": "fail"
 }
@@ -36,10 +36,10 @@ You are a test context discovery specialist focused on gathering test coverage i
 **Use**: Phase 1 source context loading

 ### 2. Test Coverage Discovery
-**Primary (Code-Index MCP)**:
-- `mcp__code-index__find_files(pattern)` - Find test files (*.test.*, *.spec.*)
-- `mcp__code-index__search_code_advanced()` - Search test patterns
-- `mcp__code-index__get_file_summary()` - Analyze test structure
+**Primary (CCW CodexLens MCP)**:
+- `mcp__ccw-tools__codex_lens(action="search_files", query="*.test.*")` - Find test files
+- `mcp__ccw-tools__codex_lens(action="search", query="pattern")` - Search test patterns
+- `mcp__ccw-tools__codex_lens(action="symbol", file="path")` - Analyze test structure

 **Fallback (CLI)**:
 - `rg` (ripgrep) - Fast test pattern search
@@ -120,9 +120,10 @@ for (const summary_path of summaries) {

 **2.1 Existing Test Discovery**:
 ```javascript
-// Method 1: Code-Index MCP (preferred)
-const test_files = mcp__code-index__find_files({
-  patterns: ["*.test.*", "*.spec.*", "*test_*.py", "*_test.go"]
+// Method 1: CodexLens MCP (preferred)
+const test_files = mcp__ccw-tools__codex_lens({
+  action: "search_files",
+  query: "*.test.* OR *.spec.* OR test_*.py OR *_test.go"
 });

 // Method 2: Fallback CLI
@@ -83,17 +83,18 @@ When task JSON contains implementation_approach array:
 - L1 (Unit): `*.test.*`, `*.spec.*` in `__tests__/`, `tests/unit/`
 - L2 (Integration): `tests/integration/`, `*.integration.test.*`
 - L3 (E2E): `tests/e2e/`, `*.e2e.test.*`, `cypress/`, `playwright/`
-- **context-package.json** (CCW Workflow): Extract artifact paths using `jq -r '.brainstorm_artifacts.role_analyses[].files[].path'`
+- **context-package.json** (CCW Workflow): Use Read tool to get context package from `.workflow/active/{session}/.process/context-package.json`
 - Identify test commands from project configuration

 ```bash
 # Detect test framework and multi-layered commands
 if [ -f "package.json" ]; then
-  # Extract layer-specific test commands
-  LINT_CMD=$(cat package.json | jq -r '.scripts.lint // "eslint ."')
-  UNIT_CMD=$(cat package.json | jq -r '.scripts["test:unit"] // .scripts.test')
-  INTEGRATION_CMD=$(cat package.json | jq -r '.scripts["test:integration"] // ""')
-  E2E_CMD=$(cat package.json | jq -r '.scripts["test:e2e"] // ""')
+  # Extract layer-specific test commands using Read tool or jq
+  PKG_JSON=$(cat package.json)
+  LINT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts.lint // "eslint ."')
+  UNIT_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:unit"] // .scripts.test')
+  INTEGRATION_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:integration"] // ""')
+  E2E_CMD=$(echo "$PKG_JSON" | jq -r '.scripts["test:e2e"] // ""')
 elif [ -f "pytest.ini" ] || [ -f "setup.py" ]; then
   LINT_CMD="ruff check . || flake8 ."
   UNIT_CMD="pytest tests/unit/"
@@ -74,7 +74,7 @@ SlashCommand(command="/workflow:session:start --type docs --new \"{project_name}

 ```bash
 # Update workflow-session.json with docs-specific fields
-bash(jq '. + {"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}' .workflow/active/{sessionId}/workflow-session.json > tmp.json && mv tmp.json .workflow/active/{sessionId}/workflow-session.json)
+ccw session {sessionId} write workflow-session.json '{"target_path":"{target_path}","project_root":"{project_root}","project_name":"{project_name}","mode":"full","tool":"gemini","cli_execute":false}'
 ```

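A write followed by a read-back, using the session subcommands shown in this workflow (the session ID is a placeholder):

```bash
ccw session WFS-docs-20250101 write workflow-session.json '{"mode":"full","tool":"gemini"}'
ccw session WFS-docs-20250101 read workflow-session.json --raw | jq -r '.mode'
```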
 ### Phase 2: Analyze Structure
@@ -136,7 +136,8 @@ bash(if [ -d .workflow/docs/\${project_name} ]; then find .workflow/docs/\${proj

 ```bash
 # Count existing docs from doc-planning-data.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq '.existing_docs.file_list | length')
+ccw session WFS-docs-{timestamp} read .process/doc-planning-data.json --raw | jq '.existing_docs.file_list | length'
+# Or read entire process file and parse
 ```

 **Data Processing**: Use count result, then use **Edit tool** to update `workflow-session.json`:
@@ -190,10 +191,10 @@ Large Projects (single dir >10 docs):

 ```bash
 # 1. Get top-level directories from doc-planning-data.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json | jq -r '.top_level_dirs[]')
+ccw session WFS-docs-{timestamp} read .process/doc-planning-data.json --raw | jq -r '.top_level_dirs[]'

 # 2. Get mode from workflow-session.json
-bash(cat .workflow/active/WFS-docs-{timestamp}/workflow-session.json | jq -r '.mode // "full"')
+ccw session WFS-docs-{timestamp} read workflow-session.json --raw | jq -r '.mode // "full"'

 # 3. Check for HTTP API
 bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo "NO_API")
@@ -222,7 +223,7 @@ bash(grep -r "router\.|@Get\|@Post" src/ 2>/dev/null && echo "API_FOUND" || echo

 **Task ID Calculation**:
 ```bash
-group_count=$(jq '.groups.count' .workflow/active/WFS-docs-{timestamp}/.process/doc-planning-data.json)
+group_count=$(ccw session WFS-docs-{timestamp} read .process/doc-planning-data.json --raw | jq '.groups.count')
 readme_id=$((group_count + 1))  # Next ID after groups
 arch_id=$((group_count + 2))
 api_id=$((group_count + 3))
@@ -235,12 +236,12 @@ api_id=$((group_count + 3))
 | Mode | cli_execute | Placement | CLI MODE | Approval Flag | Agent Role |
 |------|-------------|-----------|----------|---------------|------------|
 | **Agent** | false | pre_analysis | analysis | (none) | Generate docs in implementation_approach |
-| **CLI** | true | implementation_approach | write | --approval-mode yolo | Execute CLI commands, validate output |
+| **CLI** | true | implementation_approach | write | --mode write | Execute CLI commands, validate output |

 **Command Patterns**:
-- Gemini/Qwen: `cd dir && gemini -p "..."`
-- CLI Mode: `cd dir && gemini --approval-mode yolo -p "..."`
-- Codex: `codex -C dir --full-auto exec "..." --skip-git-repo-check -s danger-full-access`
+- Gemini/Qwen: `ccw cli -p "..." --tool gemini --mode analysis --cd dir`
+- CLI Mode: `ccw cli -p "..." --tool gemini --mode write --cd dir`
+- Codex: `ccw cli -p "..." --tool codex --mode write --cd dir`

 **Generation Process**:
 1. Read configuration values (tool, cli_execute, mode) from workflow-session.json
@@ -285,8 +286,8 @@ api_id=$((group_count + 3))
 "step": "load_precomputed_data",
 "action": "Load Phase 2 analysis and extract group directories",
 "commands": [
-  "bash(cat ${session_dir}/.process/doc-planning-data.json)",
-  "bash(jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories' ${session_dir}/.process/doc-planning-data.json)"
+  "ccw session ${session_id} read .process/doc-planning-data.json",
+  "ccw session ${session_id} read .process/doc-planning-data.json --raw | jq '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories'"
 ],
 "output_to": "phase2_context",
 "note": "Single JSON file contains all Phase 2 analysis results"
@@ -331,7 +332,7 @@ api_id=$((group_count + 3))
 {
   "step": 2,
   "title": "Batch generate documentation via CLI",
-  "command": "bash(dirs=$(jq -r '.groups.assignments[] | select(.group_id == \"${group_number}\") | .directories[]' ${session_dir}/.process/doc-planning-data.json); for dir in $dirs; do cd \"$dir\" && gemini --approval-mode yolo -p \"PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure\" || echo \"Failed: $dir\"; cd -; done)",
+  "command": "ccw cli -p 'PURPOSE: Generate module docs\\nTASK: Create documentation\\nMODE: write\\nCONTEXT: @**/* [phase2_context]\\nEXPECTED: API.md and README.md\\nRULES: Mirror structure' --tool gemini --mode write --cd ${dirs_from_group}",
   "depends_on": [1],
   "output": "generated_docs"
 }
@@ -363,7 +364,7 @@ api_id=$((group_count + 3))
 },
 {
   "step": "analyze_project",
-  "command": "bash(gemini \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\")",
+  "command": "bash(ccw cli -p \"PURPOSE: Analyze project structure\\nTASK: Extract overview from modules\\nMODE: analysis\\nCONTEXT: [all_module_docs]\\nEXPECTED: Project outline\" --tool gemini --mode analysis)",
   "output_to": "project_outline"
 }
 ],
@@ -403,7 +404,7 @@ api_id=$((group_count + 3))
 "pre_analysis": [
   {"step": "load_existing_docs", "command": "bash(cat .workflow/docs/${project_name}/{ARCHITECTURE,EXAMPLES}.md 2>/dev/null || echo 'No existing docs')", "output_to": "existing_arch_examples"},
   {"step": "load_all_docs", "command": "bash(cat .workflow/docs/${project_name}/README.md && find .workflow/docs/${project_name} -type f -name '*.md' ! -path '*/README.md' ! -path '*/ARCHITECTURE.md' ! -path '*/EXAMPLES.md' ! -path '*/api/*' | xargs cat)", "output_to": "all_docs"},
-  {"step": "analyze_architecture", "command": "bash(gemini \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\")", "output_to": "arch_examples_outline"}
+  {"step": "analyze_architecture", "command": "bash(ccw cli -p \"PURPOSE: Analyze system architecture\\nTASK: Synthesize architectural overview and examples\\nMODE: analysis\\nCONTEXT: [all_docs]\\nEXPECTED: Architecture + Examples outline\" --tool gemini --mode analysis)", "output_to": "arch_examples_outline"}
 ],
 "implementation_approach": [
 {
@@ -440,7 +441,7 @@ api_id=$((group_count + 3))
 "pre_analysis": [
   {"step": "discover_api", "command": "bash(rg 'router\\.| @(Get|Post)' -g '*.{ts,js}')", "output_to": "endpoint_discovery"},
   {"step": "load_existing_api", "command": "bash(cat .workflow/docs/${project_name}/api/README.md 2>/dev/null || echo 'No existing API docs')", "output_to": "existing_api_docs"},
-  {"step": "analyze_api", "command": "bash(gemini \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\")", "output_to": "api_outline"}
+  {"step": "analyze_api", "command": "bash(ccw cli -p \"PURPOSE: Document HTTP API\\nTASK: Analyze endpoints\\nMODE: analysis\\nCONTEXT: @src/api/**/* [endpoint_discovery]\\nEXPECTED: API outline\" --tool gemini --mode analysis)", "output_to": "api_outline"}
 ],
 "implementation_approach": [
 {
@@ -601,7 +602,7 @@ api_id=$((group_count + 3))
 | Mode | CLI Placement | CLI MODE | Approval Flag | Agent Role |
 |------|---------------|----------|---------------|------------|
 | **Agent (default)** | pre_analysis | analysis | (none) | Generates documentation content |
-| **CLI (--cli-execute)** | implementation_approach | write | --approval-mode yolo | Executes CLI commands, validates output |
+| **CLI (--cli-execute)** | implementation_approach | write | --mode write | Executes CLI commands, validates output |

 **Execution Flow**:
 - **Phase 2**: Unified analysis once, results in `.process/`
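As a concrete illustration of the two placements in the table, a hedged pair of invocations (prompt text abbreviated; flags are the ones that appear elsewhere in this document):

```bash
# Agent-side pre_analysis step: read-only analysis, no approval flag
ccw cli -p "PURPOSE: Analyze module layout\nMODE: analysis\nCONTEXT: @**/*" \
  --tool gemini --mode analysis

# CLI-execute implementation step: write mode
ccw cli -p "PURPOSE: Generate module docs\nMODE: write\nCONTEXT: @**/*" \
  --tool gemini --mode write
```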
@@ -5,7 +5,7 @@ argument-hint: "[--tool gemini|qwen] \"task context description\""
 allowed-tools: Task(*), Bash(*)
 examples:
 - /memory:load "在当前前端基础上开发用户认证功能"
-- /memory:load --tool qwen -p "重构支付模块API"
+- /memory:load --tool qwen "重构支付模块API"
 ---

 # Memory Load Command (/memory:load)
@@ -39,7 +39,7 @@ The command fully delegates to **universal-executor agent**, which autonomously:
 1. **Analyzes Project Structure**: Executes `get_modules_by_depth.sh` to understand architecture
 2. **Loads Documentation**: Reads CLAUDE.md, README.md and other key docs
 3. **Extracts Keywords**: Derives core keywords from task description
-4. **Discovers Files**: Uses MCP code-index or rg/find to locate relevant files
+4. **Discovers Files**: Uses CodexLens MCP or rg/find to locate relevant files
 5. **CLI Deep Analysis**: Executes Gemini/Qwen CLI for deep context analysis
 6. **Generates Content Package**: Returns structured JSON core content package
@@ -136,7 +136,7 @@ Task(
 Execute Gemini/Qwen CLI for deep analysis (saves main thread tokens):

 \`\`\`bash
-cd . && ${tool} -p "
+ccw cli -p "
 PURPOSE: Extract project core context for task: ${task_description}
 TASK: Analyze project architecture, tech stack, key patterns, relevant files
 MODE: analysis
@@ -147,7 +147,7 @@ RULES:
 - Identify key architecture patterns and technical constraints
 - Extract integration points and development standards
 - Output concise, structured format
-"
+" --tool ${tool} --mode analysis
 \`\`\`

 ### Step 4: Generate Core Content Package
@@ -212,7 +212,7 @@ Before returning:
 ### Example 2: Using Qwen Tool

 ```bash
-/memory:load --tool qwen -p "重构支付模块API"
+/memory:load --tool qwen "重构支付模块API"
 ```

 Agent uses Qwen CLI for analysis, returns same structured package.
@@ -1,477 +1,314 @@
 ---
 name: tech-research
-description: 3-phase orchestrator: extract tech stack from session/name → delegate to agent for Exa research and module generation → generate SKILL.md index (skips phase 2 if exists)
+description: "3-phase orchestrator: extract tech stack → Exa research → generate path-conditional rules (auto-loaded by Claude Code)"
 argument-hint: "[session-id | tech-stack-name] [--regenerate] [--tool <gemini|qwen>]"
 allowed-tools: SlashCommand(*), TodoWrite(*), Bash(*), Read(*), Write(*), Task(*)
 ---

-# Tech Stack Research SKILL Generator
+# Tech Stack Rules Generator

 ## Overview

-**Pure Orchestrator with Agent Delegation**: Prepares context paths and delegates ALL work to agent. Agent produces files directly.
-
-**Auto-Continue Workflow**: Runs fully autonomously once triggered. Each phase completes and automatically triggers the next phase.
+**Purpose**: Generate multi-layered, path-conditional rules that Claude Code automatically loads based on file context.
+
+**Key Difference from SKILL Memory**:
+- **SKILL**: Manual loading via `Skill(command: "tech-name")`
+- **Rules**: Automatic loading when working with matching file paths

-**Execution Paths**:
-- **Full Path**: All 3 phases (no existing SKILL OR `--regenerate` specified)
-- **Skip Path**: Phase 1 → Phase 3 (existing SKILL found AND no `--regenerate` flag)
-- **Phase 3 Always Executes**: SKILL index is always generated or updated
+**Output Structure**:
+```
+.claude/rules/tech/{tech-stack}/
+├── core.md # paths: **/*.{ext} - Core principles
+├── patterns.md # paths: src/**/*.{ext} - Implementation patterns
+├── testing.md # paths: **/*.{test,spec}.{ext} - Testing rules
+├── config.md # paths: *.config.* - Configuration rules
+├── api.md # paths: **/api/**/* - API rules (backend only)
+├── components.md # paths: **/components/**/* - Component rules (frontend only)
+└── metadata.json # Generation metadata
+```

-**Agent Responsibility**:
-- Agent does ALL the work: context reading, Exa research, content synthesis, file writing
-- Orchestrator only provides context paths and waits for completion
+**Templates Location**: `~/.claude/workflows/cli-templates/prompts/rules/`

 ---

 ## Core Rules

-1. **Start Immediately**: First action is TodoWrite initialization, second action is Phase 1 execution
-2. **Context Path Delegation**: Pass session directory or tech stack name to agent, let agent do discovery
-3. **Agent Produces Files**: Agent directly writes all module files, orchestrator does NOT parse agent output
-4. **Auto-Continue**: After completing each phase, update TodoWrite and immediately execute next phase
-5. **No User Prompts**: Never ask user questions or wait for input between phases
-6. **Track Progress**: Update TodoWrite after EVERY phase completion before starting next phase
-7. **Lightweight Index**: Phase 3 only generates SKILL.md index by reading existing files
+1. **Start Immediately**: First action is TodoWrite initialization
+2. **Path-Conditional Output**: Every rule file includes `paths` frontmatter
+3. **Template-Driven**: Agent reads templates before generating content
+4. **Agent Produces Files**: Agent writes all rule files directly
+5. **No Manual Loading**: Rules auto-activate when Claude works with matching files

 ---

 ## 3-Phase Execution

-### Phase 1: Prepare Context Paths
+### Phase 1: Prepare Context & Detect Tech Stack

-**Goal**: Detect input mode, prepare context paths for agent, check existing SKILL
+**Goal**: Detect input mode, extract tech stack info, determine file extensions

 **Input Mode Detection**:
 ```bash
-# Get input parameter
 input="$1"

-# Detect mode
 if [[ "$input" == WFS-* ]]; then
 MODE="session"
 SESSION_ID="$input"
-CONTEXT_PATH=".workflow/${SESSION_ID}"
+# Read workflow-session.json to extract tech stack
 else
 MODE="direct"
 TECH_STACK_NAME="$input"
-CONTEXT_PATH="$input" # Pass tech stack name as context
 fi
 ```

-**Check Existing SKILL**:
-```bash
-# For session mode, peek at session to get tech stack name
-if [[ "$MODE" == "session" ]]; then
-bash(test -f ".workflow/${SESSION_ID}/workflow-session.json")
-Read(.workflow/${SESSION_ID}/workflow-session.json)
-# Extract tech_stack_name (minimal extraction)
-fi

-# Normalize and check
+**Tech Stack Analysis**:
+```javascript
+// Decompose composite tech stacks
+// "typescript-react-nextjs" → ["typescript", "react", "nextjs"]
+const TECH_EXTENSIONS = {
+"typescript": "{ts,tsx}",
+"javascript": "{js,jsx}",
+"python": "py",
+"rust": "rs",
+"go": "go",
+"java": "java",
+"csharp": "cs",
+"ruby": "rb",
+"php": "php"
+};

+const FRAMEWORK_TYPE = {
+"react": "frontend",
+"vue": "frontend",
+"angular": "frontend",
+"nextjs": "fullstack",
+"nuxt": "fullstack",
+"fastapi": "backend",
+"express": "backend",
+"django": "backend",
+"rails": "backend"
+};
+```

+**Check Existing Rules**:
+```bash
 normalized_name=$(echo "$TECH_STACK_NAME" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
-bash(test -d ".claude/skills/${normalized_name}" && echo "exists" || echo "not_exists")
-bash(find ".claude/skills/${normalized_name}" -name "*.md" 2>/dev/null | wc -l || echo 0)
+rules_dir=".claude/rules/tech/${normalized_name}"
+existing_count=$(find "${rules_dir}" -name "*.md" 2>/dev/null | wc -l || echo 0)
 ```

 **Skip Decision**:
-```javascript
-if (existing_files > 0 && !regenerate_flag) {
-SKIP_GENERATION = true
-message = "Tech stack SKILL already exists, skipping Phase 2. Use --regenerate to force regeneration."
-} else if (regenerate_flag) {
-bash(rm -rf ".claude/skills/${normalized_name}")
-SKIP_GENERATION = false
-message = "Regenerating tech stack SKILL from scratch."
-} else {
-SKIP_GENERATION = false
-message = "No existing SKILL found, generating new tech stack documentation."
-}
-```
+- If `existing_count > 0` AND no `--regenerate` → `SKIP_GENERATION = true`
+- If `--regenerate` → Delete existing and regenerate

 **Output Variables**:
-- `MODE`: `session` or `direct`
-- `SESSION_ID`: Session ID (if session mode)
-- `CONTEXT_PATH`: Path to session directory OR tech stack name
-- `TECH_STACK_NAME`: Extracted or provided tech stack name
-- `SKIP_GENERATION`: Boolean - whether to skip Phase 2
+- `TECH_STACK_NAME`: Normalized name
+- `PRIMARY_LANG`: Primary language
+- `FILE_EXT`: File extension pattern
+- `FRAMEWORK_TYPE`: frontend | backend | fullstack | library
+- `COMPONENTS`: Array of tech components
+- `SKIP_GENERATION`: Boolean

-**TodoWrite**:
-- If skipping: Mark phase 1 completed, phase 2 completed, phase 3 in_progress
-- If not skipping: Mark phase 1 completed, phase 2 in_progress
+**TodoWrite**: Mark phase 1 completed

 ---

-### Phase 2: Agent Produces All Files
+### Phase 2: Agent Produces Path-Conditional Rules

 **Skip Condition**: Skipped if `SKIP_GENERATION = true`

-**Goal**: Delegate EVERYTHING to agent - context reading, Exa research, content synthesis, and file writing
+**Goal**: Delegate to agent for Exa research and rule file generation

-**Agent Task Specification**:
+**Template Files**:
 ```
-Task(
+~/.claude/workflows/cli-templates/prompts/rules/
+├── tech-rules-agent-prompt.txt # Agent instructions
+├── rule-core.txt # Core principles template
+├── rule-patterns.txt # Implementation patterns template
+├── rule-testing.txt # Testing rules template
+├── rule-config.txt # Configuration rules template
+├── rule-api.txt # API rules template (backend)
+└── rule-components.txt # Component rules template (frontend)
+```

+**Agent Task**:

+```javascript
+Task({
 subagent_type: "general-purpose",
-description: "Generate tech stack SKILL: {CONTEXT_PATH}",
-prompt: "
-Generate a complete tech stack SKILL package with Exa research.
+description: `Generate tech stack rules: ${TECH_STACK_NAME}`,
+prompt: `
+You are generating path-conditional rules for Claude Code.

-**Context Provided**:
-- Mode: {MODE}
-- Context Path: {CONTEXT_PATH}
+## Context
+- Tech Stack: ${TECH_STACK_NAME}
+- Primary Language: ${PRIMARY_LANG}
+- File Extensions: ${FILE_EXT}
+- Framework Type: ${FRAMEWORK_TYPE}
+- Components: ${JSON.stringify(COMPONENTS)}
+- Output Directory: .claude/rules/tech/${TECH_STACK_NAME}/

-**Templates Available**:
-- Module Format: ~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt
-- SKILL Index: ~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt
+## Instructions

-**Your Responsibilities**:
+Read the agent prompt template for detailed instructions:
+$(cat ~/.claude/workflows/cli-templates/prompts/rules/tech-rules-agent-prompt.txt)

-1. **Extract Tech Stack Information**:
+## Execution Steps

-IF MODE == 'session':
-- Read `.workflow/active/{session_id}/workflow-session.json`
-- Read `.workflow/active/{session_id}/.process/context-package.json`
-- Extract tech_stack: {language, frameworks, libraries}
-- Build tech stack name: \"{language}-{framework1}-{framework2}\"
-- Example: \"typescript-react-nextjs\"
+1. Execute Exa research queries (see agent prompt)
+2. Read each rule template
+3. Generate rule files following template structure
+4. Write files to output directory
+5. Write metadata.json
+6. Report completion

-IF MODE == 'direct':
-- Tech stack name = CONTEXT_PATH
-- Parse composite: split by '-' delimiter
-- Example: \"typescript-react-nextjs\" → [\"typescript\", \"react\", \"nextjs\"]
+## Variable Substitutions

-2. **Execute Exa Research** (4-6 parallel queries):
-
-Base Queries (always execute):
-- mcp__exa__get_code_context_exa(query: \"{tech} core principles best practices 2025\", tokensNum: 8000)
-- mcp__exa__get_code_context_exa(query: \"{tech} common patterns architecture examples\", tokensNum: 7000)
-- mcp__exa__web_search_exa(query: \"{tech} configuration setup tooling 2025\", numResults: 5)
-- mcp__exa__get_code_context_exa(query: \"{tech} testing strategies\", tokensNum: 5000)
+Replace in templates:
+- {TECH_STACK_NAME} → ${TECH_STACK_NAME}
+- {PRIMARY_LANG} → ${PRIMARY_LANG}
+- {FILE_EXT} → ${FILE_EXT}
+- {FRAMEWORK_TYPE} → ${FRAMEWORK_TYPE}
+`
+})

-Component Queries (if composite):
-- For each additional component:
-mcp__exa__get_code_context_exa(query: \"{main_tech} {component} integration\", tokensNum: 5000)

-3. **Read Module Format Template**:

-Read template for structure guidance:
-```bash
-Read(~/.claude/workflows/cli-templates/prompts/tech/tech-module-format.txt)
-```

-4. **Synthesize Content into 6 Modules**:

-Follow template structure from tech-module-format.txt:
-- **principles.md** - Core concepts, philosophies (~3K tokens)
-- **patterns.md** - Implementation patterns with code examples (~5K tokens)
-- **practices.md** - Best practices, anti-patterns, pitfalls (~4K tokens)
-- **testing.md** - Testing strategies, frameworks (~3K tokens)
-- **config.md** - Setup, configuration, tooling (~3K tokens)
-- **frameworks.md** - Framework integration (only if composite, ~4K tokens)

-Each module follows template format:
-- Frontmatter (YAML)
-- Main sections with clear headings
-- Code examples from Exa research
-- Best practices sections
-- References to Exa sources

-5. **Write Files Directly**:

-```javascript
-// Create directory
-bash(mkdir -p \".claude/skills/{tech_stack_name}\")

-// Write each module file using Write tool
-Write({ file_path: \".claude/skills/{tech_stack_name}/principles.md\", content: ... })
-Write({ file_path: \".claude/skills/{tech_stack_name}/patterns.md\", content: ... })
-Write({ file_path: \".claude/skills/{tech_stack_name}/practices.md\", content: ... })
-Write({ file_path: \".claude/skills/{tech_stack_name}/testing.md\", content: ... })
-Write({ file_path: \".claude/skills/{tech_stack_name}/config.md\", content: ... })
-// Write frameworks.md only if composite

-// Write metadata.json
-Write({
-file_path: \".claude/skills/{tech_stack_name}/metadata.json\",
-content: JSON.stringify({
-tech_stack_name,
-components,
-is_composite,
-generated_at: timestamp,
-source: \"exa-research\",
-research_summary: { total_queries, total_sources }
-})
-})
-```

-6. **Report Completion**:

-Provide summary:
-- Tech stack name
-- Files created (count)
-- Exa queries executed
-- Sources consulted

-**CRITICAL**:
-- MUST read external template files before generating content (step 3 for modules, step 4 for index)
-- You have FULL autonomy - read files, execute Exa, synthesize content, write files
-- Do NOT return JSON or structured data - produce actual .md files
-- Handle errors gracefully (Exa failures, missing files, template read failures)
-- If tech stack cannot be determined, ask orchestrator to clarify
-"
-)
 ```

 **Completion Criteria**:
-- Agent task executed successfully
-- 5-6 modular files written to `.claude/skills/{tech_stack_name}/`
+- 4-6 rule files written with proper `paths` frontmatter
 - metadata.json written
-- Agent reports completion
+- Agent reports files created

-**TodoWrite**: Mark phase 2 completed, phase 3 in_progress
+**TodoWrite**: Mark phase 2 completed

 ---

-### Phase 3: Generate SKILL.md Index
+### Phase 3: Verify & Report

-**Note**: This phase **ALWAYS executes** - generates or updates the SKILL index.
-
-**Goal**: Read generated module files and create SKILL.md index with loading recommendations
+**Goal**: Verify generated files and provide usage summary

 **Steps**:

-1. **Verify Generated Files**:
+1. **Verify Files**:
 ```bash
-bash(find ".claude/skills/${TECH_STACK_NAME}" -name "*.md" -type f | sort)
+find ".claude/rules/tech/${TECH_STACK_NAME}" -name "*.md" -type f
 ```

-2. **Read metadata.json**:
+2. **Validate Frontmatter**:
+```bash
+head -5 ".claude/rules/tech/${TECH_STACK_NAME}/core.md"
+```

+3. **Read Metadata**:
 ```javascript
-Read(.claude/skills/${TECH_STACK_NAME}/metadata.json)
-// Extract: tech_stack_name, components, is_composite, research_summary
+Read(`.claude/rules/tech/${TECH_STACK_NAME}/metadata.json`)
 ```

-3. **Read Module Headers** (optional, first 20 lines):
-```javascript
-Read(.claude/skills/${TECH_STACK_NAME}/principles.md, limit: 20)
-// Repeat for other modules
-```
+4. **Generate Summary Report**:
 ```
+Tech Stack Rules Generated

-4. **Read SKILL Index Template**:
+Tech Stack: {TECH_STACK_NAME}
+Location: .claude/rules/tech/{TECH_STACK_NAME}/

-```javascript
-Read(~/.claude/workflows/cli-templates/prompts/tech/tech-skill-index.txt)
+Files Created:
+├── core.md → paths: **/*.{ext}
+├── patterns.md → paths: src/**/*.{ext}
+├── testing.md → paths: **/*.{test,spec}.{ext}
+├── config.md → paths: *.config.*
+├── api.md → paths: **/api/**/* (if backend)
+└── components.md → paths: **/components/**/* (if frontend)

+Auto-Loading:
+- Rules apply automatically when editing matching files
+- No manual loading required

+Example Activation:
+- Edit src/components/Button.tsx → core.md + patterns.md + components.md
+- Edit tests/api.test.ts → core.md + testing.md
+- Edit package.json → config.md
 ```

-5. **Generate SKILL.md Index**:
-
-Follow template from tech-skill-index.txt with variable substitutions:
-- `{TECH_STACK_NAME}`: From metadata.json
-- `{MAIN_TECH}`: Primary technology
-- `{ISO_TIMESTAMP}`: Current timestamp
-- `{QUERY_COUNT}`: From research_summary
-- `{SOURCE_COUNT}`: From research_summary
-- Conditional sections for composite tech stacks

-Template provides structure for:
-- Frontmatter with metadata
-- Overview and tech stack description
-- Module organization (Core/Practical/Config sections)
-- Loading recommendations (Quick/Implementation/Complete)
-- Usage guidelines and auto-trigger keywords
-- Research metadata and version history

-6. **Write SKILL.md**:
-```javascript
-Write({
-file_path: `.claude/skills/${TECH_STACK_NAME}/SKILL.md`,
-content: generatedIndexMarkdown
-})
-```

-**Completion Criteria**:
-- SKILL.md index written
-- All module files verified
-- Loading recommendations included

 **TodoWrite**: Mark phase 3 completed

-**Final Report**:
-```
-Tech Stack SKILL Package Complete
-
-Tech Stack: {TECH_STACK_NAME}
-Location: .claude/skills/{TECH_STACK_NAME}/
-
-Files: SKILL.md + 5-6 modules + metadata.json
-Exa Research: {queries} queries, {sources} sources
-
-Usage: Skill(command: "{TECH_STACK_NAME}")
-```

 ---

-## Implementation Details
+## Path Pattern Reference

-### TodoWrite Patterns
+| Pattern | Matches |
+|---------|---------|
+| `**/*.ts` | All .ts files |
+| `src/**/*` | All files under src/ |
+| `*.config.*` | Config files in root |
+| `**/*.{ts,tsx}` | .ts and .tsx files |

-**Initialization** (Before Phase 1):
-```javascript
-TodoWrite({todos: [
-{"content": "Prepare context paths", "status": "in_progress", "activeForm": "Preparing context paths"},
-{"content": "Agent produces all module files", "status": "pending", "activeForm": "Agent producing files"},
-{"content": "Generate SKILL.md index", "status": "pending", "activeForm": "Generating SKILL index"}
-]})
-```
+| Tech Stack | Core Pattern | Test Pattern |
+|------------|--------------|--------------|
+| TypeScript | `**/*.{ts,tsx}` | `**/*.{test,spec}.{ts,tsx}` |
+| Python | `**/*.py` | `**/test_*.py, **/*_test.py` |
+| Rust | `**/*.rs` | `**/tests/**/*.rs` |
+| Go | `**/*.go` | `**/*_test.go` |

-**Full Path** (SKIP_GENERATION = false):
-```javascript
-// After Phase 1
-TodoWrite({todos: [
-{"content": "Prepare context paths", "status": "completed", ...},
-{"content": "Agent produces all module files", "status": "in_progress", ...},
-{"content": "Generate SKILL.md index", "status": "pending", ...}
-]})

-// After Phase 2
-TodoWrite({todos: [
-{"content": "Prepare context paths", "status": "completed", ...},
-{"content": "Agent produces all module files", "status": "completed", ...},
-{"content": "Generate SKILL.md index", "status": "in_progress", ...}
-]})

-// After Phase 3
-TodoWrite({todos: [
-{"content": "Prepare context paths", "status": "completed", ...},
-{"content": "Agent produces all module files", "status": "completed", ...},
-{"content": "Generate SKILL.md index", "status": "completed", ...}
-]})
-```

-**Skip Path** (SKIP_GENERATION = true):
-```javascript
-// After Phase 1 (skip Phase 2)
-TodoWrite({todos: [
-{"content": "Prepare context paths", "status": "completed", ...},
-{"content": "Agent produces all module files", "status": "completed", ...}, // Skipped
-{"content": "Generate SKILL.md index", "status": "in_progress", ...}
-]})
-```

-### Execution Flow

-**Full Path**:
-```
-User → TodoWrite Init → Phase 1 (prepare) → Phase 2 (agent writes files) → Phase 3 (write index) → Report
-```

-**Skip Path**:
-```
-User → TodoWrite Init → Phase 1 (detect existing) → Phase 3 (update index) → Report
-```

-### Error Handling

-**Phase 1 Errors**:
-- Invalid session ID: Report error, verify session exists
-- Missing context-package: Warn, fall back to direct mode
-- No tech stack detected: Ask user to specify tech stack name

-**Phase 2 Errors (Agent)**:
-- Agent task fails: Retry once, report if fails again
-- Exa API failures: Agent handles internally with retries
-- Incomplete results: Warn user, proceed with partial data if minimum sections available

-**Phase 3 Errors**:
-- Write failures: Report which files failed
-- Missing files: Note in SKILL.md, suggest regeneration

 ---

 ## Parameters

 ```bash
-/memory:tech-research [session-id | "tech-stack-name"] [--regenerate] [--tool <gemini|qwen>]
+/memory:tech-research [session-id | "tech-stack-name"] [--regenerate]
 ```

 **Arguments**:
-- **session-id | tech-stack-name**: Input source (auto-detected by WFS- prefix)
-- Session mode: `WFS-user-auth-v2` - Extract tech stack from workflow
-- Direct mode: `"typescript"`, `"typescript-react-nextjs"` - User specifies
-- **--regenerate**: Force regenerate existing SKILL (deletes and recreates)
-- **--tool**: Reserved for future CLI integration (default: gemini)
+- **session-id**: `WFS-*` format - Extract from workflow session
+- **tech-stack-name**: Direct input - `"typescript"`, `"typescript-react"`
+- **--regenerate**: Force regenerate existing rules

 ---

 ## Examples

-**Generated File Structure** (for all examples):
-```
-.claude/skills/{tech-stack}/
-├── SKILL.md # Index (Phase 3)
-├── principles.md # Agent (Phase 2)
-├── patterns.md # Agent
-├── practices.md # Agent
-├── testing.md # Agent
-├── config.md # Agent
-├── frameworks.md # Agent (if composite)
-└── metadata.json # Agent
-```

-### Direct Mode - Single Stack
+### Single Language

 ```bash
 /memory:tech-research "typescript"
 ```

-**Workflow**:
-1. Phase 1: Detects direct mode, checks existing SKILL
-2. Phase 2: Agent executes 4 Exa queries, writes 5 modules
-3. Phase 3: Generates SKILL.md index
+**Output**: `.claude/rules/tech/typescript/` with 4 rule files

-### Direct Mode - Composite Stack
+### Frontend Stack

 ```bash
-/memory:tech-research "typescript-react-nextjs"
+/memory:tech-research "typescript-react"
 ```

-**Workflow**:
-1. Phase 1: Decomposes into ["typescript", "react", "nextjs"]
-2. Phase 2: Agent executes 6 Exa queries (4 base + 2 components), writes 6 modules (adds frameworks.md)
-3. Phase 3: Generates SKILL.md index with framework integration
+**Output**: `.claude/rules/tech/typescript-react/` with 5 rule files (includes components.md)

-### Session Mode - Extract from Workflow
+### Backend Stack

+```bash
+/memory:tech-research "python-fastapi"
+```

+**Output**: `.claude/rules/tech/python-fastapi/` with 5 rule files (includes api.md)

+### From Session

 ```bash
 /memory:tech-research WFS-user-auth-20251104
 ```

-**Workflow**:
-1. Phase 1: Reads session, extracts tech stack: `python-fastapi-sqlalchemy`
-2. Phase 2: Agent researches Python + FastAPI + SQLAlchemy, writes 6 modules
-3. Phase 3: Generates SKILL.md index
+**Workflow**: Extract tech stack from session → Generate rules

-### Regenerate Existing
-
-```bash
-/memory:tech-research "react" --regenerate
-```
-
-**Workflow**:
-1. Phase 1: Deletes existing SKILL due to --regenerate
-2. Phase 2: Agent executes fresh Exa research (latest 2025 practices)
-3. Phase 3: Generates updated SKILL.md
-
-### Skip Path - Fast Update
-
-```bash
-/memory:tech-research "python"
-```
-
-**Scenario**: SKILL already exists with 7 files
-
-**Workflow**:
-1. Phase 1: Detects existing SKILL, sets SKIP_GENERATION = true
-2. Phase 2: **SKIPPED**
-3. Phase 3: Updates SKILL.md index only (5-10x faster)
+---

+## Comparison: Rules vs SKILL

+| Aspect | SKILL Memory | Rules |
+|--------|--------------|-------|
+| Loading | Manual: `Skill("tech")` | Automatic by path |
+| Scope | All files when loaded | Only matching files |
+| Granularity | Monolithic packages | Per-file-type |
+| Context | Full package | Only relevant rules |

+**When to Use**:
+- **Rules**: Tech stack conventions per file type
+- **SKILL**: Reference docs, APIs, examples for manual lookup
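The `paths` frontmatter is what drives auto-loading; a minimal sketch of writing one such rule file by hand, assuming the frontmatter shape implied by the output structure above (the exact key set Claude Code honors is not specified here, and the rule content is illustrative):

```bash
# Hypothetical core.md for a typescript stack; the `paths` key mirrors
# the patterns listed in the summary report above.
mkdir -p .claude/rules/tech/typescript
cat > .claude/rules/tech/typescript/core.md <<'EOF'
---
paths: "**/*.{ts,tsx}"
---
# TypeScript Core Principles
- Prefer strict compiler options.
- Avoid `any`; model unknowns with `unknown`.
EOF
```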
@@ -187,7 +187,7 @@ Objectives:

 3. Use Gemini for aggregation (optional):
 Command pattern:
-cd .workflow/.archives/{session_id} && gemini -p "
+ccw cli -p "
 PURPOSE: Extract lessons and conflicts from workflow session
 TASK:
 • Analyze IMPL_PLAN and lessons from manifest
@@ -198,7 +198,7 @@ Objectives:
 CONTEXT: @IMPL_PLAN.md @workflow-session.json
 EXPECTED: Structured lessons and conflicts in JSON format
 RULES: Template reference from skill-aggregation.txt
-"
+" --tool gemini --mode analysis --cd .workflow/.archives/{session_id}

 3.5. **Generate SKILL.md Description** (CRITICAL for auto-loading):

@@ -334,7 +334,7 @@ Objectives:
 - Sort sessions by date

 2. Use Gemini for final aggregation:
-gemini -p "
+ccw cli -p "
 PURPOSE: Aggregate lessons and conflicts from all workflow sessions
 TASK:
 • Group successes by functional domain
@@ -345,7 +345,7 @@ Objectives:
 CONTEXT: [Provide aggregated JSON data]
 EXPECTED: Final aggregated structure for SKILL documents
 RULES: Template reference from skill-aggregation.txt
-"
+" --tool gemini --mode analysis

 3. Read templates for formatting (same 4 templates as single mode)

@@ -67,7 +67,9 @@ Phase 4: Execution Strategy & Task Execution
 ├─ Get next in_progress task from TodoWrite
 ├─ Lazy load task JSON
 ├─ Launch agent with task context
-├─ Mark task completed
+├─ Mark task completed (update IMPL-*.json status)
+│   # Update task status with ccw session (auto-tracks status_history):
+│   # ccw session task ${sessionId} IMPL-X completed
 └─ Advance to next task

 Phase 5: Completion
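A minimal sketch of the status update named in the tree above, assuming `ccw session task` accepts `<session> <task-id> <status>` exactly as shown there:

```bash
sessionId="WFS-implement-auth"   # hypothetical session ID

# Mark IMPL-2 completed; per the note above, ccw records the change in status_history
ccw session task "${sessionId}" IMPL-2 completed

# Read the session file back to confirm the status was persisted
ccw session "${sessionId}" read workflow-session.json
```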
@@ -90,37 +92,32 @@ Resume Mode (--resume-session):

 **Process**:

-#### Step 1.1: Count Active Sessions
+#### Step 1.1: List Active Sessions
 ```bash
-bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | wc -l)
+ccw session list --location active
+# Returns: {"success":true,"result":{"active":[...],"total":N}}
 ```

 #### Step 1.2: Handle Session Selection

-**Case A: No Sessions** (count = 0)
+**Case A: No Sessions** (total = 0)
 ```
 ERROR: No active workflow sessions found
 Run /workflow:plan "task description" to create a session
 ```

-**Case B: Single Session** (count = 1)
-```bash
-bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs basename)
-```
-Auto-select and continue to Phase 2.
+**Case B: Single Session** (total = 1)
+Auto-select the single session from result.active[0].session_id and continue to Phase 2.

-**Case C: Multiple Sessions** (count > 1)
+**Case C: Multiple Sessions** (total > 1)

-List sessions with metadata and prompt user selection:
+List sessions with metadata using ccw session:
 ```bash
-bash(for dir in .workflow/active/WFS-*/; do
-session=$(basename "$dir")
-project=$(jq -r '.project // "Unknown"' "$dir/workflow-session.json" 2>/dev/null)
-total=$(grep -c "^- \[" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
-completed=$(grep -c "^- \[x\]" "$dir/TODO_LIST.md" 2>/dev/null || echo "0")
-[ "$total" -gt 0 ] && progress=$((completed * 100 / total)) || progress=0
-echo "${session} | ${project} | ${completed}/${total} tasks (${progress}%)"
-done)
+# Get session list with metadata
+ccw session list --location active
+
+# For each session, get stats
+ccw session stats WFS-session-name
 ```

 Use AskUserQuestion to present formatted options (max 4 options shown):
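Given the JSON shape shown in Step 1.1, a hedged jq sketch for the three cases (field names are taken from the sample response above, not a verified schema):

```bash
list_json=$(ccw session list --location active)
total=$(echo "$list_json" | jq -r '.result.total')

if [ "$total" -eq 0 ]; then
  echo "ERROR: No active workflow sessions found"
elif [ "$total" -eq 1 ]; then
  # Case B: auto-select the single session
  sessionId=$(echo "$list_json" | jq -r '.result.active[0].session_id')
  echo "Auto-selected: $sessionId"
else
  # Case C: enumerate session IDs to present to the user
  echo "$list_json" | jq -r '.result.active[].session_id'
fi
```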
@@ -152,12 +149,20 @@ Parse user input (supports: number "1", full ID "WFS-auth-system", or partial "a

 #### Step 1.3: Load Session Metadata
 ```bash
-bash(cat .workflow/active/${sessionId}/workflow-session.json)
+ccw session ${sessionId} read workflow-session.json
 ```

 **Output**: Store session metadata in memory
 **DO NOT read task JSONs yet** - defer until execution phase (lazy loading)

+#### Step 1.4: Update Session Status to Active
+**Purpose**: Update workflow-session.json status from "planning" to "active" for dashboard monitoring.
+
+```bash
+# Update status atomically using ccw session
+ccw session status ${sessionId} active
+```
+
 **Resume Mode**: This entire phase is skipped when `--resume-session="session-id"` flag is provided.

 ### Phase 2: Planning Document Validation
@@ -395,7 +400,7 @@ Task(subagent_type="{meta.agent}",
 1. Read complete task JSON: {session.task_json_path}
 2. Load context package: {session.context_package_path}

-Follow complete execution guidelines in @.claude/agents/{meta.agent}.md

 **Session Paths**:
 - Workflow Dir: {session.workflow_dir}
@@ -473,10 +473,10 @@ Detailed plan: ${executionContext.session.artifacts.plan}`)
 return prompt
 }

-codex --full-auto exec "${buildCLIPrompt(batch)}" --skip-git-repo-check -s danger-full-access
+ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write
 ```

-**Execution with tracking**:
+**Execution with fixed IDs** (predictable ID pattern):
 ```javascript
 // Launch CLI in foreground (NOT background)
 // Timeout based on complexity: Low=40min, Medium=60min, High=100min
@@ -486,15 +486,48 @@ const timeoutByComplexity = {
 "High": 6000000 // 100 minutes
 }

+// Generate fixed execution ID: ${sessionId}-${groupId}
+// This enables predictable ID lookup without relying on resume context chains
+const sessionId = executionContext?.session?.id || 'standalone'
+const fixedExecutionId = `${sessionId}-${batch.groupId}` // e.g., "implement-auth-2025-12-13-P1"
+
+// Check if resuming from previous failed execution
+const previousCliId = batch.resumeFromCliId || null
+
+// Build command with fixed ID (and optional resume for continuation)
+const cli_command = previousCliId
+? `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId} --resume ${previousCliId}`
+: `ccw cli -p "${buildCLIPrompt(batch)}" --tool codex --mode write --id ${fixedExecutionId}`
+
 bash_result = Bash(
 command=cli_command,
 timeout=timeoutByComplexity[planObject.complexity] || 3600000
 )

+// Execution ID is now predictable: ${fixedExecutionId}
+// Can also extract from output: "ID: implement-auth-2025-12-13-P1"
+const cliExecutionId = fixedExecutionId
+
 // Update TodoWrite when execution completes
 ```

-**Result Collection**: After completion, analyze output and collect result following `executionResult` structure
+**Resume on Failure** (with fixed ID):
+```javascript
+// If execution failed or timed out, offer resume option
+if (bash_result.status === 'failed' || bash_result.status === 'timeout') {
+console.log(`
+⚠️ Execution incomplete. Resume available:
+Fixed ID: ${fixedExecutionId}
+Lookup: ccw cli detail ${fixedExecutionId}
+Resume: ccw cli -p "Continue tasks" --resume ${fixedExecutionId} --tool codex --mode write --id ${fixedExecutionId}-retry
+`)
+
+// Store for potential retry in same session
+batch.resumeFromCliId = fixedExecutionId
+}
+```
+
+**Result Collection**: After completion, analyze output and collect result following `executionResult` structure (include `cliExecutionId` for resume capability)

 ### Step 4: Progress Tracking

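Putting the fixed-ID pieces together, a minimal end-to-end sketch (the prompt text and group ID are placeholders; the `--id` and `--resume` flags are the ones introduced in the hunk above):

```bash
sessionId="implement-auth-2025-12-13"   # hypothetical
groupId="P1"
fixedId="${sessionId}-${groupId}"

# First attempt under a predictable ID
ccw cli -p "Implement batch P1 tasks" --tool codex --mode write --id "${fixedId}"

# On failure or timeout: inspect the execution, then resume under a derived retry ID
ccw cli detail "${fixedId}"
ccw cli -p "Continue tasks" --resume "${fixedId}" --tool codex --mode write --id "${fixedId}-retry"
```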
@@ -541,15 +574,30 @@ RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-review-code-q
 # - Report findings directly

 # Method 2: Gemini Review (recommended)
-gemini -p "[Shared Prompt Template with artifacts]"
+ccw cli -p "[Shared Prompt Template with artifacts]" --tool gemini --mode analysis
 # CONTEXT includes: @**/* @${plan.json} [@${exploration.json}]

 # Method 3: Qwen Review (alternative)
-qwen -p "[Shared Prompt Template with artifacts]"
+ccw cli -p "[Shared Prompt Template with artifacts]" --tool qwen --mode analysis
 # Same prompt as Gemini, different execution engine

 # Method 4: Codex Review (autonomous)
-codex --full-auto exec "[Verify plan acceptance criteria at ${plan.json}]" --skip-git-repo-check -s danger-full-access
+ccw cli -p "[Verify plan acceptance criteria at ${plan.json}]" --tool codex --mode write
+```
+
+**Multi-Round Review with Fixed IDs**:
+```javascript
+// Generate fixed review ID
+const reviewId = `${sessionId}-review`
+
+// First review pass with fixed ID
+const reviewResult = Bash(`ccw cli -p "[Review prompt]" --tool gemini --mode analysis --id ${reviewId}`)
+
+// If issues found, continue review dialog with fixed ID chain
+if (hasUnresolvedIssues(reviewResult)) {
+// Resume with follow-up questions
+Bash(`ccw cli -p "Clarify the security concerns you mentioned" --resume ${reviewId} --tool gemini --mode analysis --id ${reviewId}-followup`)
+}
 ```

 **Implementation Note**: Replace `[Shared Prompt Template with artifacts]` placeholder with actual template content, substituting:
@@ -623,8 +671,10 @@ console.log(`✓ Development index: [${category}] ${entry.title}`)
 | Empty file | File exists but no content | Error: "File is empty: {path}. Provide task description." |
 | Invalid Enhanced Task JSON | JSON missing required fields | Warning: "Missing required fields. Treating as plain text." |
 | Malformed JSON | JSON parsing fails | Treat as plain text (expected for non-JSON files) |
-| Execution failure | Agent/Codex crashes | Display error, save partial progress, suggest retry |
+| Execution failure | Agent/Codex crashes | Display error, use fixed ID `${sessionId}-${groupId}` for resume: `ccw cli -p "Continue" --resume <fixed-id> --id <fixed-id>-retry` |
+| Execution timeout | CLI exceeded timeout | Use fixed ID for resume with extended timeout |
 | Codex unavailable | Codex not installed | Show installation instructions, offer Agent execution |
+| Fixed ID not found | Custom ID lookup failed | Check `ccw cli history`, verify date directories |

 ## Data Structures

@@ -679,8 +729,20 @@ Collected after each execution call completes:
 tasksSummary: string, // Brief description of tasks handled
 completionSummary: string, // What was completed
 keyOutputs: string, // Files created/modified, key changes
-notes: string // Important context for next execution
+notes: string, // Important context for next execution
+fixedCliId: string | null // Fixed CLI execution ID (e.g., "implement-auth-2025-12-13-P1")
 }
 ```

 Appended to `previousExecutionResults` array for context continuity in multi-execution scenarios.

+**Fixed ID Pattern**: `${sessionId}-${groupId}` enables predictable lookup without auto-generated timestamps.
+
+**Resume Usage**: If `status` is "partial" or "failed", use `fixedCliId` to resume:
+```bash
+# Lookup previous execution
+ccw cli detail ${fixedCliId}
+
+# Resume with new fixed ID for retry
+ccw cli -p "Continue from where we left off" --resume ${fixedCliId} --tool codex --mode write --id ${fixedCliId}-retry
+```
@@ -72,17 +72,60 @@ Phase 5: Dispatch
 ### Phase 1: Intelligent Multi-Angle Diagnosis

 **Session Setup** (MANDATORY - follow exactly):

+**Option 1: Using CLI Command** (Recommended for simplicity):
+```bash
+# Generate session ID
+bug_slug=$(echo "${bug_description}" | tr '[:upper:]' '[:lower:]' | tr -cs '[:alnum:]' '-' | cut -c1-40)
+date_str=$(date -u '+%Y-%m-%d')
+session_id="${bug_slug}-${date_str}"
+
+# Initialize lite-fix session (location auto-inferred from type)
+ccw session init "${session_id}" \
+--type lite-fix \
+--content "{\"description\":\"${bug_description}\",\"severity\":\"${severity}\"}"
+
+# Get session folder
+session_folder=".workflow/.lite-fix/${session_id}"
+echo "Session initialized: ${session_id} at ${session_folder}"
+```
+
+**Option 2: Using session_manager Tool** (For programmatic access):
 ```javascript
 // Helper: Get UTC+8 (China Standard Time) ISO string
 const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

 const bugSlug = bug_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
-const dateStr = getUtc8ISOString().substring(0, 10) // Format: 2025-11-29
+const dateStr = getUtc8ISOString().substring(0, 10) // Format: 2025-12-17

-const sessionId = `${bugSlug}-${dateStr}` // e.g., "user-avatar-upload-fails-2025-11-29"
-const sessionFolder = `.workflow/.lite-fix/${sessionId}`
-
-bash(`mkdir -p ${sessionFolder} && test -d ${sessionFolder} && echo "SUCCESS: ${sessionFolder}" || echo "FAILED: ${sessionFolder}"`)
+const sessionId = `${bugSlug}-${dateStr}` // e.g., "user-avatar-upload-fails-2025-12-17"
+const sessionFolder = initResult.result.path
+console.log(`Session initialized: ${sessionId} at ${sessionFolder}`)
 ```

+**Session File Structure**:
+- `session-metadata.json` - Session metadata (created at init, contains description, severity, status)
+- `fix-plan.json` - Actual fix planning content (created later in Phase 3, contains fix tasks, diagnosis results)
+
+**Metadata Field Usage**:
+- `description`: Displayed in dashboard session list (replaces session ID as title)
+- `severity`: Used for fix planning strategy selection (Low/Medium → Direct Claude, High/Critical → Agent)
+- `created_at`: Displayed in dashboard timeline
+- `status`: Updated through workflow (diagnosing → fixing → completed)
+- Custom fields: Any additional fields in metadata are saved and accessible programmatically
+
+**Accessing Session Data**:
+```bash
+# Read session metadata
+ccw session ${session_id} read session-metadata.json
+
+# Read fix plan content (after Phase 3 completion)
+ccw session ${session_id} read fix-plan.json
+```

 **Diagnosis Decision Logic**:
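A hedged sketch of reading the metadata back and branching on `severity` as described above (the JSON field layout is assumed from the `--content` payload shown in Option 1):

```bash
session_id="user-avatar-upload-fails-2025-12-17"   # from the example above
meta=$(ccw session "${session_id}" read session-metadata.json)
severity=$(echo "$meta" | jq -r '.severity // "Low"')

# Strategy selection mirrors the Metadata Field Usage note
case "$severity" in
  High|Critical) echo "Route diagnosis to agent" ;;
  *)             echo "Direct Claude fix" ;;
esac
```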
@@ -72,17 +72,57 @@ Phase 5: Dispatch
|
|||||||
### Phase 1: Intelligent Multi-Angle Exploration
|
### Phase 1: Intelligent Multi-Angle Exploration
|
||||||
|
|
||||||
**Session Setup** (MANDATORY - follow exactly):
|
**Session Setup** (MANDATORY - follow exactly):
|
||||||
|
|
||||||
|
**Option 1: Using CLI Command** (Recommended for simplicity):
|
||||||
|
```bash
|
||||||
|
# Generate session ID
|
||||||
|
task_slug=$(echo "${task_description}" | tr '[:upper:]' '[:lower:]' | tr -cs '[:alnum:]' '-' | cut -c1-40)
|
||||||
|
date_str=$(date -u '+%Y-%m-%d')
|
||||||
|
session_id="${task_slug}-${date_str}"
|
||||||
|
|
||||||
|
# Initialize lite-plan session (location auto-inferred from type)
|
||||||
|
ccw session init "${session_id}" \
|
||||||
|
--type lite-plan \
|
||||||
|
--content "{\"description\":\"${task_description}\",\"complexity\":\"${complexity}\"}"
|
||||||
|
|
||||||
|
# Get session folder
|
||||||
|
session_folder=".workflow/.lite-plan/${session_id}"
|
||||||
|
echo "Session initialized: ${session_id} at ${session_folder}"
|
||||||
|
```
|
||||||
|
|
||||||
|
**Option 2: Using session_manager Tool** (For programmatic access):
|
||||||
```javascript
|
```javascript
|
||||||
// Helper: Get UTC+8 (China Standard Time) ISO string
|
// Helper: Get UTC+8 (China Standard Time) ISO string
|
||||||
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
|
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
|
||||||
|
|
||||||
const taskSlug = task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
|
const taskSlug = task_description.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 40)
|
||||||
const dateStr = getUtc8ISOString().substring(0, 10) // Format: 2025-11-29
|
const dateStr = getUtc8ISOString().substring(0, 10) // Format: 2025-12-17
|
||||||
|
|
||||||
const sessionId = `${taskSlug}-${dateStr}` // e.g., "implement-jwt-refresh-2025-11-29"
|
const sessionId = `${taskSlug}-${dateStr}` // e.g., "implement-jwt-refresh-2025-12-17"
|
||||||
const sessionFolder = `.workflow/.lite-plan/${sessionId}`
|
|
||||||
|
|
||||||
bash(`mkdir -p ${sessionFolder} && test -d ${sessionFolder} && echo "SUCCESS: ${sessionFolder}" || echo "FAILED: ${sessionFolder}"`)
|
|
||||||
|
|
||||||
|
const sessionFolder = initResult.result.path
|
||||||
|
console.log(`Session initialized: ${sessionId} at ${sessionFolder}`)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Session File Structure**:
- `session-metadata.json` - Session metadata (created at init, contains description, complexity, status)
- `plan.json` - Actual planning content (created later in Phase 3, contains tasks, steps, dependencies)

**Metadata Field Usage**:
- `description`: Displayed in dashboard session list (replaces session ID as title)
- `complexity`: Used for planning strategy selection (Low → Direct Claude, Medium/High → Agent)
- `created_at`: Displayed in dashboard timeline
- Custom fields: Any additional fields in metadata are saved and accessible programmatically

**Accessing Session Data**:

```bash
# Read session metadata
ccw session ${session_id} read session-metadata.json

# Read plan content (after Phase 3 completion)
ccw session ${session_id} read plan.json
```

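A small follow-on sketch (assumes `jq` is installed and that `plan.json` exposes a top-level `tasks` array as described above; the `title` field is a guess, not a documented schema detail):

```bash
# List planned task titles once Phase 3 has written plan.json
ccw session ${session_id} read plan.json | jq -r '.tasks[]?.title'
```
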
**Exploration Decision Logic**:

@@ -152,11 +192,16 @@ Launching ${selectedAngles.length} parallel explorations...

**Launch Parallel Explorations** - Orchestrator assigns angle to each agent:

**⚠️ CRITICAL - NO BACKGROUND EXECUTION**:
- **MUST NOT use `run_in_background: true`** - exploration results are REQUIRED before planning

```javascript
// Launch agents with pre-assigned angles
const explorationTasks = selectedAngles.map((angle, index) =>
  Task(
    subagent_type="cli-explore-agent",
    run_in_background=false, // ⚠️ MANDATORY: Must wait for results
    description=`Explore: ${angle}`,
    prompt=`
## Task Objective
...
```

@@ -356,7 +401,15 @@ if (dedupedClarifications.length > 0) {

// Step 1: Read schema
const schema = Bash(`cat ~/.claude/workflows/cli-templates/schemas/plan-json-schema.json`)

// Step 2: ⚠️ MANDATORY - Read and review ALL exploration files
const manifest = JSON.parse(Read(`${sessionFolder}/explorations-manifest.json`))
manifest.explorations.forEach(exp => {
  const explorationData = Read(exp.path)
  console.log(`\n### Exploration: ${exp.angle}\n${explorationData}`)
})

// Step 3: Generate plan following schema (Claude directly, no agent)
// ⚠️ Plan MUST incorporate insights from exploration files read in Step 2
const plan = {
  summary: "...",
  approach: "...",
  // ...
  _metadata: { timestamp: getUtc8ISOString(), source: "direct-planning", planning_mode: "direct" }
}

// Step 4: Write plan to session folder
Write(`${sessionFolder}/plan.json`, JSON.stringify(plan, null, 2))

// Step 5: MUST continue to Phase 4 (Confirmation) - DO NOT execute code here
```

**Medium/High Complexity** - Invoke cli-lite-planning-agent:

@@ -61,7 +61,7 @@ Phase 2: Context Gathering

├─ Tasks attached: Analyze structure → Identify integration → Generate package
└─ Output: contextPath + conflict_risk

Phase 3: Conflict Resolution
└─ Decision (conflict_risk check):
   ├─ conflict_risk ≥ medium → Execute /workflow:tools:conflict-resolution
   │  ├─ Tasks attached: Detect conflicts → Present to user → Apply strategies

@@ -168,7 +168,7 @@ SlashCommand(command="/workflow:tools:context-gather --session [sessionId] \"[st

---

### Phase 3: Conflict Resolution

**Trigger**: Only execute when context-package.json indicates conflict_risk is "medium" or "high"

@@ -185,10 +185,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]

**Parse Output**:
- Extract: Execution status (success/skipped/failed)
- Verify: conflict-resolution.json file path (if executed)

**Validation**:
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)

**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 3.5

@@ -497,7 +497,7 @@ Return summary to user

- Parse context path from Phase 2 output, store in memory
- **Extract conflict_risk from context-package.json**: Determine Phase 3 execution (a sketch of this gate follows below)
- **If conflict_risk ≥ medium**: Launch Phase 3 conflict-resolution with sessionId and contextPath
- Wait for Phase 3 to finish executing (if executed), verify conflict-resolution.json created
- **If conflict_risk is none/low**: Skip Phase 3, proceed directly to Phase 4
- **Build Phase 4 command**: `/workflow:tools:task-generate-agent --session [sessionId]`
- Pass session ID to Phase 4 command

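A minimal bash sketch of this gate (the path and `conflict_detection.conflict_risk` field are as documented for context-package.json elsewhere in this document; the `jq` extraction itself is an assumption):

```bash
# Decide whether Phase 3 (conflict resolution) runs for a session
risk=$(jq -r '.conflict_detection.conflict_risk' ".workflow/active/${sessionId}/.process/context-package.json")
case "$risk" in
  medium|high) echo "Run /workflow:tools:conflict-resolution" ;;
  *)           echo "Skip Phase 3, proceed to Phase 4" ;;
esac
```
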
@@ -112,14 +112,19 @@ After bash validation, the model takes control to:

1. **Load Context**: Read completed task summaries and changed files

   ```bash
   # Load implementation summaries (iterate through .summaries/ directory)
   for summary in .workflow/active/${sessionId}/.summaries/*.md; do
     cat "$summary"
   done

   # Load test results (if available; the file test guards against an unexpanded glob)
   for test_summary in .workflow/active/${sessionId}/.summaries/TEST-FIX-*.md; do
     [ -f "$test_summary" ] && cat "$test_summary"
   done

   # Get session created_at for git log filter
   created_at=$(ccw session ${sessionId} read workflow-session.json | jq -r .created_at)
   git log --since="$created_at" --name-only --pretty=format: | sort -u
   ```

2. **Perform Specialized Review**: Based on `review_type`

@@ -132,51 +137,53 @@ After bash validation, the model takes control to:

**Security Review** (`--type=security`):
- Use Gemini for security analysis:

  ```bash
  ccw cli -p "
  PURPOSE: Security audit of completed implementation
  TASK: Review code for security vulnerabilities, insecure patterns, auth/authz issues
  CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
  EXPECTED: Security findings report with severity levels
  RULES: Focus on OWASP Top 10, authentication, authorization, data validation, injection risks
  " --tool gemini --mode write --cd .workflow/active/${sessionId}
  ```

**Architecture Review** (`--type=architecture`):
- Use Qwen for architecture analysis:

  ```bash
  ccw cli -p "
  PURPOSE: Architecture compliance review
  TASK: Evaluate adherence to architectural patterns, identify technical debt, review design decisions
  CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
  EXPECTED: Architecture assessment with recommendations
  RULES: Check for patterns, separation of concerns, modularity, scalability
  " --tool qwen --mode write --cd .workflow/active/${sessionId}
  ```

**Quality Review** (`--type=quality`):
- Use Gemini for code quality:

  ```bash
  ccw cli -p "
  PURPOSE: Code quality and best practices review
  TASK: Assess code readability, maintainability, adherence to best practices
  CONTEXT: @.summaries/IMPL-*.md,../.. @../../CLAUDE.md
  EXPECTED: Quality assessment with improvement suggestions
  RULES: Check for code smells, duplication, complexity, naming conventions
  " --tool gemini --mode write --cd .workflow/active/${sessionId}
  ```

**Action Items Review** (`--type=action-items`):
- Verify all requirements and acceptance criteria met:

  ```bash
  # Load task requirements and acceptance criteria
  for task_file in .workflow/active/${sessionId}/.task/*.json; do
    cat "$task_file" | jq -r '
      "Task: " + .id + "\n" +
      "Requirements: " + (.context.requirements | join(", ")) + "\n" +
      "Acceptance: " + (.context.acceptance | join(", "))
    '
  done

  # Check implementation summaries against requirements
  ccw cli -p "
  PURPOSE: Verify all requirements and acceptance criteria are met
  TASK: Cross-check implementation summaries against original requirements
  CONTEXT: @.task/IMPL-*.json,.summaries/IMPL-*.md,../.. @../../CLAUDE.md
  ...
  - Verify all acceptance criteria are met
  - Flag any incomplete or missing action items
  - Assess deployment readiness
  " --tool gemini --mode write --cd .workflow/active/${sessionId}
  ```

@@ -25,18 +25,16 @@ Mark the currently active workflow session as complete, analyze it for lessons l

#### Step 1.1: Find Active Session and Get Name

```bash
# Find active session
ccw session list --location active
# Extract first session_id from result.active array
```

**Output**: Session name `WFS-session-name`

#### Step 1.2: Check for Existing Archiving Marker (Resume Detection)

```bash
# Check if session is already being archived (marker file exists)
ccw session WFS-session-name read .process/.archiving 2>/dev/null && echo "RESUMING" || echo "NEW"
```

**If RESUMING**:

@@ -49,7 +47,7 @@ bash(test -f .workflow/active/WFS-session-name/.archiving && echo "RESUMING" ||

#### Step 1.3: Create Archiving Marker

```bash
# Mark session as "archiving in progress"
ccw session WFS-session-name write .process/.archiving ''
```

**Purpose**:
- Prevents concurrent operations on this session

@@ -161,21 +159,20 @@ Analyze workflow session for archival preparation. Session is STILL in active lo

**Purpose**: Atomically commit all changes. Only execute if Phase 2 succeeds.

#### Step 3.1: Update Session Status and Archive

```bash
# Archive session (updates status to "completed" and moves to archives)
ccw session archive WFS-session-name

# This operation atomically:
# 1. Updates workflow-session.json status to "completed"
# 2. Moves session from .workflow/active/ to .workflow/archives/
```

**Result**: Session now at `.workflow/archives/WFS-session-name/`

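As a quick post-archive sanity check (a sketch; `ccw session archive` itself reports the outcome):

```bash
# Confirm the session moved out of the active location
test -d .workflow/archives/WFS-session-name && echo "ARCHIVED" || echo "STILL ACTIVE"
```
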
#### Step 3.2: Update Manifest

```bash
# Check if manifest exists
test -f .workflow/archives/manifest.json && echo "EXISTS" || echo "NOT_FOUND"
```

**JSON Update Logic**:

```javascript
// ... (build archiveEntry from session metadata)
manifest.push(archiveEntry);
Write('.workflow/archives/manifest.json', JSON.stringify(manifest, null, 2));
```

#### Step 3.3: Remove Archiving Marker

```bash
# Remove archiving marker from archived session (use bash rm as ccw has no delete)
rm .workflow/archives/WFS-session-name/.process/.archiving 2>/dev/null || true
```

**Result**: Clean archived session without temporary markers

@@ -223,7 +221,8 @@ bash(rm .workflow/archives/WFS-session-name/.archiving)

#### Step 4.1: Check Project State Exists

```bash
# Check if project.json exists
test -f .workflow/project.json && echo "EXISTS" || echo "SKIP"
```

**If SKIP**: Output warning and skip Phase 4

@@ -250,11 +249,6 @@ const featureId = title.toLowerCase().replace(/[^a-z0-9]+/g, '-').substring(0, 5

#### Step 4.3: Update project.json

**JSON Update Logic**:

```javascript
// Read existing project.json (created by /workflow:init)
// ...
```

@@ -366,8 +360,8 @@ function getLatestCommitHash() {

**Recovery Steps**:

```bash
# Session still in .workflow/active/WFS-session-name
# Remove archiving marker using bash
rm .workflow/active/WFS-session-name/.process/.archiving 2>/dev/null || true
```

**User Notification**:

@@ -464,11 +458,12 @@ Session state: PARTIALLY COMPLETE (session archived, manifest needs update)

**Phase 3: Atomic Commit** (Transactional file operations)
- Create archive directory
- Update session status to "completed"
- Move session to archive location
- Update manifest.json with archive entry
- Remove `.archiving` marker
- **All-or-nothing**: Either all succeed or session remains in safe state
- **Total**: 5 bash commands + JSON manipulation

**Phase 4: Project Registry Update** (Optional feature tracking)
- Check project.json exists

@@ -498,3 +493,55 @@ Session state: PARTIALLY COMPLETE (session archived, manifest needs update)

- Idempotent operations (safe to retry)

## session_manager Tool Alternative

Use `ccw tool exec session_manager` for session completion operations:

### List Active Sessions

```bash
ccw tool exec session_manager '{"operation":"list","location":"active"}'
```

### Update Session Status to Completed

```bash
ccw tool exec session_manager '{
  "operation": "update",
  "session_id": "WFS-xxx",
  "content_type": "session",
  "content": {
    "status": "completed",
    "archived_at": "2025-12-10T08:00:00Z"
  }
}'
```

### Archive Session

```bash
ccw tool exec session_manager '{"operation":"archive","session_id":"WFS-xxx"}'

# This operation:
# 1. Updates status to "completed" if update_status=true (default)
# 2. Moves session from .workflow/active/ to .workflow/archives/
```

### Read Session Data

```bash
# Read workflow-session.json
ccw tool exec session_manager '{"operation":"read","session_id":"WFS-xxx","content_type":"session"}'

# Read IMPL_PLAN.md
ccw tool exec session_manager '{"operation":"read","session_id":"WFS-xxx","content_type":"plan"}'
```

### Write Archiving Marker

```bash
ccw tool exec session_manager '{
  "operation": "write",
  "session_id": "WFS-xxx",
  "content_type": "process",
  "path_params": {"filename": ".archiving"},
  "content": ""
}'
```

@@ -17,41 +17,30 @@ Display all workflow sessions with their current status, progress, and metadata.

## Implementation Flow

### Step 1: List All Sessions

```bash
ccw session list --location both
```

### Step 2: Get Session Statistics

```bash
ccw session stats WFS-session
# Returns: tasks count by status, summaries count, has_plan
```

### Step 3: Read Session Metadata

```bash
ccw session WFS-session read workflow-session.json
# Returns: session_id, status, project, created_at, etc.
```

## Simple Commands

### Basic Operations
- **List all sessions**: `ccw session list`
- **List active only**: `ccw session list --location active`
- **Read session data**: `ccw session WFS-xxx read workflow-session.json`
- **Get task stats**: `ccw session WFS-xxx stats`

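Since the list command returns JSON with a `result.active` array (see the session_manager examples below), session IDs can be extracted directly — a sketch assuming `jq`:

```bash
# Print active session IDs only
ccw session list --location active | jq -r '.result.active[].session_id'
```
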
## Simple Output Format

@@ -88,9 +77,38 @@ Total: 3 sessions (1 active, 1 paused, 1 completed)

### Quick Commands

```bash
# Count active sessions using ccw
ccw session list --location active --no-metadata
# Returns session count in result.total

# Show recent sessions
ccw session list --location active
```

## session_manager Tool Alternative

Use `ccw tool exec session_manager` for simplified session listing:

### List All Sessions (Active + Archived)

```bash
ccw tool exec session_manager '{"operation":"list","location":"both","include_metadata":true}'

# Response:
# {
#   "success": true,
#   "result": {
#     "active": [{"session_id":"WFS-xxx","metadata":{...}}],
#     "archived": [{"session_id":"WFS-yyy","metadata":{...}}],
#     "total": 2
#   }
# }
```

### List Active Sessions Only

```bash
ccw tool exec session_manager '{"operation":"list","location":"active","include_metadata":true}'
```

### Read Specific Session

```bash
ccw tool exec session_manager '{"operation":"read","session_id":"WFS-xxx","content_type":"session"}'
```

@@ -17,39 +17,33 @@ Resume the most recently paused workflow session, restoring all context and stat

### Step 1: Find Paused Sessions

```bash
ccw session list --location active
# Filter for sessions with status="paused"
```

### Step 2: Check Session Status

```bash
ccw session WFS-session read workflow-session.json
# Check .status field in response
```

### Step 3: Find Most Recent Paused

```bash
ccw session list --location active
# Sort by created_at, filter for paused status
```

### Step 4: Update Session Status to Active

```bash
ccw session WFS-session status active
```

## Simple Commands

### Basic Operations
- **List sessions**: `ccw session list --location active`
- **Check status**: `ccw session WFS-xxx read workflow-session.json`
- **Update status**: `ccw session WFS-xxx status active`

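Putting the steps together, a hedged one-liner sketch (assumes `jq`, the `result.active[]` list shape shown in the session_manager section below, and ISO `created_at` timestamps so a lexicographic sort is chronological):

```bash
# Resume the most recently created paused session, if any
paused=$(ccw session list --location active \
  | jq -r '[.result.active[] | select(.metadata.status == "paused")] | sort_by(.metadata.created_at) | last | .session_id // empty')
[ -n "$paused" ] && ccw session "$paused" status active
```
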
### Resume Result

```
Session WFS-user-auth resumed
- Paused at: 2025-09-15T14:30:00Z
- Resumed at: 2025-09-15T15:45:00Z
- Ready for: /workflow:execute
```

## session_manager Tool Alternative

Use `ccw tool exec session_manager` for session resume:

### Update Session Status

```bash
# Update status to active
ccw tool exec session_manager '{
  "operation": "update",
  "session_id": "WFS-xxx",
  "content_type": "session",
  "content": {
    "status": "active",
    "resumed_at": "2025-12-10T08:00:00Z"
  }
}'
```

### Read Session Status

```bash
ccw tool exec session_manager '{"operation":"read","session_id":"WFS-xxx","content_type":"session"}'
```

@@ -30,10 +30,17 @@ The `--type` parameter classifies sessions for CCW dashboard organization:

| `tdd` | TDD-based development | `/workflow:tdd-plan` |
| `test` | Test generation/fix sessions | `/workflow:test-fix-gen` |
| `docs` | Documentation sessions | `/memory:docs` |
| `lite-plan` | Lightweight planning workflow | `/workflow:lite-plan` |
| `lite-fix` | Lightweight bug fix workflow | `/workflow:lite-fix` |

**Special Behavior for `lite-plan` and `lite-fix`**:
- These types automatically infer the storage location (`.workflow/.lite-plan/` or `.workflow/.lite-fix/`)
- No need to specify the `--location` parameter when using these types
- Alternative: Use `--location lite-plan` or `--location lite-fix` directly

**Validation**: If `--type` is provided with an invalid value, return an error:

```
ERROR: Invalid session type. Valid types: workflow, review, tdd, test, docs, lite-plan, lite-fix
```

## Step 0: Initialize Project State (First-time Only)

@@ -70,12 +77,12 @@ SlashCommand({command: "/workflow:init"});

### Step 1: List Active Sessions

```bash
ccw session list --location active
```

### Step 2: Display Session Metadata

```bash
ccw session WFS-promptmaster-platform read workflow-session.json
```

### Step 4: User Decision

@@ -92,34 +99,29 @@ Present session information and wait for user to select or create session.

### Step 1: Check Active Sessions Count

```bash
ccw session list --location active
# Check result.total in response
```

### Step 2a: No Active Sessions → Create New

```bash
# Generate session slug from description
# Pattern: WFS-{lowercase-slug-from-description}

# Create session with ccw (creates directories + metadata atomically)
ccw session init WFS-implement-oauth2-auth --type workflow
```

**Output**: `SESSION_ID: WFS-implement-oauth2-auth`

### Step 2b: Single Active Session → Check Relevance

```bash
# Get session list with metadata
ccw session list --location active

# Read session metadata for relevance check
ccw session WFS-promptmaster-platform read workflow-session.json

# If task contains project keywords → Reuse session (a sketch of the check follows below)
# If task unrelated → Create new session (use Step 2a)
```

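A sketch of the keyword-relevance check (the matching heuristic here is illustrative, not a documented command; it assumes `jq` and the `project` field shown in the session metadata):

```bash
# Compare the new task against the existing session's project field
project=$(ccw session WFS-promptmaster-platform read workflow-session.json | jq -r '.project')
task_description="add billing to promptmaster platform"
if echo "$task_description" | grep -qiF "$(echo "$project" | cut -d' ' -f1)"; then
  echo "Reuse session"
else
  echo "Create new session (Step 2a)"
fi
```
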
@@ -129,8 +131,9 @@ bash(cat .workflow/active/WFS-promptmaster-platform/workflow-session.json | grep

### Step 2c: Multiple Active Sessions → Use First

```bash
# Get first active session from list
ccw session list --location active
# Use first session_id from result.active array

# Output warning and session ID
# WARNING: Multiple active sessions detected
```

@@ -146,26 +149,48 @@ bash(find .workflow/active/ -name "WFS-*" -type d 2>/dev/null | head -1 | xargs

### Step 1: Generate Unique Session Slug

```bash
# Convert description to slug: lowercase, alphanumeric + hyphen, max 50 chars
# Check if exists via ccw session list, add counter if collision
ccw session list --location active
```

### Step 2: Create Session Structure

```bash
# Basic init - creates directories + default metadata
ccw session init WFS-fix-login-bug --type workflow

# Advanced init - with custom metadata
ccw session init WFS-oauth-implementation --type workflow --content '{"description":"OAuth2 authentication system","priority":"high","complexity":"medium"}'
```

**Default Metadata** (auto-generated):

```json
{
  "session_id": "WFS-fix-login-bug",
  "type": "workflow",
  "status": "planning",
  "created_at": "2025-12-17T..."
}
```

**Custom Metadata** (merged with defaults):

```json
{
  "session_id": "WFS-oauth-implementation",
  "type": "workflow",
  "status": "planning",
  "created_at": "2025-12-17T...",
  "description": "OAuth2 authentication system",
  "priority": "high",
  "complexity": "medium"
}
```

**Field Usage**:
- `description`: Displayed in dashboard (replaces session_id as title)
- `status`: Can override default "planning" (e.g., "active", "implementing")
- Custom fields: Any additional fields are saved and accessible programmatically

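Custom fields can then be read back programmatically — a small sketch assuming `jq`:

```bash
# Read a custom field from the example session above
ccw session WFS-oauth-implementation read workflow-session.json | jq -r '.priority'
# → high
```
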
**Output**: `SESSION_ID: WFS-fix-login-bug`

## Execution Guideline

@@ -197,4 +222,36 @@ SESSION_ID: WFS-promptmaster-platform

- Pattern: `WFS-[lowercase-slug]`
- Characters: `a-z`, `0-9`, `-` only
- Max length: 50 characters
- Uniqueness: Add numeric suffix if collision (`WFS-auth-2`, `WFS-auth-3`)

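A hedged bash sketch of these naming rules (the collision probe against `ccw session list` output is an assumption, not a documented flag):

```bash
# Slugify a description and add a numeric suffix on collision
desc="Fix login bug"
slug=$(echo "$desc" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-*//; s/-*$//' | cut -c1-50)
session_id="WFS-${slug}"

n=2
while ccw session list --location active | grep -q "\"${session_id}\""; do
  session_id="WFS-${slug}-${n}"
  n=$((n + 1))
done
echo "SESSION_ID: ${session_id}"
```
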
## session_manager Tool Alternative

The above bash commands can be replaced with `ccw tool exec session_manager`:

### List Sessions

```bash
# List active sessions with metadata
ccw tool exec session_manager '{"operation":"list","location":"active","include_metadata":true}'

# Response: {"success":true,"result":{"active":[{"session_id":"WFS-xxx","metadata":{...}}],"total":1}}
```

### Create Session (replaces mkdir + echo)

```bash
# Single command creates directories + metadata
ccw tool exec session_manager '{
  "operation": "init",
  "session_id": "WFS-my-session",
  "metadata": {
    "project": "my project description",
    "status": "planning",
    "type": "workflow",
    "created_at": "2025-12-10T08:00:00Z"
  }
}'
```

### Read Session Metadata

```bash
ccw tool exec session_manager '{"operation":"read","session_id":"WFS-xxx","content_type":"session"}'
```

@@ -164,10 +164,10 @@ SlashCommand(command="/workflow:tools:conflict-resolution --session [sessionId]

**Parse Output**:
- Extract: Execution status (success/skipped/failed)
- Verify: conflict-resolution.json file path (if executed)

**Validation**:
- File `.workflow/active/[sessionId]/.process/conflict-resolution.json` exists (if executed)

**Skip Behavior**:
- If conflict_risk is "none" or "low", skip directly to Phase 5

@@ -402,7 +402,7 @@ TDD Workflow Orchestrator

│ ├─ Phase 4.1: Detect conflicts with CLI
│ ├─ Phase 4.2: Present conflicts to user
│ └─ Phase 4.3: Apply resolution strategies
│    └─ Returns: conflict-resolution.json ← COLLAPSED
│ ELSE:
│    └─ Skip to Phase 5
│

@@ -77,18 +77,32 @@ find .workflow/active/ -name "WFS-*" -type d | head -1 | sed 's/.*\///'

```bash
# Load all task JSONs
for task_file in .workflow/active/{sessionId}/.task/*.json; do
  cat "$task_file"
done

# Extract task IDs
for task_file in .workflow/active/{sessionId}/.task/*.json; do
  cat "$task_file" | jq -r '.id'
done

# Check dependencies - read tasks and filter for IMPL/REFACTOR
for task_file in .workflow/active/{sessionId}/.task/IMPL-*.json; do
  cat "$task_file" | jq -r '.context.depends_on[]?'
done

for task_file in .workflow/active/{sessionId}/.task/REFACTOR-*.json; do
  cat "$task_file" | jq -r '.context.depends_on[]?'
done

# Check meta fields
for task_file in .workflow/active/{sessionId}/.task/*.json; do
  cat "$task_file" | jq -r '.meta.tdd_phase'
done

for task_file in .workflow/active/{sessionId}/.task/*.json; do
  cat "$task_file" | jq -r '.meta.agent'
done
```

**Validation**:

@@ -127,7 +141,7 @@ find .workflow/active/{sessionId}/.task/ -name '*.json' -exec jq -r '.meta.agent

**Gemini analysis for comprehensive TDD compliance report**

```bash
ccw cli -p "
PURPOSE: Generate TDD compliance report
TASK: Analyze TDD workflow execution and generate quality report
CONTEXT: @{.workflow/active/{sessionId}/.task/*.json,.workflow/active/{sessionId}/.summaries/*,.workflow/active/{sessionId}/.process/tdd-cycle-report.md}
EXPECTED:
- Red-Green-Refactor cycle validation
- Best practices adherence assessment
RULES: Focus on TDD best practices and workflow adherence. Be specific about violations and improvements.
" --tool gemini --mode analysis --cd project-root > .workflow/active/{sessionId}/TDD_COMPLIANCE_REPORT.md
```

**Output**: TDD_COMPLIANCE_REPORT.md

@@ -133,7 +133,7 @@ Task(subagent_type="cli-execution-agent", prompt=`

### 2. Execute CLI Analysis (Enhanced with Exploration + Scenario Uniqueness)

Primary (Gemini):
ccw cli -p "
PURPOSE: Detect conflicts between plan and codebase, using exploration insights
TASK:
• **Review pre-identified conflict_indicators from exploration results**
...
- ModuleOverlap conflicts with overlap_analysis
- Targeted clarification questions
RULES: $(cat ~/.claude/workflows/cli-templates/prompts/analysis/02-analyze-code-patterns.txt) | Focus on breaking changes, migration needs, and functional overlaps | Prioritize exploration-identified conflicts | analysis=READ-ONLY
" --tool gemini --mode analysis --cd {project_root}

Fallback: Qwen (same prompt) → Claude (manual analysis)

@@ -169,7 +169,7 @@ Task(subagent_type="cli-execution-agent", prompt=`

### 4. Return Structured Conflict Data

⚠️ Output to conflict-resolution.json (generated in Phase 4)

Return JSON format for programmatic processing:

@@ -467,14 +467,30 @@ selectedStrategies.forEach(item => {

console.log(`\nApplying ${modifications.length} modifications...`);

// 2. Apply each modification using Edit tool (with fallback to context-package.json)
const appliedModifications = [];
const failedModifications = [];
const fallbackConstraints = []; // For files that don't exist

modifications.forEach((mod, idx) => {
  try {
    console.log(`[${idx + 1}/${modifications.length}] Modifying ${mod.file}...`);

    // Check if target file exists (brainstorm files may not exist in lite workflow)
    if (!file_exists(mod.file)) {
      console.log(`  ⚠️ File does not exist - recording constraint in context-package.json`);
      fallbackConstraints.push({
        source: "conflict-resolution",
        conflict_id: mod.conflict_id,
        target_file: mod.file,
        section: mod.section,
        change_type: mod.change_type,
        content: mod.new_content,
        rationale: mod.rationale
      });
      return; // Skip to next modification
    }

    if (mod.change_type === "update") {
      Edit({
        file_path: mod.file,

@@ -502,14 +518,45 @@ modifications.forEach((mod, idx) => {

  }
});

// 2b. Generate conflict-resolution.json output file
const resolutionOutput = {
  session_id: sessionId,
  resolved_at: new Date().toISOString(),
  summary: {
    total_conflicts: conflicts.length,
    resolved_with_strategy: selectedStrategies.length,
    custom_handling: customConflicts.length,
    fallback_constraints: fallbackConstraints.length
  },
  resolved_conflicts: selectedStrategies.map(s => ({
    conflict_id: s.conflict_id,
    strategy_name: s.strategy.name,
    strategy_approach: s.strategy.approach,
    clarifications: s.clarifications || [],
    modifications_applied: s.strategy.modifications?.filter(m =>
      appliedModifications.some(am => am.conflict_id === s.conflict_id)
    ) || []
  })),
  custom_conflicts: customConflicts.map(c => ({
    id: c.id,
    brief: c.brief,
    category: c.category,
    suggestions: c.suggestions,
    overlap_analysis: c.overlap_analysis || null
  })),
  planning_constraints: fallbackConstraints, // Constraints for files that don't exist
  failed_modifications: failedModifications
};

const resolutionPath = `.workflow/active/${sessionId}/.process/conflict-resolution.json`;
Write(resolutionPath, JSON.stringify(resolutionOutput, null, 2));
console.log(`\n📄 Conflict resolution saved: ${resolutionPath}`);

// 3. Update context-package.json with resolution details (reference to JSON file)
const contextPackage = JSON.parse(Read(contextPath));
contextPackage.conflict_detection.conflict_risk = "resolved";
contextPackage.conflict_detection.resolution_file = resolutionPath; // Reference to detailed JSON
contextPackage.conflict_detection.resolved_conflicts = selectedStrategies.map(s => s.conflict_id);
contextPackage.conflict_detection.custom_conflicts = customConflicts.map(c => c.id);
contextPackage.conflict_detection.resolved_at = new Date().toISOString();
Write(contextPath, JSON.stringify(contextPackage, null, 2));

@@ -582,12 +629,50 @@ return {

```
✓ Agent log saved to .workflow/active/{session_id}/.chat/
```

## Output Format

### Primary Output: conflict-resolution.json

**Path**: `.workflow/active/{session_id}/.process/conflict-resolution.json`

**Schema**:

```json
{
  "session_id": "WFS-xxx",
  "resolved_at": "ISO timestamp",
  "summary": {
    "total_conflicts": 3,
    "resolved_with_strategy": 2,
    "custom_handling": 1,
    "fallback_constraints": 0
  },
  "resolved_conflicts": [
    {
      "conflict_id": "CON-001",
      "strategy_name": "Strategy name",
      "strategy_approach": "Implementation approach",
      "clarifications": [],
      "modifications_applied": []
    }
  ],
  "custom_conflicts": [
    {
      "id": "CON-002",
      "brief": "Conflict summary",
      "category": "ModuleOverlap",
      "suggestions": ["Suggestion 1", "Suggestion 2"],
      "overlap_analysis": null
    }
  ],
  "planning_constraints": [],
  "failed_modifications": []
}
```

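As a quick consistency check of a generated file (a sketch assuming `jq`; per the generation logic above, the summary counters map one-to-one onto the arrays):

```bash
# Verify summary counts match array lengths in conflict-resolution.json
f=.workflow/active/WFS-xxx/.process/conflict-resolution.json
jq -e '.summary.resolved_with_strategy == (.resolved_conflicts | length)
   and .summary.custom_handling == (.custom_conflicts | length)
   and .summary.fallback_constraints == (.planning_constraints | length)' "$f"
```
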
### Secondary: Agent JSON Response (stdout)

**Focus**: Structured conflict data with actionable modifications for programmatic processing.

**Structure**: Defined in Phase 2, Step 4 (agent prompt)

### Key Requirements

@@ -635,11 +720,12 @@ If Edit tool fails mid-application:

- Requires: `conflict_risk ≥ medium`

**Output**:
- Generated file:
  - `.workflow/active/{session_id}/.process/conflict-resolution.json` (primary output)
- Modified files (if exist):
  - `.workflow/active/{session_id}/.brainstorm/guidance-specification.md`
  - `.workflow/active/{session_id}/.brainstorm/{role}/analysis.md`
  - `.workflow/active/{session_id}/.process/context-package.json` (conflict_risk → resolved, resolution_file reference)

**User Interaction**:
- **Iterative conflict processing**: One conflict at a time, not in batches

@@ -667,7 +753,7 @@ If Edit tool fails mid-application:

✓ guidance-specification.md updated with resolved conflicts
✓ Role analyses (*.md) updated with resolved conflicts
✓ context-package.json marked as "resolved" with clarification records
✓ conflict-resolution.json generated with full resolution details
✓ Modification summary includes:
  - Total conflicts
  - Resolved with strategy (count)

Some files were not shown because too many files have changed in this diff.